Benchmark-DKbench
lib/Benchmark/DKbench.pm
=head2 C<benchmark_list>
  my $benchmarks = Benchmark::DKbench::benchmark_list($extra_benchmarks_hashref);
Returns a hashref with all the suite's benchmarks listed above (optionally adding
the ones you pass as an argument) in the format C<Benchmark::MCE::suite_run> expects
as the C<bench> parameter.
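For example (a sketch, assuming the default suite with no extra benchmarks passed in),
you can inspect the names of the available benchmarks:

  use Benchmark::DKbench;

  # Hashref describing all of the default suite's benchmarks:
  my $benchmarks = Benchmark::DKbench::benchmark_list();

  # The keys are the benchmark names:
  print "$_\n" for sort keys %$benchmarks;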
=head1 EXPORTED FUNCTIONS
The exported functions are the same as those of L<Benchmark::MCE> and are used by
the C<dkbench> script. You will normally use these functions from L<Benchmark::MCE>
directly if you want to build your own benchmark suite, unless you want to extend
DKbench by adding your own modules and/or replacing the C<dkbench> script:
=head2 C<system_identity>
This is now imported directly from L<Benchmark::MCE>, see POD for that module.
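For example (a sketch; see the L<Benchmark::MCE> POD for the authoritative behaviour),
a true argument returns the detected core count without printing the full system
identification, which is handy for setting C<threads> as in the examples below:

  use Benchmark::DKbench;

  # Print system identification (CPU, OS, Perl) and get the core count:
  my $cores = system_identity();

  # Or, with a true argument, just get the core count quietly:
  $cores = system_identity(1);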
=head2 C<suite_run>
This is now imported from L<Benchmark::MCE>, with the addition of C<extra_bench>,
see POD for that module and L</CUSTOM BENCHMARKS> below.
=head2 C<calc_scalability>
This is now an alias to L<Benchmark::MCE/calc_scalability>, see POD for that module.
=head2 C<suite_calc>
This is now imported directly from L<Benchmark::MCE>, with the addition of C<extra_bench>,
see POD for that module and L</CUSTOM BENCHMARKS> below.
=head1 CUSTOM BENCHMARKS
Version 2.5 introduced the ability to add custom benchmarks to be run alongside any
of the suite's included ones. This lets you create a suite that is more relevant
to you, by including the actual code you will be running on the systems you are
benchmarking.
Here is an example of adding a benchmark to the test suite and running it together
with the default benchmarks:
  use Benchmark::DKbench;
  use Math::Trig qw/:great_circle :pi/;

  sub great_circle {
      my $iter = shift || 1; # Optionally have an argument that scales the workload
      my $dist = 0;
      $dist +=
          great_circle_distance(rand(pi), rand(2 * pi), rand(pi), rand(2 * pi))
          for 1 .. $iter;
      return $dist; # Returning something is optional, but is used to fail the bench on a mismatch
  }

  my %stats = suite_run({
      extra_bench => {
          'Math::Trig' =>         # A unique name for the benchmark
          [
              \&great_circle,     # Reference to bench function
              '3144042.81433949', # Output for your reference Perl - determines Pass/Fail (optional)
              5.5,                # Seconds to complete in normal mode for score = 1000 (optional)
              1000000,            # Argument to pass for --quick mode (optional)
              5000000             # Argument to pass for normal mode (optional)
          ]
      },
  });
You can use a prefix when naming your custom benchmarks and make use of the
C<include> argument to run only the custom benchmarks. Here is an example where
a custom benchmark is defined inline, without any of the optional arguments, and
specified to run by itself:
  my %stats = suite_run({
      include     => 'custom',
      extra_bench => { custom1 => [sub {my @a=split(//, 'x'x$_) for 1..10000}] }
  });
If you want to do a multi-threaded run as well and then calculate scalability:
  my %stats_multi = suite_run({
      threads     => system_identity(1),
      include     => 'custom',
      extra_bench => { custom1 => [sub {my @a=split(//, 'x'x$_) for 1..10000}] }
  });

  my %scal = calc_scalability(\%stats, \%stats_multi);
Or, with a single call via the convenience function L</suite_calc>:
  my ($stats, $stats_multi, $scal) = suite_calc({
      include     => 'custom',
      extra_bench => { custom1 => [sub {my @a=split(//, 'x'x$_) for 1..10000}] }
  });
To create your own benchmark suite from scratch, instead of extending the existing
one, see L<Benchmark::MCE>, which is just the harness without the benchmarks.
=head1 NOTES
The benchmark suite was created to compare the performance of various cloud offerings.
You can see the L<original perl blog post|http://blogs.perl.org/users/dimitrios_kechagias/2022/03/cloud-provider-performance-comparison-gcp-aws-azure-perl.html>
as well as the L<2023 follow-up|https://dev.to/dkechag/cloud-vm-performance-value-comparison-2023-perl-more-1kpp>.
The benchmarks in the first version were tuned to the workloads I expected to run
on the servers I was testing, in order to choose the optimal types for the company
I was working for. The second version has expanded a bit beyond that and is
friendlier to use.
Although this benchmark suite is in general a good indicator of CPU performance
and can be customized to your needs, no benchmark is as good as running your own
actual workload (which can be done via the L</CUSTOM BENCHMARKS> functionality).
=head2 SCORES
Some sample DKbench score results from various systems for comparison (all on the
reference setup with Perl 5.36.0 thread-multi):