Benchmark-Lab
* "percentiles" – hash reference with the 1st, 5th, 10th, 25th, 50th, 75th, 90th, 95th and 99th percentile iteration times. There may be duplicates if there were fewer than 100 iterations.
* "median_rate" – the inverse of the 50th percentile time.
* "timing" – array reference with individual iteration times as (floating point) seconds.
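As a minimal sketch of how these result fields might be consumed, assume $result holds the hash reference of results from a Benchmark::Lab run; the summarize_result helper and the exact numeric percentile keys are illustrative assumptions, not part of the module's documented API:

    use strict;
    use warnings;

    # Hypothetical helper: $result is assumed to be the hash reference of
    # results produced by a Benchmark::Lab run (obtaining it is covered
    # elsewhere in this documentation).
    sub summarize_result {
        my ($result) = @_;

        # "percentiles": percentile => iteration time in seconds; the numeric
        # keys are assumed from the list of percentiles above.
        my $pct = $result->{percentiles};
        printf "median time: %.6f s, 95th percentile: %.6f s\n",
            $pct->{50}, $pct->{95};

        # "median_rate": iterations per second, the inverse of the median time
        printf "median rate: %.1f iterations/s\n", $result->{median_rate};

        # "timing": every individual iteration time, for custom analysis
        my @times = @{ $result->{timing} };
        my $total = 0;
        $total += $_ for @times;
        printf "%d iterations, %.3f s of timed work\n", scalar @times, $total;
    }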
CAVEATS
If "do_task" executes in less time than the timer granularity, an error will be thrown. For benchmarks that do not have before/after functions, just repeating the function under test within "do_task" will be sufficient.
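A minimal sketch of that batching approach, assuming a hypothetical my_fast_function as the code under test and a placeholder batch size; how the task is handed to Benchmark::Lab is covered elsewhere in this documentation:

    use strict;
    use warnings;

    # Placeholder for the (very fast) code under test.
    sub my_fast_function { my ($x) = @_; return $x * $x }

    # A single call may finish faster than the timer can resolve, so repeat
    # it inside "do_task" until one timed iteration is comfortably measurable.
    my $task = {
        do_task => sub {
            my_fast_function($_) for 1 .. 1000;   # batch size is a placeholder
        },
    };

With no before/after functions involved, each reported iteration time then covers 1000 calls; dividing by the batch size, or simply comparing batched results against each other, recovers the per-call picture.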
RATIONALE
I believe most approaches to benchmarking are flawed, primarily because they focus on finding a *single* measurement. Single metrics are easy to grok and easy to compare ("foo was 13% faster than bar!"), but they obscure the full distribution of timing data and (as a result) are often unstable.