Benchmark-Lab


README

    *   "percentiles" – hash reference with 1, 5, 10, 25, 50, 75, 90, 95 and
        99th percentile iteration times. There may be duplicates if there
        were fewer than 100 iterations.

    *   "median_rate" – the inverse of the 50th percentile time.

    *   "timing" – array reference with individual iteration times as
        (floating point) seconds.
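
    For example (a minimal sketch; "$result" stands for the hash reference
    of results described above, the way it is obtained is not shown in this
    excerpt, and the percentile hash keys are assumed to be the numbers
    listed):

        my $p = $result->{percentiles};
        printf "p50 %.6fs  p95 %.6fs  p99 %.6fs\n",
            $p->{50}, $p->{95}, $p->{99};

        # median_rate is the inverse of the 50th percentile time
        printf "median rate: %.1f iterations/sec\n", $result->{median_rate};

        # timing holds every individual iteration time, in seconds
        printf "%d iteration times recorded\n", scalar @{ $result->{timing} };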

CAVEATS
    If the "do_task" executes in less time than the timer granularity, an
    error will be thrown. For benchmarks that do not have before/after
    functions, just repeating the function under test in "do_task" will be
    sufficient.

RATIONALE
    I believe most approaches to benchmarking are flawed, primarily because
    they focus on finding a *single* measurement. Single metrics are easy to
    grok and easy to compare ("foo was 13% faster than bar!"), but they
    obscure the full distribution of timing data and (as a result) are often
    unstable.

lib/Benchmark/Lab.pm

=item *

C<timing> – array reference with individual iteration times as (floating point) seconds.

=back

=for Pod::Coverage BUILD

=head1 CAVEATS

If C<do_task> executes in less time than the timer granularity, an
error will be thrown.  For benchmarks that do not have before/after functions,
simply repeating the function under test inside C<do_task> is sufficient,
as in the sketch below.
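
For example, as a rough sketch (C<fast_function> is only a placeholder for
the code under test, not part of this module), C<do_task> could be a coderef
that repeats the call enough times for one execution to exceed the timer
granularity:

    my $do_task = sub {
        # One call to do_task now spans many calls to the fast code
        # under test, so its duration exceeds the timer granularity.
        fast_function() for 1 .. 1000;
    };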

=head1 RATIONALE

I believe most approaches to benchmarking are flawed, primarily because
they focus on finding a I<single> measurement.  Single metrics are easy to
grok and easy to compare ("foo was 13% faster than bar!"), but they obscure
the full distribution of timing data and (as a result) are often unstable.


