Benchmark-Lab


README

    *   "percentiles" – hash reference with 1, 5, 10, 25, 50, 75, 90, 95 and
        99th percentile iteration times. There may be duplicates if there
        were fewer than 100 iterations.
 
    *   "median_rate" – the inverse of the 50th percentile time.
 
    *   "timing" – array reference with individual iteration times as
        (floating point) seconds.
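
    The snippet below is a minimal sketch of reading these fields. It
    assumes only that "$result" is the hash reference returned by a
    Benchmark::Lab run; how that result is obtained is not shown here, and
    "summarize_result" is an illustrative helper, not part of the module.

        use strict;
        use warnings;

        # $result is assumed to be the hash reference returned by a
        # Benchmark::Lab run; only the documented fields are used.
        sub summarize_result {
            my ($result) = @_;

            printf "median rate: %.2f iterations/sec\n",
                $result->{median_rate};

            for my $p ( sort { $a <=> $b } keys %{ $result->{percentiles} } ) {
                printf "p%-2d: %.6f sec\n", $p, $result->{percentiles}{$p};
            }

            printf "recorded %d iteration times\n",
                scalar @{ $result->{timing} };
        }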
 
CAVEATS
    If the "do_task" executes in less time than the timer granularity, an
    error will be thrown. For benchmarks that do not have before/after
    functions, just repeating the function under test in "do_task" will be
    sufficient.
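
    In this sketch, "fast_op" and the task hash are illustrative; only the
    "do_task" name comes from this documentation, and passing the task to
    Benchmark::Lab is not shown.

        use strict;
        use warnings;

        # Hypothetical function under test, assumed to run faster than
        # the timer granularity on its own.
        sub fast_op {
            my $sum = 0;
            $sum += $_ for 1 .. 10;
            return $sum;
        }

        # Repeat the function enough times that a single "do_task" call
        # takes comfortably longer than the timer granularity.
        my %task = (
            do_task => sub { fast_op() for 1 .. 1000 },
        );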
 
RATIONALE
    I believe most approaches to benchmarking are flawed, primarily because
    they focus on finding a *single* measurement. Single metrics are easy to
    grok and easy to compare ("foo was 13% faster than bar!"), but they
    obscure the full distribution of timing data and (as a result) are often
    unstable.

lib/Benchmark/Lab.pm

=item *
 
C<timing> – array reference with individual iteration times as (floating point) seconds.
 
=back
 
=for Pod::Coverage BUILD
 
=head1 CAVEATS
 
If C<do_task> executes in less time than the timer granularity, an
error will be thrown.  For benchmarks that do not have before/after functions,
just repeating the function under test within C<do_task> is sufficient to
exceed the granularity.
 
=head1 RATIONALE
 
I believe most approaches to benchmarking are flawed, primarily because
they focus on finding a I<single> measurement.  Single metrics are easy to
grok and easy to compare ("foo was 13% faster than bar!"), but they obscure
the full distribution of timing data and (as a result) are often unstable.


