Bencher-Scenario-ExceptionHandling
"develop" : {
"requires" : {
"Pod::Coverage::TrustPod" : "0",
"Test::Perl::Critic" : "0",
"Test::Pod" : "1.41",
"Test::Pod::Coverage" : "1.08"
}
},
"runtime" : {
"requires" : {
"Try::Tiny" : "0",
"perl" : "5.034",
"strict" : "0",
"warnings" : "0"
}
},
"test" : {
"requires" : {
"Bencher::Backend" : "1.063",
"File::Spec" : "0",
"IO::Handle" : "0",
"IPC::Open3" : "0",
"Test::More" : "0"
}
},
"x_benchmarks" : {
"requires" : {
"Try::Tiny" : "0"
},
"x_benchmarks" : {
"Try::Tiny" : "0"
}
}
},
"provides" : {
"Bencher::Scenario::ExceptionHandling" : {
"file" : "lib/Bencher/Scenario/ExceptionHandling.pm",
"version" : "0.001"
}
},
"release_status" : "stable",
license: perl
meta-spec:
  url: http://module-build.sourceforge.net/META-spec-v1.4.html
  version: '1.4'
name: Bencher-Scenario-ExceptionHandling
provides:
  Bencher::Scenario::ExceptionHandling:
    file: lib/Bencher/Scenario/ExceptionHandling.pm
    version: '0.001'
requires:
  Try::Tiny: '0'
  perl: '5.034'
  strict: '0'
  warnings: '0'
resources:
  bugtracker: https://rt.cpan.org/Public/Dist/Display.html?Name=Bencher-Scenario-ExceptionHandling
  homepage: https://metacpan.org/release/Bencher-Scenario-ExceptionHandling
  repository: git://github.com/perlancar/perl-Bencher-Scenario-ExceptionHandling.git
version: '0.001'
x_Dist_Zilla:
  perl:
Makefile.PL
"AUTHOR" => "perlancar <perlancar\@cpan.org>",
"CONFIGURE_REQUIRES" => {
"ExtUtils::MakeMaker" => 0,
"File::ShareDir::Install" => "0.06"
},
"DISTNAME" => "Bencher-Scenario-ExceptionHandling",
"LICENSE" => "perl",
"MIN_PERL_VERSION" => "5.034",
"NAME" => "Bencher::Scenario::ExceptionHandling",
"PREREQ_PM" => {
"Try::Tiny" => 0,
"strict" => 0,
"warnings" => 0
},
"TEST_REQUIRES" => {
"Bencher::Backend" => "1.063",
"File::Spec" => 0,
"IO::Handle" => 0,
"IPC::Open3" => 0,
"Test::More" => 0
},
}
);

my %FallbackPrereqs = (
    "Bencher::Backend" => "1.063",
    "File::Spec" => 0,
    "IO::Handle" => 0,
    "IPC::Open3" => 0,
    "Test::More" => 0,
    "Try::Tiny" => 0,
    "strict" => 0,
    "warnings" => 0
);

# ExtUtils::MakeMaker before 6.63_03 does not understand TEST_REQUIRES or
# BUILD_REQUIRES, so fold everything into PREREQ_PM for older toolchains.
unless ( eval { ExtUtils::MakeMaker->VERSION(6.63_03) } ) {
    delete $WriteMakefileArgs{TEST_REQUIRES};
    delete $WriteMakefileArgs{BUILD_REQUIRES};
    $WriteMakefileArgs{PREREQ_PM} = \%FallbackPrereqs;
}
DESCRIPTION
Keywords: try-catch, eval, die
TODO: benchmark other try-catch modules.
BENCHMARKED MODULES
Version numbers shown below are the versions used when running the
sample benchmark.
Try::Tiny 0.31
BENCHMARK PARTICIPANTS
* builtin-try (perl_code)
Code template:
use feature 'try'; use experimental 'try'; try { <code_try:raw> } catch($e) { <code_catch:raw> }
Requires perl 5.34+.
* naive-eval (perl_code)
Code template:
eval { <code_try:raw> }; if ($@) { <code_catch:raw> }
* eval-localize-die-signal-and-eval-error (perl_code)
Code template:
{ local $@; local $SIG{__DIE__}; eval { <code_try:raw> }; if ($@) { <code_catch:raw> } }
* Try::Tiny (perl_code)
Code template:
use Try::Tiny; try { <code_try:raw> } catch { <code_catch:raw> }
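The participants above correspond to ordinary Perl idioms. As a minimal, self-contained illustration (not part of the distribution's code), the sketch below contrasts the naive eval with the localized variant; the extra `local $@; local $SIG{__DIE__}` guards against a `__DIE__` handler or an intervening destructor clobbering the global error variable:

```perl
use strict;
use warnings;

# naive-eval: check the global $@ right after the eval. Anything that runs
# in between (a __DIE__ handler, a destructor that itself calls eval) can
# reset $@ before we look at it.
sub naive_catch {
    my ($try) = @_;
    eval { $try->() };
    return $@ ? "caught: $@" : "ok";
}

# eval-localize-die-signal-and-eval-error: protect the caller's $@ and
# disable any global __DIE__ handler while the guarded code runs.
sub robust_catch {
    my ($try) = @_;
    my $err;
    {
        local $@;
        local $SIG{__DIE__};
        eval { $try->() };
        $err = $@;
    }
    return $err ? "caught: $err" : "ok";
}

print naive_catch(sub { die "boom\n" });    # prints "caught: boom"
print robust_catch(sub { die "boom\n" });   # prints "caught: boom"
print robust_catch(sub { 1 }), "\n";        # prints "ok"
```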
BENCHMARK DATASETS
* empty try, empty catch
* die in try, empty catch
BENCHMARK SAMPLE RESULTS
Sample benchmark #1
Run on: perl: *v5.38.2*, CPU: *Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
(2 cores)*, OS: *GNU/Linux Ubuntu version 20.04*, OS kernel: *Linux
% bencher -m ExceptionHandling
Result formatted as table (split, part 1 of 2):
#table1#
{dataset=>"die in try, empty catch"}
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| participant                             | rate (/s) | time (μs) | pct_faster_vs_slowest | pct_slower_vs_fastest | errors  | samples |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| Try::Tiny                               |    310000 |      3.23 |                 0.00% |               602.22% | 2.1e-09 |       8 |
| eval-localize-die-signal-and-eval-error |   1000000 |         1 |               207.56% |               128.32% | 1.2e-08 |       9 |
| builtin-try                             |   1700000 |       0.6 |               438.20% |                30.48% | 1.1e-09 |       7 |
| naive-eval                              |   2170000 |      0.46 |               602.22% |                 0.00% | 3.6e-10 |       8 |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
The above result formatted in Benchmark.pm style:
         Rate   T:T     e     b     n
T:T  310000/s    --  -69%  -81%  -85%
e   1000000/s  223%    --  -40%  -54%
b   1700000/s  438%   66%    --  -23%
n   2170000/s  602%  117%   30%    --
Legends:
T:T: participant=Try::Tiny
b: participant=builtin-try
e: participant=eval-localize-die-signal-and-eval-error
n: participant=naive-eval
The above result presented as chart:
#IMAGE: share/images/bencher-result-1.png|/tmp/VHOUgvh_oa/bencher-result-1.png
Result formatted as table (split, part 2 of 2):
#table2#
{dataset=>"empty try, empty catch"}
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| participant                             | rate (/s) | time (μs) | pct_faster_vs_slowest | pct_slower_vs_fastest | errors  | samples |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| Try::Tiny                               |    397000 |      2.52 |                 0.00% |              6147.90% | 5.8e-10 |       8 |
| eval-localize-die-signal-and-eval-error |   1700000 |      0.59 |               328.63% |              1357.65% | 6.5e-10 |       7 |
| builtin-try                             |   8400000 |      0.12 |              2022.06% |               194.43% | 1.9e-10 |       7 |
| naive-eval                              |  25000000 |      0.04 |              6147.90% |                 0.00% | 8e-11   |       7 |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
The above result formatted in Benchmark.pm style:
          Rate   T:T     e     b     n
T:T   397000/s    --  -76%  -95%  -98%
e    1700000/s  327%    --  -79%  -93%
b    8400000/s 2000%  391%    --  -66%
n   25000000/s 6200% 1374%  200%    --
Legends:
T:T: participant=Try::Tiny
b: participant=builtin-try
e: participant=eval-localize-die-signal-and-eval-error
n: participant=naive-eval
The above result presented as chart:
#IMAGE: share/images/bencher-result-2.png|/tmp/VHOUgvh_oa/bencher-result-2.png
Sample benchmark #2
Benchmark command (benchmarking module startup overhead):
% bencher -m ExceptionHandling --module-startup
Result formatted as table:
#table3#
+---------------------+-----------+-------------------+-----------------------+-----------------------+---------+---------+
| participant         | time (ms) | mod_overhead_time | pct_faster_vs_slowest | pct_slower_vs_fastest | errors  | samples |
+---------------------+-----------+-------------------+-----------------------+-----------------------+---------+---------+
| Try::Tiny           |        14 |               8.4 |                 0.00% |               145.39% | 1.8e-05 |       7 |
| perl -e1 (baseline) |       5.6 |                 0 |               145.39% |                 0.00% | 1.6e-05 |       7 |
+---------------------+-----------+-------------------+-----------------------+-----------------------+---------+---------+
The above result formatted in Benchmark.pm style:
                        Rate  T:T  perl -e1 (baseline)
T:T                   71.4/s   --                 -60%
perl -e1 (baseline)  178.6/s 150%                   --
Legends:
T:T: mod_overhead_time=8.4 participant=Try::Tiny
perl -e1 (baseline): mod_overhead_time=0 participant=perl -e1 (baseline)
The above result presented as chart:
#IMAGE: share/images/bencher-result-3.png|/tmp/VHOUgvh_oa/bencher-result-3.png
To display as an interactive HTML table on a browser, you can add option
"--format html+datatables".
lib/Bencher/Scenario/ExceptionHandling.pm
        {
            name => "eval-localize-die-signal-and-eval-error",
            description => <<'MARKDOWN',
MARKDOWN
            code_template => q|{ local $@; local $SIG{__DIE__}; eval { <code_try:raw> }; if ($@) { <code_catch:raw> } }|,
        },
        {
            name => "Try::Tiny",
            module => 'Try::Tiny',
            description => <<'MARKDOWN',
MARKDOWN
            code_template => q|use Try::Tiny; try { <code_try:raw> } catch { <code_catch:raw> }|,
        },
    ],
    precision => 7,
    datasets => [
        {name=>'empty try, empty catch',  args=>{code_try=>'',    code_catch=>''}},
        {name=>'die in try, empty catch', args=>{code_try=>'die', code_catch=>''}},
    ],
};

1;
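The `code_template` strings above contain `<name:raw>` placeholders that Bencher fills from each dataset's `args` before timing. The sketch below is a simplified, hypothetical stand-in for that substitution (an illustration only, not Bencher::Backend's actual implementation):

```perl
use strict;
use warnings;

# Hypothetical helper: splice dataset args into a code template verbatim,
# mimicking in spirit how <code_try:raw> / <code_catch:raw> are filled.
sub fill_template {
    my ($template, $args) = @_;
    (my $code = $template) =~ s/<(\w+):raw>/$args->{$1}/g;
    return $code;
}

my $template = q|eval { <code_try:raw> }; if ($@) { <code_catch:raw> }|;
my $code = fill_template($template, { code_try => 'die', code_catch => '' });
# $code is now: eval { die }; if ($@) {  }

my $sub = eval "sub { $code }";    # compile the generated snippet once
$sub->();                          # run it; the inner die is caught
print "done\n";
```

Compiling the filled template into a sub once, then calling it repeatedly, is what lets the benchmark measure the exception-handling construct itself rather than string-eval overhead.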
=head1 DESCRIPTION
Keywords: try-catch, eval, die
TODO: benchmark other try-catch modules.
=head1 BENCHMARKED MODULES
Version numbers shown below are the versions used when running the sample benchmark.
L<Try::Tiny> 0.31
=head1 BENCHMARK PARTICIPANTS
=over
=item * builtin-try (perl_code)
Code template:
use feature 'try'; use experimental 'try'; try { <code_try:raw> } catch($e) { <code_catch:raw> }
Requires perl 5.34+.
=item * naive-eval (perl_code)
Code template:
 eval { <code_try:raw> }; if ($@) { <code_catch:raw> }
=item * eval-localize-die-signal-and-eval-error (perl_code)
Code template:
{ local $@; local $SIG{__DIE__}; eval { <code_try:raw> }; if ($@) { <code_catch:raw> } }
=item * Try::Tiny (perl_code)
Code template:
use Try::Tiny; try { <code_try:raw> } catch { <code_catch:raw> }
=back
=head1 BENCHMARK DATASETS
% bencher -m ExceptionHandling
Result formatted as table (split, part 1 of 2):
#table1#
{dataset=>"die in try, empty catch"}
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| participant                             | rate (/s) | time (μs) | pct_faster_vs_slowest | pct_slower_vs_fastest | errors  | samples |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| Try::Tiny                               |    310000 |      3.23 |                 0.00% |               602.22% | 2.1e-09 |       8 |
| eval-localize-die-signal-and-eval-error |   1000000 |         1 |               207.56% |               128.32% | 1.2e-08 |       9 |
| builtin-try                             |   1700000 |       0.6 |               438.20% |                30.48% | 1.1e-09 |       7 |
| naive-eval                              |   2170000 |      0.46 |               602.22% |                 0.00% | 3.6e-10 |       8 |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
The above result formatted in L<Benchmark.pm|Benchmark> style:
         Rate   T:T     e     b     n
T:T  310000/s    --  -69%  -81%  -85%
e   1000000/s  223%    --  -40%  -54%
b   1700000/s  438%   66%    --  -23%
n   2170000/s  602%  117%   30%    --
Legends:
T:T: participant=Try::Tiny
b: participant=builtin-try
e: participant=eval-localize-die-signal-and-eval-error
n: participant=naive-eval
The above result presented as chart:
#IMAGE: share/images/bencher-result-1.png|/tmp/VHOUgvh_oa/bencher-result-1.png
Result formatted as table (split, part 2 of 2):
#table2#
{dataset=>"empty try, empty catch"}
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| participant                             | rate (/s) | time (μs) | pct_faster_vs_slowest | pct_slower_vs_fastest | errors  | samples |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
| Try::Tiny                               |    397000 |      2.52 |                 0.00% |              6147.90% | 5.8e-10 |       8 |
| eval-localize-die-signal-and-eval-error |   1700000 |      0.59 |               328.63% |              1357.65% | 6.5e-10 |       7 |
| builtin-try                             |   8400000 |      0.12 |              2022.06% |               194.43% | 1.9e-10 |       7 |
| naive-eval                              |  25000000 |      0.04 |              6147.90% |                 0.00% | 8e-11   |       7 |
+-----------------------------------------+-----------+-----------+-----------------------+-----------------------+---------+---------+
The above result formatted in L<Benchmark.pm|Benchmark> style:
          Rate   T:T     e     b     n
T:T   397000/s    --  -76%  -95%  -98%
e    1700000/s  327%    --  -79%  -93%
b    8400000/s 2000%  391%    --  -66%
n   25000000/s 6200% 1374%  200%    --
Legends:
T:T: participant=Try::Tiny
b: participant=builtin-try
e: participant=eval-localize-die-signal-and-eval-error
n: participant=naive-eval
The above result presented as chart:
#IMAGE: share/images/bencher-result-2.png|/tmp/VHOUgvh_oa/bencher-result-2.png
=head2 Sample benchmark #2
Benchmark command (benchmarking module startup overhead):
% bencher -m ExceptionHandling --module-startup
Result formatted as table:
#table3#
+---------------------+-----------+-------------------+-----------------------+-----------------------+---------+---------+
| participant         | time (ms) | mod_overhead_time | pct_faster_vs_slowest | pct_slower_vs_fastest | errors  | samples |
+---------------------+-----------+-------------------+-----------------------+-----------------------+---------+---------+
| Try::Tiny           |        14 |               8.4 |                 0.00% |               145.39% | 1.8e-05 |       7 |
| perl -e1 (baseline) |       5.6 |                 0 |               145.39% |                 0.00% | 1.6e-05 |       7 |
+---------------------+-----------+-------------------+-----------------------+-----------------------+---------+---------+
The above result formatted in L<Benchmark.pm|Benchmark> style:
                        Rate  T:T  perl -e1 (baseline)
T:T                   71.4/s   --                 -60%
perl -e1 (baseline)  178.6/s 150%                   --
Legends:
T:T: mod_overhead_time=8.4 participant=Try::Tiny
perl -e1 (baseline): mod_overhead_time=0 participant=perl -e1 (baseline)
The above result presented as chart:
#IMAGE: share/images/bencher-result-3.png|/tmp/VHOUgvh_oa/bencher-result-3.png
To display as an interactive HTML table on a browser, you can add option C<--format html+datatables>.
=head1 HOMEPAGE
lib/Bencher/ScenarioR/ExceptionHandling.pm
## no critic
package Bencher::ScenarioR::ExceptionHandling;
our $VERSION = 0.001; # VERSION
our $results = [[200,"OK",[{_name=>"participant=Try::Tiny",_succinct_name=>"T:T",errors=>2.1e-09,participant=>"Try::Tiny",pct_faster_vs_slowest=>0,pct_slower_vs_fastest=>6.02173913043478,rate=>310000,samples=>8,time=>3.23},{_name=>"participant=eval-l...
1;
# ABSTRACT: Benchmark various ways to do exception handling in Perl
=head1 DESCRIPTION
This module is automatically generated by Pod::Weaver::Plugin::Bencher::Scenario during distribution build.
A Bencher::ScenarioR::* module contains the raw result of a sample benchmark and may be useful for later offline analysis.
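The `$results` structure is an array of `[status, message, rows]` envelopes, one per result table. The sketch below shows how such rows can be inspected; to stay self-contained it uses a hand-transcribed excerpt of table #1 instead of loading the generated module:

```perl
use strict;
use warnings;

# Excerpt of table #1, transcribed from the sample results above.
my $results = [[200, "OK", [
    { participant => "Try::Tiny",   rate =>    310_000, time => 3.23 },
    { participant => "builtin-try", rate =>  1_700_000, time => 0.6  },
    { participant => "naive-eval",  rate =>  2_170_000, time => 0.46 },
]]];

# Unpack the enveloped result and check the status code.
my ($status, $msg, $rows) = @{ $results->[0] };
die "benchmark failed: $msg" unless $status == 200;

# Print participants from fastest to slowest.
for my $row (sort { $b->{rate} <=> $a->{rate} } @$rows) {
    printf "%-12s %9d/s\n", $row->{participant}, $row->{rate};
}
```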