DBIx-TxnPool
- The matched error string is now "try restarting transaction", because sometimes other lock errors occur besides deadlocks
0.05 2013-10-24
- Removed MYMETA.* files from MANIFEST and added to exclude lists (.gitignore & MANIFEST.SKIP)
(Thanks to dolmen@cpan.org)
0.04 2013-08-31
- Added 'txn_commit' shortcut as 'commit' callback
- Some code improvements
- Added 'dbh' accessor
- All callbacks now get a $pool object and may be $item as second argument
0.03 2013-08-28
- Some doc fixes
0.02 2013-08-27
- Some fixes
0.01 2013-08-27
- Final release for CPAN
to one transaction and has the deadlock and signal solution.
DETAILS
If you issue standalone insert/delete/update statements against the InnoDB
engine, the MySQL server performs an fsync (flushing data to disk) after
each statement. This can be very slow when there are many updates. A better
approach is to wrap several insert/delete/update statements in a single
transaction. But this raises a new problem: deadlocks. If a deadlock occurs,
DBI throws an exception, and the right reaction is to repeat the SQL
statements. This module helps you do that. It keeps a pool of data (a FIFO
buffer) inside and calls your callback for each pushed item. As the pool is
fed your data, it wraps the statements in one transaction, up to the defined
maximum size or until the finish method is called. If a deadlock occurs, the
pool replays your callback for every item. You can define a second callback
which will be executed for every item after the transaction is committed,
for example for non-SQL work such as deleting files or other cleanup.
CONSTRUCTOR
Please see the "SYNOPSIS" section
Shortcuts:
The "txn_item" shortcut should come first. Other shortcuts can follow in any order.
Parameters should come last.
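A minimal usage sketch based on the shortcuts exercised in the xt/ tests below (txn_item, txn_commit, the dbh parameter, and the add/finish methods). The DSN, credentials, table name, and the size parameter value are placeholder assumptions, not values from this distribution:

```perl
use strict;
use warnings;
use DBI;
use DBIx::TxnPool;

# Placeholder connection details - adjust DSN, user, and password.
my $dbh = DBI->connect( 'dbi:mysql:database=test', 'user', 'password',
    { RaiseError => 1, PrintError => 0 } );

my $pool = txn_item {
    # Called for each pushed item; the item is aliased to $_.
    # On deadlock the whole transaction's items are replayed.
    $dbh->do( 'UPDATE mytable SET b=? WHERE a=?', undef, $_->{b}, $_->{a} );
}
txn_commit {
    # Called after each successful commit - a good place for
    # non-SQL work such as deleting files or other cleanup.
} dbh => $dbh, size => 100;    # 'size' assumed to cap items per transaction

# Feed the pool; it commits automatically when the size limit is reached.
$pool->add( { a => $_, b => $_ * 2 } ) for 1 .. 1000;
$pool->finish;    # flush the remaining items in one last transaction
```

The txn_item callback contains only the SQL statements to be retried; side effects that must not be repeated on a deadlock replay belong in txn_commit.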
lib/DBIx/TxnPool.pm
transaction and has the deadlock and signal solution.
=head1 DETAILS
If you issue standalone insert/delete/update statements against the InnoDB
engine, the MySQL server performs an fsync (flushing data to disk) after each
statement. This can be very slow when there are many updates. A better approach
is to wrap several insert/delete/update statements in a single transaction. But
this raises a new problem: deadlocks. If a deadlock occurs, DBI throws an
exception, and the right reaction is to repeat the SQL statements. This module
helps you do that. It keeps a pool of data (a FIFO buffer) inside and calls
your callback for each pushed item. As the pool is fed your data, it wraps the
statements in one transaction, up to the defined maximum size or until the
finish method is called. If a deadlock occurs, the pool replays your callback
for every item. You can define a second callback which will be executed for
every item after the transaction is committed, for example for non-SQL work
such as deleting files or other cleanup.
=head1 CONSTRUCTOR
Please see the L</SYNOPSIS> section
=head2 Shortcuts:
xt/03_deadlock_solution.t
$dbh->do( "INSERT INTO $table SET a=$_" ) for ( 1 .. 10 );
ok( $dbh->selectrow_array( "SELECT COUNT(*) FROM $table" ) == 10 );
b:
ok( ! defined $dbh );
$dbh = dbi_connect;
ok( $dbh->selectrow_array( "SELECT COUNT(*) FROM $table" ) == 10 );
ab:
my $commit_callbacks = 0;
$pool = txn_item {
$dbh->do( "UPDATE $table SET b=? WHERE a=?", undef, $_->{b}, $_->{a} );
}
txn_commit {
$commit_callbacks++;
} dbh => $dbh;
a:
@data = ( { b => 1, a => 1 }, { b => 1, a => 2 } );
b:
@data = ( { b => 2, a => 2 }, { b => 2, a => 1 } );
ab:
for ( my $i = 0; $i < AMOUNT_TESTS; $i++ ) {
foreach my $item ( @data ) {
$pool->add( $item );
}
$pool->finish;
}
ab:
diag "The amount of deadlocks is " . $pool->amount_deadlocks;
ok( $commit_callbacks == AMOUNT_TESTS );
done_testing;
xt/04_deadlock_sort_solution.t
$dbh->do( "INSERT INTO $table SET a=$_" ) for ( 1 .. 10 );
ok( $dbh->selectrow_array( "SELECT COUNT(*) FROM $table" ) == 10 );
b:
ok( ! defined $dbh );
$dbh = dbi_connect;
ok( $dbh->selectrow_array( "SELECT COUNT(*) FROM $table" ) == 10 );
ab:
my $commit_callbacks = 0;
my $sorted_calls = 0;
$pool = txn_item {
$dbh->do( "UPDATE $table SET b=? WHERE a=?", undef, $_->{b}, $_->{a} );
}
txn_sort {
# sorting by the updated key prevents deadlocks between transactions running the same SQL statements
$sorted_calls++;
# to check that the sort works: $a & $b should be properly aliased
ok ref $a eq 'HASH'
&& ref $b eq 'HASH'
&& exists $a->{a}
&& exists $b->{a}
&& $a->{a} > 0
&& $b->{a} > 0;
$a->{a} <=> $b->{a};
}
txn_commit {
$commit_callbacks++;
} dbh => $dbh;
a:
@data = ( { b => 1, a => 1 }, { b => 1, a => 2 } );
b:
@data = ( { b => 2, a => 2 }, { b => 2, a => 1 } );
ab:
for ( my $i = 0; $i < AMOUNT_TESTS; $i++ ) {
foreach my $item ( @data ) {
$pool->add( $item );
}
$pool->finish;
}
ab:
ok $pool->amount_deadlocks == 0;
ok $sorted_calls > 0;
ok( $commit_callbacks == AMOUNT_TESTS );
done_testing;
xt/05_block_signal.t
$dbh->do( "INSERT INTO $table SET a=$_" ) for ( 1 .. 10 );
ok( $dbh->selectrow_array( "SELECT COUNT(*) FROM $table" ) == 10 );
b:
ok( ! defined $dbh );
$dbh = dbi_connect;
ok( $dbh->selectrow_array( "SELECT COUNT(*) FROM $table" ) == 10 );
ab:
my $commit_callbacks = 0;
$amount_TERM_signals = 0;
$pool = txn_item {
$dbh->do( "UPDATE $table SET b=? WHERE a=?", undef, $_->{b}, $_->{a} );
kill 15, defined $pid_a ? $pid_a : $pid_b;
}
txn_commit {
$commit_callbacks++;
} dbh => $dbh;
$SIG{TERM} = sub { $amount_TERM_signals++ };
a:
@data = ( { b => 1, a => 1 }, { b => 1, a => 2 } );
b:
@data = ( { b => 2, a => 2 }, { b => 2, a => 1 } );
for ( my $i = 0; $i < AMOUNT_TESTS; $i++ ) {
foreach my $item ( @data ) {
$pool->add( $item );
}
$pool->finish;
}
};
ok ! $Signal::Mask{TERM};
ab:
ok( $commit_callbacks == AMOUNT_TESTS );
#diag "pid $$, caught number TERM signals: $amount_TERM_signals";
ok $amount_TERM_signals > 0;
ab:
{
local $Signal::Mask{TERM} = 1;
$amount_TERM_signals = 0;
ok $Signal::Mask{TERM};