0.805 2017-02-13
* New parameter to Server constructor: name. This will be used
to set the process name via $0 so you can see which process is
which in the ps(1) output.
0.804 2017-02-12
* Bugfix for hung worker timeout not destroying object quickly
0.803 2017-02-12
* Bugfix for timeout errors thrown by checkout with Log::Defer
objects installed.
0.802 2015-10-23
* Stop using AnyEvent::Strict in the tests. It creates an
AE timer at compile time, therefore initializing the event
loop before we've had a chance to fork our server.
* Before forking in fork_task_server, assert that the AE
loop hasn't been initialized.
0.801 2014-02-15
* Bugfix: Fix memory leak of client objects.
* Change: Make hung worker timeout actually terminate the
worker to free up resources immediately.
0.800 2014-02-15
* Backwards-incompatible change: When multiple requests are
queued up on a checkout, if one of the requests throws an
error all the pending requests are removed from the queue.
This makes a non-nested sequence of method-calls on a
checkout less dangerous and more like the synchronous code
it is mimicking.
* Removed vestigial parts of an undocumented feature that was
broken several releases ago: In non-void context, methods
on a checkout used to return a guard that when destroyed
would cancel the remote method call. Instead, now you should
use the throw_fatal_error method on the checkout. The checkout
will then throw errors every time it is accessed and should
be discarded.
* Documented max_checkouts feature for coping with memory leaks
* Major documentation updates
0.750 2013-04-08
* Backwards-incompatible change: The behaviour enabled by the
undocumented client option added in the previous release,
refork_after_error, is now the default behaviour. Instead
there is a new option called dont_refork_after_error to get
back the original behaviour.
The checkout object is an object that proxies its method calls to a
worker process or a function that does the same. The arguments to this
method/function are the arguments you wish to send to the worker process
followed by a callback to run when the operation completes. The callback
will be passed two arguments: the original checkout object and the value
returned by the worker process. The checkout object is passed into the
callback as a convenience just in case you no longer have the original
checkout available lexically.
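For example, assuming $client is an already-constructed AnyEvent::Task::Client and "hash" is a method provided by the server's interface (as in the examples below), a call might look like this sketch:
    my $checkout = $client->checkout;

    ## Arguments destined for the worker come first, the callback last:
    $checkout->hash('secret', sub {
        my ($checkout, $crypted) = @_;  ## the checkout, then the worker's return value
        say "Hashed password is $crypted";
    });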
In the event of an exception thrown by the worker process, a timeout, or
some other unexpected condition, an error is raised in the dynamic
context of the callback (see the "ERROR HANDLING" section).
DESIGN
Both client and server are of course built with AnyEvent. However,
workers can't use AnyEvent (yet). I've never found a need to do event
processing in the worker since if the library you wish to use is already
AnyEvent-compatible you can simply use the library in the client
process. If the client process is too over-loaded, it may make sense to
run multiple client processes.
Each client maintains a "pool" of connections to worker processes.
Every time a checkout is requested, the request is placed into a
first-come, first-serve queue. Once a worker process becomes available,
it is associated with that checkout until that checkout is garbage collected,
which in perl means as soon as it is no longer needed. Each checkout
also maintains a queue of requested method-calls so that as soon as a
worker process is allocated to a checkout, any queued method calls are
filled in order.
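As a sketch (the "begin_work" and "do_query" methods are hypothetical interface methods), several calls can be queued on a single checkout before any worker has been assigned to it:
    my $checkout = $client->checkout;

    ## Both calls are queued on the checkout immediately and are sent
    ## to the worker, in order, once a worker process is allocated:
    $checkout->begin_work(sub { });
    $checkout->do_query('SELECT 1', sub {
        my ($checkout, $result) = @_;
        say "query returned: $result";
    });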
"timeout" can be passed as a keyword argument to "checkout". Once a
request is queued up on that checkout, a timer of "timeout" seconds
(default is 30, undef means infinity) is started. If the request
completes during this timeframe, the timer is cancelled. If the timer
expires, the worker connection is terminated and an exception is thrown
in the dynamic context of the callback (see the "ERROR HANDLING"
section).
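For example (the values here are arbitrary):
    ## Throw an error in the callback's dynamic context if the worker
    ## hasn't answered within 5 seconds:
    my $checkout = $client->checkout(timeout => 5);

    ## Or disable the timeout entirely for a long-running operation:
    my $patient = $client->checkout(timeout => undef);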
Note that since timeouts are associated with a checkout, checkouts can
be created before the server is started. As long as the server is
running within "timeout" seconds, no error will be thrown and no
requests will be lost. The client will continually try to acquire worker
processes until a server is available, and once one is available it will
attempt to allocate all queued checkouts.
Because of checkout queuing, the maximum number of worker processes a
client will attempt to obtain can be limited with the "max_workers"
argument when creating a client object. If there are more live checkouts
than "max_workers", the remaining checkouts will have to wait until one
of the other workers becomes available. Because of timeouts, some
checkouts may never be serviced if the system can't handle the load (the
],
'timers' => {
'computing some operation' => [
'0.024089061050415',
'1.02470206105041'
]
}
};
ERROR HANDLING
In a synchronous program, if you expected some operation to throw an
exception you might wrap it in "eval" like this:
my $crypted;
eval {
$crypted = hash('secret');
};
if ($@) {
say "hash failed: $@";
} else {
say "hashed password is $crypted";
}
But in an asynchronous program, typically "hash" would initiate some
kind of asynchronous operation and then return immediately, allowing the
program to go about other tasks while waiting for the result. Since the
error might come back at any time in the future, the program needs a way
to map the exception that is thrown back to the original context.
AnyEvent::Task accomplishes this mapping with Callback::Frame.
Callback::Frame lets you preserve error handlers (and "local" variables)
across asynchronous callbacks. Callback::Frame is not tied to
AnyEvent::Task, AnyEvent or any other async framework and can be used
with almost all callback-based libraries.
However, when using AnyEvent::Task, libraries that you use in the client
must be AnyEvent compatible. This restriction obviously does not apply
to your server code, that being the main purpose of this module:
accessing blocking resources from an asynchronous program. In your
server code, when there is an error condition you should simply "die" or
"croak" as in a synchronous program.
As an example usage of Callback::Frame, here is how we would handle
errors thrown from a worker process running the "hash" method in an
asynchronous client program:
use Callback::Frame;
frame(code => sub {
$client->checkout->hash('secret', sub {
my ($checkout, $crypted) = @_;
say "Hashed password is $crypted";
});
}, catch => sub {
my $back_trace = shift;
say "Error is: $@";
say "Full back-trace: $back_trace";
})->(); ## <-- frame is created and then immediately executed
Of course if "hash" is something like a bcrypt hash function it is
unlikely to raise an exception so maybe that's a bad example. On the
other hand, maybe it's a really good example: In addition to errors that
occur while running your callbacks, AnyEvent::Task uses Callback::Frame
to throw errors if the worker process times out, so if the bcrypt "cost"
is really cranked up it might hit the default 30 second time limit.
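If you know an operation will legitimately take a long time, you can raise the limit on just that checkout (a sketch; 120 seconds is arbitrary):
    $client->checkout(timeout => 120)->hash('secret', sub {
        my ($checkout, $crypted) = @_;
        say "Hashed password is $crypted";
    });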
Rationale for Callback::Frame
Why not just call the callback but set $@ and indicate an error has
occurred? This is the approach taken with AnyEvent::DBI for example. I
believe the Callback::Frame interface is superior to this method. In a
synchronous program, exceptions are out-of-band messages and code
doesn't need to locally handle them. It can let them "bubble up" the
stack, perhaps to a top-level error handler. Invoking the callback when
an error occurs forces exceptions to be handled in-band.
It's important that all callbacks be created with "fub" (or "frame")
even if you don't expect them to fail so that the dynamic context is
preserved for nested callbacks that may. An exception is the callbacks
provided to AnyEvent::Task checkouts: These are automatically wrapped in
frames for you (although explicitly passing in fubs is fine too).
The Callback::Frame documentation explains how this works in much more
detail.
Reforking of workers after errors
If a worker throws an error, the client receives the error but the
worker process stays running. As long as the client has a reference to
the checkout (and as long as the exception wasn't "fatal" -- see below),
it can still be used to communicate with that worker so you can access
error states, rollback transactions, or do any sort of required
clean-up.
However, once the checkout object is destroyed, by default the worker
will be shut down instead of returning to the client's worker pool as in
the normal case where no errors were thrown. This is a "safe-by-default"
behaviour that may help in the event that an exception thrown by a
worker leaves the worker process in a broken/inconsistent state for some
reason (for example a DBI connection died). This can be overridden by
setting the "dont_refork_after_error" option to 1 in the client
constructor. This will only matter if errors are being thrown frequently
and your "setup" routines take a long time (aside from the setup
routine, creating new workers is quite fast since the server has already
compiled all the application code and just has to fork).
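As a sketch, here is a client configured to keep workers alive after non-fatal errors (the socket path is illustrative):
    my $client = AnyEvent::Task::Client->new(
        connect                 => ['unix/', '/tmp/myapp.socket'],
        dont_refork_after_error => 1,  ## errored workers go back to the pool when released
    );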
There are cases where workers will never be returned to the worker pool:
workers that have thrown fatal errors such as loss of worker connection
or hung worker timeout errors. These errors are stored in the checkout
and for as long as the checkout exists any methods on the checkout will
immediately return the stored fatal error. Your client process can
invoke this behaviour manually by calling the "throw_fatal_error" method
on a checkout object to cancel an operation and force-terminate a
worker.
Another reason that a worker might not be returned to the worker pool is
if it has been checked out "max_checkouts" times. If "max_checkouts" is
specified as an argument to the Client constructor, then workers will be
destroyed and reforked after being checked out this number of times.
When not specified, workers are never re-forked for this reason. This
parameter is useful for coping with libraries that leak memory or
otherwise become slower/more resource-hungry over time.
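For example, to recycle every worker after it has serviced 100 checkouts (the number is arbitrary):
    my $client = AnyEvent::Task::Client->new(
        connect       => ['unix/', '/tmp/myapp.socket'],
        max_checkouts => 100,  ## destroy and re-fork a worker after its 100th checkout
    );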
COMPARISON WITH HTTP
The current AnyEvent::Task server obeys a very specific implementation
policy: It is like a CGI server in that each process it forks is
guaranteed to be handling only one connection at once so it can perform
blocking operations without worrying about holding up other connections.
But since a single process can handle many requests in a row without
exiting, they are more like persistent FastCGI processes. The difference
however is that while a client holds a checkout it is guaranteed an
exclusive lock on that process (useful for supporting DB transactions
for example). With a FastCGI server it is assumed that requests are
stateless so you can't necessarily be sure you'll get the same process
for two consecutive requests. In fact, if an error is thrown in the
FastCGI handler you may never get the same process back again,
preventing you from being able to recover from the error, retry, or at
least collect process state for logging reasons.
The fundamental difference between the AnyEvent::Task protocol and HTTP
is that in AnyEvent::Task the client is the dominant protocol
orchestrator whereas in HTTP it is the server.
In AnyEvent::Task, the client manages the worker pool and the client
decides if/when worker processes should terminate. In the normal case, a
client simply returns the worker to its worker pool. A worker is expected
to keep accepting commands for as long as the client holds it checked
out, and to exit only once the client dismisses it.
The client decides the timeout for each checkout and different clients
can have different timeouts while connecting to the same server.
Client processes can be started and checkouts can be obtained before the
server is even started. The client will continue trying to connect to
the server to obtain worker processes until either the server starts or
the checkout's timeout period lapses. As well as freeing you from having
to start your services in the "right" order, this also means servers can
be restarted without throwing any errors (aka "zero-downtime restarts").
The client even decides how many minimum workers should be in the pool
upon start-up and how many maximum workers to acquire before checkout
creation requests are queued. The server is really just a dumb
fork-on-demand server and most of the sophistication is in the
asynchronous client.
SEE ALSO
The AnyEvent::Task github repo
<https://github.com/hoytech/AnyEvent-Task>
lib/AnyEvent/Task.pm view on Meta::CPAN
A server is started with C<< AnyEvent::Task::Server->new >>. This constructor should be passed in at least the C<listen> and C<interface> arguments. Keep the returned server object around for as long as you want the server to be running. C<listen> is...
A client is started with C<< AnyEvent::Task::Client->new >>. You only need to pass C<connect> to this constructor which is an array ref containing the host and service options to be passed to L<AnyEvent::Socket>'s C<tcp_connect>. Keep the returned cl...
After the server and client are initialised, each process must enter AnyEvent's "main loop" in some way, possibly just C<< AE::cv->recv >>. The C<run> method on the server object is a convenient short-cut for this.
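A minimal end-to-end sketch, assuming a unix-domain socket path and an C<add> interface method that are purely illustrative:

    ## Server process:
    use feature 'say';
    use AnyEvent::Task::Server;

    my $server = AnyEvent::Task::Server->new(
        listen    => ['unix/', '/tmp/myapp.socket'],
        interface => {
            add => sub { my ($x, $y) = @_; return $x + $y },
        },
    );

    $server->run;  ## shortcut for entering the AnyEvent main loop

    ## Client process (normally a separate program):
    use feature 'say';
    use AnyEvent;
    use AnyEvent::Task::Client;

    my $client = AnyEvent::Task::Client->new(
        connect => ['unix/', '/tmp/myapp.socket'],
    );

    my $cv = AE::cv;

    $client->checkout->add(1, 2, sub {
        my ($checkout, $sum) = @_;
        say "1 + 2 = $sum";
        $cv->send;
    });

    $cv->recv;  ## enter the event loop until the reply arrives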
To acquire a worker process you call the C<checkout> method on the client object. The C<checkout> method doesn't need any arguments, but several optional ones such as C<timeout> are described below. As long as the checkout object is around, this chec...
The checkout object is an object that proxies its method calls to a worker process or a function that does the same. The arguments to this method/function are the arguments you wish to send to the worker process followed by a callback to run when the...
In the event of an exception thrown by the worker process, a timeout, or some other unexpected condition, an error is raised in the dynamic context of the callback (see the L<ERROR HANDLING> section).
=head1 DESIGN
Both client and server are of course built with L<AnyEvent>. However, workers can't use AnyEvent (yet). I've never found a need to do event processing in the worker since if the library you wish to use is already AnyEvent-compatible you can simply us...
Each client maintains a "pool" of connections to worker processes. Every time a checkout is requested, the request is placed into a first-come, first-serve queue. Once a worker process becomes available, it is associated with that checkout until that...
C<timeout> can be passed as a keyword argument to C<checkout>. Once a request is queued up on that checkout, a timer of C<timeout> seconds (default is 30, undef means infinity) is started. If the request completes during this timeframe, the timer is c...
Note that since timeouts are associated with a checkout, checkouts can be created before the server is started. As long as the server is running within C<timeout> seconds, no error will be thrown and no requests will be lost. The client will continua...
Because of checkout queuing, the maximum number of worker processes a client will attempt to obtain can be limited with the C<max_workers> argument when creating a client object. If there are more live checkouts than C<max_workers>, the remaining che...
The C<min_workers> argument determines how many "hot-standby" workers should be pre-forked when creating the client. The default is 2 though note that this may change to 0 in the future.
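A sketch combining the pool-sizing options above (the values are arbitrary):

    my $client = AnyEvent::Task::Client->new(
        connect     => ['unix/', '/tmp/myapp.socket'],
        min_workers => 4,   ## pre-fork four hot-standby workers
        max_workers => 16,  ## checkouts beyond 16 live ones wait in the queue
    );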
=head1 STARTING THE SERVER
lib/AnyEvent/Task.pm view on Meta::CPAN
'1.02470206105041'
]
}
};
=head1 ERROR HANDLING
In a synchronous program, if you expected some operation to throw an exception you might wrap it in C<eval> like this:
my $crypted;
eval {
$crypted = hash('secret');
};
if ($@) {
say "hash failed: $@";
} else {
say "hashed password is $crypted";
}
But in an asynchronous program, typically C<hash> would initiate some kind of asynchronous operation and then return immediately, allowing the program to go about other tasks while waiting for the result. Since the error might come back at any time i...
AnyEvent::Task accomplishes this mapping with L<Callback::Frame>.
Callback::Frame lets you preserve error handlers (and C<local> variables) across asynchronous callbacks. Callback::Frame is not tied to AnyEvent::Task, AnyEvent or any other async framework and can be used with almost all callback-based libraries.
However, when using AnyEvent::Task, libraries that you use in the client must be L<AnyEvent> compatible. This restriction obviously does not apply to your server code, that being the main purpose of this module: accessing blocking resources from an a...
As an example usage of Callback::Frame, here is how we would handle errors thrown from a worker process running the C<hash> method in an asynchronous client program:
use Callback::Frame;
frame(code => sub {
$client->checkout->hash('secret', sub {
my ($checkout, $crypted) = @_;
say "Hashed password is $crypted";
});
}, catch => sub {
my $back_trace = shift;
say "Error is: $@";
say "Full back-trace: $back_trace";
})->(); ## <-- frame is created and then immediately executed
Of course if C<hash> is something like a bcrypt hash function it is unlikely to raise an exception so maybe that's a bad example. On the other hand, maybe it's a really good example: In addition to errors that occur while running your callbacks, L<An...
=head2 Rationale for Callback::Frame
Why not just call the callback but set C<$@> and indicate an error has occurred? This is the approach taken with L<AnyEvent::DBI> for example. I believe the L<Callback::Frame> interface is superior to this method. In a synchronous program, exceptions...
How about having AnyEvent::Task expose an error callback? This is the approach taken by L<AnyEvent::Handle> for example. I believe Callback::Frame is superior to this method also. Although separate callbacks are (sort of) out-of-band, you still have ...
In servers, Callback::Frame helps you maintain the "dynamic state" (error handlers and dynamic variables) installed for a single connection. In other words, any errors that occur while servicing that connection will be able to be caught by an error h...
lib/AnyEvent/Task.pm view on Meta::CPAN
Callback::Frame is designed to be easily used with callback-based libraries that don't know about Callback::Frame. C<fub> is a shortcut for C<frame> with just the C<code> argument. Instead of passing C<sub { ... }> into libraries you can pass in C<fu...
It's important that all callbacks be created with C<fub> (or C<frame>) even if you don't expect them to fail so that the dynamic context is preserved for nested callbacks that may. An exception is the callbacks provided to AnyEvent::Task checkouts: T...
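For instance, here is a sketch of wrapping an ordinary AnyEvent timer callback in C<fub> so that an exception thrown when the timer fires is still caught by the enclosing frame's error handler:

    use feature 'say';
    use AnyEvent;
    use Callback::Frame;

    my $cv = AE::cv;

    frame(code => sub {

        ## fub {} captures the current frame, so the catch handler below
        ## is still in effect when the timer callback runs later:
        my $timer; $timer = AE::timer 1, 0, fub {
            undef $timer;
            die "timer callback failed\n";
        };

    }, catch => sub {
        say "caught: $@";
        $cv->send;
    })->();

    $cv->recv;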
The L<Callback::Frame> documentation explains how this works in much more detail.
=head2 Reforking of workers after errors
If a worker throws an error, the client receives the error but the worker process stays running. As long as the client has a reference to the checkout (and as long as the exception wasn't "fatal" -- see below), it can still be used to communicate wit...
However, once the checkout object is destroyed, by default the worker will be shut down instead of returning to the client's worker pool as in the normal case where no errors were thrown. This is a "safe-by-default" behaviour that may help in the even...
There are cases where workers will never be returned to the worker pool: workers that have thrown fatal errors such as loss of worker connection or hung worker timeout errors. These errors are stored in the checkout and for as long as the checkout ex...
Another reason that a worker might not be returned to the worker pool is if it has been checked out C<max_checkouts> times. If C<max_checkouts> is specified as an argument to the Client constructor, then workers will be destroyed and reforked after b...
=head1 COMPARISON WITH HTTP
Why a custom protocol, client, and server? Can't we just use something like HTTP?
It depends.
AnyEvent::Task clients send discrete messages and receive ordered replies from workers, much like HTTP. The AnyEvent::Task protocol can be extended in a backwards-compatible manner like HTTP. AnyEvent::Task communication can be pipelined and possibly...
The current AnyEvent::Task server obeys a very specific implementation policy: It is like a CGI server in that each process it forks is guaranteed to be handling only one connection at once so it can perform blocking operations without worrying about...
But since a single process can handle many requests in a row without exiting, they are more like persistent FastCGI processes. The difference however is that while a client holds a checkout it is guaranteed an exclusive lock on that process (useful f...
The fundamental difference between the AnyEvent::Task protocol and HTTP is that in AnyEvent::Task the client is the dominant protocol orchestrator whereas in HTTP it is the server.
In AnyEvent::Task, the client manages the worker pool and the client decides if/when worker processes should terminate. In the normal case, a client will just return the worker to its worker pool. A worker is supposed to accept commands for as long a...
The client decides the timeout for each checkout and different clients can have different timeouts while connecting to the same server.
Client processes can be started and checkouts can be obtained before the server is even started. The client will continue trying to connect to the server to obtain worker processes until either the server starts or the checkout's timeout period lapse...
The client even decides how many minimum workers should be in the pool upon start-up and how many maximum workers to acquire before checkout creation requests are queued. The server is really just a dumb fork-on-demand server and most of the sophisti...
=head1 SEE ALSO
L<The AnyEvent::Task github repo|https://github.com/hoytech/AnyEvent-Task>
lib/AnyEvent/Task/Client.pm view on Meta::CPAN
my $worker; $worker = new AnyEvent::Handle
fh => $fh,
on_read => sub { }, ## So we always have a read watcher and can instantly detect worker deaths
on_error => sub {
my ($worker, $fatal, $message) = @_;
my $checkout = $self->{workers_to_checkouts}->{0 + $worker};
$checkout->{timeout_timer} = undef; ## timer keeps a circular reference
$checkout->throw_fatal_error('worker connection suddenly died') if $checkout;
$self->destroy_worker($worker);
$self->populate_workers;
};
$self->{worker_checkout_counts}->{0 + $worker} = 0;
$self->make_worker_available($worker);
$self->try_to_fill_pending_checkouts;
lib/AnyEvent/Task/Client/Checkout.pm view on Meta::CPAN
$self->{timeout_timer} = AE::timer $self->{timeout}, 0, sub {
delete $self->{timeout_timer};
$self->{client}->remove_pending_checkout($self);
if (exists $self->{worker}) {
$self->{client}->destroy_worker($self->{worker});
delete $self->{worker};
}
$self->throw_fatal_error("timed out after $self->{timeout} seconds");
};
}
sub _throw_error {
my ($self, $err) = @_;
$self->{error_occurred} = 1;
my $current_cb;
if ($self->{current_cb}) {
$current_cb = $self->{current_cb};
} elsif (@{$self->{pending_requests}}) {
$current_cb = $self->{pending_requests}->[0]->[-1];
} else {
die "_throw_error called but no callback installed. Error thrown was: $err";
}
$self->{pending_requests} = undef;
if ($current_cb) {
frame(existing_frame => $current_cb,
code => sub {
die $err;
})->();
}
$self->{cmd_handler} = undef;
}
sub throw_fatal_error {
my ($self, $err) = @_;
$self->{fatal_error} = $err;
$self->_throw_error($err);
}
sub _try_to_fill_requests {
my ($self) = @_;
return unless exists $self->{worker};
return unless @{$self->{pending_requests}};
my $request = shift @{$self->{pending_requests}};
my $cb = pop @{$request};
$self->{current_cb} = $cb;
Scalar::Util::weaken($self->{current_cb});
if ($self->{fatal_error}) {
$self->_throw_error($self->{fatal_error});
return;
}
my $method_name = $request->[0];
if (!defined $method_name) {
$method_name = '->()';
shift @$request;
}
lib/AnyEvent/Task/Client/Checkout.pm view on Meta::CPAN
my ($response_code, $meta, $response_value) = @$response;
if ($self->{log_defer_object} && $meta->{ld}) {
$self->{log_defer_object}->merge($meta->{ld});
}
if ($response_code eq 'ok') {
local $@ = undef;
$cb->($self, $response_value);
} elsif ($response_code eq 'er') {
$self->_throw_error($response_value);
} else {
die "Unrecognized response_code: $response_code";
}
delete $self->{timeout_timer};
delete $self->{cmd_handler};
$self->_try_to_fill_requests;
};
t/dont_refork_after_error.t view on Meta::CPAN
use AnyEvent::Task::Client;
use Test::More tests => 7;
## This test verifies that the dont_refork_after_error client option stops the
## worker process from being killed off after a checkout is released
## where the worker threw an error in its lifetime.
## Note that a checkout's methods can still be called after an error
## is thrown but before the checkout is released, perhaps to access
## error states or to rollback a transaction.
AnyEvent::Task::Server::fork_task_server(
listen => ['unix/', '/tmp/anyevent-task-test.socket'],
interface => {
get_pid => sub { return $$ },
throw => sub { my ($err) = @_; die $err; },
},
);
my $client = AnyEvent::Task::Client->new(
connect => ['unix/', '/tmp/anyevent-task-test.socket'],
max_workers => 1,
dont_refork_after_error => 1,
);
t/dont_refork_after_error.t view on Meta::CPAN
$checkout->get_pid(sub {
my ($checkout, $ret) = @_;
$pid = $ret;
like($pid, qr/^\d+$/, "got PID");
$checkout->get_pid(sub {
my ($checkout, $ret) = @_;
is($pid, $ret, "PID didn't change in same checkout");
$checkout->throw("BLAH", frame(code => sub {
die "throw method didn't return error";
}, catch => sub {
my $err = $@;
like($err, qr/BLAH/, "caught BLAH error");
$checkout->get_pid(sub {
my ($checkout, $ret) = @_;
is($pid, $ret, "PID didn't change even after error");
$checkout->throw("OUCH", frame(code => sub {
die "throw method didn't return error 2";
}, catch => sub {
my $err = $@;
like($err, qr/OUCH/, "caught OUCH error");
$checkout->get_pid(sub {
my ($checkout, $ret) = @_;
is($pid, $ret, "PID didn't change even after second error");
});
}));
});
t/error-clears-checkout-queue.t view on Meta::CPAN
use Callback::Frame;
use AnyEvent::Util;
use AnyEvent::Task::Server;
use AnyEvent::Task::Client;
use Test::More tests => 3;
## The point of this test is to verify that method calls can queue
## up on a checkout and that if any errors are thrown by one of
## the queued methods, then all the other method calls are removed
## from the checkout's queue.
AnyEvent::Task::Server::fork_task_server(
listen => ['unix/', '/tmp/anyevent-task-test.socket'],
interface => {
die => sub { die "ouch"; },
success => sub { 1 },
t/error-clears-checkout-queue.t view on Meta::CPAN
$checkout->success(sub { ok(1, "first in checkout queue") });
$checkout->success(sub { ok(1, "second in checkout queue") });
$checkout->die(sub { die "exception should have been caught instead of calling this" });
$checkout->success(sub { die "this should have been removed from the queue" });
$checkout->success(sub { die "should have been removed" });
}, catch => sub {
$num_exceptions_caught++;
die "multiple exceptions thrown" unless $num_exceptions_caught == 1;
ok(1, "caught exception");
})->();
$cv->recv;
$client->checkout(log_defer_object => $log_defer_object)->normal(sub {
my ($checkout, $ret) = @_;
$log_defer_object->info("after");
$checkout->sleep(0.1, sub {});
$checkout->sleep(0.1, sub {});
$checkout->error(frame(code => sub {
die "error not thrown?";
}, catch => sub {
ok(1, 'error caught');
$cv->send;
}));
});
$cv->recv;
$cv = AE::cv;
$log_defer_object = Log::Defer->new(sub {
my $msg = shift;
is($msg->{timers}->[0]->[0], '->()', "didn't leak first arg when called as code ref");
});
$client->checkout(log_defer_object => $log_defer_object)->('first arg', frame(code => sub {
die "error not thrown by calling interface as a sub?";
}, catch => sub {
ok(1, 'error caught');
$cv->send;
}));
$cv->recv;
t/manual-request-abort.t view on Meta::CPAN
use AnyEvent::Util;
use AnyEvent::Task::Server;
use AnyEvent::Task::Client;
use Test::More tests => 5;
## The point of this test is to verify that fatal errors cut off
## the worker and permanently disable the checkout. If methods are
## called again on the checkout they will continue to throw the
## fatal error.
AnyEvent::Task::Server::fork_task_server(
listen => ['unix/', '/tmp/anyevent-task-test.socket'],
interface => {
sleep_die => sub {
select undef, undef, undef, 1;
die "shouldn't get here";
t/manual-request-abort.t view on Meta::CPAN
die "shouldn't get here";
}, catch => sub {
my $err = $@;
like($err, qr/manual request abort/, "continue to get manual abort error because error was fatal");
$cv->send;
}));
}));
$checkout->throw_fatal_error("manual request abort");
$cv->recv;
t/setup-errors.t view on Meta::CPAN
use Callback::Frame;
use AnyEvent::Util;
use AnyEvent::Task::Server;
use AnyEvent::Task::Client;
use Test::More tests => 2;
## The point of this test is to verify that exceptions thrown in
## setup callbacks are propagated to the client. It also validates
## that by default workers are restarted on setup errors.
my $attempt = 0;
AnyEvent::Task::Server::fork_task_server(
listen => ['unix/', '/tmp/anyevent-task-test.socket'],
setup => sub {
$attempt++;
t/timeout-log-defer.t view on Meta::CPAN
use Log::Defer;
use Data::Dumper;
use AnyEvent::Util;
use AnyEvent::Task::Server;
use AnyEvent::Task::Client;
use Test::More tests => 3;
## The point of this test is to verify that if a timeout error is thrown
## from a checkout with a log_defer_object then a reference to the Log::Defer
## object is not kept alive by the cmd_handler closure of the checkout. This was
## a bug in AE::T 0.802.
AnyEvent::Task::Server::fork_task_server(
listen => ['unix/', '/tmp/anyevent-task-test.socket'],
interface => sub {
select undef, undef, undef, 0.4;
die "shouldn't get here";
},
);
my $client = AnyEvent::Task::Client->new(
connect => ['unix/', '/tmp/anyevent-task-test.socket'],
);
my $error_thrown = 0;
my $cv = AE::cv;
{
my $ld = Log::Defer->new(sub {
ok($error_thrown, 'log defer obj destroyed after error handler ran');
$cv->send;
});
frame_try {
$client->checkout( timeout => 0.2, log_defer_object => $ld )->(sub {
$ld->warn("keep alive 1");
die "checkout was serviced?";
});
} frame_catch {
$ld->warn("keep alive 2");
my $err = $@;
ok(1, "timeout hit");
ok($err =~ /timed out after/, 'correct err msg');
$error_thrown = 1;
};
}
my $timer = AE::timer 1, 0, sub {
fail("log defer object destroyed");
$cv->send;
};
$cv->recv;