AnyEvent-Fork-Pool
and if either is too high, the worker could request to be retired,
to keep memory leaks from accumulating.
Example: retire a worker after it has handled roughly 100 requests.
It doesn't matter whether you retire at the beginning or end of your
request, as the worker will continue to handle some outstanding
requests. Likewise, it's ok to call retire multiple times.
   my $count = 0;

   sub my::worker {
      ++$count == 100
         and AnyEvent::Fork::Pool::retire ();

      ... # normal code goes here
   }
POOL PARAMETER RECIPES
This section describes some recipes for pool parameters. These are
mostly meant for the synchronous RPC backend, as the asynchronous RPC
backend changes the rules considerably, making workers themselves
responsible for their scheduling.
low latency - set load = 1
If you need a deterministic low latency, you should set the "load"
parameter to 1. This ensures that never more than one job is sent to
each worker. This avoids having to wait for a previous job to
finish.
This makes most sense with the synchronous (default) backend, as the
asynchronous backend can handle multiple requests concurrently.
lowest latency - set load = 1 and idle = max
To achieve the lowest latency, you additionally should disable any
dynamic resizing of the pool by setting "idle" to the same value as
"max".
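    As a sketch, such a fixed-size, low-latency pool could be created like
    this, using the module's documented "run" constructor. The "MyWorker"
    module, its "run_job" function, and the pool size of 8 are placeholders
    to be replaced with whatever your application actually uses:

       use AnyEvent;
       use AnyEvent::Fork;
       use AnyEvent::Fork::Pool;

       my $pool = AnyEvent::Fork
          ->new
          ->require ("MyWorker")
          ->AnyEvent::Fork::Pool::run (
             "MyWorker::run_job",
             max  => 8,   # example size; pick what your application needs
             idle => 8,   # same as "max": disables dynamic resizing
             load => 1,   # never queue more than one job per worker
          );

    Because "idle" equals "max", all workers are started immediately and
    kept alive, so no request ever waits for a worker to be forked.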
high throughput, cpu bound jobs - set load >= 2, max = #cpus
To get high throughput with cpu-bound jobs, you should set the
maximum pool size to the number of cpus in your system, and "load"
to at least 2, to make sure there can be another job waiting for the
worker when it has finished one.
The value of 2 for "load" is the minimum value that *can* achieve
100% throughput, but if your parent process itself is sometimes
busy, you might need higher values. Also there is a limit on the
amount of data that can be "in flight" to the worker, so if you send
big blobs of data to your worker, "load" might have much less of an
effect.
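    A configuration sketch for this recipe might look as follows. How you
    determine the cpu count is platform-specific and left open here, and
    "MyWorker::crunch" is a placeholder name:

       use AnyEvent;
       use AnyEvent::Fork;
       use AnyEvent::Fork::Pool;

       my $ncpu = ...; # e.g. via Sys::CPU or by parsing /proc/cpuinfo

       my $pool = AnyEvent::Fork
          ->new
          ->require ("MyWorker")
          ->AnyEvent::Fork::Pool::run (
             "MyWorker::crunch",
             max  => $ncpu,  # one worker per cpu
             load => 2,      # keep one job queued behind the running one
          );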
high throughput, I/O bound jobs - set load >= 2, max = 1, or very high
When your jobs are I/O bound, adding more workers usually increases
throughput, but this depends very much on your actual workload -
sometimes having only one worker is best, for example, when you read
or write big files at maximum speed, as a second worker will
increase seek times.
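    For the single-worker case, a sketch could look like this (again with
    "MyWorker::copy_file" standing in for your actual worker function):

       use AnyEvent;
       use AnyEvent::Fork;
       use AnyEvent::Fork::Pool;

       my $pool = AnyEvent::Fork
          ->new
          ->require ("MyWorker")
          ->AnyEvent::Fork::Pool::run (
             "MyWorker::copy_file",
             max  => 1,   # a single worker avoids extra disk seeks
             load => 2,   # queue the next job so the worker never idles
          );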
EXCEPTIONS
The same "policy" as with AnyEvent::Fork::RPC applies - exceptions will
not be caught, and exceptions in either the worker or in callbacks cause
undesirable or undefined behaviour.
SEE ALSO
AnyEvent::Fork, to create the processes in the first place.
AnyEvent::Fork::Remote, likewise, but helpful for remote processes.
AnyEvent::Fork::RPC, which implements the RPC protocol and API.
AUTHOR AND CONTACT INFORMATION
Marc Lehmann <schmorp@schmorp.de>
http://software.schmorp.de/pkg/AnyEvent-Fork-Pool