AnyEvent-Fork-Pool

Pool.pm  view on Meta::CPAN

the other parameters say otherwise.

Setting this to a very high value means that workers stay around longer,
even when they have nothing to do, which can be good as they don't have to
be started on the next load spike again.

Setting this to a lower value can be useful to avoid memory or simply
process table wastage.

Usually, setting this to a time longer than the time between load spikes
is best - if you expect a lot of requests every minute and little work
in between, setting this to longer than a minute avoids having to stop
and start workers. On the other hand, you have to ask yourself if letting
workers run idle is a good use of your resources. Try to find a good
balance between resource usage of your workers and the time to start new
workers - process creation via L<AnyEvent::Fork> itself is fast and does
not use much memory per worker, so most of the overhead is likely from
your own code.
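As a sketch (assuming the time discussed above is the pool's C<stop>
parameter, and that a hypothetical C<MyWorker::run> function has been
loaded into the template process), a pool tuned for load spikes roughly a
minute apart might look like this:

   # keep idle workers around longer than the expected spike interval
   my $pool = AnyEvent::Fork
      ->new
      ->require ("MyWorker")
      ->AnyEvent::Fork::Pool::run (
         "MyWorker::run",
         idle => 1,   # always keep at least one worker ready
         stop => 90,  # idle workers linger 90s, longer than one minute
      );

With a stop time of 90 seconds, workers survive the quiet periods between
minute-spaced spikes, so new requests rarely pay the worker start-up cost.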

=item on_destroy => $callback->() (default: none)

To find out when a pool I<really> has finished its work, you can set this
callback, which will be called when the pool has been destroyed.
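For example (a minimal sketch, using the usual C<AE::cv> condvar idiom and
a hypothetical C<MyWorker::run> function), one can block until all queued
jobs have finished:

   my $pool = AnyEvent::Fork
      ->new
      ->require ("MyWorker")
      ->AnyEvent::Fork::Pool::run (
         "MyWorker::run",
         on_destroy => (my $done = AE::cv),
      );

   $pool->(23, sub { });  # queue some jobs
   undef $pool;           # drop the last reference to the pool
   $done->recv;           # wait until all jobs have finished

Dropping the last reference destroys the pool once outstanding jobs have
completed, at which point the condvar is signalled.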

=back

=item AnyEvent::Fork::RPC Parameters

These parameters are all passed more or less directly to
L<AnyEvent::Fork::RPC>. They are only briefly mentioned here; for
their full documentation, please refer to the L<AnyEvent::Fork::RPC>
documentation. Also, the default values mentioned here are only documented
as a best effort - the L<AnyEvent::Fork::RPC> documentation is binding.

=over 4

=item async => $boolean (default: 0)

Whether to use the synchronous or asynchronous RPC backend.

=item on_error => $callback->($message) (default: die with message)

The callback to call on any (fatal) errors.

The value of C<2> for C<load> is the minimum value that I<can> achieve
100% throughput, but if your parent process itself is sometimes busy, you
might need higher values. Also there is a limit on the amount of data that
can be "in flight" to the worker, so if you send big blobs of data to your
worker, C<load> might have much less of an effect.
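As a sketch for the CPU-bound, high-throughput case (assuming a
hypothetical C<MyWorker::crunch> function):

   my $pool = AnyEvent::Fork
      ->new
      ->require ("MyWorker")
      ->AnyEvent::Fork::Pool::run (
         "MyWorker::crunch",
         max  => 8,  # up to 8 worker processes
         load => 2,  # queue up to 2 jobs per worker, so a worker
                     # always has a job waiting when it finishes one
      );

A load of 2 keeps the pipeline to each worker full, hiding the round-trip
latency between parent and worker.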

=item high throughput, I/O bound jobs - set load >= 2, max = 1, or very high

When your jobs are I/O bound, using more workers usually boils down to
higher throughput, depending very much on your actual workload - sometimes
having only one worker is best, for example, when you read or write big
files at maximum speed, as a second worker will increase seek times.

=back
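For the sequential-I/O case above, a sketch (with a hypothetical
C<MyWorker::copy_file> function):

   # sequential disk I/O: a single worker avoids extra seeks
   my $pool = AnyEvent::Fork
      ->new
      ->require ("MyWorker")
      ->AnyEvent::Fork::Pool::run (
         "MyWorker::copy_file",
         max  => 1,  # one worker keeps disk access sequential
         load => 4,  # still queue a few jobs so it never starves
      );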

=head1 EXCEPTIONS

The same "policy" as with L<AnyEvent::Fork::RPC> applies - exceptions
will not be caught, and exceptions in both workers and callbacks cause
undesirable or undefined behaviour.

README  view on Meta::CPAN


                Setting this to a very high value means that workers stay
                around longer, even when they have nothing to do, which can
                be good as they don't have to be started on the next load
                spike again.

                Setting this to a lower value can be useful to avoid memory
                or simply process table wastage.

                Usually, setting this to a time longer than the time between
                load spikes is best - if you expect a lot of requests every
                minute and little work in between, setting this to longer
                than a minute avoids having to stop and start workers. On
                the other hand, you have to ask yourself if letting workers
                run idle is a good use of your resources. Try to find a good
                balance between resource usage of your workers and the time
                to start new workers - process creation via AnyEvent::Fork
                itself is fast and does not use much memory per worker, so
                most of the overhead is likely from your own code.

                To find out when a pool *really* has finished its work, you
                can set this callback, which will be called when the pool
                has been destroyed.

        AnyEvent::Fork::RPC Parameters
            These parameters are all passed more or less directly to
            AnyEvent::Fork::RPC. They are only briefly mentioned here; for
            their full documentation, please refer to the
            AnyEvent::Fork::RPC documentation. Also, the default values
            mentioned here are only documented as a best effort - the
            AnyEvent::Fork::RPC documentation is binding.

            async => $boolean (default: 0)
                Whether to use the synchronous or asynchronous RPC backend.

            on_error => $callback->($message) (default: die with message)
                The callback to call on any (fatal) errors.

            on_event => $callback->(...) (default: "sub { }", unlike
            AnyEvent::Fork::RPC)

        The value of 2 for "load" is the minimum value that *can* achieve
        100% throughput, but if your parent process itself is sometimes
        busy, you might need higher values. Also there is a limit on the
        amount of data that can be "in flight" to the worker, so if you send
        big blobs of data to your worker, "load" might have much less of an
        effect.

    high throughput, I/O bound jobs - set load >= 2, max = 1, or very high
        When your jobs are I/O bound, using more workers usually boils down
        to higher throughput, depending very much on your actual workload -
        sometimes having only one worker is best, for example, when you read
        or write big files at maximum speed, as a second worker will
        increase seek times.

EXCEPTIONS
    The same "policy" as with AnyEvent::Fork::RPC applies - exceptions will
    not be caught, and exceptions in both workers and callbacks cause
    undesirable or undefined behaviour.

SEE ALSO
    AnyEvent::Fork, to create the processes in the first place.


