=item C<EVBACKEND_SELECT>  (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.
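
For example, a minimal sketch of such an accept loop inside an C<ev_io>
callback could look like this (C<handle_new_connection ()> is a hypothetical
application function, and error logging is omitted here):

   static void
   accept_cb (struct ev_loop *loop, ev_io *w, int revents)
   {
     /* accept until the kernel has no more pending connections */
     for (;;)
       {
         int fd = accept (w->fd, 0, 0);

         if (fd < 0)
           break; /* typically EAGAIN/EWOULDBLOCK - the backlog is drained */

         handle_new_connection (fd);
       }
   }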

This backend maps C<EV_READ> to the C<readfds> set and C<EV_WRITE> to the
C<writefds> set (and to work around Microsoft Windows bugs, also onto the
C<exceptfds> set on that platform).
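
If you want to request this backend explicitly (for example to sidestep the
epoll issues described below), you can pass it as a flag when creating the
loop; a brief sketch:

   struct ev_loop *loop = ev_loop_new (EVBACKEND_SELECT);

   if (!loop)
     /* the select backend could not be initialised */
     abort ();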

=item C<EVBACKEND_POLL>    (value 2, poll backend, available everywhere except on windows)


and is of course hard to detect.

Epoll is also notoriously buggy - embedding epoll fds I<should> work,
but of course I<doesn't>, and epoll just loves to report events for
totally I<different> file descriptors (even already closed ones, so
one cannot even remove them from the set) than those registered in the set
(especially on SMP systems). Libev tries to counter these spurious
notifications by employing an additional generation counter and comparing
that against the events to filter out spurious ones, recreating the set
when required. Epoll also erroneously rounds down timeouts, but gives you
no way to know when and by how much, so sometimes you have to busy-wait
because epoll returns immediately despite a nonzero timeout. And last but
not least, it also refuses to work with some file descriptors which work
perfectly fine with C<select> (files, many character devices...).

Epoll is truly the train wreck among event poll mechanisms, a frankenpoll,
cobbled together in a hurry, with no thought given to design or interaction
with others. Oh, the pain, will it ever stop...

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such


sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average (as long as the host time resolution is
good enough).

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-zero
value will not introduce any overhead in libev.

Many (busy) programs can usually benefit from setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), and likewise for timeouts.
It usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems. Note that if
you do transactions with the outside world and you can't increase the
parallelism, then this setting will limit your transaction rate (if you
need to poll once per transaction and the I/O collect interval is 0.01,
then you can't do more than 100 transactions per second).
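
As a sketch, a busy server might set both intervals right after obtaining
its loop (the C<0.05> values here are purely illustrative):

   struct ev_loop *loop = EV_DEFAULT;

   /* batch I/O readiness notifications - poll at most ~20 times/second */
   ev_set_io_collect_interval (loop, 0.05);

   /* allow timer callbacks to be delayed by up to ~50ms so they batch up */
   ev_set_timeout_collect_interval (loop, 0.05);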

Setting the I<timeout collect interval> can improve the opportunity for


the next iteration again (the connection still exists after all), and
typically causing the program to loop at 100% CPU usage.

Unfortunately, the set of errors that cause this issue differs between
operating systems, there is usually little the app can do to remedy the
situation, and no thread-safe method of removing the connection to cope
with overload is known (to me).

One of the easiest ways to handle this situation is to just ignore it
- when the program encounters an overload, it will just loop until the
situation is over. While this is a form of busy waiting, no OS offers an
event-based way to handle this situation, so it's the best one can do.

A better way to handle the situation is to log any errors other than
C<EAGAIN> and C<EWOULDBLOCK>, making sure not to flood the log with such
messages, and continue as usual, which at least gives the user an idea of
what could be wrong ("raise the ulimit!"). For extra points one could stop
the C<ev_io> watcher on the listening fd "for a while", which reduces CPU
usage.
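
A sketch of such an accept callback, assuming a hypothetical rate-limited
C<log_error ()> helper and a C<handle_new_connection ()> function, could
look like this:

   static void
   accept_cb (struct ev_loop *loop, ev_io *w, int revents)
   {
     int fd = accept (w->fd, 0, 0);

     if (fd >= 0)
       handle_new_connection (fd);
     else if (errno != EAGAIN && errno != EWOULDBLOCK)
       {
         log_error ("accept: %s", strerror (errno)); /* rate-limited elsewhere */
         /* for extra points: ev_io_stop (loop, w) here and restart the
            watcher from an ev_timer after "a while" */
       }
   }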

If your program is single-threaded, then you could also keep a dummy file


descriptor open on the object at all times, and detecting renames, unlinks
etc. is difficult.

=head3 C<stat ()> is a synchronous operation

Libev doesn't normally do any kind of I/O itself, and so does not block
the process. The exceptions are C<ev_stat> watchers - those call C<stat
()>, which is a synchronous operation.

For local paths, this usually doesn't matter: unless the system is very
busy or the intervals between stat calls are large, a stat call will be fast,
as the path data is usually in memory already (except when starting the
watcher).

For networked file systems, calling C<stat ()> can block for an indefinite
time due to network issues, and even under good conditions, a stat call
often takes multiple milliseconds.

Therefore, it is best to avoid using C<ev_stat> watchers on networked
paths, although this is fully supported by libev.
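
If you use one anyway, a comparatively large polling interval at least keeps
the synchronous C<stat ()> calls infrequent; a minimal sketch (the path and
the C<remote_cb> callback name are just examples):

   static ev_stat remote_watcher;

   /* poll only every 30 seconds - each poll is one synchronous stat () */
   ev_stat_init (&remote_watcher, remote_cb, "/net/server/export/data", 30.);
   ev_stat_start (loop, &remote_watcher);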


   ev_stat_start (loop, &passwd);
   ev_timer_init (&timer, timer_cb, 0., 1.02);


=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
priority are pending (prepare, check and other idle watchers do not count
as receiving "events").

That is, as long as your process is busy handling sockets or timeouts
(or even signals, imagine) of the same or higher priority it will not be
triggered. But when your process is idle (or only lower-priority watchers
are pending), the idle watchers are called once per event loop
iteration - until they are stopped, that is, or your process receives more
events and becomes busy again with higher-priority stuff.

The most noteworthy effect is that as long as any idle watchers are
active, the process will not block when waiting for new events.

Apart from keeping your process non-blocking (which is a useful
effect on its own sometimes), idle watchers are a good place to do
"pseudo-background processing", or to delay processing work until after the
event loop has handled all outstanding events.
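
As a sketch, such a pseudo-background job could look like the following,
assuming a hypothetical C<process_one_chunk ()> that does a small slice of
work and returns zero when everything is done:

   static ev_idle background_job;

   static void
   idle_cb (struct ev_loop *loop, ev_idle *w, int revents)
   {
     /* called once per loop iteration while nothing else is pending */
     if (!process_one_chunk ())
       ev_idle_stop (loop, w); /* all done - let the loop block again */
   }

   /* during initialisation */
   ev_idle_init (&background_job, idle_cb);
   ev_idle_start (loop, &background_job);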

=head3 Abusing an C<ev_idle> watcher for its side-effect


