libev/ev.pod

The big advantage of this flag is that you can forget about fork (and
forget about forgetting to tell libev about forking, although you still
have to ignore C<SIGPIPE>) when you use this flag.

This flag setting cannot be overridden or specified in the C<LIBEV_FLAGS>
environment variable.

=item C<EVFLAG_NOINOTIFY>

When this flag is specified, then libev will not attempt to use the
I<inotify> API for its C<ev_stat> watchers. Apart from debugging and
testing, this flag can be useful to conserve inotify file descriptors, as
otherwise each loop using C<ev_stat> watchers consumes one inotify handle.

=item C<EVFLAG_SIGNALFD>

When this flag is specified, then libev will attempt to use the
I<signalfd> API for its C<ev_signal> (and C<ev_child>) watchers. This API
delivers signals synchronously, which makes it faster and might make it
possible to get the queued signal data. It can also simplify signal
handling with threads, as long as you properly block signals in your
threads that are not interested in handling them.

Signalfd will not be used by default as this changes your signal mask, and
there are a lot of shoddy libraries and programs (glib's threadpool for
example) that can't properly initialise their signal masks.

=item C<EVFLAG_NOSIGMASK>

When this flag is specified, then libev will avoid modifying the signal
mask. Specifically, this means you have to make sure signals are unblocked
when you want to receive them.

This behaviour is useful when you want to do your own signal handling, or
want to handle signals only in specific threads and want to avoid libev
unblocking the signals.

It's also required by POSIX in a threaded program, as libev otherwise
calls C<sigprocmask>, whose behaviour in such programs is officially
unspecified.

=item C<EVFLAG_NOTIMERFD>

When this flag is specified, libev will avoid using a C<timerfd> to
detect time jumps. It will still be able to detect time jumps, but doing
so takes longer and is less accurate; in exchange, this saves a file
descriptor per loop.

The current implementation only tries to use a C<timerfd> when the first
C<ev_periodic> watcher is started and falls back on other methods if it
cannot be created, but this behaviour might change in the future.

=item C<EVBACKEND_SELECT>  (value 1, portable select backend)

This is your standard select(2) backend. Not I<completely> standard, as
libev tries to roll its own fd_set with no limits on the number of fds,
but if that fails, expect a fairly low limit on the number of fds when
using this backend. It doesn't scale too well (O(highest_fd)), but it's
usually the fastest backend for a low number of (low-numbered :) fds.

To get good performance out of this backend you need a high amount of
parallelism (most of the file descriptors should be busy). If you are
writing a server, you should C<accept ()> in a loop to accept as many
connections as possible during one iteration. You might also want to have
a look at C<ev_set_io_collect_interval ()> to increase the amount of
readiness notifications you get per iteration.

This backend maps C<EV_READ> to the C<readfds> set and C<EV_WRITE> to the
C<writefds> set (and to work around Microsoft Windows bugs, also onto the
C<exceptfds> set on that platform).

=item C<EVBACKEND_POLL>    (value 2, poll backend, available everywhere except on windows)

And this is your standard poll(2) backend. It's more complicated
than select, but handles sparse fds better and has no artificial
limit on the number of fds you can use (except it will slow down
considerably with a lot of inactive fds). It scales similarly to select,
i.e. O(total_fds). See the entry for C<EVBACKEND_SELECT>, above, for
performance tips.

This backend maps C<EV_READ> to C<POLLIN | POLLERR | POLLHUP>, and
C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.

=item C<EVBACKEND_EPOLL>   (value 4, Linux)

Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
kernels).

For few fds, this backend is a bit slower than poll and select, but
it scales phenomenally better. While poll and select usually scale like
O(total_fds) where total_fds is the total number of fds (or the highest
fd), epoll scales either O(1) or O(active_fds).

The epoll mechanism deserves honorable mention as the most misdesigned
of the more advanced event mechanisms: mere annoyances include silently
dropping file descriptors, requiring a system call per change per file
descriptor (and unnecessary guessing of parameters), problems with dup,
returning before the timeout value, resulting in additional iterations
(and only giving 5ms accuracy while select on the same platform gives
0.1ms) and so on. The biggest issue is fork races, however - if a program
forks then I<both> parent and child process have to recreate the epoll
set, which can take considerable time (one syscall per file descriptor)
and is of course hard to detect.

Epoll is also notoriously buggy - embedding epoll fds I<should> work,
but of course I<doesn't>, and epoll just loves to report events for
totally I<different> file descriptors (even already closed ones, so
one cannot even remove them from the set) than registered in the set
(especially on SMP systems). Libev tries to counter these spurious
notifications by employing an additional generation counter and comparing
that against the events to filter out spurious ones, recreating the set
when required. Epoll also erroneously rounds down timeouts, but gives you
no way to know when and by how much, so sometimes you have to busy-wait
because epoll returns immediately despite a nonzero timeout. And last but
not least, it also refuses to work with some file descriptors which work
perfectly fine with C<select> (files, many character devices...).

Epoll is truly the train wreck among event poll mechanisms, a frankenpoll,
cobbled together in a hurry, no thought to design or interaction with
others. Oh, the pain, will it ever stop...

While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such
incident (because the same I<file descriptor> could point to a different
I<file description> now), so it's best to avoid that. Also, C<dup ()>'ed
file descriptors might not work very well if you register events for both
file descriptors.

Best performance from this backend is achieved by not unregistering all
watchers for a file descriptor until it has been closed, if possible,
i.e. keep at least one watcher active per fd at all times. Stopping and
starting a watcher (without re-setting it) also usually doesn't cause
extra overhead. A fork can both result in spurious notifications as well
as in libev having to destroy and recreate the epoll object, which can
take considerable time and thus should be avoided.

All this means that, in practice, C<EVBACKEND_SELECT> can be as fast or
faster than epoll for maybe up to a hundred file descriptors, depending on
the usage. So sad.

While nominally embeddable in other event loops, this feature is broken in
a lot of kernel revisions, but probably(!) works in current versions.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_IOURING>    (value 128, Linux)

Use the Linux-specific io_uring backend. It offers an enormous amount
of features other than just I/O events, but suffers from an extreme
feature-first, correctness-later approach, and is slower than epoll, so
it is not used by default.

One important misdesign is that when sleeping in io_uring, the kernel
wrongly counts that as disk I/O wait, keeping loadavg and a CPU core
"virtually" busy, even if nothing actually waits for disk or uses CPU.

If your application forks frequently, then this backend might be faster,
as setting it up again after a fork is far more efficient with this
backend, and it also doesn't suffer from the epoll design flaw of
receiving events for closed file descriptors.

=item C<EVBACKEND_LINUXAIO>   (value 64, Linux)

Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
io_submit(2) >>) event interface available in post-4.18 kernels (but libev
only tries to use it in 4.19+).

This is another Linux train wreck of an event interface.

If this backend works for you (as of this writing, it was very
experimental), it is the best event interface available on Linux and might
be well worth enabling it - if it isn't available in your kernel this will
be detected and this backend will be skipped.

This backend can batch oneshot requests and supports a user-space ring
buffer to receive events. It also doesn't suffer from most of the design
problems of epoll (such as not being able to remove event sources from
the epoll set), and generally sounds too good to be true. Because, this
being the Linux kernel, of course it suffers from a whole new set of
limitations, forcing you to fall back to epoll, inheriting all its design
issues.

For one, it is not easily embeddable (but probably could be done using
an event fd at some extra overhead). It also is subject to a system wide
limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
requests are left, this backend will be skipped during initialisation, and
will switch to epoll when the loop is active.

Most problematic in practice, however, is that not all file descriptors
work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
files, F</dev/null> and many others are supported, but ttys do not work
properly (a known bug that the kernel developers don't care about, see
L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
(yet?) a generic event polling interface.

Overall, it seems the Linux developers just don't want it to have a
generic event handling mechanism other than C<select> or C<poll>.

To work around all these problems, the current version of libev uses its
epoll backend as a fallback for file descriptor types that do not work. Or
falls back completely to epoll if the kernel acts up.

This backend maps C<EV_READ> and C<EV_WRITE> in the same way as
C<EVBACKEND_POLL>.

=item C<EVBACKEND_KQUEUE>  (value 8, most BSD clones)

Kqueue deserves special mention, as at the time this backend was
implemented, it was broken on all BSDs except NetBSD (usually it doesn't
work reliably with anything but sockets and pipes, except on Darwin,
where of course it's completely useless). Unlike epoll, however, whose
brokenness is by design, these kqueue bugs can be (and mostly have been)
fixed without API changes to existing programs. For this reason it's not
being "auto-detected" on all platforms unless you explicitly specify it
in the flags (i.e. using C<EVBACKEND_KQUEUE>) or libev was compiled on a


before stopping it.

As an example, libev itself uses this for its internal signal pipe: It
is not visible to the libev user and should not keep C<ev_run> from
exiting if no event watchers registered by it are active. It is also an
excellent way to do this for generic recurring timers or from within
third-party libraries. Just remember to I<unref after start> and I<ref
before stop> (but only if the watcher wasn't active before, or was
active before, respectively). Note also that libev might stop watchers
itself (e.g. non-repeating timers), in which case you have to C<ev_ref>
in the callback.

Example: Create a signal watcher, but keep it from keeping C<ev_run>
running when nothing else is active.

   ev_signal exitsig;
   ev_signal_init (&exitsig, sig_cb, SIGINT);
   ev_signal_start (loop, &exitsig);
   ev_unref (loop);

Example: For some weird reason, unregister the above signal handler again.

   ev_ref (loop);
   ev_signal_stop (loop, &exitsig);

=item ev_set_io_collect_interval (loop, ev_tstamp interval)

=item ev_set_timeout_collect_interval (loop, ev_tstamp interval)

These advanced functions influence the time that libev will spend waiting
for events. Both time intervals are by default C<0>, meaning that libev
will try to invoke timer/periodic callbacks and I/O callbacks with minimum
latency.

Setting these to a higher value (the C<interval> I<must> be >= C<0>)
allows libev to delay invocation of I/O and timer/periodic callbacks
to increase efficiency of loop iterations (or to increase power-saving
opportunities).

The idea is that sometimes your program runs just fast enough to handle
one (or very few) event(s) per loop iteration. While this makes the
program responsive, it also wastes a lot of CPU time to poll for new
events, especially with backends like C<select ()> which have a high
overhead for the actual polling but can deliver many events at once.

By setting a higher I<io collect interval> you allow libev to spend more
time collecting I/O events, so you can handle more events per iteration,
at the cost of increasing latency. Timeouts (both C<ev_periodic> and
C<ev_timer>) will not be affected. Setting this to a non-null value will
introduce an additional C<ev_sleep ()> call into most loop iterations. The
sleep time ensures that libev will not poll for I/O events more often than
once per this interval, on average (as long as the host time resolution is
good enough).

Likewise, by setting a higher I<timeout collect interval> you allow libev
to spend more time collecting timeouts, at the expense of increased
latency/jitter/inexactness (the watcher callback will be called
later). C<ev_io> watchers will not be affected. Setting this to a non-null
value will not introduce any overhead in libev.

Many (busy) programs can usually benefit by setting the I/O collect
interval to a value near C<0.1> or so, which is often enough for
interactive servers (of course not for games), likewise for timeouts. It
usually doesn't make much sense to set it to a lower value than C<0.01>,
as this approaches the timing granularity of most systems. Note that if
you do transactions with the outside world and you can't increase the
parallelism, then this setting will limit your transaction rate (if you
need to poll once per transaction and the I/O collect interval is 0.01,
then you can't do more than 100 transactions per second).

Setting the I<timeout collect interval> can improve the opportunity for
saving power, as the program will "bundle" timer callback invocations that
are "near" in time together, by delaying some, thus reducing the number of
times the process sleeps and wakes up again. Another useful technique to
reduce iterations/wake-ups is to use C<ev_periodic> watchers and make sure
they fire on, say, one-second boundaries only.

Example: we only need 0.1s timeout granularity, and we wish not to poll
more often than 100 times per second:

   ev_set_timeout_collect_interval (EV_DEFAULT_UC_ 0.1);
   ev_set_io_collect_interval (EV_DEFAULT_UC_ 0.01);

=item ev_invoke_pending (loop)

This call will simply invoke all pending watchers while resetting their
pending state. Normally, C<ev_run> does this automatically when required,
but when overriding the invoke callback this call comes in handy. This
function can be invoked from a watcher - this can be useful for example
when you want to do some lengthy calculation and want to pass further
event handling to another thread (you still have to make sure only one
thread executes within C<ev_invoke_pending> or C<ev_run> of course).

=item int ev_pending_count (loop)

Returns the number of pending watchers - zero indicates that no watchers
are pending.

=item ev_set_invoke_pending_cb (loop, void (*invoke_pending_cb)(EV_P))

This overrides the invoke pending functionality of the loop: Instead of
invoking all pending watchers when there are any, C<ev_run> will call
this callback instead. This is useful, for example, when you want to
invoke the actual watchers inside another context (another thread etc.).

If you want to reset the callback, use C<ev_invoke_pending> as new
callback.

=item ev_set_loop_release_cb (loop, void (*release)(EV_P) throw (), void (*acquire)(EV_P) throw ())

Sometimes you want to share the same loop between multiple threads. This
can be done relatively simply by putting mutex_lock/unlock calls around
each call to a libev function.

However, C<ev_run> can run an indefinite time, so it is not feasible
to wait for it to return. One way around this is to wake up the event
loop via C<ev_break> and C<ev_async_send>, another way is to set these
I<release> and I<acquire> callbacks on the loop.

When set, then C<release> will be called just before the thread is
suspended waiting for new events, and C<acquire> is called just


on its own, but in the case of files, there is no such thing: the disk
will not send data on its own, simply because it doesn't know what you
wish to read - you would first have to request some data.

Since files are typically not-so-well supported by advanced notification
mechanisms, libev tries hard to emulate POSIX behaviour with respect
to files, even though you should not use it. The reason for this is
convenience: sometimes you want to watch STDIN or STDOUT, which is
usually a tty, often a pipe, but also sometimes files or special devices
(for example, C<epoll> on Linux works with F</dev/random> but not with
F</dev/urandom>), and even though the file might better be served with
asynchronous I/O instead of with non-blocking I/O, it is still useful when
it "just works" instead of freezing.

So avoid file descriptors pointing to files when you know it (e.g. use
libeio), but use them when it is convenient, e.g. for STDIN/STDOUT, or
when you rarely read from a file instead of from a socket, and want to
reuse the same code path.

=head3 The special problem of fork

Some backends (epoll, kqueue, linuxaio, iouring) do not support C<fork ()>
at all or exhibit useless behaviour. Libev fully supports fork, but needs
to be told about it in the child if you want to continue to use it in the
child.

To support fork in your child processes, you have to call C<ev_loop_fork
()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.

=head3 The special problem of SIGPIPE

While not really specific to libev, it is easy to forget about C<SIGPIPE>:
when writing to a pipe whose other end has been closed, your program gets
sent a SIGPIPE, which, by default, aborts your program. For most programs
this is sensible behaviour, for daemons, this is usually undesirable.

So when you encounter spurious, unexplained daemon exits, make sure you
ignore SIGPIPE (and maybe make sure you log the exit status of your daemon
somewhere, as that would have given you a big clue).
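Ignoring C<SIGPIPE> is a one-liner, best done early in C<main ()>; failed
writes then return C<EPIPE> instead of killing the process:

```c
#include <signal.h>

/* ignore SIGPIPE so writes to broken pipes fail with EPIPE
 * instead of aborting the program */
static void
ignore_sigpipe (void)
{
  signal (SIGPIPE, SIG_IGN);
}
```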

=head3 The special problem of accept()ing when you can't

Many implementations of the POSIX C<accept> function (for example,
found in post-2004 Linux) have the peculiar behaviour of not removing a
connection from the pending queue in all error cases.

For example, larger servers often run out of file descriptors (because
of resource limits), causing C<accept> to fail with C<ENFILE> but not
rejecting the connection, leading to libev signalling readiness on
the next iteration again (the connection still exists after all), and
typically causing the program to loop at 100% CPU usage.

Unfortunately, the set of errors that cause this issue differs between
operating systems, there is usually little the app can do to remedy the
situation, and no thread-safe method of removing the connection to cope
with overload is known (to me).

One of the easiest ways to handle this situation is to just ignore it
- when the program encounters an overload, it will just loop until the
situation is over. While this is a form of busy waiting, no OS offers an
event-based way to handle this situation, so it's the best one can do.

A better way to handle the situation is to log any errors other than
C<EAGAIN> and C<EWOULDBLOCK>, making sure not to flood the log with such
messages, and continue as usual, which at least gives the user an idea of
what could be wrong ("raise the ulimit!"). For extra points one could stop
the C<ev_io> watcher on the listening fd "for a while", which reduces CPU
usage.

If your program is single-threaded, then you could also keep a dummy file
descriptor for overload situations (e.g. by opening F</dev/null>), and
when you run into C<ENFILE> or C<EMFILE>, close it, run C<accept>,
close that fd, and create a new dummy fd. This will gracefully refuse
clients under typical overload conditions.

The last way to handle it is to simply log the error and C<exit>, as
is often done with C<malloc> failures, but this results in an easy
opportunity for a DoS attack.

=head3 Watcher-Specific Functions

=over 4

=item ev_io_init (ev_io *, callback, int fd, int events)

=item ev_io_set (ev_io *, int fd, int events)

Configures an C<ev_io> watcher. The C<fd> is the file descriptor to
receive events for and C<events> is either C<EV_READ>, C<EV_WRITE>, both
C<EV_READ | EV_WRITE> or C<0>, to express the desire to receive the given
events.

Note that setting the C<events> to C<0> and starting the watcher is
supported, but not specially optimized - if your program sometimes happens
to generate this combination this is fine, but if it is easy to avoid
starting an io watcher watching for no events you should do so.

=item ev_io_modify (ev_io *, int events)

Similar to C<ev_io_set>, but only changes the requested events. Using this
might be faster with some backends, as libev can assume that the C<fd>
still refers to the same underlying file description, something it cannot
do when using C<ev_io_set>.

=item int fd [no-modify]

The file descriptor being watched. While it can be read at any time, you
must not modify this member even when the watcher is stopped - always use
C<ev_io_set> for that.

=item int events [no-modify]

The set of events the fd is being watched for, among other flags. Remember
that this is a bit set - to test for C<EV_READ>, use C<< w->events &
EV_READ >>, and similarly for C<EV_WRITE>.

As with C<fd>, you must not modify this member even when the watcher is
stopped, always use C<ev_io_set> or C<ev_io_modify> for that.

=back


interval for this case. If you specify a polling interval of C<0> (highly
recommended!) then a I<suitable, unspecified default> value will be used
(which you can expect to be around five seconds, although this might
change dynamically). Libev will also impose a minimum interval which is
currently around C<0.1>, but that's usually overkill.

This watcher type is not meant for massive numbers of stat watchers,
as even with OS-supported change notifications, this can be
resource-intensive.

At the time of this writing, the only OS-specific interface implemented
is the Linux inotify interface (implementing kqueue support is left as an
exercise for the reader. Note, however, that the author sees no way of
implementing C<ev_stat> semantics with kqueue, except as a hint).

=head3 ABI Issues (Largefile Support)

Libev by default (unless the user overrides this) uses the default
compilation environment, which means that on systems with large file
support disabled by default, you get the 32 bit version of the stat
structure. When using the library from programs that change the ABI to
use 64 bit file offsets the programs will fail. In that case you have to
compile libev with the same flags to get binary compatibility. This is
obviously the case with any flags that change the ABI, but the problem is
most noticeably displayed with ev_stat and large file support.

The solution for this is to lobby your distribution maker to make large
file interfaces available by default (as e.g. FreeBSD does) and not
optional. Libev cannot simply switch on large file support because it has
to exchange stat structures with application programs compiled using the
default compilation environment.

=head3 Inotify and Kqueue

When C<inotify (7)> support has been compiled into libev and present at
runtime, it will be used to speed up change detection where possible. The
inotify descriptor will be created lazily when the first C<ev_stat>
watcher is being started.

Inotify presence does not change the semantics of C<ev_stat> watchers
except that changes might be detected earlier, and in some cases, to avoid
making regular C<stat> calls. Even in the presence of inotify support
there are many cases where libev has to resort to regular C<stat> polling,
but as long as kernel 2.6.25 or newer is used (2.6.24 and older have too
many bugs), the path exists (i.e. stat succeeds), and the path resides on
a local filesystem (libev currently assumes only ext2/3, jfs, reiserfs and
xfs are fully working) libev usually gets away without polling.

There is no support for kqueue, as apparently it cannot be used to
implement this functionality, due to the requirement of having a file
descriptor open on the object at all times, and detecting renames, unlinks
etc. is difficult.

=head3 C<stat ()> is a synchronous operation

Libev doesn't normally do any kind of I/O itself, and so is not blocking
the process. The exception are C<ev_stat> watchers - those call C<stat
()>, which is a synchronous operation.

For local paths, this usually doesn't matter: unless the system is very
busy or the intervals between stat's are large, a stat call will be fast,
as the path data is usually in memory already (except when starting the
watcher).

For networked file systems, calling C<stat ()> can block an indefinite
time due to network issues, and even under good conditions, a stat call
often takes multiple milliseconds.

Therefore, it is best to avoid using C<ev_stat> watchers on networked
paths, although this is fully supported by libev.

=head3 The special problem of stat time resolution

The C<stat ()> system call only supports full-second resolution portably,
and even on systems where the resolution is higher, most file systems
still only support whole seconds.

That means that, if the time is the only thing that changes, you can
easily miss updates: on the first update, C<ev_stat> detects a change and
calls your callback, which does something. When there is another update
within the same second, C<ev_stat> will be unable to detect it unless the
stat data does change in other ways (e.g. file size).

The solution to this is to delay acting on a change for slightly more
than a second (or till slightly after the next full second boundary), using
a roughly one-second-delay C<ev_timer> (e.g. C<ev_timer_set (w, 0., 1.02);
ev_timer_again (loop, w)>).

The C<.02> offset is added to work around small timing inconsistencies
of some operating systems (where the second counter of the current time
might be delayed. One such system is the Linux kernel, where a call to
C<gettimeofday> might return a timestamp a full second later than
a subsequent C<time> call - if the equivalent of C<time ()> is used to
update file times then there will be a small window where the kernel uses
the previous second to update file times but libev might already execute
the timer callback).

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_stat_init (ev_stat *, callback, const char *path, ev_tstamp interval)

=item ev_stat_set (ev_stat *, const char *path, ev_tstamp interval)

Configures the watcher to wait for status changes of the given
C<path>. The C<interval> is a hint on how quickly a change is expected to
be detected and should normally be specified as C<0> to let libev choose
a suitable value. The memory pointed to by C<path> must point to the same
path for as long as the watcher is active.

The callback will receive an C<EV_STAT> event when a change was detected,
relative to the attributes at the time the watcher was started (or the
last change was detected).

=item ev_stat_stat (loop, ev_stat *)

Updates the stat buffer immediately with new values. If you change the
watched path in your callback, you could call this function to avoid
detecting this change (while introducing a race condition if you are not
the only one changing the path). Can also be useful simply to find out the


Example: Watch C</etc/passwd> for attribute changes.

   static void
   passwd_cb (struct ev_loop *loop, ev_stat *w, int revents)
   {
     /* /etc/passwd changed in some way */
     if (w->attr.st_nlink)
       {
         printf ("passwd current size  %ld\n", (long)w->attr.st_size);
         printf ("passwd current atime %ld\n", (long)w->attr.st_atime);
         printf ("passwd current mtime %ld\n", (long)w->attr.st_mtime);
       }
     else
       /* you shalt not abuse printf for puts */
       puts ("wow, /etc/passwd is not there, expect problems. "
             "if this is windows, they already arrived\n");
   }

   ...
   ev_stat passwd;

   ev_stat_init (&passwd, passwd_cb, "/etc/passwd", 0.);
   ev_stat_start (loop, &passwd);

Example: Like above, but additionally use a one-second delay so we do not
miss updates (however, frequent updates will delay processing, too, so
one might do the work both on C<ev_stat> callback invocation I<and> on
C<ev_timer> callback invocation).

   static ev_stat passwd;
   static ev_timer timer;

   static void
   timer_cb (EV_P_ ev_timer *w, int revents)
   {
     ev_timer_stop (EV_A_ w);

     /* now it's one second after the most recent passwd change */
   }

   static void
   stat_cb (EV_P_ ev_stat *w, int revents)
   {
     /* reset the one-second timer */
     ev_timer_again (EV_A_ &timer);
   }

   ...
   ev_stat_init (&passwd, stat_cb, "/etc/passwd", 0.);
   ev_stat_start (loop, &passwd);
   ev_timer_init (&timer, timer_cb, 0., 1.02);


=head2 C<ev_idle> - when you've got nothing better to do...

Idle watchers trigger events when no other events of the same or higher
priority are pending (prepare, check and other idle watchers do not count
as receiving "events").

That is, as long as your process is busy handling sockets or timeouts
(or even signals, imagine) of the same or higher priority it will not be
triggered. But when your process is idle (or only lower-priority watchers
are pending), the idle watchers are being called once per event loop
iteration - until stopped, that is, or your process receives more events
and becomes busy again with higher priority stuff.

The most noteworthy effect is that as long as any idle watchers are
active, the process will not block when waiting for new events.

Apart from keeping your process non-blocking (which is a useful
effect on its own sometimes), idle watchers are a good place to do
"pseudo-background processing", or delay processing stuff to after the
event loop has handled all outstanding events.

=head3 Abusing an C<ev_idle> watcher for its side-effect

As long as there is at least one active idle watcher, libev will never
sleep unnecessarily. Or in other words, it will loop as fast as possible.
For this to work, the idle watcher doesn't need to be invoked at all - the
lowest priority will do.

This mode of operation can be useful together with an C<ev_check> watcher,
to do something on each event loop iteration - for example to balance load
between different connections.

See L</Abusing an ev_check watcher for its side-effect> for a longer
example.

=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_idle_init (ev_idle *, callback)

Initialises and configures the idle watcher - it has no parameters of any
kind. There is a C<ev_idle_set> macro, but using it is utterly pointless,
believe me.

=back

=head3 Examples

Example: Dynamically allocate an C<ev_idle> watcher, start it, and in the
callback, free it. Also, use no error checking, as usual.

   static void
   idle_cb (struct ev_loop *loop, ev_idle *w, int revents)
   {
     // stop the watcher
     ev_idle_stop (loop, w);

     // now we can free it
     free (w);

     // now do something you wanted to do when the program has
     // no longer anything immediate to do.
   }

   ev_idle *idle_watcher = malloc (sizeof (ev_idle));
   ev_idle_init (idle_watcher, idle_cb);
   ev_idle_start (loop, idle_watcher);


=head2 C<ev_prepare> and C<ev_check> - customise your event loop!


