libev/ev.pod

watchers do: as long as the C<ev_async> watcher is active, you can signal
it by calling C<ev_async_send>, which is thread- and signal-safe.

This functionality is very similar to C<ev_signal> watchers, as signals,
too, are asynchronous in nature, and signals, too, will be compressed
(i.e. the number of callback invocations may be less than the number of
C<ev_async_send> calls). In fact, you could use signal watchers as a kind
of "global async watchers" by using a watcher on an otherwise unused
signal, and C<ev_feed_signal> to signal this watcher from another thread,
even without knowing which loop owns the signal.
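
As a sketch of that trick (assuming libev is available and that SIGUSR2 is otherwise unused in the program; both are assumptions, not part of the text above), the "global async watcher" could look like:

```c
/* Sketch only: requires libev (<ev.h>); SIGUSR2 chosen arbitrarily. */
#include <ev.h>
#include <signal.h>

static ev_signal global_async;

static void
global_async_cb (EV_P_ ev_signal *w, int revents)
{
  /* runs in whichever loop started the watcher */
}

/* in the loop-owning thread:
     ev_signal_init (&global_async, global_async_cb, SIGUSR2);
     ev_signal_start (EV_DEFAULT_ &global_async);

   from any other thread, without knowing which loop owns SIGUSR2:
     ev_feed_signal (SIGUSR2);                                      */
```

The point of C<ev_feed_signal> here is that, unlike C<ev_async_send>, the caller does not need a pointer to the loop or the watcher, only the signal number.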

=head3 Queueing

C<ev_async> does not support queueing of data in any way. The reason
is that the author does not know of a simple (or any) algorithm for a
multiple-writer-single-reader queue that works in all cases and doesn't
need elaborate support such as pthreads or unportable memory access
semantics.

That means that if you want to queue data, you have to provide your own
queue. But at least I can tell you how to implement locking around your
queue:

=over 4

=item queueing from a signal handler context

To implement race-free queueing, you simply add to the queue in the signal
handler but you block the signal handler in the watcher callback. Here is
an example that does that for some fictitious SIGUSR1 handler:

   static ev_async mysig;

   static void
   sigusr1_handler (int signum)
   {
     sometype data;

     // no locking etc.
     queue_put (data);
     ev_async_send (EV_DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     sometype data;
     sigset_t block, prev;

     sigemptyset (&block);
     sigaddset (&block, SIGUSR1);
     sigprocmask (SIG_BLOCK, &block, &prev);

     while (queue_get (&data))
       process (data);

     if (!sigismember (&prev, SIGUSR1))
       sigprocmask (SIG_UNBLOCK, &block, 0);
   }

(Note: pthreads in theory requires you to use C<pthread_sigmask>
instead of C<sigprocmask> when you use threads, but libev doesn't do it
either...).

=item queueing from a thread context

The strategy for threads is different, as you cannot (easily) block
threads but you can easily preempt them, so to queue safely you need to
employ a traditional mutex lock, such as in this pthread example:

   static ev_async mysig;
   static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;

   static void
   otherthread (void)
   {
     sometype data;

     // only need to lock the actual queueing operation
     pthread_mutex_lock (&mymutex);
     queue_put (data);
     pthread_mutex_unlock (&mymutex);

     ev_async_send (EV_DEFAULT_ &mysig);
   }

   static void
   mysig_cb (EV_P_ ev_async *w, int revents)
   {
     sometype data;

     pthread_mutex_lock (&mymutex);

     while (queue_get (&data))
       process (data);

     pthread_mutex_unlock (&mymutex);
   }

=back


=head3 Watcher-Specific Functions and Data Members

=over 4

=item ev_async_init (ev_async *, callback)

Initialises and configures the async watcher - it has no parameters of any
kind. There is a C<ev_async_set> macro, but using it is utterly pointless,
trust me.

=item ev_async_send (loop, ev_async *)

Sends/signals/activates the given C<ev_async> watcher, that is, feeds
an C<EV_ASYNC> event on the watcher into the event loop, and instantly
returns.

Unlike C<ev_feed_event>, this call is safe to do from other threads,
signal or similar contexts (see the discussion of C<EV_ATOMIC_T> in the
embedding section below on what exactly this means).

Note that, as with other watchers in libev, multiple events might get
compressed into a single callback invocation (another way to look at
this is that C<ev_async> watchers are level-triggered: they are set on


