Alien-uv


libuv/ChangeLog

* unix,win: handle zero-sized allocations uniformly (Ben Noordhuis)

* unix: remove unused uv__dup() function (Ben Noordhuis)

* core,bsd: refactor process_title functions (Santiago Gimeno)

* win: Redefine NSIG to consider SIGWINCH (Jeremy Studer)

* test: make sure that reading a directory fails (Sakthipriyan Vairamani)

* win, tty: remove zero-size read callbacks (Bartosz Sosnowski)

* test: fix test runner getenv async-signal-safety (Ben Noordhuis)

* test: fix test runner execvp async-signal-safety (Ben Noordhuis)

* test,unix: fix race in test runner (Ben Noordhuis)

* unix,win: support IDNA 2008 in uv_getaddrinfo() (Ben Noordhuis)

* win, tcp: avoid starving the loop (Bartosz Sosnowski)

libuv/ChangeLog

* unix,fs: use uint64_t instead of unsigned long (Imran Iqbal)

* build: check for warnings for -fvisibility=hidden (Imran Iqbal)

* unix: remove unneeded TODO note (Saúl Ibarra Corretgé)

* test: skip tty_pty test if pty is not available (Luca Bruno)

* sunos: set phys_addr of interface_address using ARP (Brian Maher)

* doc: clarify callbacks won't be called in error case (Saúl Ibarra Corretgé)

* unix: don't convert stat buffer when syscall fails (Ben Noordhuis)

* win: compare entire filename in watch events (cjihrig)

* doc: add a note on safe reuse of uv_write_t (neevek)

* linux: fix potential event loop stall (Ben Noordhuis)

* unix,win: make uv_get_process_title() stricter (cjihrig)

libuv/ChangeLog

* doc: fix typo in tcp.rst (Igor Soarez)

* linux: work around epoll bug in kernels < 2.6.37 (Ben Noordhuis)

* unix,win: add uv_os_homedir() (cjihrig)

* stream: fix `select()` race condition (Fedor Indutny)

* unix: prevent infinite loop in uv__run_pending (Saúl Ibarra Corretgé)

* unix: make sure UDP send callbacks are asynchronous (Saúl Ibarra Corretgé)

* test: fix `platform_output` netmask printing. (Andrew Paprocki)

* aix: add ahafs autoconf detection and README notes (Andrew Paprocki)

* core: add ability to customize memory allocator (Saúl Ibarra Corretgé)


2015.05.07, Version 1.5.0 (Stable), 4e77f74c7b95b639b3397095db1bc5bcc016c203

libuv/ChangeLog


* thread: barrier functions (Ben Noordhuis)

* windows: fix PYTHON environment variable usage (Jay Satiro)

* unix, windows: return system error on EAI_SYSTEM (Saúl Ibarra Corretgé)

* windows: fix handling closed socket while poll handle is closing (Saúl Ibarra
  Corretgé)

* unix: don't run i/o callbacks after prepare callbacks (Saúl Ibarra Corretgé)

* windows: add tty unicode support for input (Peter Atashian)

* header: introduce `uv_loop_size()` (Andrius Bentkus)

* darwin: invoke `mach_timebase_info` only once (Fedor Indutny)


2014.05.02, Version 0.11.25 (Unstable), 2acd544cff7142e06aa3b09ec64b4a33dd9ab996

libuv/ChangeLog

* windows/fs: make uv_fs_open() report EINVAL correctly (Bert Belder)

* windows/fs: handle _open_osfhandle() failure correctly (Bert Belder)

* build: clarify instructions for Windows (Brian Kaisner)

* build: remove GCC_WARN_ABOUT_MISSING_NEWLINE (Ben Noordhuis)

* darwin: fix 10.6 build error in fsevents.c (Ben Noordhuis)

* windows: run close callbacks after polling for i/o (Saúl Ibarra Corretgé)

* include: clarify uv_tcp_bind() behavior (Ben Noordhuis)

* include: clean up includes in uv.h (Ben Noordhuis)

* include: remove UV_IO_PRIVATE_FIELDS macro (Ben Noordhuis)

* include: fix typo in comment in uv.h (Ben Noordhuis)

* include: update uv_is_active() documentation (Ben Noordhuis)

libuv/docs/src/design.rst


The I/O (or event) loop is the central part of libuv. It establishes the context for all I/O
operations, and it's meant to be tied to a single thread. One can run multiple event loops
as long as each runs in a different thread. The libuv event loop (or any other API involving
the loop or handles, for that matter) **is not thread-safe** except where stated otherwise.

The event loop follows the rather usual single threaded asynchronous I/O approach: all (network)
I/O is performed on non-blocking sockets which are polled using the best mechanism available
on the given platform: epoll on Linux, kqueue on OSX and other BSDs, event ports on SunOS and IOCP
on Windows. As part of a loop iteration the loop will block waiting for I/O activity on sockets
which have been added to the poller and callbacks will be fired indicating socket conditions
(readable, writable, hangup) so handles can read, write or perform the desired I/O operation.

In order to better understand how the event loop operates, the following diagram illustrates all
stages of a loop iteration:

.. image:: static/loop_iteration.png
    :scale: 75%
    :align: center


#. The loop's concept of 'now' is updated. The event loop caches the current time at the start of
   the event loop tick in order to reduce the number of time-related system calls.

#. If the loop is *alive* an iteration is started, otherwise the loop will exit immediately. So,
   when is a loop considered to be *alive*? If a loop has active and ref'd handles, active
   requests or closing handles it's considered to be *alive*.

#. Due timers are run. All active timers scheduled for a time before the loop's concept of *now*
   get their callbacks called.

#. Pending callbacks are called. All I/O callbacks are called right after polling for I/O, for the
   most part. There are cases, however, in which calling such a callback is deferred for the next
   loop iteration. If the previous iteration deferred any I/O callback it will be run at this point.

#. Idle handle callbacks are called. Despite the unfortunate name, idle handles are run on every
   loop iteration, if they are active.

#. Prepare handle callbacks are called. Prepare handles get their callbacks called right before
   the loop will block for I/O.

#. Poll timeout is calculated. Before blocking for I/O the loop calculates for how long it should
   block. These are the rules when calculating the timeout:

        * If the loop was run with the ``UV_RUN_NOWAIT`` flag, the timeout is 0.
        * If the loop is going to be stopped (:c:func:`uv_stop` was called), the timeout is 0.
        * If there are no active handles or requests, the timeout is 0.
        * If there are any idle handles active, the timeout is 0.
        * If there are any handles pending to be closed, the timeout is 0.
        * If none of the above cases matches, the timeout of the closest timer is taken, or
          if there are no active timers, infinity.

#. The loop blocks for I/O. At this point the loop will block for I/O for the duration calculated
   in the previous step. All I/O related handles that were monitoring a given file descriptor
   for a read or write operation get their callbacks called at this point.

#. Check handle callbacks are called. Check handles get their callbacks called right after the
   loop has blocked for I/O. Check handles are essentially the counterpart of prepare handles.

#. Close callbacks are called. If a handle was closed by calling :c:func:`uv_close` it will
   get the close callback called.

#. A special case applies when the loop was run with ``UV_RUN_ONCE``, as it implies forward
   progress. It's possible that no I/O callbacks were fired after blocking for I/O, but some time
   has passed, so there might be timers which are due; those timers get their callbacks called.

#. Iteration ends. If the loop was run with ``UV_RUN_NOWAIT`` or ``UV_RUN_ONCE`` modes the
   iteration ends and :c:func:`uv_run` will return. If the loop was run with ``UV_RUN_DEFAULT``
   it will continue from the start if it's still *alive*, otherwise it will also end.


.. important::
    libuv uses a thread pool to make asynchronous file I/O operations possible, but
    network I/O is **always** performed in a single thread, each loop's thread.
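
Putting the stages together, here is a minimal sketch (not from the official docs) of driving a
loop; with no handles or requests registered the loop is not *alive*, so ``uv_run()`` returns
immediately:

.. code-block:: c

    #include <uv.h>

    int main(void) {
        uv_loop_t* loop = uv_default_loop();

        /* Handles and requests would normally be registered here. */
        uv_run(loop, UV_RUN_DEFAULT);  /* returns once the loop is no longer alive */

        uv_loop_close(loop);
        return 0;
    }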

libuv/docs/src/guide/basics.rst

core job is to provide an event loop and callback based notifications of I/O
and other activities.  libuv offers core utilities like timers, non-blocking
networking support, asynchronous file system access, child processes and more.

Event loops
-----------

In event-driven programming, an application expresses interest in certain events
and responds to them when they occur. The responsibility of gathering events
from the operating system or monitoring other sources of events is handled by
libuv, and the user can register callbacks to be invoked when an event occurs.
The event loop usually keeps running *forever*. In pseudocode:

.. code-block:: python

    while there are still events to process:
        e = get the next event
        if there is a callback associated with e:
            call the callback

Some examples of events are:

libuv/docs/src/guide/basics.rst

.. note::

    node.js uses the default loop as its main loop. If you are writing bindings
    you should be aware of this.

.. _libuv-error-handling:

Error handling
--------------

Initialization functions or synchronous functions which may fail return a negative
number on error. Async functions that may fail will pass a status parameter to their
callbacks. The error codes are defined as ``UV_E*`` `constants`_.

.. _constants: http://docs.libuv.org/en/v1.x/errors.html#error-constants

You can use the ``uv_strerror(int)`` and ``uv_err_name(int)`` functions
to get a ``const char *`` describing the error or the error name respectively.

I/O read callbacks (such as for files and sockets) are passed a parameter ``nread``.
If ``nread`` is less than 0, there was an error (``UV_EOF`` is the end of file error,
which you may want to handle differently).
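
As a sketch of both conventions (``on_read`` here is a hypothetical callback name,
not part of the libuv API):

.. code-block:: c

    #include <stdio.h>
    #include <uv.h>

    void on_read(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) {
        if (nread < 0) {
            if (nread == UV_EOF)
                fprintf(stderr, "peer closed the connection\n");
            else
                fprintf(stderr, "read error: %s: %s\n",
                        uv_err_name((int) nread), uv_strerror((int) nread));
        }
        /* nread >= 0: buf->base holds nread bytes of data */
    }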

Handles and Requests
--------------------

libuv works by the user expressing interest in particular events. This is
usually done by creating a **handle** to an I/O device, timer or process.
Handles are opaque structs named as ``uv_TYPE_t`` where type signifies what the
handle is used for. 

.. rubric:: libuv watchers

libuv/docs/src/guide/eventloops.rst

is called, the loop **won't** block for i/o on this iteration. The semantics of
these things can be a bit difficult to understand, so let's look at
``uv_run()`` where all the control flow occurs.

.. rubric:: src/unix/core.c - uv_run
.. literalinclude:: ../../../src/unix/core.c
    :linenos:
    :lines: 304-324
    :emphasize-lines: 10,19,21

``stop_flag`` is set by ``uv_stop()``. Now all libuv callbacks are invoked
within the event loop, which is why invoking ``uv_stop()`` in them will still
lead to this iteration of the loop occurring. First libuv updates its notion of
time and runs due timers, then any I/O callbacks deferred from the previous
iteration, then the idle and prepare callbacks. If you were to call
``uv_stop()`` in any of them, ``stop_flag``
would be set. This causes ``uv_backend_timeout()`` to return ``0``, which is
why the loop does not block on I/O. If on the other hand, you called
``uv_stop()`` in one of the check handlers, I/O has already finished and is not
affected.

``uv_stop()`` is useful to shut down a loop when a result has been computed or
there is an error, without having to ensure that all handlers are stopped one
by one.

Here is a simple example that stops the loop and demonstrates how the current

libuv/docs/src/guide/filesystem.rst

`Unix flags <http://man7.org/linux/man-pages/man2/open.2.html>`_.
libuv takes care of converting to the appropriate Windows flags.

File descriptors are closed using

.. code-block:: c

    int uv_fs_close(uv_loop_t* loop, uv_fs_t* req, uv_file file, uv_fs_cb cb)


Filesystem operation callbacks have the signature:

.. code-block:: c

    void callback(uv_fs_t* req);
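
A sketch of such a callback (``on_open`` is a hypothetical name): on completion,
``req->result`` holds the outcome (for ``uv_fs_open`` a file descriptor, or a negative
error code), and ``uv_fs_req_cleanup()`` must be called to release memory internal to
the request:

.. code-block:: c

    void on_open(uv_fs_t* req) {
        if (req->result < 0)
            fprintf(stderr, "error: %s\n", uv_strerror((int) req->result));
        /* else req->result is the newly opened file descriptor */
        uv_fs_req_cleanup(req);
    }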

Let's see a simple implementation of ``cat``. We start with registering
a callback for when the file is opened:

.. rubric:: uvcat/main.c - opening a file
.. literalinclude:: ../../code/uvcat/main.c

libuv/docs/src/guide/filesystem.rst

Here you see a common pattern when writing asynchronous programs. The
``uv_fs_close()`` call is performed synchronously. *Usually tasks which are
one-off, or are done as part of the startup or shutdown stage are performed
synchronously, since we are interested in fast I/O when the program is going
about its primary task and dealing with multiple I/O sources*. For solo tasks the
performance difference is usually negligible, and the synchronous form may lead to
simpler code.
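
For example (a sketch, assuming ``fd`` holds an already-opened ``uv_file``), passing
``NULL`` as the callback makes the operation run synchronously and return its result
directly:

.. code-block:: c

    uv_fs_t req;
    int r = uv_fs_close(uv_default_loop(), &req, fd, NULL);  /* NULL cb: synchronous */
    if (r < 0)
        fprintf(stderr, "close error: %s\n", uv_strerror(r));
    uv_fs_req_cleanup(&req);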

Filesystem writing is similarly simple using ``uv_fs_write()``.  *Your callback
will be triggered after the write is complete*.  In our case the callback
simply drives the next read. Thus read and write proceed in lockstep via
callbacks.

.. rubric:: uvcat/main.c - write callback
.. literalinclude:: ../../code/uvcat/main.c
    :linenos:
    :lines: 16-24
    :emphasize-lines: 6

.. warning::

    Due to the way filesystems and disk drives are configured for performance,

libuv/docs/src/guide/filesystem.rst

point there is nothing to be read. Most applications will just ignore this.

.. rubric:: uvtee/main.c - Write to pipe
.. literalinclude:: ../../code/uvtee/main.c
    :linenos:
    :lines: 9-13,23-42

``write_data()`` makes a copy of the buffer obtained from read. This buffer
does not get passed through to the write callback triggered on write completion. To
get around this we wrap a write request and a buffer in ``write_req_t`` and
unwrap it in the callbacks. We make a copy so we can free the two buffers from
the two calls to ``write_data`` independently of each other. While acceptable
for a demo program like this, you'll probably want smarter memory management,
like reference counted buffers or a pool of buffers in any major application.
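
The wrapper is just a struct whose first member is the write request, so the request
pointer received by the callback can be cast back. A sketch of the pattern (field
names assumed):

.. code-block:: c

    typedef struct {
        uv_write_t req;  /* must be the first member for the cast to be valid */
        uv_buf_t buf;
    } write_req_t;

    void free_write_req(uv_write_t* req) {
        write_req_t* wr = (write_req_t*) req;  /* unwrap */
        free(wr->buf.base);
        free(wr);
    }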

.. WARNING::

    If your program is meant to be used with other programs it may knowingly or
    unknowingly be writing to a pipe. This makes it susceptible to `aborting on
    receiving a SIGPIPE`_. It is a good idea to insert::

libuv/docs/src/guide/networking.rst

First we set up the receiving socket to bind on all interfaces on port 68 (DHCP
client) and start a read on it. This will read back responses from any DHCP
server that replies. We use the UV_UDP_REUSEADDR flag to play nice with any
other system DHCP clients that are running on this computer on the same port.
Then we set up a similar send socket and use ``uv_udp_send`` to send
a *broadcast message* on port 67 (DHCP server).

It is **necessary** to set the broadcast flag, otherwise you will get an
``EACCES`` error [#]_. The exact message being sent is not relevant to this
book and you can study the code if you are interested. As usual the read and
write callbacks will receive a status code of < 0 if something went wrong.

Since UDP sockets are not connected to a particular peer, the read callback
receives an extra parameter about the sender of the packet.

``nread`` may be zero if there is no more data to be read. If ``addr`` is NULL,
it indicates there is nothing to read (the callback shouldn't do anything); if
it is not NULL, it indicates that an empty datagram was received from the host at
``addr``. The ``flags`` parameter may be ``UV_UDP_PARTIAL`` if the buffer
provided by your allocator was not large enough to hold the data. *In this case
the OS will discard the data that could not fit* (That's UDP for you!).
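
A sketch of the receive side described above (``alloc_buffer`` and ``on_read`` are
hypothetical callbacks you would supply):

.. code-block:: c

    uv_udp_t recv_socket;
    struct sockaddr_in recv_addr;

    uv_udp_init(uv_default_loop(), &recv_socket);
    uv_ip4_addr("0.0.0.0", 68, &recv_addr);
    uv_udp_bind(&recv_socket, (const struct sockaddr*) &recv_addr, UV_UDP_REUSEADDR);
    uv_udp_recv_start(&recv_socket, alloc_buffer, on_read);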

libuv/docs/src/guide/utilities.rst

    :lines: 5-8, 17-
    :emphasize-lines: 9

We initialize the garbage collector timer, then immediately ``unref`` it.
Observe how after 9 seconds, when the fake job is done, the program
automatically exits, even though the garbage collector is still running.

Idler pattern
-------------

The callbacks of idle handles are invoked once per event loop iteration. The idle
callback can be used to perform some very low priority activity. For example,
you could dispatch a summary of the daily application performance to the
developers for analysis during periods of idleness, or use the application's
CPU time to perform SETI calculations :) An idle watcher is also useful in
a GUI application. Say you are using an event loop for a file download. If the
TCP socket is still being established and no other events are present your
event loop will pause (**block**), which means your progress bar will freeze
and the user will face an unresponsive application. In such a case, queue up an
idle watcher to keep the UI operational.
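
A minimal sketch of the pattern (``keep_ui_responsive`` is a hypothetical callback):

.. code-block:: c

    uv_idle_t idler;

    void keep_ui_responsive(uv_idle_t* handle) {
        /* Do a small slice of low-priority work, e.g. repaint a progress bar.
         * While an idle handle is active the loop polls with a zero timeout
         * instead of blocking. Call uv_idle_stop(handle) when done. */
    }

    int main(void) {
        uv_idle_init(uv_default_loop(), &idler);
        uv_idle_start(&idler, keep_ui_responsive);
        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }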

libuv/docs/src/guide/utilities.rst

.. _libcurl: http://curl.haxx.se/libcurl/
.. _multi: http://curl.haxx.se/libcurl/c/libcurl-multi.html

.. rubric:: uvwget/main.c - The setup
.. literalinclude:: ../../code/uvwget/main.c
    :linenos:
    :lines: 1-9,140-
    :emphasize-lines: 7,21,24-25

The way each library is integrated with libuv will vary. In the case of
libcurl, we can register two callbacks. The socket callback ``handle_socket``
is invoked whenever the state of a socket changes and we have to start polling
it. ``start_timeout`` is called by libcurl to notify us of the next timeout
interval, after which we should drive libcurl forward regardless of I/O status.
This is so that libcurl can handle errors or do whatever else is required to
get the download moving.
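
Registering those two callbacks goes through libcurl's multi interface; a sketch
(``handle_socket`` and ``start_timeout`` are the callbacks named above):

.. code-block:: c

    CURLM* curl_handle = curl_multi_init();
    curl_multi_setopt(curl_handle, CURLMOPT_SOCKETFUNCTION, handle_socket);
    curl_multi_setopt(curl_handle, CURLMOPT_TIMERFUNCTION, start_timeout);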

Our downloader is to be invoked as::

    $ ./uvwget [url1] [url2] ...

libuv/docs/src/handle.rst

    Request handle to be closed. `close_cb` will be called asynchronously after
    this call. This MUST be called on each handle before memory is released.
    Moreover, the memory can only be released in `close_cb` or after it has
    returned.

    Handles that wrap file descriptors are closed immediately but
    `close_cb` will still be deferred to the next iteration of the event loop.
    It gives you a chance to free up any resources associated with the handle.

    In-progress requests, like uv_connect_t or uv_write_t, are cancelled and
    have their callbacks called asynchronously with status=UV_ECANCELED.
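
    A common sketch is a heap-allocated handle that is freed only in its close
    callback (``on_close`` and ``client`` are hypothetical names):

    .. code-block:: c

        void on_close(uv_handle_t* handle) {
            free(handle);  /* earliest point at which the memory may be released */
        }

        uv_tcp_t* client = malloc(sizeof(*client));
        uv_tcp_init(loop, client);
        /* ... later ... */
        uv_close((uv_handle_t*) client, on_close);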

.. c:function:: void uv_ref(uv_handle_t* handle)

    Reference the given handle. References are idempotent, that is, if a handle
    is already referenced calling this function again will have no effect.

    See :ref:`refcount`.

.. c:function:: void uv_unref(uv_handle_t* handle)

libuv/docs/src/idle.rst

===================================

Idle handles will run the given callback once per loop iteration, right
before the :c:type:`uv_prepare_t` handles.

.. note::
    The notable difference with prepare handles is that when there are active idle handles,
    the loop will perform a zero timeout poll instead of blocking for i/o.

.. warning::
    Despite the name, idle handles will get their callbacks called on every loop iteration,
    not when the loop is actually "idle".


Data types
----------

.. c:type:: uv_idle_t

    Idle handle type.

libuv/docs/src/loop.rst


.. _loop:

:c:type:`uv_loop_t` --- Event loop
==================================

The event loop is the central part of libuv's functionality. It takes care
of polling for i/o and scheduling callbacks to be run based on different sources
of events.


Data types
----------

.. c:type:: uv_loop_t

    Loop data type.

libuv/docs/src/loop.rst

.. c:function:: int uv_run(uv_loop_t* loop, uv_run_mode mode)

    This function runs the event loop. It will act differently depending on the
    specified mode:

    - UV_RUN_DEFAULT: Runs the event loop until there are no more active and
      referenced handles or requests. Returns non-zero if :c:func:`uv_stop`
      was called and there are still active handles or requests.  Returns
      zero in all other cases.
    - UV_RUN_ONCE: Poll for i/o once. Note that this function blocks if
      there are no pending callbacks. Returns zero when done (no active handles
      or requests left), or non-zero if more callbacks are expected (meaning
      you should run the event loop again sometime in the future).
    - UV_RUN_NOWAIT: Poll for i/o once but don't block if there are no
      pending callbacks. Returns zero if done (no active handles
      or requests left), or non-zero if more callbacks are expected (meaning
      you should run the event loop again sometime in the future).

    :c:func:`uv_run` is not reentrant. It must not be called from a callback.
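
    As a sketch, ``UV_RUN_NOWAIT`` lets an application pump libuv from its own
    outer loop (``app_running`` and ``do_host_work()`` are hypothetical):

    .. code-block:: c

        while (app_running) {
            do_host_work();               /* e.g. process GUI events */
            uv_run(loop, UV_RUN_NOWAIT);  /* poll once, never block */
        }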

.. c:function:: int uv_loop_alive(const uv_loop_t* loop)

    Returns non-zero if there are referenced active handles, active
    requests or closing handles in the loop.

.. c:function:: void uv_stop(uv_loop_t* loop)

libuv/docs/src/loop.rst


    Returns the size of the `uv_loop_t` structure. Useful for FFI binding
    writers who don't want to know the structure layout.

.. c:function:: int uv_backend_fd(const uv_loop_t* loop)

    Get backend file descriptor. Only kqueue, epoll and event ports are
    supported.

    This can be used in conjunction with `uv_run(loop, UV_RUN_NOWAIT)` to
    poll in one thread and run the event loop's callbacks in another; see
    test/test-embed.c for an example.

    .. note::
        Embedding a kqueue fd in another kqueue pollset doesn't work on all platforms. It's not
        an error to add the fd but it never generates events.

.. c:function:: int uv_backend_timeout(const uv_loop_t* loop)

    Get the poll timeout. The return value is in milliseconds, or -1 for no
    timeout.

libuv/docs/src/loop.rst


    .. note::
        Use :c:func:`uv_hrtime` if you need sub-millisecond granularity.

.. c:function:: void uv_update_time(uv_loop_t* loop)

    Update the event loop's concept of "now". Libuv caches the current time
    at the start of the event loop tick in order to reduce the number of
    time-related system calls.

    You won't normally need to call this function unless you have callbacks
    that block the event loop for longer periods of time, where "longer" is
    somewhat subjective but probably on the order of a millisecond or more.

.. c:function:: void uv_walk(uv_loop_t* loop, uv_walk_cb walk_cb, void* arg)

    Walk the list of handles: `walk_cb` will be executed with the given `arg`.
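
    A typical sketch uses it to request closure of every remaining handle before
    tearing down the loop:

    .. code-block:: c

        static void close_walk_cb(uv_handle_t* handle, void* arg) {
            if (!uv_is_closing(handle))
                uv_close(handle, NULL);
        }

        uv_walk(loop, close_walk_cb, NULL);
        uv_run(loop, UV_RUN_DEFAULT);  /* let the close callbacks run */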

.. c:function:: int uv_loop_fork(uv_loop_t* loop)

    .. versionadded:: 1.12.0

libuv/docs/src/migration_010_100.rst

have also changed; make sure you check the documentation.

.. note::
    This change applies to all functions that made a distinction between IPv4 and IPv6
    addresses.


Streams / UDP data receive callback API change
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The streams and UDP data receive callbacks now get a pointer to a :c:type:`uv_buf_t` buffer,
not a structure by value.

libuv 0.10

::

    void on_read(uv_stream_t* handle,
                 ssize_t nread,
                 uv_buf_t buf) {
        ...
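    }

libuv 1.0

::

    void on_read(uv_stream_t* handle,
                 ssize_t nread,
                 const uv_buf_t* buf) {
        ...
    }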

libuv/docs/src/request.rst


    Cancel a pending request. Fails if the request is executing or has finished
    executing.

    Returns 0 on success, or an error code < 0 on failure.

    Only cancellation of :c:type:`uv_fs_t`, :c:type:`uv_getaddrinfo_t`,
    :c:type:`uv_getnameinfo_t` and :c:type:`uv_work_t` requests is
    currently supported.

    Cancelled requests have their callbacks invoked some time in the future.
    It's **not** safe to free the memory associated with the request until the
    callback is called.

    Here is how cancellation is reported to the callback:

    * A :c:type:`uv_fs_t` request has its req->result field set to `UV_ECANCELED`.

    * A :c:type:`uv_work_t`, :c:type:`uv_getaddrinfo_t` or :c:type:`uv_getnameinfo_t`
      request has its callback invoked with status == `UV_ECANCELED`.
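
    A sketch of cancelling a queued work request (``work_cb`` and ``after_cb`` are
    hypothetical callbacks):

    .. code-block:: c

        uv_work_t req;
        uv_queue_work(loop, &req, work_cb, after_cb);

        if (uv_cancel((uv_req_t*) &req) == 0) {
            /* after_cb will still run later with status == UV_ECANCELED;
             * don't free the request before then. */
        }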

libuv/docs/src/timer.rst


.. _timer:

:c:type:`uv_timer_t` --- Timer handle
=====================================

Timer handles are used to schedule callbacks to be called in the future.


Data types
----------

.. c:type:: uv_timer_t

    Timer handle type.

.. c:type:: void (*uv_timer_cb)(uv_timer_t* handle)
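
A sketch of the usual lifecycle (``on_timeout`` is a hypothetical callback):

.. code-block:: c

    uv_timer_t timer;

    void on_timeout(uv_timer_t* handle) {
        /* fires after 1000 ms, then every 2000 ms until uv_timer_stop() */
    }

    uv_timer_init(uv_default_loop(), &timer);
    uv_timer_start(&timer, on_timeout, 1000, 2000);  /* timeout, repeat (ms) */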

libuv/src/unix/core.c

    if ((mode == UV_RUN_ONCE && !ran_pending) || mode == UV_RUN_DEFAULT)
      timeout = uv_backend_timeout(loop);

    uv__io_poll(loop, timeout);
    uv__run_check(loop);
    uv__run_closing_handles(loop);

    if (mode == UV_RUN_ONCE) {
      /* UV_RUN_ONCE implies forward progress: at least one callback must have
       * been invoked when it returns. uv__io_poll() can return without doing
       * I/O (meaning: no callbacks) when its timeout expires - which means we
       * have pending timers that satisfy the forward progress constraint.
       *
       * UV_RUN_NOWAIT makes no guarantees about progress so it's omitted from
       * the check.
       */
      uv__update_time(loop);
      uv__run_timers(loop);
    }

    r = uv__loop_alive(loop);

libuv/src/unix/linux-core.c

        /* File descriptor that we've stopped watching, disarm it.
         *
         * Ignore all errors because we may be racing with another thread
         * when the file descriptor is closed.
         */
        epoll_ctl(loop->backend_fd, EPOLL_CTL_DEL, fd, pe);
        continue;
      }

      /* Give users only events they're interested in. Prevents spurious
       * callbacks when a previous callback invocation in this loop has stopped
       * the current watcher. Also, filters out events that the user has not
       * requested us to watch.
       */
      pe->events &= w->pevents | POLLERR | POLLHUP;

      /* Work around an epoll quirk where it sometimes reports just the
       * EPOLLERR or EPOLLHUP event.  In order to force the event loop to
       * move forward, we merge in the read/write events that the watcher
       * is interested in; uv__read() and uv__write() will then deal with
       * the error or hangup in the usual fashion.

libuv/src/unix/linux-inotify.c

      w = find_watcher(loop, e->wd);
      if (w == NULL)
        continue; /* Stale event, no watchers left. */

      /* inotify does not return the filename when monitoring a single file
       * for modifications. Repurpose the filename for API compatibility.
       * I'm not convinced this is a good thing, maybe it should go.
       */
      path = e->len ? (const char*) (e + 1) : uv__basename_r(w->path);

      /* We're about to iterate over the queue and call user's callbacks.
       * What can go wrong?
       * A callback could call uv_fs_event_stop()
       * and the queue can change under our feet.
       * So, we use the QUEUE_MOVE() trick to safely iterate over the queue.
       * And we don't free the watcher_list until we're done iterating.
       *
       * First,
       * tell uv_fs_event_stop() (that could be called from a user's callback)
       * not to free watcher_list.
       */

libuv/src/unix/os390.c

        /* File descriptor that we've stopped watching, disarm it.
         *
         * Ignore all errors because we may be racing with another thread
         * when the file descriptor is closed.
         */
        epoll_ctl(loop->ep, EPOLL_CTL_DEL, fd, pe);
        continue;
      }

      /* Give users only events they're interested in. Prevents spurious
       * callbacks when a previous callback invocation in this loop has stopped
       * the current watcher. Also, filters out events that the user has not
       * requested us to watch.
       */
      pe->events &= w->pevents | POLLERR | POLLHUP;

      if (pe->events == POLLERR || pe->events == POLLHUP)
        pe->events |= w->pevents & (POLLIN | POLLOUT);

      if (pe->events != 0) {
        w->cb(loop, w, pe->events);

libuv/src/unix/stream.c

#else
# define RETRY_ON_WRITE_ERROR(errno) (errno == EINTR)
# define IS_TRANSIENT_WRITE_ERROR(errno, send_handle) \
    (errno == EAGAIN || errno == EWOULDBLOCK || errno == ENOBUFS)
#endif /* defined(__APPLE__) */

static void uv__stream_connect(uv_stream_t*);
static void uv__write(uv_stream_t* stream);
static void uv__read(uv_stream_t* stream);
static void uv__stream_io(uv_loop_t* loop, uv__io_t* w, unsigned int events);
static void uv__write_callbacks(uv_stream_t* stream);
static size_t uv__write_req_size(uv_write_t* req);


void uv__stream_init(uv_loop_t* loop,
                     uv_stream_t* stream,
                     uv_handle_type type) {
  int err;

  uv__handle_init(loop, (uv_handle_t*)stream, type);
  stream->read_cb = NULL;

libuv/src/unix/stream.c

  assert(!uv__io_active(&stream->io_watcher, POLLIN | POLLOUT));
  assert(stream->flags & UV_HANDLE_CLOSED);

  if (stream->connect_req) {
    uv__req_unregister(stream->loop, stream->connect_req);
    stream->connect_req->cb(stream->connect_req, UV_ECANCELED);
    stream->connect_req = NULL;
  }

  uv__stream_flush_write_queue(stream, UV_ECANCELED);
  uv__write_callbacks(stream);

  if (stream->shutdown_req) {
    /* The ECANCELED error code is a lie, the shutdown(2) syscall is a
     * fait accompli at this point. Maybe we should revisit this in v0.11.
     * A possible reason for leaving it unchanged is that it informs the
     * callee that the handle has been destroyed.
     */
    uv__req_unregister(stream->loop, stream->shutdown_req);
    stream->shutdown_req->cb(stream->shutdown_req, UV_ECANCELED);
    stream->shutdown_req = NULL;

libuv/src/unix/stream.c

error:
  req->error = err;
  uv__write_req_finish(req);
  uv__io_stop(stream->loop, &stream->io_watcher, POLLOUT);
  if (!uv__io_active(&stream->io_watcher, POLLIN))
    uv__handle_stop(stream);
  uv__stream_osx_interrupt_select(stream);
}


static void uv__write_callbacks(uv_stream_t* stream) {
  uv_write_t* req;
  QUEUE* q;
  QUEUE pq;

  if (QUEUE_EMPTY(&stream->write_completed_queue))
    return;

  QUEUE_MOVE(&stream->write_completed_queue, &pq);

  while (!QUEUE_EMPTY(&pq)) {

libuv/src/unix/stream.c

      !(stream->flags & UV_HANDLE_READ_EOF)) {
    uv_buf_t buf = { NULL, 0 };
    uv__stream_eof(stream, &buf);
  }

  if (uv__stream_fd(stream) == -1)
    return;  /* read_cb closed stream. */

  if (events & (POLLOUT | POLLERR | POLLHUP)) {
    uv__write(stream);
    uv__write_callbacks(stream);

    /* Write queue drained. */
    if (QUEUE_EMPTY(&stream->write_queue))
      uv__drain(stream);
  }
}


/**
 * We get called here from directly following a call to connect(2).

libuv/src/unix/stream.c

  }

  if (req->cb)
    req->cb(req, error);

  if (uv__stream_fd(stream) == -1)
    return;

  if (error < 0) {
    uv__stream_flush_write_queue(stream, UV_ECANCELED);
    uv__write_callbacks(stream);
  }
}


int uv_write2(uv_write_t* req,
              uv_stream_t* stream,
              const uv_buf_t bufs[],
              unsigned int nbufs,
              uv_stream_t* send_handle,
              uv_write_cb cb) {

libuv/src/win/core.c

    else
      uv__poll_wine(loop, timeout);


    uv_check_invoke(loop);
    uv_process_endgames(loop);

    if (mode == UV_RUN_ONCE) {
      /* UV_RUN_ONCE implies forward progress: at least one callback must have
       * been invoked when it returns. uv__io_poll() can return without doing
       * I/O (meaning: no callbacks) when its timeout expires - which means we
       * have pending timers that satisfy the forward progress constraint.
       *
       * UV_RUN_NOWAIT makes no guarantees about progress so it's omitted from
       * the check.
       */
      uv__run_timers(loop);
    }

    r = uv__loop_alive(loop);
    if (mode == UV_RUN_ONCE || mode == UV_RUN_NOWAIT)

libuv/src/win/fs-event.c

  int err, sizew, size;
  char* filename = NULL;
  WCHAR* filenamew = NULL;
  WCHAR* long_filenamew = NULL;
  DWORD offset = 0;

  assert(req->type == UV_FS_EVENT_REQ);
  assert(handle->req_pending);
  handle->req_pending = 0;

  /* Don't report any callbacks if:
   * - We're closing, just push the handle onto the endgame queue
   * - We are not active, just ignore the callback
   */
  if (!uv__is_active(handle)) {
    if (handle->flags & UV_HANDLE_CLOSING) {
      uv_want_endgame(loop, (uv_handle_t*) handle);
    }
    return;
  }

libuv/src/win/process.c

    uv_want_endgame(loop, (uv_handle_t*) handle);
    return;
  }

  /* Unregister from process notification. */
  if (handle->wait_handle != INVALID_HANDLE_VALUE) {
    UnregisterWait(handle->wait_handle);
    handle->wait_handle = INVALID_HANDLE_VALUE;
  }

  /* Set the handle to inactive: no callbacks will be made after the exit
   * callback. */
  uv__handle_stop(handle);

  if (GetExitCodeProcess(handle->process_handle, &status)) {
    exit_code = status;
  } else {
    /* Unable to obtain the exit code. This should never happen. */
    exit_code = uv_translate_sys_error(GetLastError());
  }

libuv/test/benchmark-async-pummel.c


#include "task.h"
#include "uv.h"

#include <stdio.h>
#include <stdlib.h>

#define NUM_PINGS               (1000 * 1000)
#define ACCESS_ONCE(type, var)  (*(volatile type*) &(var))

static unsigned int callbacks;
static volatile int done;

static const char running[] = "running";
static const char stop[]    = "stop";
static const char stopped[] = "stopped";


static void async_cb(uv_async_t* handle) {
  if (++callbacks == NUM_PINGS) {
    /* Tell the pummel thread to stop. */
    ACCESS_ONCE(const char*, handle->data) = stop;

    /* Wait for the pummel thread to acknowledge that it has stopped. */
    while (ACCESS_ONCE(const char*, handle->data) != stopped)
      uv_sleep(0);

    uv_close((uv_handle_t*) handle, NULL);
  }
}

libuv/test/benchmark-async-pummel.c

  time = uv_hrtime();

  ASSERT(0 == uv_run(uv_default_loop(), UV_RUN_DEFAULT));

  time = uv_hrtime() - time;
  done = 1;

  for (i = 0; i < nthreads; i++)
    ASSERT(0 == uv_thread_join(tids + i));

  printf("async_pummel_%d: %s callbacks in %.2f seconds (%s/sec)\n",
         nthreads,
         fmt(callbacks),
         time / 1e9,
         fmt(callbacks / (time / 1e9)));

  free(tids);

  MAKE_VALGRIND_HAPPY();
  return 0;
}


BENCHMARK_IMPL(async_pummel_1) {
  return test_async_pummel(1);

libuv/test/test-ping-pong.c

  ASSERT(!r);

  /* We are never doing multiple reads/connects at a time anyway, so these
   * handles can be pre-initialized. */
  r = uv_tcp_connect(&pinger->connect_req,
                     &pinger->stream.tcp,
                     (const struct sockaddr*) &server_addr,
                     pinger_on_connect);
  ASSERT(!r);

  /* Synchronous connect callbacks are not allowed. */
  ASSERT(pinger_on_connect_count == 0);
}


static void tcp_pinger_new(int vectored_writes) {
  int r;
  struct sockaddr_in server_addr;
  pinger_t *pinger;

  ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT, &server_addr));

libuv/test/test-ping-pong.c

  ASSERT(!r);

  /* We are never doing multiple reads/connects at a time anyway, so these
   * handles can be pre-initialized. */
  r = uv_tcp_connect(&pinger->connect_req,
                     &pinger->stream.tcp,
                     (const struct sockaddr*) &server_addr,
                     pinger_on_connect);
  ASSERT(!r);

  /* Synchronous connect callbacks are not allowed. */
  ASSERT(pinger_on_connect_count == 0);
}


static void pipe_pinger_new(int vectored_writes) {
  int r;
  pinger_t *pinger;

  pinger = (pinger_t*)malloc(sizeof(*pinger));
  ASSERT(pinger != NULL);

libuv/test/test-ping-pong.c

  /* Try to connect to the server and do NUM_PINGS ping-pongs. */
  r = uv_pipe_init(uv_default_loop(), &pinger->stream.pipe, 0);
  pinger->stream.pipe.data = pinger;
  ASSERT(!r);

  /* We are never doing multiple reads/connects at a time anyway, so these
   * handles can be pre-initialized. */
  uv_pipe_connect(&pinger->connect_req, &pinger->stream.pipe, TEST_PIPENAME,
      pinger_on_connect);

  /* Synchronous connect callbacks are not allowed. */
  ASSERT(pinger_on_connect_count == 0);
}


static int run_ping_pong_test(void) {
  uv_run(uv_default_loop(), UV_RUN_DEFAULT);
  ASSERT(completed_pingers == 1);

  MAKE_VALGRIND_HAPPY();
  return 0;

libuv/test/test-queue-foreach-delete.c

 * To do so we do the following (for each handle type):
 *  1. Create and start 3 handles (#0, #1, and #2).
 *
 *     The queue after the start() calls:
 *     ..=> [queue head] <=> [handle] <=> [handle #1] <=> [handle] <=..
 *
 *  2. Trigger handles to fire (for uv_idle_t, uv_prepare_t, and uv_check_t there is nothing to do).
 *
 *  3. In the callback for the first-executed handle (#0 or #2 depending on handle type)
 *     stop the handle and the next one (#1).
 *     (for uv_idle_t, uv_prepare_t, and uv_check_t, callbacks are executed in the
 *     reverse order from that in which they were start()'ed, so the callback for
 *     handle #2 will be called first)
 *
 *     The queue after the stop() calls:
 *                                correct foreach "next"  |
 *                                                       \/
 *     ..=> [queue head] <==============================> [handle] <=..
 *          [          ] <-  [handle] <=> [handle #1]  -> [      ]
 *                                       /\
 *                  wrong foreach "next"  |
 *

libuv/test/test-queue-foreach-delete.c


  loop = uv_default_loop();

  INIT_AND_START(idle,    loop);
  INIT_AND_START(prepare, loop);
  INIT_AND_START(check,   loop);

#ifdef __linux__
  init_and_start_fs_events(loop);

  /* helper timer to trigger async and fs_event callbacks */
  r = uv_timer_init(loop, &timer);
  ASSERT(r == 0);

  r = uv_timer_start(&timer, helper_timer_cb, 0, 0);
  ASSERT(r == 0);
#endif

  r = uv_run(loop, UV_RUN_NOWAIT);
  ASSERT(r == 1);

libuv/test/test-signal-multiple-loops.c

  if (action == ONLY_SIGUSR2 || action == SIGUSR1_AND_SIGUSR2) {
    r = uv_signal_init(&loop, &signal2);
    ASSERT(r == 0);
    r = uv_signal_start(&signal2, signal2_cb, SIGUSR2);
    ASSERT(r == 0);
  }

  /* Signal watchers are now set up. */
  uv_sem_post(&sem);

  /* Wait for all signals. The signal callbacks stop the watcher, so uv_run
   * will return when all signal watchers caught a signal.
   */
  r = uv_run(&loop, UV_RUN_DEFAULT);
  ASSERT(r == 0);

  /* Restart the signal watchers. */
  if (action == ONLY_SIGUSR1 || action == SIGUSR1_AND_SIGUSR2) {
    r = uv_signal_start(&signal1a, signal1_cb, SIGUSR1);
    ASSERT(r == 0);
    r = uv_signal_start(&signal1b, signal1_cb, SIGUSR1);

libuv/test/test-tcp-close.c


    r = uv_write(req, (uv_stream_t*)&tcp_handle, &buf, 1, write_cb);
    ASSERT(r == 0);
  }

  uv_close((uv_handle_t*)&tcp_handle, close_cb);
}


static void write_cb(uv_write_t* req, int status) {
  /* write callbacks should run before the close callback */
  ASSERT(close_cb_called == 0);
  ASSERT(req->handle == (uv_stream_t*)&tcp_handle);
  write_cb_called++;
  free(req);
}


static void close_cb(uv_handle_t* handle) {
  ASSERT(handle == (uv_handle_t*)&tcp_handle);
  close_cb_called++;

libuv/test/test-tcp-close.c

  r = uv_tcp_bind(handle, (const struct sockaddr*) &addr, 0);
  ASSERT(r == 0);

  r = uv_listen((uv_stream_t*)handle, 128, connection_cb);
  ASSERT(r == 0);

  uv_unref((uv_handle_t*)handle);
}


/* Check that pending write requests have their callbacks
 * invoked when the handle is closed.
 */
TEST_IMPL(tcp_close) {
  struct sockaddr_in addr;
  uv_tcp_t tcp_server;
  uv_loop_t* loop;
  int r;

  ASSERT(0 == uv_ip4_addr("127.0.0.1", TEST_PORT, &addr));

libuv/test/test-tcp-write-queue-order.c


#define REQ_COUNT 10000

static uv_timer_t timer;
static uv_tcp_t server;
static uv_tcp_t client;
static uv_tcp_t incoming;
static int connect_cb_called;
static int close_cb_called;
static int connection_cb_called;
static int write_callbacks;
static int write_cancelled_callbacks;
static int write_error_callbacks;

static uv_write_t write_requests[REQ_COUNT];


static void close_cb(uv_handle_t* handle) {
  close_cb_called++;
}

static void timer_cb(uv_timer_t* handle) {
  uv_close((uv_handle_t*) &client, close_cb);
  uv_close((uv_handle_t*) &server, close_cb);
  uv_close((uv_handle_t*) &incoming, close_cb);
}

static void write_cb(uv_write_t* req, int status) {
  if (status == 0)
    write_callbacks++;
  else if (status == UV_ECANCELED)
    write_cancelled_callbacks++;
  else
    write_error_callbacks++;
}

static void connect_cb(uv_connect_t* req, int status) {
  static char base[1024];
  int r;
  int i;
  uv_buf_t buf;

  ASSERT(status == 0);
  connect_cb_called++;

libuv/test/test-tcp-write-queue-order.c

  ASSERT(0 == uv_tcp_connect(&connect_req,
                             &client,
                             (struct sockaddr*) &addr,
                             connect_cb));
  ASSERT(0 == uv_send_buffer_size((uv_handle_t*) &client, &buffer_size));

  ASSERT(0 == uv_run(uv_default_loop(), UV_RUN_DEFAULT));

  ASSERT(connect_cb_called == 1);
  ASSERT(connection_cb_called == 1);
  ASSERT(write_callbacks > 0);
  ASSERT(write_cancelled_callbacks > 0);
  ASSERT(write_callbacks +
         write_error_callbacks +
         write_cancelled_callbacks == REQ_COUNT);
  ASSERT(close_cb_called == 3);

  MAKE_VALGRIND_HAPPY();
  return 0;
}


