libuv/ChangeLog
* doc: fix typo in tcp.rst (Igor Soarez)
* linux: work around epoll bug in kernels < 2.6.37 (Ben Noordhuis)
* unix,win: add uv_os_homedir() (cjihrig)
* stream: fix `select()` race condition (Fedor Indutny)
* unix: prevent infinite loop in uv__run_pending (Saúl Ibarra Corretgé)
* unix: make sure UDP send callbacks are asynchronous (Saúl Ibarra Corretgé)
* test: fix `platform_output` netmask printing. (Andrew Paprocki)
* aix: add ahafs autoconf detection and README notes (Andrew Paprocki)
* core: add ability to customize memory allocator (Saúl Ibarra Corretgé)
2015.05.07, Version 1.5.0 (Stable), 4e77f74c7b95b639b3397095db1bc5bcc016c203
libuv/ChangeLog
Bentkus)
* doc: explain how the threadpool is allocated (Alex Mo)
* doc: clarify uv_default_loop (Saúl Ibarra Corretgé)
* unix: fix implicit declaration compiler warning (Ben Noordhuis)
* unix: fix long line introduced in commit 94e628fa (Ben Noordhuis)
* unix, win: add synchronous uv_get{addr,name}info (Saúl Ibarra Corretgé)
* linux: fix epoll_pwait() regression with < 2.6.19 (Ben Noordhuis)
* build: compile -D_GNU_SOURCE on linux (Ben Noordhuis)
* build: use -fvisibility=hidden in autotools build (Ben Noordhuis)
* fs, pipe: no trailing terminator in exact sized buffers (Andrius Bentkus)
* style: rename buf to buffer and len to size for consistency (Andrius Bentkus)
libuv/ChangeLog
Corretgé)
* windows: don't use atexit for cleaning up the threadpool (Saúl Ibarra
Corretgé)
* windows: destroy work queue elements when closing a loop (Saúl Ibarra
Corretgé)
* unix, windows: add uv_fs_mkdtemp (Pavel Platto)
* build: handle platforms without multiprocessing.synchronize (Saúl Ibarra
Corretgé)
* windows: change GENERIC_ALL to GENERIC_WRITE in fs__create_junction (Tony
Kelman)
* windows: relay TCP bind errors via ipc (Alexis Campailla)
2014.07.32, Version 0.10.28 (Stable), 9c14b616f5fb84bfd7d45707bab4bbb85894443e
libuv/ChangeLog
* unix: revert recent FSEvent changes (Ben Noordhuis)
* fsevents: fix clever rescheduling (Fedor Indutny)
* linux: ignore fractional time in uv_uptime() (Ben Noordhuis)
* unix: fix SIGCHLD waitpid() race in process.c (Ben Noordhuis)
* unix, windows: add uv_fs_event_start/stop functions (Saúl Ibarra Corretgé)
* unix: fix non-synchronized access in signal.c (Ben Noordhuis)
* unix: add atomic-ops.h (Ben Noordhuis)
* unix: add spinlock.h (Ben Noordhuis)
* unix: clean up uv_tty_set_mode() a little (Ben Noordhuis)
* unix: make uv_tty_reset_mode() async signal-safe (Ben Noordhuis)
* include: add E2BIG status code mapping (Ben Noordhuis)
libuv/ChangeLog
2013.10.19, Version 0.10.18 (Stable), 9ec52963b585e822e87bdc5de28d6143aff0d2e5
Changes since version 0.10.17:
* unix: fix uv_spawn() NULL pointer deref on ENOMEM (Ben Noordhuis)
* unix: don't close inherited fds on uv_spawn() fail (Ben Noordhuis)
* unix: revert recent FSEvent changes (Ben Noordhuis)
* unix: fix non-synchronized access in signal.c (Ben Noordhuis)
2013.09.25, Version 0.10.17 (Stable), 9670e0a93540c2f0d86c84a375f2303383c11e7e
Changes since version 0.10.16:
* build: remove GCC_WARN_ABOUT_MISSING_NEWLINE (Ben Noordhuis)
* darwin: fix 10.6 build error in fsevents.c (Ben Noordhuis)
libuv/README.md
![libuv][libuv_banner]
## Overview
libuv is a multi-platform support library with a focus on asynchronous I/O. It
was primarily developed for use by [Node.js][], but it's also
used by [Luvit](http://luvit.io/), [Julia](http://julialang.org/),
[pyuv](https://github.com/saghul/pyuv), and [others](https://github.com/libuv/libuv/wiki/Projects-that-use-libuv).
## Feature highlights
* Full-featured event loop backed by epoll, kqueue, IOCP, event ports.
* Asynchronous TCP and UDP sockets
* Asynchronous DNS resolution
* Asynchronous file and file system operations
* File system events
* ANSI escape code controlled TTY
* IPC with socket sharing, using Unix domain sockets or named pipes (Windows)
* Child processes
* Thread pool
* Signal handling
* High resolution clock
* Threading and synchronization primitives
## Versioning
Starting with version 1.0.0 libuv follows the [semantic versioning](http://semver.org/)
scheme. The API change and backwards compatibility rules are those indicated by
SemVer. libuv will keep a stable ABI across major releases.
The ABI/API changes can be tracked [here](http://abi-laboratory.pro/tracker/timeline/libuv/).
## Licensing
libuv/docs/code/uvcat/main.c
uv_fs_read(uv_default_loop(), &read_req, open_req.result, &iov, 1, -1, on_read);
}
}
void on_read(uv_fs_t *req) {
if (req->result < 0) {
fprintf(stderr, "Read error: %s\n", uv_strerror(req->result));
}
else if (req->result == 0) {
uv_fs_t close_req;
// synchronous
uv_fs_close(uv_default_loop(), &close_req, open_req.result, NULL);
}
else if (req->result > 0) {
iov.len = req->result;
uv_fs_write(uv_default_loop(), &write_req, 1, &iov, 1, -1, on_write);
}
}
void on_open(uv_fs_t *req) {
// The request passed to the callback is the same as the one the call setup function was passed.
libuv/docs/src/conf.py
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'libuv', u'libuv documentation',
u'libuv contributors', 'libuv', 'Cross-platform asynchronous I/O',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
libuv/docs/src/design.rst
.. _design:
Design overview
===============
libuv is a cross-platform support library which was originally written for Node.js. It's designed
around the event-driven asynchronous I/O model.
The library provides much more than a simple abstraction over different I/O polling mechanisms:
'handles' and 'streams' provide a high level abstraction for sockets and other entities;
cross-platform file I/O and threading functionality is also provided, amongst other things.
Here is a diagram illustrating the different parts that compose libuv and what subsystem they
relate to:
.. image:: static/architecture.png
:scale: 75%
libuv/docs/src/design.rst
The I/O loop
^^^^^^^^^^^^
The I/O (or event) loop is the central part of libuv. It establishes the context for all I/O
operations, and it's meant to be tied to a single thread. One can run multiple event loops
as long as each runs in a different thread. The libuv event loop (or any other API involving
the loop or handles, for that matter) **is not thread-safe** except where stated otherwise.
The event loop follows the rather usual single threaded asynchronous I/O approach: all (network)
I/O is performed on non-blocking sockets which are polled using the best mechanism available
on the given platform: epoll on Linux, kqueue on OSX and other BSDs, event ports on SunOS and IOCP
on Windows. As part of a loop iteration the loop will block waiting for I/O activity on sockets
which have been added to the poller and callbacks will be fired indicating socket conditions
(readable, writable, hangup) so handles can read, write or perform the desired I/O operation.
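The readiness model described above can be sketched without any libuv machinery, using POSIX ``poll(2)`` on a pipe (``poll_pipe_demo`` is a name invented for this illustration; libuv layers handles and callbacks over primitives like this):

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/* Make a pipe readable, block in the poller until it reports readiness,
 * then "fire the callback": here we simply read the byte inline.
 * Returns 0 on success, -1 on any failure. */
int poll_pipe_demo(void) {
  int fds[2];
  char buf[8];
  struct pollfd pfd;

  if (pipe(fds) != 0)
    return -1;

  if (write(fds[1], "x", 1) != 1)   /* make the read end readable */
    return -1;

  pfd.fd = fds[0];
  pfd.events = POLLIN;

  /* The loop's "block waiting for I/O activity" step. */
  if (poll(&pfd, 1, 1000) != 1 || !(pfd.revents & POLLIN))
    return -1;

  /* The "callback": the handle reads now that the fd is readable. */
  if (read(fds[0], buf, sizeof(buf)) != 1 || buf[0] != 'x')
    return -1;

  close(fds[0]);
  close(fds[1]);
  return 0;
}
```

libuv selects the strongest such primitive per platform (epoll, kqueue, event ports, IOCP), but the shape of each loop iteration is the same.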
In order to better understand how the event loop operates, the following diagram illustrates all
stages of a loop iteration:
.. image:: static/loop_iteration.png
libuv/docs/src/design.rst
#. Special handling in case the loop was run with ``UV_RUN_ONCE``, as it implies forward progress.
It's possible that no I/O callbacks were fired after blocking for I/O, but some time has passed,
so there might be timers which are due; those timers get their callbacks called.
#. Iteration ends. If the loop was run with ``UV_RUN_NOWAIT`` or ``UV_RUN_ONCE`` modes the
iteration ends and :c:func:`uv_run` will return. If the loop was run with ``UV_RUN_DEFAULT``
it will continue from the start if it's still *alive*, otherwise it will also end.
.. important::
libuv uses a thread pool to make asynchronous file I/O operations possible, but
network I/O is **always** performed in a single thread, each loop's thread.
.. note::
While the polling mechanism is different, libuv makes the execution model consistent
across Unix systems and Windows.
File I/O
^^^^^^^^
Unlike network I/O, there are no platform-specific file I/O primitives libuv could rely on,
so the current approach is to run blocking file I/O operations in a thread pool.
For a thorough explanation of the cross-platform file I/O landscape, check out
`this post <http://blog.libtorrent.org/2012/10/asynchronous-disk-io/>`_.
libuv currently uses a global thread pool on which all loops can queue work. Three types of
operations are currently run on this pool:
* File system operations
* DNS functions (getaddrinfo and getnameinfo)
* User specified code via :c:func:`uv_queue_work`
.. warning::
See the :c:ref:`threadpool` section for more details, but keep in mind the thread pool size
libuv/docs/src/dns.rst
.. _dns:
DNS utility functions
=====================
libuv provides asynchronous variants of `getaddrinfo` and `getnameinfo`.
Data types
----------
.. c:type:: uv_getaddrinfo_t
`getaddrinfo` request type.
.. c:type:: void (*uv_getaddrinfo_cb)(uv_getaddrinfo_t* req, int status, struct addrinfo* res)
libuv/docs/src/dns.rst
.. versionchanged:: 1.3.0 the field is declared as public.
.. seealso:: The :c:type:`uv_req_t` members also apply.
API
---
.. c:function:: int uv_getaddrinfo(uv_loop_t* loop, uv_getaddrinfo_t* req, uv_getaddrinfo_cb getaddrinfo_cb, const char* node, const char* service, const struct addrinfo* hints)
Asynchronous :man:`getaddrinfo(3)`.
Either node or service may be NULL but not both.
`hints` is a pointer to a struct addrinfo with additional address type
constraints, or NULL. Consult `man -s 3 getaddrinfo` for more details.
Returns 0 on success or an error code < 0 on failure. If successful, the
callback will get called sometime in the future with the lookup result,
which is either:
* status == 0, the res argument points to a valid `struct addrinfo`, or
* status < 0, the res argument is NULL. See the UV_EAI_* constants.
Call :c:func:`uv_freeaddrinfo` to free the addrinfo structure.
.. versionchanged:: 1.3.0 the callback parameter is now allowed to be NULL,
in which case the request will run **synchronously**.
.. c:function:: void uv_freeaddrinfo(struct addrinfo* ai)
Free the struct addrinfo. Passing NULL is allowed and is a no-op.
.. c:function:: int uv_getnameinfo(uv_loop_t* loop, uv_getnameinfo_t* req, uv_getnameinfo_cb getnameinfo_cb, const struct sockaddr* addr, int flags)
Asynchronous :man:`getnameinfo(3)`.
Returns 0 on success or an error code < 0 on failure. If successful, the
callback will get called sometime in the future with the lookup result.
Consult `man -s 3 getnameinfo` for more details.
.. versionchanged:: 1.3.0 the callback parameter is now allowed to be NULL,
in which case the request will run **synchronously**.
.. seealso:: The :c:type:`uv_req_t` API functions also apply.
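The libuv request/callback machinery aside, ``uv_getaddrinfo`` and ``uv_freeaddrinfo`` mirror the POSIX calls driven in this standalone sketch (``resolve_stream_addr`` is an invented name; ``AI_NUMERICHOST`` is used only so the demo needs no resolver):

```c
#include <netdb.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Resolve node/service to a stream address list, then free it.
 * Returns 0 on success, a nonzero getaddrinfo error otherwise. */
int resolve_stream_addr(const char* node, const char* service) {
  struct addrinfo hints;
  struct addrinfo* res = NULL;
  int rc;

  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
  hints.ai_socktype = SOCK_STREAM;
  hints.ai_flags = AI_NUMERICHOST;  /* numeric input; skip the resolver */

  rc = getaddrinfo(node, service, &hints, &res);
  if (rc != 0)
    return rc;       /* libuv maps these failures to UV_EAI_* constants */

  rc = (res != NULL && res->ai_addr != NULL) ? 0 : -1;
  freeaddrinfo(res); /* the counterpart of uv_freeaddrinfo() */
  return rc;
}
```

The asynchronous libuv variants carry the same inputs (node, service, hints) and hand the same ``struct addrinfo`` list to the callback.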
libuv/docs/src/fs.rst
.. _fs:
File system operations
======================
libuv provides a wide variety of cross-platform sync and async file system
operations. All functions defined in this document take a callback, which is
allowed to be NULL. If the callback is NULL the request is completed synchronously,
otherwise it will be performed asynchronously.
All file operations are run on the threadpool. See :ref:`threadpool` for information
on the threadpool size.
.. note::
On Windows `uv_fs_*` functions use utf-8 encoding.
Data types
----------
libuv/docs/src/fs.rst
.. c:macro:: UV_FS_O_DIRECTORY
If the path is not a directory, fail the open.
.. note::
`UV_FS_O_DIRECTORY` is not supported on Windows.
.. c:macro:: UV_FS_O_DSYNC
The file is opened for synchronous I/O. Write operations will complete once
all data and a minimum of metadata are flushed to disk.
.. note::
`UV_FS_O_DSYNC` is supported on Windows via
`FILE_FLAG_WRITE_THROUGH <https://msdn.microsoft.com/en-us/library/windows/desktop/cc644950.aspx>`_.
.. c:macro:: UV_FS_O_EXCL
If the `O_CREAT` flag is set and the file already exists, fail the open.
libuv/docs/src/fs.rst
.. note::
`UV_FS_O_SHORT_LIVED` is only supported on Windows via
`FILE_ATTRIBUTE_TEMPORARY <https://msdn.microsoft.com/en-us/library/windows/desktop/aa363858.aspx>`_.
.. c:macro:: UV_FS_O_SYMLINK
Open the symbolic link itself rather than the resource it points to.
.. c:macro:: UV_FS_O_SYNC
The file is opened for synchronous I/O. Write operations will complete once
all data and all metadata are flushed to disk.
.. note::
`UV_FS_O_SYNC` is supported on Windows via
`FILE_FLAG_WRITE_THROUGH <https://msdn.microsoft.com/en-us/library/windows/desktop/cc644950.aspx>`_.
.. c:macro:: UV_FS_O_TEMPORARY
The file is temporary and should not be flushed to disk if possible.
libuv/docs/src/guide/basics.rst
Basics of libuv
===============
libuv enforces an **asynchronous**, **event-driven** style of programming. Its
core job is to provide an event loop and callback based notifications of I/O
and other activities. libuv offers core utilities like timers, non-blocking
networking support, asynchronous file system access, child processes and more.
Event loops
-----------
In event-driven programming, an application expresses interest in certain events
and responds to them when they occur. The responsibility of gathering events
from the operating system or monitoring other sources of events is handled by
libuv, and the user can register callbacks to be invoked when an event occurs.
The event-loop usually keeps running *forever*. In pseudocode:
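```
while there are still events to process:
    e = get the next event
    if there is a callback associated with e:
        call the callback
```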
libuv/docs/src/guide/basics.rst
a disproportionately long time compared to the speed of the processor. The
functions don't return until the task is done, so your program does nothing in
the meantime. For programs which require high performance this is a major
roadblock, as other activities and other I/O operations are kept waiting.
One of the standard solutions is to use threads. Each blocking I/O operation is
started in a separate thread (or in a thread pool). When the blocking function
gets invoked in the thread, the processor can schedule another thread to run,
one that actually needs the CPU.
The approach followed by libuv uses another style, which is the **asynchronous,
non-blocking** style. Most modern operating systems provide event notification
subsystems. For example, a normal ``read`` call on a socket would block until
the sender actually sent something. Instead, the application can request the
operating system to watch the socket and put an event notification in the
queue. The application can inspect the events at its convenience (perhaps doing
some number crunching beforehand to use the processor to the maximum) and grab the
data. It is **asynchronous** because the application expressed interest at one
point, then used the data at another point (in time and space). It is
**non-blocking** because the application process was free to do other tasks.
This fits in well with libuv's event-loop approach, since the operating system
events can be treated as just another libuv event. The non-blocking nature ensures
that other events can continue to be handled as fast as they come in [#]_.
.. NOTE::
How the I/O is run in the background is not of our concern, but due to the
way our computer hardware works, with the thread as the basic unit of the
libuv/docs/src/guide/basics.rst
.. note::
node.js uses the default loop as its main loop. If you are writing bindings
you should be aware of this.
.. _libuv-error-handling:
Error handling
--------------
Initialization functions or synchronous functions which may fail return a negative number on error. Async functions that may fail will pass a status parameter to their callbacks. The error messages are defined as ``UV_E*`` `constants`_.
.. _constants: http://docs.libuv.org/en/v1.x/errors.html#error-constants
You can use the ``uv_strerror(int)`` and ``uv_err_name(int)`` functions
to get a ``const char *`` describing the error or the error name respectively.
I/O read callbacks (such as for files and sockets) are passed a parameter ``nread``. If ``nread`` is less than 0, there was an error (UV_EOF is the end of file error, which you may want to handle differently).
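The negative-number convention can be sketched with a plain ``read()`` wrapper (``try_read`` is invented for this illustration; libuv's own wrappers follow the same shape, with ``UV_E*`` constants in place of raw negated ``errno`` values):

```c
#include <assert.h>
#include <errno.h>
#include <unistd.h>

/* Returns the byte count on success, 0 at end of file, and a negative
 * number on error, mirroring the nread convention described above. */
ssize_t try_read(int fd, void* buf, size_t len) {
  ssize_t n = read(fd, buf, len);
  if (n < 0)
    return -errno; /* negative signals the error, like UV_E* codes */
  return n;        /* >= 0 bytes read; 0 plays the role of UV_EOF */
}
```

The caller branches on sign exactly as a libuv read callback branches on ``nread``.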
Handles and Requests
--------------------
libuv/docs/src/guide/filesystem.rst
The libuv filesystem operations are different from :doc:`socket operations
<networking>`. Socket operations use the non-blocking operations provided
by the operating system. Filesystem operations use blocking functions
internally, but invoke these functions in a `thread pool`_ and notify
watchers registered with the event loop when application interaction is
required.
.. _thread pool: http://docs.libuv.org/en/v1.x/threadpool.html#thread-pool-work-scheduling
All filesystem functions have two forms - *synchronous* and *asynchronous*.
The *synchronous* forms automatically get called (and **block**) if the
callback is null. The return value of functions is a :ref:`libuv error code
<libuv-error-handling>`. This is usually only useful for synchronous calls.
The *asynchronous* form is called when a callback is passed and the return
value is 0.
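The null-callback rule can be sketched with an invented stand-in (``demo_op`` and ``demo_cb`` are not libuv names; they only show the dispatch shape the ``uv_fs_*`` functions share):

```c
#include <assert.h>
#include <stddef.h>

typedef void (*demo_cb)(int result);

static int last_result = 0;
static void record(int result) { last_result = result; }

/* When cb is NULL, run synchronously and return the result directly;
 * otherwise deliver the result through the callback and return 0. */
int demo_op(demo_cb cb) {
  int result = 42; /* stand-in for the operation's outcome */
  if (cb == NULL)
    return result; /* synchronous form: caller reads the return value */
  cb(result);      /* asynchronous form: result reaches the callback */
  return 0;
}
```

In real libuv the asynchronous branch queues the work instead of calling the callback inline, but the caller-visible contract is the same.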
Reading/Writing files
---------------------
A file descriptor is obtained using
.. code-block:: c
int uv_fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode, uv_fs_cb cb)
libuv/docs/src/guide/filesystem.rst
:linenos:
:lines: 26-40
:emphasize-lines: 2,8,12
In the case of a read call, you should pass an *initialized* buffer which will
be filled with data before the read callback is triggered. The ``uv_fs_*``
operations map almost directly to certain POSIX functions, so EOF is indicated
in this case by ``result`` being 0. In the case of streams or pipes, the
``UV_EOF`` constant would have been passed as a status instead.
Here you see a common pattern when writing asynchronous programs. The
``uv_fs_close()`` call is performed synchronously. *Usually tasks which are
one-off, or are done as part of the startup or shutdown stage are performed
synchronously, since we are interested in fast I/O when the program is going
about its primary task and dealing with multiple I/O sources*. For solo tasks
the performance difference is usually negligible, and synchronous calls can lead to simpler code.
Filesystem writing is similarly simple using ``uv_fs_write()``. *Your callback
will be triggered after the write is complete*. In our case the callback
simply drives the next read. Thus read and write proceed in lockstep via
callbacks.
.. rubric:: uvcat/main.c - write callback
.. literalinclude:: ../../code/uvcat/main.c
libuv/docs/src/guide/filesystem.rst
.. warning::
The ``uv_fs_req_cleanup()`` function must always be called on filesystem
requests to free internal memory allocations in libuv.
Filesystem operations
---------------------
All the standard filesystem operations like ``unlink``, ``rmdir``, ``stat`` are
supported asynchronously and have intuitive argument order. They follow the
same patterns as the read/write/open calls, returning the result in the
``uv_fs_t.result`` field. The full list:
.. rubric:: Filesystem operations
.. literalinclude:: ../../../include/uv.h
:lines: 1084-1195
.. _buffers-and-streams:
Buffers and Streams
libuv/docs/src/guide/filesystem.rst
(``uv_buf_t.len``). The ``uv_buf_t`` is lightweight and passed around by value.
What does require management is the actual bytes, which have to be allocated
and freed by the application.
.. ERROR::
This program does not always work; it needs something better.
To demonstrate streams we will need to use ``uv_pipe_t``. This allows streaming
local files [#]_. Here is a simple tee utility using libuv. Doing all operations
asynchronously shows the power of evented I/O. The two writes won't block each
other, but we have to be careful to copy over the buffer data to ensure we don't
free a buffer until it has been written.
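That copying rule can be sketched on its own (``demo_buf`` and ``dup_buf`` are invented names; the point is simply that each in-flight write owns a private copy of the data):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
  char* base;
  size_t len;
} demo_buf;

/* Make an independent heap copy of the data for one in-flight write,
 * so freeing one write's buffer cannot invalidate another's. */
demo_buf dup_buf(const char* data, size_t len) {
  demo_buf b;
  b.base = malloc(len);
  b.len = len;
  if (b.base != NULL)
    memcpy(b.base, data, len);
  return b;
}
```

Each of the two writes in the tee gets its own copy via this pattern, and each copy is freed only in that write's completion callback.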
The program is to be executed as::
./uvtee <output_file>
We start off opening pipes on the files we require. libuv pipes to a file are
opened as bidirectional by default.
libuv/docs/src/guide/introduction.rst
Who this book is for
--------------------
If you are reading this book, you are either:
1) a systems programmer, creating low-level programs such as daemons or network
services and clients. You have found that the event loop approach is well
suited for your application and decided to use libuv.
2) a node.js module writer, who wants to wrap platform APIs
written in C or C++ with a set of (a)synchronous APIs that are exposed to
JavaScript. You will use libuv purely in the context of node.js. For
this you will require some other resources as the book does not cover parts
specific to v8/node.js.
This book assumes that you are comfortable with the C programming language.
Background
----------
The node.js_ project began in 2009 as a JavaScript environment decoupled
libuv/docs/src/guide/networking.rst
:emphasize-lines: 4-5,7-10
You can see the utility function ``uv_ip4_addr`` being used to convert from
a human readable IP address, port pair to the sockaddr_in structure required by
the BSD socket APIs. The reverse can be obtained using ``uv_ip4_name``.
.. NOTE::
There are ``uv_ip6_*`` analogues for the ip4 functions.
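What ``uv_ip4_addr`` does can be approximated by hand with the BSD calls it wraps (``make_ip4_addr`` is an invented name for this sketch):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>

/* Fill a sockaddr_in from a dotted-quad string and host-order port.
 * Returns 0 on success, -1 if the address string is invalid. */
int make_ip4_addr(const char* ip, int port, struct sockaddr_in* addr) {
  memset(addr, 0, sizeof(*addr));
  addr->sin_family = AF_INET;
  addr->sin_port = htons((unsigned short) port);
  if (inet_pton(AF_INET, ip, &addr->sin_addr) != 1)
    return -1; /* not a valid IPv4 address */
  return 0;
}
```

``uv_ip4_name`` performs the reverse translation, much as ``inet_ntop`` does here.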
Most of the setup functions are synchronous since they are CPU-bound.
``uv_listen`` is where we return to libuv's callback style. The second
argument is the backlog queue -- the maximum length of queued connections.
When a connection is initiated by clients, the callback is required to set up
a handle for the client socket and associate the handle using ``uv_accept``.
In this case we also establish interest in reading from this stream.
.. rubric:: tcp-echo-server/main.c - Accepting the client
.. literalinclude:: ../../code/tcp-echo-server/main.c
:linenos:
libuv/docs/src/guide/networking.rst
Local loopback of multicast packets is enabled by default [#]_, use
``uv_udp_set_multicast_loop`` to switch it off.
The packet time-to-live for multicast packets can be changed using
``uv_udp_set_multicast_ttl``.
Querying DNS
------------
libuv provides asynchronous DNS resolution. For this it provides its own
``getaddrinfo`` replacement [#]_. In the callback you can
perform normal socket operations on the retrieved addresses. Let's connect to
Freenode to see an example of DNS resolution.
.. rubric:: dns/main.c
.. literalinclude:: ../../code/dns/main.c
:linenos:
:lines: 61-
:emphasize-lines: 12
If ``uv_getaddrinfo`` returns non-zero, something went wrong in the setup and
your callback won't be invoked at all. All arguments can be freed immediately
after ``uv_getaddrinfo`` returns. The `hostname`, `servname` and `hints`
structures are documented in `the getaddrinfo man page <getaddrinfo>`_. The
callback can be ``NULL`` in which case the function will run synchronously.
In the resolver callback, you can pick any IP from the linked list of ``struct
addrinfo(s)``. This also demonstrates ``uv_tcp_connect``. It is necessary to
call ``uv_freeaddrinfo`` in the callback.
.. rubric:: dns/main.c
.. literalinclude:: ../../code/dns/main.c
:linenos:
:lines: 42-60
:emphasize-lines: 8,16
libuv/docs/src/guide/threads.rst
Threads
=======
Wait a minute? Why are we on threads? Aren't event loops supposed to be **the
way** to do *web-scale programming*? Well... no. Threads are still the medium in
which processors do their jobs. Threads are therefore mighty useful sometimes, even
though you might have to wade through various synchronization primitives.
Threads are used internally to fake the asynchronous nature of all of the system
calls. libuv also uses threads to allow you, the application, to perform a task
asynchronously that is actually blocking, by spawning a thread and collecting
the result when it is done.
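The underlying pattern, run a blocking task on a thread and collect its result, can be sketched with raw pthreads (``run_off_thread`` and ``blocking_task`` are invented names; libuv's thread pool additionally signals the event loop when work finishes rather than joining):

```c
#include <assert.h>
#include <pthread.h>

/* The blocking work: here just a stand-in computation. */
static void* blocking_task(void* arg) {
  int* out = arg;
  *out = 7; /* stand-in for a slow, blocking result */
  return NULL;
}

/* Spawn the task on its own thread, wait for it, return its result,
 * or -1 if the thread could not be created. */
int run_off_thread(void) {
  pthread_t t;
  int result = 0;
  if (pthread_create(&t, NULL, blocking_task, &result) != 0)
    return -1;
  pthread_join(t, NULL); /* libuv instead notifies the loop on completion */
  return result;
}
```

``uv_queue_work`` packages exactly this idea: the work callback runs off-thread, and the after-work callback runs back on the loop thread.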
Today there are two predominant thread libraries: the Windows threads
implementation and POSIX's `pthreads`_. libuv's thread API is analogous to
the pthreads API and often has similar semantics.
A notable aspect of libuv's thread facilities is that it is a self-contained
section within libuv. Whereas other features intimately depend on the event
loop and callback principles, threads are completely agnostic: they block as
required, signal errors directly via return values, and, as shown in the
libuv/docs/src/handle.rst
.. c:function:: int uv_is_closing(const uv_handle_t* handle)
Returns non-zero if the handle is closing or closed, zero otherwise.
.. note::
This function should only be used between the initialization of the handle and the
arrival of the close callback.
.. c:function:: void uv_close(uv_handle_t* handle, uv_close_cb close_cb)
Request handle to be closed. `close_cb` will be called asynchronously after
this call. This MUST be called on each handle before memory is released.
Moreover, the memory can only be released in `close_cb` or after it has
returned.
Handles that wrap file descriptors are closed immediately but
`close_cb` will still be deferred to the next iteration of the event loop.
It gives you a chance to free up any resources associated with the handle.
In-progress requests, like uv_connect_t or uv_write_t, are cancelled and
have their callbacks called asynchronously with status=UV_ECANCELED.
.. c:function:: void uv_ref(uv_handle_t* handle)
Reference the given handle. References are idempotent, that is, if a handle
is already referenced calling this function again will have no effect.
See :ref:`refcount`.
.. c:function:: void uv_unref(uv_handle_t* handle)
libuv/docs/src/index.rst
Welcome to the libuv documentation
==================================
Overview
--------
libuv is a multi-platform support library with a focus on asynchronous I/O. It
was primarily developed for use by `Node.js`_, but it's also used by `Luvit`_,
`Julia`_, `pyuv`_, and `others`_.
.. note::
In case you find errors in this documentation you can help by sending
`pull requests <https://github.com/libuv/libuv>`_!
.. _Node.js: http://nodejs.org
.. _Luvit: http://luvit.io
.. _Julia: http://julialang.org
.. _pyuv: https://github.com/saghul/pyuv
.. _others: https://github.com/libuv/libuv/wiki/Projects-that-use-libuv
Features
--------
* Full-featured event loop backed by epoll, kqueue, IOCP, event ports.
* Asynchronous TCP and UDP sockets
* Asynchronous DNS resolution
* Asynchronous file and file system operations
* File system events
* ANSI escape code controlled TTY
* IPC with socket sharing, using Unix domain sockets or named pipes (Windows)
* Child processes
* Thread pool
* Signal handling
* High resolution clock
* Threading and synchronization primitives
Documentation
-------------
.. toctree::
:maxdepth: 1
design
api
libuv/docs/src/stream.rst
Returns 1 if the stream is readable, 0 otherwise.
.. c:function:: int uv_is_writable(const uv_stream_t* handle)
Returns 1 if the stream is writable, 0 otherwise.
.. c:function:: int uv_stream_set_blocking(uv_stream_t* handle, int blocking)
Enable or disable blocking mode for a stream.
When blocking mode is enabled all writes complete synchronously. The
interface remains unchanged otherwise, e.g. completion or failure of the
operation will still be reported through a callback which is made
asynchronously.
.. warning::
Relying too much on this API is not recommended. It is likely to change
significantly in the future.
Currently only works on Windows for :c:type:`uv_pipe_t` handles.
On UNIX platforms, all :c:type:`uv_stream_t` handles are supported.
Also libuv currently makes no ordering guarantee when the blocking mode
is changed after write requests have already been submitted. Therefore it is
libuv/docs/src/tcp.rst
Enable `TCP_NODELAY`, which disables Nagle's algorithm.
.. c:function:: int uv_tcp_keepalive(uv_tcp_t* handle, int enable, unsigned int delay)
Enable / disable TCP keep-alive. `delay` is the initial delay in seconds,
ignored when `enable` is zero.
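On Unix this call reduces to socket options; an approximate standalone sketch (``set_keepalive`` is an invented name, and ``TCP_KEEPIDLE`` availability varies by platform):

```c
#include <assert.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Enable or disable keep-alive on a socket; when enabling, also set the
 * initial idle delay in seconds where the platform supports it. */
int set_keepalive(int fd, int enable, unsigned int delay) {
  if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enable, sizeof(enable)))
    return -1;
#ifdef TCP_KEEPIDLE
  if (enable &&
      setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &delay, sizeof(delay)))
    return -1;
#endif
  return 0;
}
```

The libuv call applies the equivalent settings on the handle's underlying socket for you.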
.. c:function:: int uv_tcp_simultaneous_accepts(uv_tcp_t* handle, int enable)
Enable / disable simultaneous asynchronous accept requests that are
queued by the operating system when listening for new TCP connections.
This setting is used to tune a TCP server for the desired performance.
Having simultaneous accepts can significantly improve the rate of accepting
connections (which is why it is enabled by default) but may lead to uneven
load distribution in multi-process setups.
.. c:function:: int uv_tcp_bind(uv_tcp_t* handle, const struct sockaddr* addr, unsigned int flags)
Bind the handle to an address and port. `addr` should point to an
libuv/docs/src/threading.rst
.. _threading:
Threading and synchronization utilities
=======================================
libuv provides cross-platform implementations for multiple threading and
synchronization primitives. The API largely follows the pthreads API.
Data types
----------
.. c:type:: uv_thread_t
Thread data type.
.. c:type:: void (*uv_thread_cb)(void* arg)
libuv/gyp_uv.py
#!/usr/bin/env python
import os
import platform
import sys
try:
import multiprocessing.synchronize
gyp_parallel_support = True
except ImportError:
gyp_parallel_support = False
CC = os.environ.get('CC', 'cc')
script_dir = os.path.dirname(__file__)
uv_root = os.path.normpath(script_dir)
output_dir = os.path.join(os.path.abspath(uv_root), 'out')
libuv/gyp_uv.py
if not any(a.startswith('-Dhost_arch=') for a in args):
args.append('-Dhost_arch=%s' % host_arch())
if not any(a.startswith('-Dtarget_arch=') for a in args):
args.append('-Dtarget_arch=%s' % host_arch())
if not any(a.startswith('-Duv_library=') for a in args):
args.append('-Duv_library=static_library')
# Some platforms (OpenBSD for example) don't have multiprocessing.synchronize
# so gyp must be run with --no-parallel
if not gyp_parallel_support:
args.append('--no-parallel')
gyp_args = list(args)
print(gyp_args)
run_gyp(gyp_args)
libuv/libuv.pc.in
prefix=@prefix@
exec_prefix=${prefix}
libdir=@libdir@
includedir=@includedir@
Name: libuv
Version: @PACKAGE_VERSION@
Description: multi-platform support library with a focus on asynchronous I/O.
URL: http://libuv.org/
Libs: -L${libdir} -luv @LIBS@
Cflags: -I${includedir}
libuv/src/unix/kqueue.c
if (r == 0) {
uv__handle_start(handle);
} else {
uv__free(handle->path);
handle->path = NULL;
}
return r;
}
#endif /* defined(__APPLE__) */
/* TODO open asynchronously - but how do we report back errors? */
fd = open(path, O_RDONLY);
if (fd == -1)
return UV__ERR(errno);
handle->path = uv__strdup(path);
if (handle->path == NULL) {
uv__close_nocheckstdio(fd);
return UV_ENOMEM;
}
libuv/src/unix/stream.c
static void uv__stream_connect(uv_stream_t* stream) {
int error;
uv_connect_t* req = stream->connect_req;
socklen_t errorsize = sizeof(int);
assert(stream->type == UV_TCP || stream->type == UV_NAMED_PIPE);
assert(req);
if (stream->delayed_error) {
/* To smooth over the differences between unixes, errors that
* were reported synchronously on the first connect can be delayed
* until the next tick--which is now.
*/
error = stream->delayed_error;
stream->delayed_error = 0;
} else {
/* Normal situation: we need to get the socket error from the kernel. */
assert(uv__stream_fd(stream) >= 0);
getsockopt(uv__stream_fd(stream),
SOL_SOCKET,
SO_ERROR,
libuv/src/unix/udp.c
memcpy(req->bufs, bufs, nbufs * sizeof(bufs[0]));
handle->send_queue_size += uv__count_bufs(req->bufs, req->nbufs);
handle->send_queue_count++;
QUEUE_INSERT_TAIL(&handle->write_queue, &req->queue);
uv__handle_start(handle);
if (empty_queue && !(handle->flags & UV_HANDLE_UDP_PROCESSING)) {
uv__udp_sendmsg(handle);
/* `uv__udp_sendmsg` may not be able to do non-blocking write straight
* away. In such cases the `io_watcher` has to be queued for asynchronous
* write.
*/
if (!QUEUE_EMPTY(&handle->write_queue))
uv__io_start(handle->loop, &handle->io_watcher, POLLOUT);
} else {
uv__io_start(handle->loop, &handle->io_watcher, POLLOUT);
}
return 0;
}
libuv/src/win/pipe.c:
BOOL r;
if (!(handle->flags & UV_HANDLE_READ_PENDING))
return; /* No pending reads. */
if (handle->flags & UV_HANDLE_CANCELLATION_PENDING)
return; /* Already cancelled. */
if (handle->handle == INVALID_HANDLE_VALUE)
return; /* Pipe handle closed. */
if (!(handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE)) {
/* Cancel asynchronous read. */
r = CancelIoEx(handle->handle, &handle->read_req.u.io.overlapped);
assert(r || GetLastError() == ERROR_NOT_FOUND);
} else {
/* Cancel synchronous read (which is happening in the thread pool). */
HANDLE thread;
volatile HANDLE* thread_ptr = &handle->pipe.conn.readfile_thread_handle;
EnterCriticalSection(&handle->pipe.conn.readfile_thread_lock);
thread = *thread_ptr;
if (thread == NULL) {
/* The thread pool thread has not yet reached the point of blocking; we
 * can pre-empt it by setting thread_handle to INVALID_HANDLE_VALUE. */
*thread_ptr = INVALID_HANDLE_VALUE;
libuv/src/win/pipe.c:
uv__once_init();
name_info = NULL;
if (handle->handle == INVALID_HANDLE_VALUE) {
*size = 0;
return UV_EINVAL;
}
/* NtQueryInformationFile will block if another thread is performing a
* blocking operation on the queried handle. If the pipe handle is
* synchronous, there may be a worker thread currently calling ReadFile() on
* the pipe handle, which could cause a deadlock. To avoid this, interrupt
* the read. */
if (handle->flags & UV_HANDLE_CONNECTION &&
handle->flags & UV_HANDLE_NON_OVERLAPPED_PIPE) {
uv__pipe_interrupt_read((uv_pipe_t*) handle); /* cast away const warning */
}
nt_status = pNtQueryInformationFile(handle->handle,
&io_status,
&tmp_name_info,
libuv/src/win/process.c:
process_flags,
env,
cwd,
&startup,
&info)) {
/* CreateProcessW failed. */
err = GetLastError();
goto done;
}
/* Spawn succeeded. Beyond this point, failure is reported asynchronously. */
process->process_handle = info.hProcess;
process->pid = info.dwProcessId;
/* If the process isn't spawned as detached, assign to the global job object
* so windows will kill it when the parent process dies. */
if (!(options->flags & UV_PROCESS_DETACHED)) {
uv_once(&uv_global_job_handle_init_guard_, uv__init_global_job_handle);
if (!AssignProcessToJobObject(uv_global_job_handle_, info.hProcess)) {
libuv/test/test-fs-copyfile.c:
unlink(dst);
r = uv_fs_copyfile(NULL, &req, src, dst, 0, NULL);
ASSERT(req.result == UV_ENOENT);
ASSERT(r == UV_ENOENT);
uv_fs_req_cleanup(&req);
/* The destination should not exist. */
r = uv_fs_stat(NULL, &req, dst, NULL);
ASSERT(r != 0);
uv_fs_req_cleanup(&req);
/* Copies file synchronously. Creates new file. */
unlink(dst);
r = uv_fs_copyfile(NULL, &req, fixture, dst, 0, NULL);
ASSERT(r == 0);
handle_result(&req);
/* Copies a file of size zero. */
unlink(dst);
touch_file(src, 0);
r = uv_fs_copyfile(NULL, &req, src, dst, 0, NULL);
ASSERT(r == 0);
handle_result(&req);
/* Copies file synchronously. Overwrites existing file. */
r = uv_fs_copyfile(NULL, &req, fixture, dst, 0, NULL);
ASSERT(r == 0);
handle_result(&req);
/* Fails to overwrite an existing file. */
r = uv_fs_copyfile(NULL, &req, fixture, dst, UV_FS_COPYFILE_EXCL, NULL);
ASSERT(r == UV_EEXIST);
uv_fs_req_cleanup(&req);
/* Truncates when an existing destination is larger than the source file. */
libuv/test/test-fs-copyfile.c:
handle_result(&req);
/* Copies a larger file. */
unlink(dst);
touch_file(src, 4096 * 2);
r = uv_fs_copyfile(NULL, &req, src, dst, 0, NULL);
ASSERT(r == 0);
handle_result(&req);
unlink(src);
/* Copies the file asynchronously. */
unlink(dst);
r = uv_fs_copyfile(loop, &req, fixture, dst, 0, handle_result);
ASSERT(r == 0);
ASSERT(result_check_count == 5);
uv_run(loop, UV_RUN_DEFAULT);
ASSERT(result_check_count == 6);
/* If the flags are invalid, the loop should not be kept open */
unlink(dst);
r = uv_fs_copyfile(loop, &req, fixture, dst, -1, fail_cb);
libuv/test/test-fs-readdir.c:
r = uv_fs_readdir(uv_default_loop(),
&readdir_req,
dir,
empty_readdir_cb);
ASSERT(r == 0);
uv_fs_req_cleanup(req);
++empty_opendir_cb_count;
}
/*
* This test makes sure that both synchronous and asynchronous flavors
* of the uv_fs_opendir() -> uv_fs_readdir() -> uv_fs_closedir() sequence work
* as expected when processing an empty directory.
*/
TEST_IMPL(fs_readdir_empty_dir) {
const char* path;
uv_fs_t mkdir_req;
uv_fs_t rmdir_req;
int r;
int nb_entries_read;
uv_dir_t* dir;
path = "./empty_dir/";
uv_fs_mkdir(uv_default_loop(), &mkdir_req, path, 0777, NULL);
uv_fs_req_cleanup(&mkdir_req);
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
/* Testing the synchronous flavor. */
r = uv_fs_opendir(uv_default_loop(),
&opendir_req,
path,
NULL);
ASSERT(r == 0);
ASSERT(opendir_req.fs_type == UV_FS_OPENDIR);
ASSERT(opendir_req.result == 0);
ASSERT(opendir_req.ptr != NULL);
dir = opendir_req.ptr;
uv_fs_req_cleanup(&opendir_req);
libuv/test/test-fs-readdir.c:
NULL);
ASSERT(nb_entries_read == 0);
uv_fs_req_cleanup(&readdir_req);
/* Fill the req to ensure that required fields are cleaned up. */
memset(&closedir_req, 0xdb, sizeof(closedir_req));
uv_fs_closedir(uv_default_loop(), &closedir_req, dir, NULL);
ASSERT(closedir_req.result == 0);
uv_fs_req_cleanup(&closedir_req);
/* Testing the asynchronous flavor. */
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
memset(&readdir_req, 0xdb, sizeof(readdir_req));
memset(&closedir_req, 0xdb, sizeof(closedir_req));
r = uv_fs_opendir(uv_default_loop(), &opendir_req, path, empty_opendir_cb);
ASSERT(r == 0);
ASSERT(empty_opendir_cb_count == 0);
ASSERT(empty_closedir_cb_count == 0);
libuv/test/test-fs-readdir.c:
TEST_IMPL(fs_readdir_non_existing_dir) {
const char* path;
int r;
path = "./non-existing-dir/";
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
/* Testing the synchronous flavor. */
r = uv_fs_opendir(uv_default_loop(), &opendir_req, path, NULL);
ASSERT(r == UV_ENOENT);
ASSERT(opendir_req.fs_type == UV_FS_OPENDIR);
ASSERT(opendir_req.result == UV_ENOENT);
ASSERT(opendir_req.ptr == NULL);
uv_fs_req_cleanup(&opendir_req);
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
libuv/test/test-fs-readdir.c:
TEST_IMPL(fs_readdir_file) {
const char* path;
int r;
path = "test/fixtures/empty_file";
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
/* Testing the synchronous flavor. */
r = uv_fs_opendir(uv_default_loop(), &opendir_req, path, NULL);
ASSERT(r == UV_ENOTDIR);
ASSERT(opendir_req.fs_type == UV_FS_OPENDIR);
ASSERT(opendir_req.result == UV_ENOTDIR);
ASSERT(opendir_req.ptr == NULL);
uv_fs_req_cleanup(&opendir_req);
/* Fill the req to ensure that required fields are cleaned up. */
libuv/test/test-fs-readdir.c:
uv_fs_t create_req;
uv_fs_t close_req;
uv_dir_t* dir;
int r;
cleanup_test_files();
r = uv_fs_mkdir(uv_default_loop(), &mkdir_req, "test_dir", 0755, NULL);
ASSERT(r == 0);
/* Create two files synchronously. */
r = uv_fs_open(uv_default_loop(),
&create_req,
"test_dir/file1",
O_WRONLY | O_CREAT, S_IWUSR | S_IRUSR,
NULL);
ASSERT(r >= 0);
uv_fs_req_cleanup(&create_req);
r = uv_fs_close(uv_default_loop(),
&close_req,
create_req.result,
libuv/test/test-fs-readdir.c:
&mkdir_req,
"test_dir/test_subdir",
0755,
NULL);
ASSERT(r == 0);
uv_fs_req_cleanup(&mkdir_req);
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
/* Testing the synchronous flavor. */
r = uv_fs_opendir(uv_default_loop(), &opendir_req, "test_dir", NULL);
ASSERT(r == 0);
ASSERT(opendir_req.fs_type == UV_FS_OPENDIR);
ASSERT(opendir_req.result == 0);
ASSERT(opendir_req.ptr != NULL);
entries_count = 0;
dir = opendir_req.ptr;
dir->dirents = dirents;
dir->nentries = ARRAY_SIZE(dirents);
libuv/test/test-fs-readdir.c:
ASSERT(entries_count == 3);
uv_fs_req_cleanup(&readdir_req);
/* Fill the req to ensure that required fields are cleaned up. */
memset(&closedir_req, 0xdb, sizeof(closedir_req));
uv_fs_closedir(uv_default_loop(), &closedir_req, dir, NULL);
ASSERT(closedir_req.result == 0);
uv_fs_req_cleanup(&closedir_req);
/* Testing the asynchronous flavor. */
/* Fill the req to ensure that required fields are cleaned up. */
memset(&opendir_req, 0xdb, sizeof(opendir_req));
r = uv_fs_opendir(uv_default_loop(),
&opendir_req,
"test_dir",
non_empty_opendir_cb);
ASSERT(r == 0);
ASSERT(non_empty_opendir_cb_count == 0);
libuv/test/test-fs.c:
rmdir("test_dir");
loop = uv_default_loop();
r = uv_fs_mkdir(loop, &mkdir_req, "test_dir", 0755, mkdir_cb);
ASSERT(r == 0);
uv_run(loop, UV_RUN_DEFAULT);
ASSERT(mkdir_cb_count == 1);
/* Create 2 files synchronously. */
r = uv_fs_open(NULL, &open_req1, "test_dir/file1", O_WRONLY | O_CREAT,
S_IWUSR | S_IRUSR, NULL);
ASSERT(r >= 0);
uv_fs_req_cleanup(&open_req1);
r = uv_fs_close(NULL, &close_req, open_req1.result, NULL);
ASSERT(r == 0);
uv_fs_req_cleanup(&close_req);
r = uv_fs_open(NULL, &open_req1, "test_dir/file2", O_WRONLY | O_CREAT,
S_IWUSR | S_IRUSR, NULL);
libuv/test/test-fs.c:
* because we just created the file. On older kernels, it's set to zero.
*/
ASSERT(s->st_birthtim.tv_sec == 0 ||
s->st_birthtim.tv_sec == t.st_ctim.tv_sec);
ASSERT(s->st_birthtim.tv_nsec == 0 ||
s->st_birthtim.tv_nsec == t.st_ctim.tv_nsec);
#endif
uv_fs_req_cleanup(&req);
/* Now do the uv_fs_fstat call asynchronously */
r = uv_fs_fstat(loop, &req, file, fstat_cb);
ASSERT(r == 0);
uv_run(loop, UV_RUN_DEFAULT);
ASSERT(fstat_cb_count == 1);
r = uv_fs_close(NULL, &req, file, NULL);
ASSERT(r == 0);
ASSERT(req.result == 0);
uv_fs_req_cleanup(&req);
libuv/test/test-mutexes.c:
uv_rwlock_rdunlock(&rwlock);
uv_rwlock_wrlock(&rwlock);
uv_rwlock_wrunlock(&rwlock);
uv_rwlock_destroy(&rwlock);
return 0;
}
/* Call when holding |mutex|. */
static void synchronize_nowait(void) {
step += 1;
uv_cond_signal(&condvar);
}
/* Call when holding |mutex|. */
static void synchronize(void) {
int current;
synchronize_nowait();
/* Wait for the other thread. Guard against spurious wakeups. */
for (current = step; current == step; uv_cond_wait(&condvar, &mutex));
ASSERT(step == current + 1);
}
static void thread_rwlock_trylock_peer(void* unused) {
(void) &unused;
uv_mutex_lock(&mutex);
/* Write lock held by other thread. */
ASSERT(UV_EBUSY == uv_rwlock_tryrdlock(&rwlock));
ASSERT(UV_EBUSY == uv_rwlock_trywrlock(&rwlock));
synchronize();
/* Read lock held by other thread. */
ASSERT(0 == uv_rwlock_tryrdlock(&rwlock));
uv_rwlock_rdunlock(&rwlock);
ASSERT(UV_EBUSY == uv_rwlock_trywrlock(&rwlock));
synchronize();
/* Acquire write lock. */
ASSERT(0 == uv_rwlock_trywrlock(&rwlock));
synchronize();
/* Release write lock and acquire read lock. */
uv_rwlock_wrunlock(&rwlock);
ASSERT(0 == uv_rwlock_tryrdlock(&rwlock));
synchronize();
uv_rwlock_rdunlock(&rwlock);
synchronize_nowait(); /* Signal main thread we're going away. */
uv_mutex_unlock(&mutex);
}
TEST_IMPL(thread_rwlock_trylock) {
uv_thread_t thread;
ASSERT(0 == uv_cond_init(&condvar));
ASSERT(0 == uv_mutex_init(&mutex));
ASSERT(0 == uv_rwlock_init(&rwlock));
uv_mutex_lock(&mutex);
ASSERT(0 == uv_thread_create(&thread, thread_rwlock_trylock_peer, NULL));
/* Hold write lock. */
ASSERT(0 == uv_rwlock_trywrlock(&rwlock));
synchronize(); /* Releases the mutex to the other thread. */
/* Release write lock and acquire read lock. Pthreads doesn't support
* the notion of upgrading or downgrading rwlocks, so neither do we.
*/
uv_rwlock_wrunlock(&rwlock);
ASSERT(0 == uv_rwlock_tryrdlock(&rwlock));
synchronize();
/* Release read lock. */
uv_rwlock_rdunlock(&rwlock);
synchronize();
/* Write lock held by other thread. */
ASSERT(UV_EBUSY == uv_rwlock_tryrdlock(&rwlock));
ASSERT(UV_EBUSY == uv_rwlock_trywrlock(&rwlock));
synchronize();
/* Read lock held by other thread. */
ASSERT(0 == uv_rwlock_tryrdlock(&rwlock));
uv_rwlock_rdunlock(&rwlock);
ASSERT(UV_EBUSY == uv_rwlock_trywrlock(&rwlock));
synchronize();
ASSERT(0 == uv_thread_join(&thread));
uv_rwlock_destroy(&rwlock);
uv_mutex_unlock(&mutex);
uv_mutex_destroy(&mutex);
uv_cond_destroy(&condvar);
return 0;
}