...one of the most highly regarded and expertly designed C++ library projects in the world.
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Copyright © 2003-2016 Christopher M. Kohlhoff
Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
Table of Contents
Boost.Asio is a cross-platform C++ library for network and low-level I/O programming that provides developers with a consistent asynchronous model using a modern C++ approach.
— An overview of the features included in Boost.Asio, plus rationale and design information.
— How to use Boost.Asio in your applications. Includes information on library dependencies and supported platforms.
— A tutorial that introduces the fundamental concepts required to use Boost.Asio, and shows how to use Boost.Asio to develop simple client and server programs.
— Examples that illustrate the use of Boost.Asio in more complex applications.
— Detailed class and function reference.
— Log of Boost.Asio changes made in each Boost release.
— Book-style text index of Boost.Asio documentation.
Most programs interact with the outside world in some way, whether it be via a file, a network, a serial cable, or the console. Sometimes, as is the case with networking, individual I/O operations can take a long time to complete. This poses particular challenges to application development.
Boost.Asio provides the tools to manage these long running operations, without requiring programs to use concurrency models based on threads and explicit locking.
The Boost.Asio library is intended for programmers using C++ for systems programming, where access to operating system functionality such as networking is often required. In particular, Boost.Asio addresses goals such as portability, scalability, efficiency and ease of use.
Although Boost.Asio started life focused primarily on networking, its concepts of asynchronous I/O have been extended to include other operating system resources such as serial ports, file descriptors, and so on.
Boost.Asio may be used to perform both synchronous and asynchronous operations on I/O objects such as sockets. Before using Boost.Asio it may be useful to get a conceptual picture of the various parts of Boost.Asio, your program, and how they work together.
As an introductory example, let's consider what happens when you perform a connect operation on a socket. We shall start by examining synchronous operations.
Your program will have at least one io_service object. The io_service represents your program's link to the operating system's I/O services.
boost::asio::io_service io_service;
To perform I/O operations your program will need an I/O object such as a TCP socket:
boost::asio::ip::tcp::socket socket(io_service);
When a synchronous connect operation is performed, the following sequence of events occurs:
1. Your program initiates the connect operation by calling the I/O object:
socket.connect(server_endpoint);
2. The I/O object forwards the request to the io_service.
3. The io_service calls on the operating system to perform the connect operation.
4. The operating system returns the result of the operation to the io_service.
5. The io_service translates any error resulting from the operation into an object of type boost::system::error_code. An error_code may be compared with specific values, or tested as a boolean (where a false result means that no error occurred). The result is then forwarded back up to the I/O object.
6. The I/O object throws an exception of type boost::system::system_error if the operation failed.
If the code to initiate the operation had instead been written as:
boost::system::error_code ec;
socket.connect(server_endpoint, ec);
then the error_code variable ec would be set to the result of the operation, and no exception would be thrown.
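The difference between the two styles can be sketched with the standard &lt;system_error&gt; facilities that boost::system mirrors. The do_connect function below is a hypothetical stand-in for socket.connect(), not Asio's API:

```cpp
#include <system_error>

// Hypothetical stand-in (not Asio's API) showing the two error-reporting
// styles side by side, using the standard <system_error> facilities.

// Throwing overload: failure surfaces as a system_error exception.
inline void do_connect(bool fail)
{
  if (fail)
    throw std::system_error(
        std::make_error_code(std::errc::connection_refused));
}

// Non-throwing overload: failure is written to the error_code
// out-parameter and no exception escapes.
inline void do_connect(bool fail, std::error_code& ec)
{
  ec = fail ? std::make_error_code(std::errc::connection_refused)
            : std::error_code();  // default-constructed means "no error"
}
```

As with Asio's error_code, the result may be compared with specific values, or tested as a boolean where false means success.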
When an asynchronous operation is used, a different sequence of events occurs.
1. Your program initiates the connect operation by calling the I/O object:
socket.async_connect(server_endpoint, your_completion_handler);
where your_completion_handler is a function or function object with the signature:
void your_completion_handler(const boost::system::error_code& ec);
The exact signature required depends on the asynchronous operation being performed. The reference documentation indicates the appropriate form for each operation.
2. The I/O object forwards the request to the io_service.
3. The io_service signals to the operating system that it should start an asynchronous connect.
Time passes. (In the synchronous case this wait would have been contained entirely within the duration of the connect operation.)
4. The operating system indicates that the connect operation has completed by placing the result on a queue, ready to be picked up by the io_service.
5. Your program must make a call to io_service::run() (or to one of the similar io_service member functions) in order for the result to be retrieved. A call to io_service::run() blocks while there are unfinished asynchronous operations, so you would typically call it as soon as you have started your first asynchronous operation.
6. While inside the call to io_service::run(), the io_service dequeues the result of the operation, translates it into an error_code, and then passes it to your completion handler.
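The queue-then-dispatch sequence above can be modelled in a few lines of plain C++. The toy_io_service below is an illustrative sketch only (completed results are queued and their handlers invoked later from run()); it is not how Asio is actually implemented:

```cpp
#include <cstddef>
#include <functional>
#include <queue>

// A toy model of the io_service pattern: the OS side of an async
// operation pushes a completed result onto a queue, and a later call
// to run() dequeues results and invokes their handlers.
class toy_io_service
{
public:
  // Simulate the OS completing an operation: queue the handler
  // together with its error-code-like result.
  void complete(std::function<void(int)> handler, int ec)
  {
    ready_.push([handler, ec] { handler(ec); });
  }

  // run() loops while there is queued work, invoking each completion
  // handler in turn. Returns the number of handlers run.
  std::size_t run()
  {
    std::size_t n = 0;
    while (!ready_.empty())
    {
      ready_.front()();
      ready_.pop();
      ++n;
    }
    return n;
  }

private:
  std::queue<std::function<void()>> ready_;
};
```

Note how the handler is not invoked at the point the result is queued, but only from inside run(), mirroring step 6 above.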
This is a simplified picture of how Boost.Asio operates. You will want to delve further into the documentation if your needs are more advanced, such as extending Boost.Asio to perform other types of asynchronous operations.
The Boost.Asio library offers side-by-side support for synchronous and asynchronous operations. The asynchronous support is based on the Proactor design pattern [POSA2]. The advantages and disadvantages of this approach, when compared to a synchronous-only or Reactor approach, are outlined below.
Let us examine how the Proactor design pattern is implemented in Boost.Asio, without reference to platform-specific details.
Proactor design pattern (adapted from [POSA2])
— Asynchronous Operation
Defines an operation that is executed asynchronously, such as an asynchronous read or write on a socket.
— Asynchronous Operation Processor
Executes asynchronous operations and queues events on a completion event queue when operations complete. From a high-level point of view, services like stream_socket_service are asynchronous operation processors.
— Completion Event Queue
Buffers completion events until they are dequeued by an asynchronous event demultiplexer.
— Completion Handler
Processes the result of an asynchronous operation. These are function objects, often created using boost::bind.
— Asynchronous Event Demultiplexer
Blocks waiting for events to occur on the completion event queue, and returns a completed event to its caller.
— Proactor
Calls the asynchronous event demultiplexer to dequeue events, and dispatches the completion handler (i.e. invokes the function object) associated with the event. This abstraction is represented by the io_service class.
— Initiator
Application-specific code that starts asynchronous operations. The initiator interacts with an asynchronous operation processor via a high-level interface such as basic_stream_socket, which in turn delegates to a service like stream_socket_service.
On many platforms, Boost.Asio implements the Proactor design pattern in terms of a Reactor, such as select, epoll or kqueue. This implementation approach corresponds to the Proactor design pattern as follows:
— Asynchronous Operation Processor
A reactor implemented using select, epoll or kqueue. When the reactor indicates that the resource is ready to perform the operation, the processor executes the asynchronous operation and enqueues the associated completion handler on the completion event queue.
— Completion Event Queue
A linked list of completion handlers (i.e. function objects).
— Asynchronous Event Demultiplexer
This is implemented by waiting on an event or condition variable until a completion handler is available in the completion event queue.
On Windows NT, 2000 and XP, Boost.Asio takes advantage of overlapped I/O to provide an efficient implementation of the Proactor design pattern. This implementation approach corresponds to the Proactor design pattern as follows:
— Asynchronous Operation Processor
This is implemented by the operating system. Operations are initiated by calling an overlapped function such as AcceptEx.
— Completion Event Queue
This is implemented by the operating system, and is associated with an I/O completion port. There is one I/O completion port for each io_service instance.
— Asynchronous Event Demultiplexer
Called by Boost.Asio to dequeue events and their associated completion handlers.
— Portability.
Many operating systems offer a native asynchronous I/O API (such as overlapped I/O on Windows) as the preferred option for developing high performance network applications. The library may be implemented in terms of native asynchronous I/O. However, if native support is not available, the library may also be implemented using synchronous event demultiplexors that typify the Reactor pattern, such as POSIX select().
— Decoupling threading from concurrency.
Long-duration operations are performed asynchronously by the implementation on behalf of the application. Consequently applications do not need to spawn many threads in order to increase concurrency.
— Performance and scalability.
Implementation strategies such as thread-per-connection (which a synchronous-only approach would require) can degrade system performance, due to increased context switching, synchronisation and data movement among CPUs. With asynchronous operations it is possible to avoid the cost of context switching by minimising the number of operating system threads — typically a limited resource — and only activating the logical threads of control that have events to process.
— Simplified application synchronisation.
Asynchronous operation completion handlers can be written as though they exist in a single-threaded environment, and so application logic can be developed with little or no concern for synchronisation issues.
— Function composition.
Function composition refers to the implementation of functions to provide a higher-level operation, such as sending a message in a particular format. Each function is implemented in terms of multiple calls to lower-level read or write operations.
For example, consider a protocol where each message consists of a fixed-length header followed by a variable length body, where the length of the body is specified in the header. A hypothetical read_message operation could be implemented using two lower-level reads, the first to receive the header and, once the length is known, the second to receive the body.
To compose functions in an asynchronous model, asynchronous operations can be chained together. That is, a completion handler for one operation can initiate the next. Starting the first call in the chain can be encapsulated so that the caller need not be aware that the higher-level operation is implemented as a chain of asynchronous operations.
The ability to compose new operations in this way simplifies the development of higher levels of abstraction above a networking library, such as functions to support a specific protocol.
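As a sketch of this chaining, the hypothetical async_read_message below composes two lower-level reads over an in-memory stream: a 2-digit length header, then a body of that length. All names here are invented for illustration, and completion handlers are invoked inline rather than from an event loop:

```cpp
#include <cstddef>
#include <functional>
#include <string>

// A fake stream: just an in-memory string with a read cursor.
struct fake_stream
{
  std::string data;
  std::size_t pos = 0;

  // Lower-level read: deliver exactly n bytes to the handler. In a
  // real library the handler would run later, from the event loop.
  void async_read(std::size_t n,
      std::function<void(std::string)> handler)
  {
    std::string chunk = data.substr(pos, n);
    pos += n;
    handler(chunk);
  }
};

// The composed operation: the header's completion handler initiates
// the body read, so the caller sees a single higher-level operation.
inline void async_read_message(fake_stream& s,
    std::function<void(std::string)> handler)
{
  s.async_read(2, [&s, handler](std::string header)
  {
    std::size_t body_len = std::stoul(header);
    s.async_read(body_len, handler);
  });
}
```

The caller supplies only the final handler; the intermediate header-to-body step is hidden inside the composed operation.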
— Program complexity.
It is more difficult to develop applications using asynchronous mechanisms due to the separation in time and space between operation initiation and completion. Applications may also be harder to debug due to the inverted flow of control.
— Memory usage.
Buffer space must be committed for the duration of a read or write operation, which may continue indefinitely, and a separate buffer is required for each concurrent operation. The Reactor pattern, on the other hand, does not require buffer space until a socket is ready for reading or writing.
[POSA2] D. Schmidt et al, Pattern Oriented Software Architecture, Volume 2. Wiley, 2000.
In general, it is safe to make concurrent use of distinct objects, but unsafe to make concurrent use of a single object. However, types such as io_service provide a stronger guarantee that it is safe to use a single object concurrently.
Multiple threads may call io_service::run() to set up a pool of threads from which completion handlers may be invoked. This approach may also be used with io_service::post() as a means to perform any computational tasks across a thread pool.
Note that all threads that have joined an io_service's pool are considered equivalent, and the io_service may distribute work across them in an arbitrary fashion.
The implementation of this library for a particular platform may make use of one or more internal threads to emulate asynchronicity. As far as possible, these threads must be invisible to the library user. In particular, the threads must not directly call the user's code.
This approach is complemented by the following guarantee: asynchronous completion handlers will only be called from threads that are currently calling io_service::run().
Consequently, it is the library user's responsibility to create and manage all threads to which the notifications will be delivered.
The reasons for this approach include:
— By only calling io_service::run() from a single thread, the user's code can avoid the development complexity associated with synchronisation. For example, a library user can implement scalable servers that are single-threaded (from the user's point of view).
— A library user may need to perform initialisation in a thread shortly after the thread starts and before any other application code is executed. For example, users of Microsoft's COM must call CoInitializeEx before any other COM operations can be called from that thread.
A strand is defined as a strictly sequential invocation of event handlers (i.e. no concurrent invocation). Use of strands allows execution of code in a multithreaded program without the need for explicit locking (e.g. using mutexes).
Strands may be either implicit or explicit, as illustrated by the following alternative approaches:
— Calling io_service::run() from only one thread means that all event handlers execute in an implicit strand, due to the guarantee that handlers are only invoked from inside run().
— An explicit strand is an instance of io_service::strand. All event handler function objects need to be wrapped using io_service::strand::wrap() or otherwise posted/dispatched through the io_service::strand object.
In the case of composed asynchronous operations, such as async_read() or async_read_until(), if a completion handler goes through a strand, then all intermediate handlers should also go through the same strand. This is needed to ensure thread safe access for any objects that are shared between the caller and the composed operation (in the case of async_read() it's the socket, which the caller can close() to cancel the operation). This is done by having hook functions for all intermediate handlers which forward the calls to the customisable hook associated with the final handler:
struct my_handler
{
  void operator()() { ... }
};

template <class F>
void asio_handler_invoke(F f, my_handler*)
{
  // Do custom invocation here.
  // Default implementation calls f();
}
The io_service::strand::wrap() function creates a new completion handler that defines asio_handler_invoke so that the function object is executed through the strand.
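The sequential-invocation guarantee can be illustrated with a toy, single-threaded strand: handlers posted while another handler is running are queued rather than invoked reentrantly. This is only a sketch of the ordering property; a real strand also serialises handlers across threads, which this toy omits:

```cpp
#include <deque>
#include <functional>

// Minimal strand model: post() never invokes a handler while another
// handler posted to the same strand is already executing.
class toy_strand
{
public:
  void post(std::function<void()> h)
  {
    queue_.push_back(std::move(h));
    if (running_)
      return;            // already draining: avoid nested invocation
    running_ = true;
    while (!queue_.empty())
    {
      std::function<void()> f = std::move(queue_.front());
      queue_.pop_front();
      f();               // strictly sequential invocation
    }
    running_ = false;
  }

private:
  std::deque<std::function<void()>> queue_;
  bool running_ = false;
};
```

A handler that posts further work to the strand sees that work run strictly after it returns, never concurrently or reentrantly.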
io_service::strand, tutorial Timer.5, HTTP server 3 example.
Fundamentally, I/O involves the transfer of data to and from contiguous regions of memory, called buffers. These buffers can be simply expressed as a tuple consisting of a pointer and a size in bytes. However, to allow the development of efficient network applications, Boost.Asio includes support for scatter-gather operations. These operations involve one or more buffers:
— A scatter-read receives data into multiple buffers.
— A gather-write transmits multiple buffers.
Therefore we require an abstraction to represent a collection of buffers. The approach used in Boost.Asio is to define a type (actually two types) to represent a single buffer. These can be stored in a container, which may be passed to the scatter-gather operations.
In addition to specifying buffers as a pointer and size in bytes, Boost.Asio makes a distinction between modifiable memory (called mutable) and non-modifiable memory (where the latter is created from the storage for a const-qualified variable). These two types could therefore be defined as follows:
typedef std::pair<void*, std::size_t> mutable_buffer;
typedef std::pair<const void*, std::size_t> const_buffer;
Here, a mutable_buffer would be convertible to a const_buffer, but conversion in the opposite direction is not valid.
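The one-way convertibility can be demonstrated directly with the pair-based definitions above; the to_const helper is hypothetical. The reverse direction would not compile, because const void* does not implicitly convert to void*:

```cpp
#include <cstddef>
#include <utility>

// The pair-based buffer definitions from the text.
typedef std::pair<void*, std::size_t> mutable_buffer;
typedef std::pair<const void*, std::size_t> const_buffer;

// void* converts implicitly to const void*, so a mutable_buffer may
// always be viewed as a const_buffer. The opposite helper could not
// be written without a cast.
inline const_buffer to_const(mutable_buffer b)
{
  return const_buffer(b.first, b.second);
}
```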
However, Boost.Asio does not use the above definitions as-is, but instead defines two classes: mutable_buffer and const_buffer. The goal of these is to provide an opaque representation of contiguous memory, where:
— A mutable_buffer is convertible to a const_buffer, but the opposite conversion is disallowed.
— A buffer can be created from a boost::array or std::vector of POD elements, or from a std::string.
— Access to the underlying memory must be explicitly requested using the buffer_cast function. In general an application should never need to do this, but it is required by the library implementation to pass the raw memory to the underlying operating system functions.
Finally, multiple buffers can be passed to scatter-gather operations (such as read() or write()) by putting the buffer objects into a container. The MutableBufferSequence and ConstBufferSequence concepts have been defined so that containers such as std::vector, std::list or boost::array can be used.
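As an illustration of gather semantics using the pair-style buffers from the text, the hypothetical gather_write below "transmits" a sequence of const buffers by concatenating them in order, the way a real gather-write would hand them to the operating system as one contiguous stream:

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Pair-style const buffer, as defined in the text.
typedef std::pair<const void*, std::size_t> const_buffer;

// Simulated gather-write: the buffer sequence is consumed in order
// and its contents appended to a single output stream.
inline std::string gather_write(const std::vector<const_buffer>& bufs)
{
  std::string out;
  for (std::size_t i = 0; i < bufs.size(); ++i)
    out.append(static_cast<const char*>(bufs[i].first), bufs[i].second);
  return out;
}
```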
The class boost::asio::basic_streambuf is derived from std::basic_streambuf to associate the input sequence and output sequence with one or more objects of some character array type, whose elements store arbitrary values. These character array objects are internal to the streambuf object, but direct access to the array elements is provided to permit them to be used with I/O operations, such as the send or receive operations of a socket:
— The input sequence of the streambuf is accessible via the data() member function. The return type of this function meets the ConstBufferSequence requirements.
— The output sequence of the streambuf is accessible via the prepare() member function. The return type of this function meets the MutableBufferSequence requirements.
The streambuf constructor accepts a size_t argument specifying the maximum of the sum of the sizes of the input sequence and output sequence. Any operation that would, if successful, grow the internal data beyond this limit will throw a std::length_error exception.
The buffers_iterator<> class template allows buffer sequences (i.e. types meeting the MutableBufferSequence or ConstBufferSequence requirements) to be traversed as though they were a contiguous sequence of bytes. Helper functions called buffers_begin() and buffers_end() are also provided, where the buffers_iterator<> template parameter is automatically deduced.
As an example, to read a single line from a socket and into a std::string, you may write:
boost::asio::streambuf sb;
...
std::size_t n = boost::asio::read_until(sock, sb, '\n');
boost::asio::streambuf::const_buffers_type bufs = sb.data();
std::string line(
    boost::asio::buffers_begin(bufs),
    boost::asio::buffers_begin(bufs) + n);
Some standard library implementations, such as the one that ships with Microsoft Visual C++ 8.0 and later, provide a feature called iterator debugging. What this means is that the validity of iterators is checked at runtime. If a program tries to use an iterator that has been invalidated, an assertion will be triggered. For example:
std::vector<int> v(1);
std::vector<int>::iterator i = v.begin();
v.clear(); // invalidates iterators
*i = 0; // assertion!
Boost.Asio takes advantage of this feature to add buffer debugging. Consider the following code:
void dont_do_this()
{
  std::string msg = "Hello, world!";
  boost::asio::async_write(sock, boost::asio::buffer(msg), my_handler);
}
When you call an asynchronous read or write you need to ensure that the buffers for the operation are valid until the completion handler is called. In the above example, the buffer is the std::string variable msg. This variable is on the stack, and so it goes out of scope before the asynchronous operation completes. If you're lucky then the application will crash, but random failures are more likely.
When buffer debugging is enabled, Boost.Asio stores an iterator into the string until the asynchronous operation completes, and then dereferences it to check its validity. In the above example you would observe an assertion failure just before Boost.Asio tries to call the completion handler.
This feature is automatically made available for Microsoft Visual Studio 8.0 or later and for GCC when _GLIBCXX_DEBUG is defined. There is a performance cost to this checking, so buffer debugging is only enabled in debug builds. For other compilers it may be enabled by defining BOOST_ASIO_ENABLE_BUFFER_DEBUGGING. It can also be explicitly disabled by defining BOOST_ASIO_DISABLE_BUFFER_DEBUGGING.
buffer, buffers_begin, buffers_end, buffers_iterator, const_buffer, const_buffers_1, mutable_buffer, mutable_buffers_1, streambuf, ConstBufferSequence, MutableBufferSequence, buffers example (C++03), buffers example (c++11).
Many I/O objects in Boost.Asio are stream-oriented. This means that:
— There are no message boundaries. The data being transferred is a continuous sequence of bytes.
— Read or write operations may transfer fewer bytes than requested. This is referred to as a short read or short write.
Objects that provide stream-oriented I/O model one or more of the following type requirements:
— SyncReadStream, where synchronous read operations are performed using a member function called read_some().
— AsyncReadStream, where asynchronous read operations are performed using a member function called async_read_some().
— SyncWriteStream, where synchronous write operations are performed using a member function called write_some().
— AsyncWriteStream, where asynchronous write operations are performed using a member function called async_write_some().
Examples of stream-oriented I/O objects include ip::tcp::socket, ssl::stream<>, posix::stream_descriptor, windows::stream_handle, etc.
Programs typically want to transfer an exact number of bytes. When a short read or short write occurs the program must restart the operation, and continue to do so until the required number of bytes has been transferred. Boost.Asio provides generic functions that do this automatically: read(), async_read(), write() and async_write().
Note that the end-of-file (EOF) condition may cause the read, async_read, read_until or async_read_until functions to violate their contract. E.g. a read of N bytes may finish early due to EOF.
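The restart-until-complete loop that read() performs can be sketched against a mock stream that never delivers more than 3 bytes per call, forcing short reads. All names here are invented for illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>

// A mock stream whose read_some() always performs short reads.
struct short_read_stream
{
  std::string data;
  std::size_t pos = 0;

  std::size_t read_some(char* buf, std::size_t n)
  {
    std::size_t avail =
        std::min<std::size_t>(3, std::min(n, data.size() - pos));
    std::copy_n(data.data() + pos, avail, buf);
    pos += avail;
    return avail;   // may be fewer bytes than requested
  }
};

// Keep calling read_some() until exactly n bytes have been
// transferred, or the stream is exhausted (the EOF caveat above).
inline std::size_t read_exactly(
    short_read_stream& s, char* buf, std::size_t n)
{
  std::size_t total = 0;
  while (total < n)
  {
    std::size_t got = s.read_some(buf + total, n - total);
    if (got == 0)
      break;        // EOF: the contract cannot be met
    total += got;
  }
  return total;
}
```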
async_read(), async_write(), read(), write(), AsyncReadStream, AsyncWriteStream, SyncReadStream, SyncWriteStream.
Sometimes a program must be integrated with a third-party library that wants to perform the I/O operations itself. To facilitate this, Boost.Asio includes a null_buffers type that can be used with both read and write operations. A null_buffers operation doesn't return until the I/O object is "ready" to perform the operation.
As an example, to perform a non-blocking read something like the following may be used:
ip::tcp::socket socket(my_io_service);
...
socket.non_blocking(true);
...
socket.async_read_some(null_buffers(), read_handler);
...
void read_handler(boost::system::error_code ec)
{
  if (!ec)
  {
    std::vector<char> buf(socket.available());
    socket.read_some(buffer(buf));
  }
}
These operations are supported for sockets on all platforms, and for the POSIX stream-oriented descriptor classes.
null_buffers, basic_socket::non_blocking(), basic_socket::native_non_blocking(), nonblocking example.
Many commonly-used internet protocols are line-based, which means that they have protocol elements that are delimited by the character sequence "\r\n". Examples include HTTP, SMTP and FTP. To more easily permit the implementation of line-based protocols, as well as other protocols that use delimiters, Boost.Asio includes the functions read_until() and async_read_until().
The following example illustrates the use of async_read_until() in an HTTP server, to receive the first line of an HTTP request from a client:
class http_connection
{
  ...

  void start()
  {
    boost::asio::async_read_until(socket_, data_, "\r\n",
        boost::bind(&http_connection::handle_request_line, this, _1));
  }

  void handle_request_line(boost::system::error_code ec)
  {
    if (!ec)
    {
      std::string method, uri, version;
      char sp1, sp2, cr, lf;
      std::istream is(&data_);
      is.unsetf(std::ios_base::skipws);
      is >> method >> sp1 >> uri >> sp2 >> version >> cr >> lf;
      ...
    }
  }

  ...

  boost::asio::ip::tcp::socket socket_;
  boost::asio::streambuf data_;
};
The streambuf data member serves as a place to store the data that has been read from the socket before it is searched for the delimiter. It is important to remember that there may be additional data after the delimiter. This surplus data should be left in the streambuf so that it may be inspected by a subsequent call to read_until() or async_read_until().
The delimiters may be specified as a single char, a std::string or a boost::regex.
The read_until() and async_read_until() functions also include overloads that accept a user-defined function object called a match condition. For example, to read data into a streambuf until whitespace is encountered:
typedef boost::asio::buffers_iterator<
    boost::asio::streambuf::const_buffers_type> iterator;

std::pair<iterator, bool>
match_whitespace(iterator begin, iterator end)
{
  iterator i = begin;
  while (i != end)
    if (std::isspace(*i++))
      return std::make_pair(i, true);
  return std::make_pair(i, false);
}
...
boost::asio::streambuf b;
boost::asio::read_until(s, b, match_whitespace);
To read data into a streambuf until a matching character is found:
class match_char
{
public:
  explicit match_char(char c) : c_(c) {}

  template <typename Iterator>
  std::pair<Iterator, bool> operator()(
      Iterator begin, Iterator end) const
  {
    Iterator i = begin;
    while (i != end)
      if (c_ == *i++)
        return std::make_pair(i, true);
    return std::make_pair(i, false);
  }

private:
  char c_;
};

namespace boost { namespace asio {
  template <> struct is_match_condition<match_char>
    : public boost::true_type {};
} } // namespace boost::asio
...
boost::asio::streambuf b;
boost::asio::read_until(s, b, match_char('a'));
The is_match_condition<> type trait automatically evaluates to true for functions, and for function objects with a nested result_type typedef. For other types the trait must be explicitly specialised, as shown above.
async_read_until(), is_match_condition, read_until(), streambuf, HTTP client example.
Many asynchronous operations need to allocate an object to store state associated with the operation. For example, a Win32 implementation needs OVERLAPPED-derived objects to pass to Win32 API functions.
Furthermore, programs typically contain easily identifiable chains of asynchronous operations. A half duplex protocol implementation (e.g. an HTTP server) would have a single chain of operations per client (receives followed by sends). A full duplex protocol implementation would have two chains executing in parallel. Programs should be able to leverage this knowledge to reuse memory for all asynchronous operations in a chain.
Given a copy of a user-defined Handler object h, if the implementation needs to allocate memory associated with that handler it will execute the code:
void* pointer = asio_handler_allocate(size, &h);
Similarly, to deallocate the memory it will execute:
asio_handler_deallocate(pointer, size, &h);
These functions are located using argument-dependent lookup. The implementation provides default implementations of the above functions in the asio namespace:
void* asio_handler_allocate(size_t, ...);
void asio_handler_deallocate(void*, size_t, ...);
which are implemented in terms of ::operator new() and ::operator delete() respectively.
The implementation guarantees that the deallocation will occur before the associated handler is invoked, which means the memory is ready to be reused for any new asynchronous operations started by the handler.
The custom memory allocation functions may be called from any user-created thread that is calling a library function. The implementation guarantees that, for the asynchronous operations included in the library, the implementation will not make concurrent calls to the memory allocation functions for that handler. The implementation will insert appropriate memory barriers to ensure correct memory visibility should allocation functions need to be called from different threads.
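The hook idea can be sketched in standalone C++: a handler type carries a single recyclable block, and allocation functions found by argument-dependent lookup prefer that block over the heap. This is a simplified model of the technique, not Asio's implementation:

```cpp
#include <cstddef>
#include <new>

// A handler type that owns one recyclable memory block. Because the
// deallocation is guaranteed to happen before the handler is invoked,
// a single slot suffices for a chain of operations.
struct my_handler
{
  void* slot = nullptr;
  std::size_t slot_size = 0;
  bool in_use = false;

  ~my_handler() { ::operator delete(slot); }
  void operator()() {}
};

// Found by argument-dependent lookup on the handler type.
inline void* asio_handler_allocate(std::size_t size, my_handler* h)
{
  if (!h->in_use && size <= h->slot_size)
  {
    h->in_use = true;
    return h->slot;            // reuse the recycled block
  }
  if (!h->in_use && h->slot == nullptr)
  {
    h->slot = ::operator new(size);
    h->slot_size = size;
    h->in_use = true;
    return h->slot;            // first allocation fills the slot
  }
  return ::operator new(size); // fallback: ordinary heap allocation
}

inline void asio_handler_deallocate(
    void* p, std::size_t, my_handler* h)
{
  if (p == h->slot)
  {
    h->in_use = false;         // keep the block for reuse
    return;
  }
  ::operator delete(p);
}
```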
asio_handler_allocate, asio_handler_deallocate, custom memory allocation example (C++03), custom memory allocation example (C++11).
To aid in debugging asynchronous programs, Boost.Asio provides support for handler tracking. When enabled by defining BOOST_ASIO_ENABLE_HANDLER_TRACKING, Boost.Asio writes debugging output to the standard error stream. The output records asynchronous operations and the relationships between their handlers.
This feature is useful when debugging and you need to know how your asynchronous operations are chained together, or what the pending asynchronous operations are. As an illustration, here is the output when you run the HTTP Server example, handle a single request, then shut down via Ctrl+C:
@asio|1298160085.070638|0*1|signal_set@0x7fff50528f40.async_wait
@asio|1298160085.070888|0*2|socket@0x7fff50528f60.async_accept
@asio|1298160085.070913|0|resolver@0x7fff50528e28.cancel
@asio|1298160118.075438|>2|ec=asio.system:0
@asio|1298160118.075472|2*3|socket@0xb39048.async_receive
@asio|1298160118.075507|2*4|socket@0x7fff50528f60.async_accept
@asio|1298160118.075527|<2|
@asio|1298160118.075540|>3|ec=asio.system:0,bytes_transferred=122
@asio|1298160118.075731|3*5|socket@0xb39048.async_send
@asio|1298160118.075778|<3|
@asio|1298160118.075793|>5|ec=asio.system:0,bytes_transferred=156
@asio|1298160118.075831|5|socket@0xb39048.close
@asio|1298160118.075855|<5|
@asio|1298160122.827317|>1|ec=asio.system:0,signal_number=2
@asio|1298160122.827333|1|socket@0x7fff50528f60.close
@asio|1298160122.827359|<1|
@asio|1298160122.827370|>4|ec=asio.system:125
@asio|1298160122.827378|<4|
@asio|1298160122.827394|0|signal_set@0x7fff50528f40.cancel
Each line is of the form:
<tag>|<timestamp>|<action>|<description>
The <tag> is always @asio, and is used to identify and extract the handler tracking messages from the program output.
The <timestamp> is seconds and microseconds from 1 Jan 1970 UTC.
The <action> takes one of the following forms:
>n — The program entered handler number n. The <description> shows the arguments to the handler.
<n — The program left handler number n.
!n — The program left handler number n due to an exception.
~n — Handler number n was destroyed without having been invoked. This is usually the case for any unfinished asynchronous operations when the io_service is destroyed.
n*m — Handler number n created a new asynchronous operation with completion handler number m. The <description> shows what asynchronous operation was started.
n — Handler number n performed some other operation. The <description> shows what function was called. Currently only close() and cancel() operations are logged, as these may affect the state of pending asynchronous operations.
Where the <description> shows a synchronous or asynchronous operation, the format is <object-type>@<pointer>.<operation>. For handler entry, it shows a comma-separated list of arguments and their values.
As shown above, each handler is assigned a numeric identifier. Where the handler tracking output shows a handler number of 0, it means that the action was performed outside of any handler.
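Since the fields are '|'-separated, a tracking line can be split for ad-hoc filtering with a few lines of code. The parse_tracking_line helper below is not part of Asio:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split one handler-tracking line into its <tag>, <timestamp>,
// <action> and <description> fields.
inline std::vector<std::string> parse_tracking_line(
    const std::string& line)
{
  std::vector<std::string> fields;
  std::stringstream ss(line);
  std::string field;
  while (std::getline(ss, field, '|'))
    fields.push_back(field);
  return fields;
}
```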
The handler tracking output may be post-processed using the included handlerviz.pl tool to create a visual representation of the handlers (requires the GraphViz tool dot).
The coroutine class provides support for stackless coroutines. Stackless coroutines enable programs to implement asynchronous logic in a synchronous manner, with minimal overhead, as shown in the following example:
struct session : boost::asio::coroutine
{
  boost::shared_ptr<tcp::socket> socket_;
  boost::shared_ptr<std::vector<char> > buffer_;

  session(boost::shared_ptr<tcp::socket> socket)
    : socket_(socket),
      buffer_(new std::vector<char>(1024))
  {
  }

  void operator()(
      boost::system::error_code ec = boost::system::error_code(),
      std::size_t n = 0)
  {
    if (!ec) reenter (this)
    {
      for (;;)
      {
        yield socket_->async_read_some(
            boost::asio::buffer(*buffer_), *this);

        yield boost::asio::async_write(*socket_,
            boost::asio::buffer(*buffer_, n), *this);
      }
    }
  }
};
The coroutine class is used in conjunction with the pseudo-keywords reenter, yield and fork. These are preprocessor macros, and are implemented in terms of a switch statement using a technique similar to Duff's Device. The coroutine class's documentation provides a complete description of these pseudo-keywords.
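The switch-based technique can be shown in miniature. The REENTER and YIELD macros below are simplified stand-ins for Asio's reenter and yield: each yield records the current source line as the resume point, returns, and places a case label so that the next call jumps straight back to where it left off:

```cpp
// Minimal stackless coroutine state: the case label to resume at.
struct coro
{
  int state = 0;
};

// Re-enter the coroutine: jump to the recorded resume point
// (state 0 means "start from the beginning").
#define REENTER(c) switch ((c).state) case 0:

// Yield a value: record this line as the resume point, return, and
// drop a case label here so the next call resumes just after it.
#define YIELD(c, v) \
  do { (c).state = __LINE__; return (v); case __LINE__: ; } while (false)

// A coroutine that yields 1, 2, 3 across successive calls, then -1.
inline int generator(coro& c)
{
  REENTER(c)
  {
    YIELD(c, 1);
    YIELD(c, 2);
    YIELD(c, 3);
  }
  return -1;
}
```

Because the resume points are case labels inside one switch, no stack is saved between calls, which is what makes the coroutine "stackless": only the state integer persists.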
The spawn() function is a high-level wrapper for running stackful coroutines. It is based on the Boost.Coroutine library. The spawn() function enables programs to implement asynchronous logic in a synchronous manner, as shown in the following example:
boost::asio::spawn(my_strand, do_echo);

// ...

void do_echo(boost::asio::yield_context yield)
{
  try
  {
    char data[128];
    for (;;)
    {
      std::size_t length =
        my_socket.async_read_some(
          boost::asio::buffer(data), yield);

      boost::asio::async_write(my_socket,
          boost::asio::buffer(data, length), yield);
    }
  }
  catch (std::exception& e)
  {
    // ...
  }
}
The first argument to spawn() may be a strand, io_service, or completion handler. This argument determines the context in which the coroutine is permitted to execute. For example, a server's per-client object may consist of multiple coroutines; they should all run on the same strand so that no explicit synchronisation is required.
The second argument is a function object with signature:
void coroutine(boost::asio::yield_context yield);
that specifies the code to be run as part of the coroutine. The parameter yield may be passed to an asynchronous operation in place of the completion handler, as in:
std::size_t length = my_socket.async_read_some(
    boost::asio::buffer(data), yield);
This starts the asynchronous operation and suspends the coroutine. The coroutine will be resumed automatically when the asynchronous operation completes.
Where an asynchronous operation's handler signature has the form:
void handler(boost::system::error_code ec, result_type result);
the initiating function returns the result_type. In the async_read_some example above, this is size_t.
If the asynchronous operation fails, the error_code is converted into a system_error exception and thrown.
Where a handler signature has the form:
void handler(boost::system::error_code ec);
the initiating function returns void. As above, an error is passed back to the coroutine as a system_error exception.
To collect the error_code from an operation, rather than have it throw an exception, associate the output variable with the yield_context as follows:
boost::system::error_code ec;
std::size_t length = my_socket.async_read_some(
    boost::asio::buffer(data), yield[ec]);
Note: if spawn() is used with a custom completion handler of type Handler, the function object signature is actually:
void coroutine(boost::asio::basic_yield_context<Handler> yield);
spawn, yield_context, basic_yield_context, Spawn example (C++03), Spawn example (C++11), Stackless Coroutines.
Boost.Asio provides off-the-shelf support for the internet protocols TCP, UDP and ICMP.
Hostname resolution is performed using a resolver, where host and service names are looked up and converted into one or more endpoints:
ip::tcp::resolver resolver(my_io_service);
ip::tcp::resolver::query query("www.boost.org", "http");
ip::tcp::resolver::iterator iter = resolver.resolve(query);
ip::tcp::resolver::iterator end; // End marker.
while (iter != end)
{
  ip::tcp::endpoint endpoint = *iter++;
  std::cout << endpoint << std::endl;
}
The list of endpoints obtained above could contain both IPv4 and IPv6 endpoints, so a program should try each of them until it finds one that works. This keeps the client program independent of a specific IP version.
To simplify the development of protocol-independent programs, TCP clients may establish connections using the free functions connect() and async_connect(). These operations try each endpoint in a list until the socket is successfully connected. For example, a single call:
ip::tcp::socket socket(my_io_service);
boost::asio::connect(socket, resolver.resolve(query));
will synchronously try all endpoints until one is successfully connected. Similarly, an asynchronous connect may be performed by writing:
boost::asio::async_connect(socket_, iter,
    boost::bind(&client::handle_connect, this,
      boost::asio::placeholders::error));

// ...

void handle_connect(const error_code& error)
{
  if (!error)
  {
    // Start read or write operations.
  }
  else
  {
    // Handle error.
  }
}
When a specific endpoint is available, a socket can be created and connected:
ip::tcp::socket socket(my_io_service);
socket.connect(endpoint);
Data may be read from or written to a connected TCP socket using the receive(), async_receive(), send() or async_send() member functions. However, as these could result in short writes or reads, an application will typically use the following operations instead: read(), async_read(), write() and async_write().
A program uses an acceptor to accept incoming TCP connections:
ip::tcp::acceptor acceptor(my_io_service, my_endpoint);
...
ip::tcp::socket socket(my_io_service);
acceptor.accept(socket);
After a socket has been successfully accepted, it may be read from or written to as illustrated for TCP clients above.
UDP hostname resolution is also performed using a resolver:
ip::udp::resolver resolver(my_io_service);
ip::udp::resolver::query query("localhost", "daytime");
ip::udp::resolver::iterator iter = resolver.resolve(query);
...
A UDP socket is typically bound to a local endpoint. The following code will create an IP version 4 UDP socket and bind it to the "any" address on port 12345:
ip::udp::endpoint endpoint(ip::udp::v4(), 12345);
ip::udp::socket socket(my_io_service, endpoint);
Data may be read from or written to an unconnected UDP socket using the receive_from(), async_receive_from(), send_to() or async_send_to() member functions. For a connected UDP socket, use the receive(), async_receive(), send() or async_send() member functions.
As with TCP and UDP, ICMP hostname resolution is performed using a resolver:
ip::icmp::resolver resolver(my_io_service);
ip::icmp::resolver::query query("localhost", "");
ip::icmp::resolver::iterator iter = resolver.resolve(query);
...
An ICMP socket may be bound to a local endpoint. The following code will create an IP version 6 ICMP socket and bind it to the "any" address:
ip::icmp::endpoint endpoint(ip::icmp::v6(), 0);
ip::icmp::socket socket(my_io_service, endpoint);
The port number is not used for ICMP.
Data may be read from or written to an unconnected ICMP socket using the receive_from(), async_receive_from(), send_to() or async_send_to() member functions.
ip::tcp, ip::udp, ip::icmp, daytime protocol tutorials, ICMP ping example.
Support for other socket protocols (such as Bluetooth or IRCOMM sockets) can be added by implementing the protocol type requirements. However, in many cases these protocols may also be used with Boost.Asio's generic protocol support. For this, Boost.Asio provides the following four classes:
These classes implement the protocol type requirements, but allow the user to specify the address family (e.g. AF_INET) and protocol type (e.g. IPPROTO_TCP) at runtime. For example:
boost::asio::generic::stream_protocol::socket my_socket(my_io_service);
my_socket.open(boost::asio::generic::stream_protocol(AF_INET, IPPROTO_TCP));
...
An endpoint class template, boost::asio::generic::basic_endpoint, is included to support these protocol classes. This endpoint can hold any other endpoint type, provided its native representation fits into a sockaddr_storage object. This class will also convert from other types that implement the endpoint type requirements:
boost::asio::ip::tcp::endpoint my_endpoint1 = ...;
boost::asio::generic::stream_protocol::endpoint my_endpoint2(my_endpoint1);
The conversion is implicit, so as to support the following use cases:
boost::asio::generic::stream_protocol::socket my_socket(my_io_service);
boost::asio::ip::tcp::endpoint my_endpoint = ...;
my_socket.connect(my_endpoint);
When using C++11, it is possible to perform move construction from a socket (or acceptor) object to convert to the more generic protocol's socket (or acceptor) type. If the protocol conversion is valid:
Protocol1 p1 = ...;
Protocol2 p2(p1);
then the corresponding socket conversion is allowed:
Protocol1::socket my_socket1(my_io_service);
...
Protocol2::socket my_socket2(std::move(my_socket1));
For example, one possible conversion is from a TCP socket to a generic stream-oriented socket:
boost::asio::ip::tcp::socket my_socket1(my_io_service);
...
boost::asio::generic::stream_protocol::socket my_socket2(std::move(my_socket1));
These conversions are also available for move-assignment.
These conversions are not limited to the above generic protocol classes. User-defined protocols may take advantage of this feature by similarly ensuring the conversion from Protocol1 to Protocol2 is valid, as above.
As a convenience, a socket acceptor's accept() and async_accept() functions can directly accept into a different protocol's socket type, provided the corresponding protocol conversion is valid. For example, the following is supported because the protocol boost::asio::ip::tcp is convertible to boost::asio::generic::stream_protocol:
boost::asio::ip::tcp::acceptor my_acceptor(my_io_service);
...
boost::asio::generic::stream_protocol::socket my_socket(my_io_service);
my_acceptor.accept(my_socket);
generic::datagram_protocol, generic::raw_protocol, generic::seq_packet_protocol, generic::stream_protocol, protocol type requirements.
Boost.Asio includes classes that implement iostreams on top of sockets. These hide away the complexities associated with endpoint resolution, protocol independence, etc. To create a connection one might simply write:
ip::tcp::iostream stream("www.boost.org", "http");
if (!stream)
{
  // Can't connect.
}
The iostream class can also be used in conjunction with an acceptor to create simple servers. For example:
io_service ios;
ip::tcp::endpoint endpoint(tcp::v4(), 80);
ip::tcp::acceptor acceptor(ios, endpoint);
for (;;)
{
  ip::tcp::iostream stream;
  acceptor.accept(*stream.rdbuf());
  ...
}
Timeouts may be set by calling expires_at() or expires_from_now() to establish a deadline. Any socket operations that occur past the deadline will put the iostream into a "bad" state.
For example, a simple client program like this:
ip::tcp::iostream stream;
stream.expires_from_now(boost::posix_time::seconds(60));
stream.connect("www.boost.org", "http");
stream << "GET /LICENSE_1_0.txt HTTP/1.0\r\n";
stream << "Host: www.boost.org\r\n";
stream << "Accept: */*\r\n";
stream << "Connection: close\r\n\r\n";
stream.flush();
std::cout << stream.rdbuf();
will fail if all the socket operations combined take longer than 60 seconds.
If an error does occur, the iostream's error() member function may be used to retrieve the error code from the most recent system call:
if (!stream)
{
  std::cout << "Error: " << stream.error().message() << "\n";
}
ip::tcp::iostream, basic_socket_iostream, iostreams examples.
These iostream templates only support char, not wchar_t, and do not perform any code conversion.
The Boost.Asio library includes a low-level socket interface based on the BSD socket API, which is widely implemented and supported by extensive literature. It is also used as the basis for networking APIs in other languages, like Java. This low-level interface is designed to support the development of efficient and scalable applications. For example, it permits programmers to exert finer control over the number of system calls, avoid redundant data copying, minimise the use of resources like threads, and so on.
Unsafe and error-prone aspects of the BSD socket API are not included. For example, the use of int to represent all sockets lacks type safety. The socket representation in Boost.Asio uses a distinct type for each protocol, e.g. for TCP one would use ip::tcp::socket, and for UDP one uses ip::udp::socket.
The following table shows the mapping between the BSD socket API and Boost.Asio:
Long running I/O operations will often have a deadline by which they must have completed. These deadlines may be expressed as absolute times, but are often calculated relative to the current time.
As a simple example, to perform a synchronous wait operation on a timer using a relative time one may write:
io_service i;
...
deadline_timer t(i);
t.expires_from_now(boost::posix_time::seconds(5));
t.wait();
More commonly, a program will perform an asynchronous wait operation on a timer:
void handler(boost::system::error_code ec) { ... }
...
io_service i;
...
deadline_timer t(i);
t.expires_from_now(boost::posix_time::milliseconds(400));
t.async_wait(handler);
...
i.run();
The deadline associated with a timer may also be obtained as a relative time:
boost::posix_time::time_duration time_until_expiry = t.expires_from_now();
or as an absolute time to allow composition of timers:
deadline_timer t2(i);
t2.expires_at(t.expires_at() + boost::posix_time::seconds(30));
basic_deadline_timer, deadline_timer, deadline_timer_service, timer tutorials.
Boost.Asio includes classes for creating and manipulating serial ports in a portable manner. For example, a serial port may be opened using:
serial_port port(my_io_service, name);
where name is something like "COM1" on Windows, and "/dev/ttyS0" on POSIX platforms.
Once opened, the serial port may be used as a stream. This means the objects can be used with any of the read(), async_read(), write(), async_write(), read_until() or async_read_until() free functions.
The serial port implementation also includes option classes for configuring the port's baud rate, flow control type, parity, stop bits and character size.
serial_port, serial_port_base, basic_serial_port, serial_port_service, serial_port_base::baud_rate, serial_port_base::flow_control, serial_port_base::parity, serial_port_base::stop_bits, serial_port_base::character_size.
Serial ports are available on all POSIX platforms. For Windows, serial ports are only available at compile time when the I/O completion port backend is used (which is the default). A program may test for the macro BOOST_ASIO_HAS_SERIAL_PORT to determine whether they are supported.
Boost.Asio supports signal handling using a class called signal_set. Programs may add one or more signals to the set, and then perform an async_wait() operation. The specified handler will be called when one of the signals occurs. The same signal number may be registered with multiple signal_set objects; however, the signal number must be used only with Boost.Asio.
void handler(
    const boost::system::error_code& error,
    int signal_number)
{
  if (!error)
  {
    // A signal occurred.
  }
}

...

// Construct a signal set registered for process termination.
boost::asio::signal_set signals(io_service, SIGINT, SIGTERM);

// Start an asynchronous wait for one of the signals to occur.
signals.async_wait(handler);
Signal handling also works on Windows, as the Microsoft Visual C++ runtime library maps console events like Ctrl+C to the equivalent signal.
signal_set, HTTP server example (C++03), HTTP server example (C++11).
Boost.Asio provides basic support for UNIX domain sockets (also known as local sockets). The simplest use involves creating a pair of connected sockets. The following code:
local::stream_protocol::socket socket1(my_io_service);
local::stream_protocol::socket socket2(my_io_service);
local::connect_pair(socket1, socket2);
will create a pair of stream-oriented sockets. To do the same for datagram-oriented sockets, use:
local::datagram_protocol::socket socket1(my_io_service);
local::datagram_protocol::socket socket2(my_io_service);
local::connect_pair(socket1, socket2);
A UNIX domain socket server may be created by binding an acceptor to an endpoint, in much the same way as one does for a TCP server:
::unlink("/tmp/foobar"); // Remove previous binding.
local::stream_protocol::endpoint ep("/tmp/foobar");
local::stream_protocol::acceptor acceptor(my_io_service, ep);
local::stream_protocol::socket socket(my_io_service);
acceptor.accept(socket);
A client that connects to this server might look like:
local::stream_protocol::endpoint ep("/tmp/foobar");
local::stream_protocol::socket socket(my_io_service);
socket.connect(ep);
Transmission of file descriptors or credentials across UNIX domain sockets is not directly supported within Boost.Asio, but may be achieved by accessing the socket's underlying descriptor using the native_handle() member function.
local::connect_pair, local::datagram_protocol, local::datagram_protocol::endpoint, local::datagram_protocol::socket, local::stream_protocol, local::stream_protocol::acceptor, local::stream_protocol::endpoint, local::stream_protocol::iostream, local::stream_protocol::socket, UNIX domain sockets examples.
UNIX domain sockets are only available at compile time if supported by the target operating system. A program may test for the macro BOOST_ASIO_HAS_LOCAL_SOCKETS to determine whether they are supported.
Boost.Asio includes classes that permit synchronous and asynchronous read and write operations to be performed on POSIX file descriptors, such as pipes, standard input and output, and various devices (but not regular files).
For example, to perform read and write operations on standard input and output, the following objects may be created:
posix::stream_descriptor in(my_io_service, ::dup(STDIN_FILENO));
posix::stream_descriptor out(my_io_service, ::dup(STDOUT_FILENO));
These are then used as synchronous or asynchronous read and write streams. This means the objects can be used with any of the read(), async_read(), write(), async_write(), read_until() or async_read_until() free functions.
posix::stream_descriptor, posix::basic_stream_descriptor, posix::stream_descriptor_service, Chat example (C++03), Chat example (C++11).
POSIX stream descriptors are only available at compile time if supported by the target operating system. A program may test for the macro BOOST_ASIO_HAS_POSIX_STREAM_DESCRIPTOR to determine whether they are supported.
Boost.Asio supports programs that utilise the fork() system call. Provided the program calls io_service.notify_fork() at the appropriate times, Boost.Asio will recreate any internal file descriptors (such as the "self-pipe trick" descriptor used for waking up a reactor). The notification is usually performed as follows:
io_service_.notify_fork(boost::asio::io_service::fork_prepare);
if (fork() == 0)
{
  io_service_.notify_fork(boost::asio::io_service::fork_child);
  ...
}
else
{
  io_service_.notify_fork(boost::asio::io_service::fork_parent);
  ...
}
User-defined services can also be made fork-aware by overriding the io_service::service::fork_service() virtual function.
Note that any file descriptors accessible via Boost.Asio's public API (e.g. the descriptors underlying basic_socket<>, posix::stream_descriptor, etc.) are not altered during a fork. It is the program's responsibility to manage these as required.
io_service::notify_fork(), io_service::fork_event, io_service::service::fork_service(), Fork examples.
Boost.Asio contains classes to allow asynchronous read and write operations to be performed on Windows HANDLEs, such as named pipes.
For example, to perform asynchronous operations on a named pipe, the following object may be created:
HANDLE handle = ::CreateFile(...);
windows::stream_handle pipe(my_io_service, handle);
These are then used as synchronous or asynchronous read and write streams. This means the objects can be used with any of the read(), async_read(), write(), async_write(), read_until() or async_read_until() free functions.
The kernel object referred to by the HANDLE must support use with I/O completion ports (which means that named pipes are supported, but anonymous pipes and console streams are not).
windows::stream_handle, windows::basic_stream_handle, windows::stream_handle_service.
Windows stream HANDLEs are only available at compile time when targeting Windows, and only when the I/O completion port backend is used (which is the default). A program may test for the macro BOOST_ASIO_HAS_WINDOWS_STREAM_HANDLE to determine whether they are supported.
Boost.Asio provides Windows-specific classes that permit asynchronous read and write operations to be performed on HANDLEs that refer to regular files.
For example, to perform asynchronous operations on a file the following object may be created:
HANDLE handle = ::CreateFile(...);
windows::random_access_handle file(my_io_service, handle);
Data may be read from or written to the handle using one of the read_some_at(), async_read_some_at(), write_some_at() or async_write_some_at() member functions. However, like the equivalent functions (read_some(), etc.) on streams, these functions are only required to transfer one or more bytes in a single operation. Therefore free functions called read_at(), async_read_at(), write_at() and async_write_at() have been created to repeatedly call the corresponding *_some_at() function until all data has been transferred.
windows::random_access_handle, windows::basic_random_access_handle, windows::random_access_handle_service.
Windows random-access HANDLEs are only available at compile time when targeting Windows, and only when the I/O completion port backend is used (which is the default). A program may test for the macro BOOST_ASIO_HAS_WINDOWS_RANDOM_ACCESS_HANDLE to determine whether they are supported.
Boost.Asio provides Windows-specific classes that permit asynchronous wait operations to be performed on HANDLEs to kernel objects of the following types:
For example, to perform asynchronous operations on an event, the following object may be created:
HANDLE handle = ::CreateEvent(...);
windows::object_handle file(my_io_service, handle);
The wait() and async_wait() member functions may then be used to wait until the kernel object is signalled.
windows::object_handle, windows::basic_object_handle, windows::object_handle_service.
Windows object HANDLEs are only available at compile time when targeting Windows. Programs may test for the macro BOOST_ASIO_HAS_WINDOWS_OBJECT_HANDLE to determine whether they are supported.
Boost.Asio contains classes and class templates for basic SSL support. These classes allow encrypted communication to be layered on top of an existing stream, such as a TCP socket.
Before creating an encrypted stream, an application must construct an SSL context object. This object is used to set SSL options such as verification mode, certificate files, and so on. As an illustration, client-side initialisation may look something like:
ssl::context ctx(ssl::context::sslv23);
ctx.set_verify_mode(ssl::verify_peer);
ctx.load_verify_file("ca.pem");
To use SSL with a TCP socket, one may write:
ssl::stream<ip::tcp::socket> ssl_sock(my_io_service, ctx);
To perform socket-specific operations, such as establishing an outbound connection or accepting an incoming one, the underlying socket must first be obtained using the ssl::stream template's lowest_layer() member function:
ip::tcp::socket::lowest_layer_type& sock = ssl_sock.lowest_layer();
sock.connect(my_endpoint);
In some use cases the underlying stream object will need to have a longer lifetime than the SSL stream, in which case the template parameter should be a reference to the stream type:
ip::tcp::socket sock(my_io_service);
ssl::stream<ip::tcp::socket&> ssl_sock(sock, ctx);
SSL handshaking must be performed prior to transmitting or receiving data over an encrypted connection. This is accomplished using the ssl::stream template's handshake() or async_handshake() member functions.
Once connected, SSL stream objects are used as synchronous or asynchronous read and write streams. This means the objects can be used with any of the read(), async_read(), write(), async_write(), read_until() or async_read_until() free functions.
Boost.Asio provides various methods for configuring the way SSL certificates are verified:
To simplify use cases where certificates are verified according to the rules in RFC 2818 (certificate verification for HTTPS), Boost.Asio provides a reusable verification callback as a function object:
The following example shows verification of a remote host's certificate according to the rules used by HTTPS:
using boost::asio::ip::tcp;
namespace ssl = boost::asio::ssl;
typedef ssl::stream<tcp::socket> ssl_socket;

// Create a context that uses the default paths for
// finding CA certificates.
ssl::context ctx(ssl::context::sslv23);
ctx.set_default_verify_paths();

// Open a socket and connect it to the remote host.
boost::asio::io_service io_service;
ssl_socket sock(io_service, ctx);
tcp::resolver resolver(io_service);
tcp::resolver::query query("host.name", "https");
boost::asio::connect(sock.lowest_layer(), resolver.resolve(query));
sock.lowest_layer().set_option(tcp::no_delay(true));

// Perform SSL handshake and verify the remote host's
// certificate.
sock.set_verify_mode(ssl::verify_peer);
sock.set_verify_callback(ssl::rfc2818_verification("host.name"));
sock.handshake(ssl_socket::client);

// ... read and write as normal ...
SSL stream objects perform no locking of their own. Therefore, it is essential that all asynchronous SSL operations are performed in an implicit or explicit strand. Note that this means that no synchronisation is required (and so no locking overhead is incurred) in single threaded programs.
ssl::context, ssl::rfc2818_verification, ssl::stream, SSL example.
OpenSSL is required to make use of Boost.Asio's SSL support. When an application needs to use OpenSSL functionality that is not wrapped by Boost.Asio, the underlying OpenSSL types may be obtained by calling ssl::context::native_handle() or ssl::stream::native_handle().
When move support is available (via rvalue references), Boost.Asio allows move construction and assignment of sockets, serial ports, POSIX descriptors and Windows handles.
Move support allows you to write code like:
tcp::socket make_socket(io_service& i)
{
  tcp::socket s(i);
  ...
  return std::move(s);
}
or:
class connection : public enable_shared_from_this<connection>
{
private:
  tcp::socket socket_;
  ...
public:
  connection(tcp::socket&& s) : socket_(std::move(s)) {}
  ...
};

...

class server
{
private:
  tcp::acceptor acceptor_;
  tcp::socket socket_;
  ...
  void handle_accept(error_code ec)
  {
    if (!ec)
      std::make_shared<connection>(std::move(socket_))->go();
    acceptor_.async_accept(socket_, ...);
  }
  ...
};
as well as:
std::vector<tcp::socket> sockets;
sockets.push_back(tcp::socket(...));
A word of warning: There is nothing stopping you from moving these objects while there are pending asynchronous operations, but it is unlikely to be a good idea to do so. In particular, composed operations like async_read() store a reference to the stream object. Moving during the composed operation means that the composed operation may attempt to access a moved-from object.
Move support is automatically enabled for g++ 4.5 and later, when the -std=c++0x or -std=gnu++0x compiler options are used. It may be disabled by defining BOOST_ASIO_DISABLE_MOVE, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_MOVE. Note that these macros also affect the availability of movable handlers.
As an optimisation, user-defined completion handlers may provide move constructors, and Boost.Asio's implementation will use a handler's move constructor in preference to its copy constructor. In certain circumstances, Boost.Asio may be able to eliminate all calls to a handler's copy constructor. However, handler types are still required to be copy constructible.
When move support is enabled, asynchronous operations that are documented as follows:
template <typename Handler> void async_XYZ(..., Handler handler);
are actually declared as:
template <typename Handler> void async_XYZ(..., Handler&& handler);
The handler argument is perfectly forwarded, and the move construction occurs within the body of async_XYZ(). This ensures that all other function arguments are evaluated prior to the move. This is critical when the other arguments to async_XYZ() are members of the handler. For example:
struct my_operation
{
  shared_ptr<tcp::socket> socket;
  shared_ptr<vector<char>> buffer;
  ...

  void operator()(error_code ec, size_t length)
  {
    ...
    socket->async_read_some(boost::asio::buffer(*buffer), std::move(*this));
    ...
  }
};
Move support is automatically enabled for g++ 4.5 and later, when the -std=c++0x or -std=gnu++0x compiler options are used. It may be disabled by defining BOOST_ASIO_DISABLE_MOVE, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_MOVE. Note that these macros also affect the availability of movable I/O objects.
When supported by a compiler, Boost.Asio can use variadic templates to implement the basic_socket_streambuf::connect() and basic_socket_iostream::connect() functions.
Support for variadic templates is automatically enabled for g++ 4.3 and later, when the -std=c++0x or -std=gnu++0x compiler options are used. It may be disabled by defining BOOST_ASIO_DISABLE_VARIADIC_TEMPLATES, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_VARIADIC_TEMPLATES.
Where the standard library provides std::array<>, Boost.Asio uses it in preference to boost::array<> for the ip::address_v4::bytes_type and ip::address_v6::bytes_type types, and in preference to boost::array<> where a fixed size array type is needed in the implementation.
Support for std::array<> is automatically enabled for g++ 4.3 and later, when the -std=c++0x or -std=gnu++0x compiler options are used, as well as for Microsoft Visual C++ 10. It may be disabled by defining BOOST_ASIO_DISABLE_STD_ARRAY, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_STD_ARRAY.
Boost.Asio's implementation can use std::atomic<> in preference to boost::detail::atomic_count.
Support for the standard atomic integer template is automatically enabled for g++ 4.5 and later, when the -std=c++0x or -std=gnu++0x compiler options are used. It may be disabled by defining BOOST_ASIO_DISABLE_STD_ATOMIC, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_STD_ATOMIC.
Boost.Asio's implementation can use std::shared_ptr<> and std::weak_ptr<> in preference to the Boost equivalents.
Support for the standard smart pointers is automatically enabled for g++ 4.3 and later, when the -std=c++0x or -std=gnu++0x compiler options are used, as well as for Microsoft Visual C++ 10. It may be disabled by defining BOOST_ASIO_DISABLE_STD_SHARED_PTR, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_STD_SHARED_PTR.
Boost.Asio provides timers based on the std::chrono facilities via the basic_waitable_timer class template. The typedefs system_timer, steady_timer and high_resolution_timer utilise the standard clocks system_clock, steady_clock and high_resolution_clock respectively.
Support for the std::chrono facilities is automatically enabled for g++ 4.6 and later, when the -std=c++0x or -std=gnu++0x compiler options are used. (Note that, for g++, the draft-standard monotonic_clock is used in place of steady_clock.) Support may be disabled by defining BOOST_ASIO_DISABLE_STD_CHRONO, or explicitly enabled for other compilers by defining BOOST_ASIO_HAS_STD_CHRONO.
When standard chrono is unavailable, Boost.Asio will use the Boost.Chrono library instead. The basic_waitable_timer class template may be used with either.
The boost::asio::use_future special value provides first-class support for returning a C++11 std::future from an asynchronous operation's initiating function. To use boost::asio::use_future, pass it to an asynchronous operation instead of a normal completion handler. For example:
std::future<std::size_t> length = my_socket.async_read_some(my_buffer, boost::asio::use_future);
Where a handler signature has the form:
void handler(boost::system::error_code ec, result_type result);
the initiating function returns a std::future templated on result_type. In the above example, this is std::size_t.
If the asynchronous operation fails, the error_code is converted into a system_error exception and passed back to the caller through the future.
Where a handler signature has the form:
void handler(boost::system::error_code ec);
the initiating function returns std::future<void>. As above, an error is passed back in the future as a system_error exception.
This section lists platform-specific implementation details, such as the default demultiplexing mechanism, the number of threads created internally, and when threads are created.
Linux 2.4 Kernel
Demultiplexing mechanism:
Uses select for demultiplexing. This means that the number of file descriptors in the process cannot be permitted to exceed FD_SETSIZE.
Threads:
Demultiplexing using select is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
Linux 2.6 Kernels
Demultiplexing mechanism:
Uses epoll for demultiplexing.
Threads:
Demultiplexing using epoll is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
Solaris
Demultiplexing mechanism:
Uses /dev/poll for demultiplexing.
Threads:
Demultiplexing using /dev/poll is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
QNX Neutrino
Demultiplexing mechanism:
Uses select for demultiplexing. This means that the number of file descriptors in the process cannot be permitted to exceed FD_SETSIZE.
Threads:
Demultiplexing using select is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
Mac OS X
Demultiplexing mechanism:
Uses kqueue for demultiplexing.
Threads:
Demultiplexing using kqueue is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
FreeBSD
Demultiplexing mechanism:
Uses kqueue for demultiplexing.
Threads:
Demultiplexing using kqueue is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
AIX
Demultiplexing mechanism:
Uses select for demultiplexing. This means that the number of file descriptors in the process cannot be permitted to exceed FD_SETSIZE.
Threads:
Demultiplexing using select is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
HP-UX
Demultiplexing mechanism:
Uses select for demultiplexing. This means that the number of file descriptors in the process cannot be permitted to exceed FD_SETSIZE.
Threads:
Demultiplexing using select is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
Tru64
Demultiplexing mechanism:
Uses select for demultiplexing. This means that the number of file descriptors in the process cannot be permitted to exceed FD_SETSIZE.
Threads:
Demultiplexing using select is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
At most min(64,IOV_MAX) buffers may be transferred in a single operation.
Windows 95, 98 and Me
Demultiplexing mechanism:
Uses select for demultiplexing.
Threads:
Demultiplexing using select is performed in one of the threads that calls io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
For sockets, at most 16 buffers may be transferred in a single operation.
Windows NT, 2000, XP, 2003, Vista and 7
Demultiplexing mechanism:
Uses overlapped I/O and I/O completion ports for all asynchronous socket operations except for asynchronous connect operations, which are emulated using select.
Threads:
Demultiplexing using I/O completion ports is performed in all threads that call io_service::run(), io_service::run_one(), io_service::poll() or io_service::poll_one().
An additional thread per io_service is used to trigger timers. This thread is created on construction of the first deadline_timer or deadline_timer_service objects.
An additional thread per io_service is used for the select demultiplexing. This thread is created on the first call to async_connect().
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
Scatter-Gather:
For sockets, at most 64 buffers may be transferred in a single operation. For stream-oriented handles, only one buffer may be transferred in a single operation.
Boost.Asio provides limited support for the Windows Runtime. It requires that the language extensions be enabled. Due to the restricted facilities exposed by the Windows Runtime API, the support comes with the following caveats:
The core facilities such as the io_service, strand, buffers, composed operations, timers, etc., should all work as normal.
The cancel() function is not supported for sockets. Asynchronous operations may only be cancelled by closing the socket.
Operations that use null_buffers are not supported.
Only the tcp::no_delay and socket_base::keep_alive options are supported.
Demultiplexing mechanism:
Uses the Windows::Networking::Sockets::StreamSocket class to implement asynchronous TCP socket operations.
Threads:
Event completions are delivered to the Windows thread pool and posted to the io_service for the handler to be executed.
An additional thread per io_service is used to trigger timers. This thread is created on construction of the first timer objects.
Scatter-Gather:
For sockets, at most one buffer may be transferred in a single operation.
Last revised: December 22, 2016 at 12:37:16 GMT