Many asynchronous operations need to allocate an object to store state associated with the operation. For example, a Win32 implementation needs OVERLAPPED-derived objects to pass to Win32 API functions.
Furthermore, programs typically contain easily identifiable chains of asynchronous operations. A half duplex protocol implementation (e.g. an HTTP server) would have a single chain of operations per client (receives followed by sends). A full duplex protocol implementation would have two chains executing in parallel. Programs should be able to leverage this knowledge to reuse memory for all asynchronous operations in a chain.
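To make the idea of a chain concrete, the following minimal sketch shows a half-duplex session in which each completion handler starts the next operation: a receive is followed by a send, which is followed by another receive. The session class, its member names and the buffer size are illustrative only and are not part of the library; the point is simply that all of these operations belong to one easily identifiable chain per client.

#include <boost/asio.hpp>
#include <array>
#include <cstddef>
#include <memory>

class session : public std::enable_shared_from_this<session>
{
public:
  explicit session(boost::asio::ip::tcp::socket socket)
    : socket_(std::move(socket))
  {
  }

  void start() { do_read(); }

private:
  void do_read()
  {
    auto self = shared_from_this();
    socket_.async_read_some(boost::asio::buffer(data_),
        [this, self](boost::system::error_code ec, std::size_t n)
        {
          if (!ec)
            do_write(n); // next link in the chain
        });
  }

  void do_write(std::size_t n)
  {
    auto self = shared_from_this();
    boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
        [this, self](boost::system::error_code ec, std::size_t)
        {
          if (!ec)
            do_read(); // chain continues; per-operation memory can be reused
        });
  }

  boost::asio::ip::tcp::socket socket_;
  std::array<char, 1024> data_;
};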
Given a copy of a user-defined Handler object h, if the implementation needs to allocate memory associated with that handler it will obtain an allocator using the get_associated_allocator function. For example:
boost::asio::associated_allocator_t<Handler> a = boost::asio::get_associated_allocator(h);
The associated allocator must satisfy the standard Allocator requirements.
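Because the associated allocator meets the standard Allocator requirements, it can be rebound and used through std::allocator_traits. The following sketch shows, in outline, how a per-operation state object might be allocated and released through the handler's allocator. The names op_state and start_hypothetical_operation are illustrative only and do not appear in Boost.Asio.

#include <boost/asio/associated_allocator.hpp>
#include <memory>

struct op_state
{
  // Per-operation bookkeeping (for example, an OVERLAPPED-derived object
  // on Windows) would live here.
};

template <typename Handler>
void start_hypothetical_operation(Handler& h)
{
  // Obtain the handler's associated allocator.
  auto a = boost::asio::get_associated_allocator(h);

  // Rebind it to the state type.
  using state_traits = typename std::allocator_traits<decltype(a)>::
      template rebind_traits<op_state>;
  typename state_traits::allocator_type state_alloc(a);

  // Allocate and construct the per-operation state.
  op_state* p = state_traits::allocate(state_alloc, 1);
  state_traits::construct(state_alloc, p);

  // ... the operation would run here ...

  // Destroy and deallocate the state before invoking the handler, matching
  // the guarantee described later in this section.
  state_traits::destroy(state_alloc, p);
  state_traits::deallocate(state_alloc, p, 1);
}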
By default, handlers use the standard allocator (which is implemented in terms of ::operator new() and ::operator delete()).
The allocator may be customised for a particular handler type by specifying a nested type allocator_type and member function get_allocator():
class my_handler
{
public:
  // Custom implementation of Allocator type requirements.
  typedef my_allocator allocator_type;

  // Return a custom allocator implementation.
  allocator_type get_allocator() const noexcept
  {
    return my_allocator();
  }

  void operator()()
  {
    ...
  }
};
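The my_allocator type used above is not defined in this section. To make the example self-contained, one possible (hypothetical) definition is a minimal allocator that forwards to the global operators; the template name my_allocator_for is an assumption made for this sketch.

#include <cstddef>
#include <new>

template <typename T>
struct my_allocator_for
{
  typedef T value_type;

  my_allocator_for() noexcept {}

  // Allow rebinding between instantiations, as the Allocator
  // requirements demand.
  template <typename U>
  my_allocator_for(const my_allocator_for<U>&) noexcept {}

  T* allocate(std::size_t n)
  {
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }

  void deallocate(T* p, std::size_t) noexcept
  {
    ::operator delete(p);
  }
};

template <typename T, typename U>
bool operator==(const my_allocator_for<T>&, const my_allocator_for<U>&) noexcept
{ return true; }

template <typename T, typename U>
bool operator!=(const my_allocator_for<T>&, const my_allocator_for<U>&) noexcept
{ return false; }

// The name used in the handler example above.
typedef my_allocator_for<unsigned char> my_allocator;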
In more complex cases, the associated_allocator template may be partially specialised directly:
namespace boost { namespace asio {

  template <typename Allocator>
  struct associated_allocator<my_handler, Allocator>
  {
    // Custom implementation of Allocator type requirements.
    typedef my_allocator type;

    // Return a custom allocator implementation.
    static type get(const my_handler&,
        const Allocator& a = Allocator()) noexcept
    {
      return my_allocator();
    }
  };

} } // namespace boost::asio
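With either form of customisation in place, the lookup shown earlier resolves to the custom type. A brief usage check (assuming the my_handler and my_allocator definitions above) might be:

my_handler h;
boost::asio::associated_allocator_t<my_handler> a =
    boost::asio::get_associated_allocator(h);
// 'a' is a my_allocator, so memory for operations started with 'h'
// will be obtained through it.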
The implementation guarantees that the deallocation will occur before the associated handler is invoked, which means the memory is ready to be reused for any new asynchronous operations started by the handler.
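This deallocate-before-invoke guarantee is what makes a small recycling allocator practical: a single block can be handed out again for the next operation in the chain. The following sketch, loosely modelled on the library's custom allocation example but using illustrative names (handler_memory, recycling_allocator), shows one way to do this.

#include <cstddef>
#include <new>

class handler_memory
{
public:
  handler_memory() : in_use_(false) {}
  handler_memory(const handler_memory&) = delete;
  handler_memory& operator=(const handler_memory&) = delete;

  void* allocate(std::size_t size)
  {
    if (!in_use_ && size <= sizeof(storage_))
    {
      in_use_ = true;
      return &storage_; // reuse the same block for each operation
    }
    return ::operator new(size); // fall back for larger requests
  }

  void deallocate(void* p)
  {
    if (p == &storage_)
      in_use_ = false;
    else
      ::operator delete(p);
  }

private:
  alignas(std::max_align_t) unsigned char storage_[1024];
  bool in_use_;
};

template <typename T>
class recycling_allocator
{
public:
  typedef T value_type;

  explicit recycling_allocator(handler_memory& mem) : mem_(&mem) {}

  template <typename U>
  recycling_allocator(const recycling_allocator<U>& other) noexcept
    : mem_(other.mem_) {}

  T* allocate(std::size_t n)
  {
    return static_cast<T*>(mem_->allocate(n * sizeof(T)));
  }

  void deallocate(T* p, std::size_t) noexcept
  {
    mem_->deallocate(p);
  }

  friend bool operator==(const recycling_allocator& a,
      const recycling_allocator& b) noexcept
  { return a.mem_ == b.mem_; }

  friend bool operator!=(const recycling_allocator& a,
      const recycling_allocator& b) noexcept
  { return a.mem_ != b.mem_; }

private:
  template <typename U> friend class recycling_allocator;
  handler_memory* mem_;
};

A handler could then return such an allocator from get_allocator(), so that every operation it starts reuses the same block of memory.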
The custom memory allocation functions may be called from any user-created thread that is calling a Boost.Asio library function. The implementation guarantees, for the asynchronous operations included with the library, that within the context of an individual operation the implementation will not make concurrent calls to the memory allocation functions for that handler. The implementation will insert appropriate memory barriers to ensure correct memory visibility should an asynchronous operation need to call the allocation functions from different threads. (Note: If the same allocator is shared across multiple concurrent asynchronous operations, this can result in concurrent calls to the memory allocation functions. Use of a strand does not prevent these concurrent calls, as an operation may need to allocate memory from outside the strand. In this case, the shared allocator is responsible for providing the necessary thread safety guarantees.)
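If a single handler_memory object (from the sketch above) were shared across concurrent operations, one possible way for the allocator to supply its own thread safety, as required here, would be to guard the allocation functions with a mutex. This wrapper is purely illustrative.

#include <cstddef>
#include <mutex>

class locked_handler_memory
{
public:
  void* allocate(std::size_t size)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    return mem_.allocate(size);
  }

  void deallocate(void* p)
  {
    std::lock_guard<std::mutex> lock(mutex_);
    mem_.deallocate(p);
  }

private:
  std::mutex mutex_;
  handler_memory mem_; // the single-block recycler sketched earlier
};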
See also: associated_allocator, get_associated_allocator, custom memory allocation example (C++11).