...one of the most highly regarded and expertly designed C++ library projects in the world.

— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
As we have seen, Boost.Interprocess offers raw memory allocation and object construction from managed memory segments (managed shared memory, managed mapped files...), and one of the first user requests was the use of containers in managed shared memory. To achieve this, Boost.Interprocess builds on the managed memory segment's memory allocation algorithms to provide several memory allocation schemes, including general-purpose and node allocators.
Boost.Interprocess allocators are configurable via template parameters. Allocators define their pointer typedef based on the void_pointer typedef of the segment manager passed as a template argument. When this segment_manager::void_pointer is a relative pointer (for example, offset_ptr<void>), the user can place these allocators in memory mapped at different base addresses in several processes.
Container allocators are normally default-constructible because they are stateless. std::allocator and Boost.Pool's boost::pool_allocator/boost::fast_pool_allocator are examples of default-constructible allocators.
On the other hand, Boost.Interprocess allocators need to allocate memory from a concrete memory segment and not from a system-wide memory source (like the heap). Boost.Interprocess allocators are stateful, which means that they must be configured to tell them where the shared memory or the memory mapped file is.
This information is transmitted at compile time and at run time: the allocators receive a template parameter defining the type of the segment manager, and their constructors receive a pointer to the segment manager of the managed memory segment where the user wants to allocate the values.
Boost.Interprocess allocators have no default constructor and containers must be explicitly initialized with a configured allocator:
//The allocators must be templatized with the segment manager type
typedef any_interprocess_allocator
   <int, managed_shared_memory::segment_manager, ...> Allocator;

//The allocator must be constructed with a pointer to the segment manager
Allocator alloc_instance(segment.get_segment_manager(), ...);

//Containers must be initialized with a configured allocator
typedef my_list<int, Allocator> MyIntList;
MyIntList mylist(alloc_instance);

//This would lead to a compilation error, because
//the allocator has no default constructor
//MyIntList mylist;
Boost.Interprocess allocators also have a get_segment_manager() function that returns the underlying segment manager that they received in the constructor:
Allocator::segment_manager *s = alloc_instance.get_segment_manager();
AnotherType *a = s->construct<AnotherType>(anonymous_instance)(/*Parameters*/);
When swapping STL containers, there is an active discussion about what to do with the allocators. Some STL implementations (for example, Dinkumware's in Visual .NET 2003) perform a deep swap of the whole container through a temporary when the allocators are not equal. The proposed resolution for container swapping is that allocators should be swapped in a non-throwing way.
Unfortunately, this approach is not valid with shared memory. With heap allocators, if Group1 of node allocators shares a common segregated storage and Group2 shares another common segregated storage, a simple pointer swap is enough to swap an allocator of Group1 with an allocator of Group2. But when the user wants to swap two shared memory allocators, each one placed in a different shared memory segment, this is not possible. Since shared memory is generally mapped at different addresses in each process, a pointer placed in one segment can't point to an object placed in another shared memory segment, because in each process the distance between the segments is different. However, if both shared memory allocators are in the same segment, a non-throwing swap is possible, just as with heap allocators.
Until a final resolution is achieved, Boost.Interprocess allocators implement a non-throwing swap function that swaps internal pointers. If an allocator placed in a shared memory segment is swapped with another placed in a different shared memory segment, the result is undefined, and a crash is quite likely.
The allocator class defines an allocator that uses the managed memory segment's algorithm to allocate and deallocate memory. This is achieved through the segment manager of the managed memory segment. This allocator is, for managed memory segments, the equivalent of the standard std::allocator. allocator is templatized by the allocated type and the segment manager.
Equality: Two allocator instances constructed with the same segment manager compare equal. If an instance is created using the copy constructor, that instance compares equal to the original one.
Allocation thread-safety: Allocation and deallocation are implemented as calls to the segment manager's allocation function so the allocator offers the same thread-safety as the segment manager.
To use allocator you must include the following header:
#include <boost/interprocess/allocators/allocator.hpp>
allocator has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager>
class allocator;

}  //namespace interprocess {
}  //namespace boost {
The allocator just provides the needed typedefs and forwards all allocation and deallocation requests to the segment manager passed in the constructor, just like std::allocator forwards the requests to operator new[].
Using allocator is straightforward:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only, "MyName", 65536);

   //Create an allocator that allocates ints from the managed segment
   allocator<int, managed_shared_memory::segment_manager>
      allocator_instance(segment.get_segment_manager());

   //Copy constructed allocator is equal
   allocator<int, managed_shared_memory::segment_manager>
      allocator_instance2(allocator_instance);

   assert(allocator_instance2 == allocator_instance);

   //Allocate and deallocate memory for 100 ints
   allocator_instance2.deallocate(allocator_instance.allocate(100), 100);

   return 0;
}
Variable-size memory algorithms waste some space on management information for each allocation. Sometimes, usually for small objects, this is not acceptable. Memory algorithms can also fragment the managed memory segment under some allocation and deallocation patterns, reducing performance. When allocating many objects of the same type, a simple segregated storage becomes a fast and space-friendly allocator, as explained in the Boost.Pool library.
Segregated storage node allocators allocate large memory chunks from a general-purpose memory allocator and divide each chunk into several nodes. No bookkeeping information is stored in the nodes, to achieve minimal memory waste: free nodes are linked using a pointer constructed in the memory of the node itself.
Boost.Interprocess offers 3 allocators based on this segregated storage algorithm: node_allocator, private_node_allocator and cached_node_allocator.
To know the details of the implementation of the segregated storage pools see the Implementation of Boost.Interprocess segregated storage pools section.
node_allocator, private_node_allocator and cached_node_allocator implement the standard allocator interface and the functions explained in the Properties of Boost.Interprocess allocators.
All these allocators are templatized by 3 parameters:

* class T: The type to be allocated.
* class SegmentManager: The type of the segment manager that will be passed in the constructor.
* std::size_t NodesPerChunk: The number of nodes that a memory chunk will contain. This value defines the size of the memory the pool will request from the segment manager when the pool runs out of nodes. This parameter has a default value.
These allocators also offer the deallocate_free_chunks() function. This function traverses all the memory chunks of the pool and returns the free chunks of memory to the managed memory segment. If this function is not used, free chunks are not deallocated until the pool is destroyed, so the only way to return memory allocated by the pool to the segment before destroying the pool is to call this function manually. This function is quite time-consuming because it has quadratic complexity (O(N^2)).
For heap-memory node allocators (like Boost.Pool's boost::fast_pool_allocator), usually a global, thread-shared singleton pool is used for each node size. This is not possible if you try to share a node allocator between processes. To achieve this sharing, node_allocator uses the segment manager's unique type allocation service (see the Unique instance construction section).
During initialization, a node_allocator object searches for this unique object in the segment. If it is not present, it builds one. This way, all node_allocator objects built inside a memory segment share a unique memory pool.
The common segregated storage is not only shared between node_allocators of the same type; it is also shared between all node allocators that allocate objects of the same size, for example, node_allocator<uint32> and node_allocator<float32>. This saves a lot of memory but also imposes a synchronization overhead for each node allocation.
The dynamically created common segregated storage integrates a reference count so that a node_allocator can know if any other node_allocator is attached to the same common segregated storage. When the last allocator attached to the pool is destroyed, the pool is destroyed.
Equality: Two node_allocator instances constructed with the same segment manager compare equal. If an instance is created using the copy constructor, that instance compares equal to the original one.
Allocation thread-safety: Allocation and deallocation are implemented as calls to the shared pool. The shared pool offers the same synchronization guarantees as the segment manager.
To use node_allocator, you must include the following header:
#include <boost/interprocess/allocators/node_allocator.hpp>
node_allocator has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager, std::size_t NodesPerChunk = ...>
class node_allocator;

}  //namespace interprocess {
}  //namespace boost {
An example using node_allocator:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/node_allocator.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only,
                                 "MyName",  //segment name
                                 65536);

   //Create a node_allocator that allocates ints from the managed segment
   //The number of chunks per segment is the default value
   typedef node_allocator<int, managed_shared_memory::segment_manager>
      node_allocator_t;
   node_allocator_t allocator_instance(segment.get_segment_manager());

   //Create another node_allocator. Since the segment manager address
   //is the same, this node_allocator will be
   //attached to the same pool so "allocator_instance2" can deallocate
   //nodes allocated by "allocator_instance"
   node_allocator_t allocator_instance2(segment.get_segment_manager());

   //Create another node_allocator using copy-constructor. This
   //node_allocator will also be attached to the same pool
   node_allocator_t allocator_instance3(allocator_instance2);

   //All allocators are equal
   assert(allocator_instance == allocator_instance2);
   assert(allocator_instance2 == allocator_instance3);

   //So memory allocated with one can be deallocated with another
   allocator_instance2.deallocate(allocator_instance.allocate(1), 1);
   allocator_instance3.deallocate(allocator_instance2.allocate(1), 1);

   //The common pool will be destroyed here, since no allocator is
   //attached to the pool
   return 0;
}
As said, node_allocator shares a common segregated storage between node_allocators that allocate objects of the same size, which optimizes memory usage. However, it needs a unique/named object construction feature for this sharing to be possible, and the sharing also imposes a synchronization overhead per node allocation. Sometimes the unique object service is not available (for example, when building the index types that implement the named allocation service itself) or the synchronization overhead is not acceptable. Often the programmer wants to make sure that the pool is destroyed when the allocator is destroyed, to free the memory as soon as possible.
So private_node_allocator uses the same segregated storage as node_allocator, but each private_node_allocator has its own segregated storage pool. No synchronization is used when allocating nodes, so there is far less overhead for an operation that usually involves just a few pointer operations when allocating and deallocating a node.
Equality: Two private_node_allocator instances never compare equal. Memory allocated with one allocator can't be deallocated with another one.
Allocation thread-safety: Allocation and deallocation are not thread-safe.
To use private_node_allocator, you must include the following header:
#include <boost/interprocess/allocators/private_node_allocator.hpp>
private_node_allocator has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager, std::size_t NodesPerChunk = ...>
class private_node_allocator;

}  //namespace interprocess {
}  //namespace boost {
An example using private_node_allocator:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/private_node_allocator.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only,
                                 "MyName",  //segment name
                                 65536);

   //Create a private_node_allocator that allocates ints from the managed segment
   //The number of chunks per segment is the default value
   typedef private_node_allocator<int, managed_shared_memory::segment_manager>
      private_node_allocator_t;
   private_node_allocator_t allocator_instance(segment.get_segment_manager());

   //Create another private_node_allocator.
   private_node_allocator_t allocator_instance2(segment.get_segment_manager());

   //Although the segment manager address
   //is the same, this private_node_allocator will have its own pool so
   //"allocator_instance2" CAN'T deallocate nodes allocated by "allocator_instance".
   //"allocator_instance2" is NOT equal to "allocator_instance"
   assert(allocator_instance != allocator_instance2);

   //Create another node_allocator using copy-constructor.
   private_node_allocator_t allocator_instance3(allocator_instance2);

   //This allocator is also unequal to allocator_instance2
   assert(allocator_instance2 != allocator_instance3);

   //Pools are destroyed with the allocators
   return 0;
}
The total node sharing of node_allocator can impose a high overhead for some applications, and private_node_allocator, with its minimal synchronization overhead, can impose an unacceptable memory waste for other applications.

To solve this, Boost.Interprocess offers an allocator, cached_node_allocator, that allocates nodes from the common pool but caches some of them privately so that subsequent allocations have no synchronization overhead. When the cache is full, the allocator returns some cached nodes to the common pool, and those become available to other allocators.
Equality: Two cached_node_allocator instances constructed with the same segment manager compare equal. If an instance is created using the copy constructor, that instance compares equal to the original one.
Allocation thread-safety: Allocation and deallocation are not thread-safe.
To use cached_node_allocator, you must include the following header:
#include <boost/interprocess/allocators/cached_node_allocator.hpp>
cached_node_allocator has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager, std::size_t NodesPerChunk = ...>
class cached_node_allocator;

}  //namespace interprocess {
}  //namespace boost {
A cached_node_allocator instance and a node_allocator instance share the same pool if both instances receive the same template parameters. This means that nodes returned to the shared pool by one of them can be reused by the other. Please note that this does not mean that both allocators compare equal; this is just information for programmers who want to maximize the use of the pool.
cached_node_allocator offers additional functions to control the cache (the cache can be controlled per instance):

* void set_max_cached_nodes(std::size_t n): Sets the maximum cached nodes limit. If cached nodes reach the limit, some are returned to the shared pool.
* std::size_t get_max_cached_nodes() const: Returns the maximum cached nodes limit.
* void deallocate_cache(): Returns the cached nodes to the shared pool.
An example using cached_node_allocator:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/cached_node_allocator.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only, "MyName", 65536);

   //Create a cached_node_allocator that allocates ints from the managed segment
   //The number of chunks per segment is the default value
   typedef cached_node_allocator<int, managed_shared_memory::segment_manager>
      cached_node_allocator_t;
   cached_node_allocator_t allocator_instance(segment.get_segment_manager());

   //The max cached nodes are configurable per instance
   allocator_instance.set_max_cached_nodes(3);

   //Create another cached_node_allocator. Since the segment manager address
   //is the same, this cached_node_allocator will be
   //attached to the same pool so "allocator_instance2" can deallocate
   //nodes allocated by "allocator_instance"
   cached_node_allocator_t allocator_instance2(segment.get_segment_manager());

   //The max cached nodes are configurable per instance
   allocator_instance2.set_max_cached_nodes(5);

   //Create another cached_node_allocator using copy-constructor. This
   //cached_node_allocator will also be attached to the same pool
   cached_node_allocator_t allocator_instance3(allocator_instance2);

   //We can clear the cache
   allocator_instance3.deallocate_cache();

   //All allocators are equal
   assert(allocator_instance == allocator_instance2);
   assert(allocator_instance2 == allocator_instance3);

   //So memory allocated with one can be deallocated with another
   allocator_instance2.deallocate(allocator_instance.allocate(1), 1);
   allocator_instance3.deallocate(allocator_instance2.allocate(1), 1);

   //The common pool will be destroyed here, since no allocator is
   //attached to the pool
   return 0;
}
Node allocators based on the simple segregated storage algorithm are both space-efficient and fast, but they have a problem: they can only grow. Every allocated node avoids any payload to store additional data, and that leads to the following limitation: when a node is deallocated, it's stored in a free list of nodes, but the memory is not returned to the segment manager, so a deallocated node can only be reused by other containers using the same node pool.
This behaviour can be problematic if several containers use boost::interprocess::node_allocator to temporarily allocate a lot of objects but end up storing only a few of them: the node pool will be full of nodes that won't be reused, wasting memory from the segment.
Adaptive pool based allocators trade some space (the overhead can be as low as 1%) and performance (acceptable for many applications) for the ability to return free chunks of nodes to the memory segment, so that they can be used by any other container or managed object construction. To know the details of the implementation of "adaptive pools" see the Implementation of Boost.Intrusive adaptive pools section.
As with segregated storage based node allocators, Boost.Interprocess offers 3 new allocators: adaptive_pool, private_adaptive_pool and cached_adaptive_pool.
adaptive_pool, private_adaptive_pool and cached_adaptive_pool implement the standard allocator interface and the functions explained in the Properties of Boost.Interprocess allocators.
All these allocators are templatized by 4 parameters:

* class T: The type to be allocated.
* class SegmentManager: The type of the segment manager that will be passed in the constructor.
* std::size_t NodesPerChunk: The number of nodes that a memory chunk will contain. This value defines the size of the memory the pool will request from the segment manager when the pool runs out of nodes. This parameter has a default value.
* std::size_t MaxFreeChunks: The maximum number of free chunks that the pool will hold. If this limit is reached, the pool returns the chunks to the segment manager. This parameter has a default value.
These allocators also offer the deallocate_free_chunks() function. This function traverses all the memory chunks of the pool and returns the free chunks of memory to the managed memory segment. This function is much faster than for segregated storage allocators, because the adaptive pool algorithm offers constant-time access to free chunks.
Just like node_allocator, a shared pool is used for each node size. During initialization, adaptive_pool searches for the pool in the segment. If it is not present, it builds one. The adaptive pool is created using a unique name. The adaptive pool is also shared between all adaptive_pools that allocate objects of the same size, for example, adaptive_pool<uint32> and adaptive_pool<float32>.

The common adaptive pool is destroyed when all the allocators attached to the pool are destroyed.
Equality: Two adaptive_pool instances constructed with the same segment manager compare equal. If an instance is created using the copy constructor, that instance compares equal to the original one.
Allocation thread-safety: Allocation and deallocation are implemented as calls to the shared pool. The shared pool offers the same synchronization guarantees as the segment manager.
To use adaptive_pool, you must include the following header:
#include <boost/interprocess/allocators/adaptive_pool.hpp>
adaptive_pool has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager, std::size_t NodesPerChunk = ..., std::size_t MaxFreeChunks = ...>
class adaptive_pool;

}  //namespace interprocess {
}  //namespace boost {
An example using adaptive_pool:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/adaptive_pool.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only, "MyName", 65536);

   //Create an adaptive_pool that allocates ints from the managed segment
   //The number of chunks per segment is the default value
   typedef adaptive_pool<int, managed_shared_memory::segment_manager>
      adaptive_pool_t;
   adaptive_pool_t allocator_instance(segment.get_segment_manager());

   //Create another adaptive_pool. Since the segment manager address
   //is the same, this adaptive_pool will be
   //attached to the same pool so "allocator_instance2" can deallocate
   //nodes allocated by "allocator_instance"
   adaptive_pool_t allocator_instance2(segment.get_segment_manager());

   //Create another adaptive_pool using copy-constructor. This
   //adaptive_pool will also be attached to the same pool
   adaptive_pool_t allocator_instance3(allocator_instance2);

   //All allocators are equal
   assert(allocator_instance == allocator_instance2);
   assert(allocator_instance2 == allocator_instance3);

   //So memory allocated with one can be deallocated with another
   allocator_instance2.deallocate(allocator_instance.allocate(1), 1);
   allocator_instance3.deallocate(allocator_instance2.allocate(1), 1);

   //The common pool will be destroyed here, since no allocator is
   //attached to the pool
   return 0;
}
Just as private_node_allocator owns a private segregated storage pool, private_adaptive_pool owns its own adaptive pool. If the user wants to avoid the excessive node allocation synchronization overhead in a container, private_adaptive_pool is a good choice.
Equality: Two private_adaptive_pool instances never compare equal. Memory allocated with one allocator can't be deallocated with another one.
Allocation thread-safety: Allocation and deallocation are not thread-safe.
To use private_adaptive_pool, you must include the following header:
#include <boost/interprocess/allocators/private_adaptive_pool.hpp>
private_adaptive_pool has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager, std::size_t NodesPerChunk = ..., std::size_t MaxFreeChunks = ...>
class private_adaptive_pool;

}  //namespace interprocess {
}  //namespace boost {
An example using private_adaptive_pool:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/private_adaptive_pool.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only,
                                 "MyName",  //segment name
                                 65536);

   //Create a private_adaptive_pool that allocates ints from the managed segment
   //The number of chunks per segment is the default value
   typedef private_adaptive_pool<int, managed_shared_memory::segment_manager>
      private_adaptive_pool_t;
   private_adaptive_pool_t allocator_instance(segment.get_segment_manager());

   //Create another private_adaptive_pool.
   private_adaptive_pool_t allocator_instance2(segment.get_segment_manager());

   //Although the segment manager address
   //is the same, this private_adaptive_pool will have its own pool so
   //"allocator_instance2" CAN'T deallocate nodes allocated by "allocator_instance".
   //"allocator_instance2" is NOT equal to "allocator_instance"
   assert(allocator_instance != allocator_instance2);

   //Create another adaptive_pool using copy-constructor.
   private_adaptive_pool_t allocator_instance3(allocator_instance2);

   //This allocator is also unequal to allocator_instance2
   assert(allocator_instance2 != allocator_instance3);

   //Pools are destroyed with the allocators
   return 0;
}
Adaptive pools also have a cached version, in which the allocator caches some nodes to avoid the synchronization and bookkeeping overhead of the shared adaptive pool. cached_adaptive_pool allocates nodes from the common adaptive pool but caches some of them privately so that subsequent allocations have no synchronization overhead. When the cache is full, the allocator returns some cached nodes to the common pool, and those become available to other cached_adaptive_pools or adaptive_pools of the same managed segment.
Equality: Two cached_adaptive_pool instances constructed with the same segment manager compare equal. If an instance is created using the copy constructor, that instance compares equal to the original one.
Allocation thread-safety: Allocation and deallocation are not thread-safe.
To use cached_adaptive_pool, you must include the following header:
#include <boost/interprocess/allocators/cached_adaptive_pool.hpp>
cached_adaptive_pool has the following declaration:
namespace boost {
namespace interprocess {

template<class T, class SegmentManager, std::size_t NodesPerChunk = ..., std::size_t MaxFreeChunks = ...>
class cached_adaptive_pool;

}  //namespace interprocess {
}  //namespace boost {
A cached_adaptive_pool instance and an adaptive_pool instance share the same pool if both instances receive the same template parameters. This means that nodes returned to the shared pool by one of them can be reused by the other. Please note that this does not mean that both allocators compare equal; this is just information for programmers who want to maximize the use of the pool.
cached_adaptive_pool offers additional functions to control the cache (the cache can be controlled per instance):

* void set_max_cached_nodes(std::size_t n): Sets the maximum cached nodes limit. If cached nodes reach the limit, some are returned to the shared pool.
* std::size_t get_max_cached_nodes() const: Returns the maximum cached nodes limit.
* void deallocate_cache(): Returns the cached nodes to the shared pool.
An example using cached_adaptive_pool:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/allocators/cached_adaptive_pool.hpp>
#include <cassert>

using namespace boost::interprocess;

int main ()
{
   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Create shared memory
   managed_shared_memory segment(create_only, "MyName", 65536);

   //Create a cached_adaptive_pool that allocates ints from the managed segment
   //The number of chunks per segment is the default value
   typedef cached_adaptive_pool<int, managed_shared_memory::segment_manager>
      cached_adaptive_pool_t;
   cached_adaptive_pool_t allocator_instance(segment.get_segment_manager());

   //The max cached nodes are configurable per instance
   allocator_instance.set_max_cached_nodes(3);

   //Create another cached_adaptive_pool. Since the segment manager address
   //is the same, this cached_adaptive_pool will be
   //attached to the same pool so "allocator_instance2" can deallocate
   //nodes allocated by "allocator_instance"
   cached_adaptive_pool_t allocator_instance2(segment.get_segment_manager());

   //The max cached nodes are configurable per instance
   allocator_instance2.set_max_cached_nodes(5);

   //Create another cached_adaptive_pool using copy-constructor. This
   //cached_adaptive_pool will also be attached to the same pool
   cached_adaptive_pool_t allocator_instance3(allocator_instance2);

   //We can clear the cache
   allocator_instance3.deallocate_cache();

   //All allocators are equal
   assert(allocator_instance == allocator_instance2);
   assert(allocator_instance2 == allocator_instance3);

   //So memory allocated with one can be deallocated with another
   allocator_instance2.deallocate(allocator_instance.allocate(1), 1);
   allocator_instance3.deallocate(allocator_instance2.allocate(1), 1);

   //The common pool will be destroyed here, since no allocator is
   //attached to the pool
   return 0;
}