...one of the most highly regarded and expertly designed C++ library projects in the world.
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Warning: These features are experimental and subject to change in future versions. There are only a few tests so far, so you may well run into some trivial bugs.
Note: This tutorial is an adaptation of Anthony Williams' paper "Enforcing Correct Mutex Usage with Synchronized Values" to the Boost library.
The key problem with protecting shared data with a mutex is that there is no easy way to associate the mutex with the data. It is thus relatively easy to accidentally write code that fails to lock the right mutex - or even locks the wrong mutex - and the compiler will not help you.
boost::mutex m1;
int value1;
boost::mutex m2;
int value2;

int readValue1()
{
    boost::lock_guard<boost::mutex> lk(m1);
    return value1;
}
int readValue2()
{
    boost::lock_guard<boost::mutex> lk(m1); // oops: wrong mutex
    return value2;
}
Moreover, managing the mutex lock also clutters the source code, making it harder to see what is really going on.
The use of synchronized_value solves both these problems - the mutex is intimately tied to the value, so you cannot access it without a lock, and yet access semantics are still straightforward. For simple accesses, synchronized_value behaves like a pointer-to-T; for example:
boost::synchronized_value<std::string> value3;

std::string readValue3()
{
    return *value3;
}
void setValue3(std::string const& newVal)
{
    *value3 = newVal;
}
void appendToValue3(std::string const& extra)
{
    value3->append(extra);
}
Both forms of pointer dereference return a proxy object rather than a real reference, to ensure that the lock on the mutex is held across the assignment or method call, but this is transparent to the user.
The pointer-like semantics work very well for simple accesses such as assignment and calls to member functions. However, sometimes you need to perform an operation that requires multiple accesses under protection of the same lock, and that's what the synchronize() method provides.
By calling synchronize() you obtain a strict_lock_ptr object that holds a lock on the mutex protecting the data, and which can be used to access the protected data. The lock is held until the strict_lock_ptr object is destroyed, so you can safely perform multi-part operations. The strict_lock_ptr object also acts as a pointer-to-T, just like synchronized_value does, but this time the lock is already held. For example, the following function adds a trailing slash to a path held in a synchronized_value. The use of the strict_lock_ptr object ensures that the string hasn't changed in between the query and the update.
void addTrailingSlashIfMissing(boost::synchronized_value<std::string> & path)
{
    boost::strict_lock_ptr<std::string> u = path.synchronize();

    if (u->empty() || (*u->rbegin() != '/'))
    {
        *u += '/';
    }
}
Though synchronized_value works very well for protecting a single object of type T, nothing that we've seen so far solves the problem of operations that require atomic access to multiple objects unless those objects can be combined within a single structure protected by a single mutex.
One way to protect access to two synchronized_value objects is to construct a strict_lock_ptr for each object and use those to access the respective protected values; for instance:
synchronized_value<std::queue<MessageType> > q1, q2;

void transferMessage()
{
    strict_lock_ptr<std::queue<MessageType> > u1 = q1.synchronize();
    strict_lock_ptr<std::queue<MessageType> > u2 = q2.synchronize();
    if (!u1->empty())
    {
        u2->push(u1->front());
        u1->pop();
    }
}
This works well in some scenarios, but not all -- if the same two objects are updated together in different sections of code then you need to take care to ensure that the strict_lock_ptr objects are constructed in the same sequence in all cases, otherwise you have the potential for deadlock. This is just the same as when acquiring any two mutexes.
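To make the hazard concrete, here is a hypothetical second function (transferMessageBack is purely illustrative, not part of the library) that continues the example above but acquires the two locks in the opposite order; run concurrently with transferMessage, each thread can end up holding one lock while waiting for the other:

void transferMessageBack()
{
    strict_lock_ptr<std::queue<MessageType> > u2 = q2.synchronize(); // locks q2 first
    strict_lock_ptr<std::queue<MessageType> > u1 = q1.synchronize(); // then q1: opposite order
    if (!u2->empty())
    {
        u1->push(u2->front());
        u2->pop();
    }
}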
In order to use the deadlock-free lock algorithms, we need to use unique_lock_ptr instead, which is Lockable.
synchronized_value<std::queue<MessageType> > q1, q2;

void transferMessage()
{
    unique_lock_ptr<std::queue<MessageType> > u1 = q1.unique_synchronize(boost::defer_lock);
    unique_lock_ptr<std::queue<MessageType> > u2 = q2.unique_synchronize(boost::defer_lock);
    boost::lock(u1, u2); // deadlock-free algorithm
    if (!u1->empty())
    {
        u2->push(u1->front());
        u1->pop();
    }
}
While the preceding takes care of deadlock, access to the synchronized_value via unique_lock_ptr requires a lock that is not enforced by the interface. An alternative, on compilers whose standard library supports movable std::tuple, is the free synchronize function, which locks all the mutexes associated with the synchronized values and returns a tuple of strict_lock_ptr.
synchronized_value<std::queue<MessageType> > q1, q2;

void transferMessage()
{
    auto lks = synchronize(q1, q2); // deadlock-free algorithm
    if (!std::get<0>(lks)->empty())
    {
        std::get<1>(lks)->push(std::get<0>(lks)->front());
        std::get<0>(lks)->pop();
    }
}
synchronized_value has value semantics even though the syntax makes it look like a pointer (that is only because we are unable to define smart references).
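As a minimal illustration of these value semantics, relying only on the constructors, assignment and comparison operators listed in the synopsis below (the function name is illustrative):

#include <boost/thread/synchronized_value.hpp>
#include <cassert>

void value_semantics_demo()
{
    boost::synchronized_value<int> a(1);
    boost::synchronized_value<int> b(2);

    a = b;            // copies the protected int; the mutexes are untouched
    assert(a == b);   // comparison between two synchronized_value objects
    assert(a == 2);   // and against a plain value of type T
}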
#include <boost/thread/synchronized_value.hpp>
namespace boost
{
  template<typename T, typename Lockable = mutex>
  class synchronized_value;

  // Specialized swap algorithm
  template <typename T, typename L>
  void swap(synchronized_value<T,L> & lhs, synchronized_value<T,L> & rhs);
  template <typename T, typename L>
  void swap(synchronized_value<T,L> & lhs, T & rhs);
  template <typename T, typename L>
  void swap(T & lhs, synchronized_value<T,L> & rhs);

  // Hash support
  template <typename T, typename L>
  struct hash<synchronized_value<T,L> >;

  // Comparison
  template <typename T, typename L>
  bool operator==(synchronized_value<T,L> const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator!=(synchronized_value<T,L> const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator<(synchronized_value<T,L> const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator<=(synchronized_value<T,L> const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator>(synchronized_value<T,L> const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator>=(synchronized_value<T,L> const& lhs, synchronized_value<T,L> const& rhs);

  // Comparison with T
  template <typename T, typename L>
  bool operator==(T const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator!=(T const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator<(T const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator<=(T const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator>(T const& lhs, synchronized_value<T,L> const& rhs);
  template <typename T, typename L>
  bool operator>=(T const& lhs, synchronized_value<T,L> const& rhs);

  template <typename T, typename L>
  bool operator==(synchronized_value<T,L> const& lhs, T const& rhs);
  template <typename T, typename L>
  bool operator!=(synchronized_value<T,L> const& lhs, T const& rhs);
  template <typename T, typename L>
  bool operator<(synchronized_value<T,L> const& lhs, T const& rhs);
  template <typename T, typename L>
  bool operator<=(synchronized_value<T,L> const& lhs, T const& rhs);
  template <typename T, typename L>
  bool operator>(synchronized_value<T,L> const& lhs, T const& rhs);
  template <typename T, typename L>
  bool operator>=(synchronized_value<T,L> const& lhs, T const& rhs);

#if ! defined(BOOST_THREAD_NO_SYNCHRONIZE)
  template <typename ...SV>
  std::tuple<typename synchronized_value_strict_lock_ptr<SV>::type ...> synchronize(SV& ...sv);
#endif
}
#include <boost/thread/synchronized_value.hpp>
namespace boost
{
  template<typename T, typename Lockable = mutex>
  class synchronized_value
  {
  public:
    typedef T value_type;
    typedef Lockable mutex_type;

    synchronized_value() noexcept(is_nothrow_default_constructible<T>::value);
    synchronized_value(T const& other) noexcept(is_nothrow_copy_constructible<T>::value);
    synchronized_value(T&& other) noexcept(is_nothrow_move_constructible<T>::value);
    synchronized_value(synchronized_value const& rhs);
    synchronized_value(synchronized_value&& other);

    // mutation
    synchronized_value& operator=(synchronized_value const& rhs);
    synchronized_value& operator=(value_type const& val);
    void swap(synchronized_value & rhs);
    void swap(value_type & rhs);

    // observers
    T get() const;
#if ! defined(BOOST_NO_CXX11_EXPLICIT_CONVERSION_OPERATORS)
    explicit operator T() const;
#endif
    strict_lock_ptr<T,Lockable> operator->();
    const_strict_lock_ptr<T,Lockable> operator->() const;
    strict_lock_ptr<T,Lockable> synchronize();
    const_strict_lock_ptr<T,Lockable> synchronize() const;
    deref_value operator*();
    const_deref_value operator*() const;

  private:
    T value_;                // for exposition only
    mutable mutex_type mtx_; // for exposition only
  };
}
Requires: Lockable is Lockable.
synchronized_value() noexcept(is_nothrow_default_constructible<T>::value);
Requires: T is DefaultConstructible.
Effects: Default-constructs the cloaked value_type.
Throws: Any exception thrown by value_type().
synchronized_value(T const& other) noexcept(is_nothrow_copy_constructible<T>::value);
Requires: T is CopyConstructible.
Effects: Copy-constructs the cloaked value_type using the parameter other.
Throws: Any exception thrown by value_type(other).
synchronized_value(synchronized_value const& rhs);
Requires: T is DefaultConstructible and Assignable.
Effects: Assigns the value within a scope protected by the mutex of rhs. The mutex is not copied.
Throws: Any exception thrown by value_type(), value_type& operator=(value_type&) or mtx_.lock().
synchronized_value(T&& other) noexcept(is_nothrow_move_constructible<T>::value);
Requires: T is MoveConstructible.
Effects: Move-constructs the cloaked value_type.
Throws: Any exception thrown by value_type(value_type&&).
synchronized_value(synchronized_value&& other);
Requires: T is MoveConstructible.
Effects: Move-constructs the cloaked value_type.
Throws: Any exception thrown by value_type(value_type&&) or mtx_.lock().
synchronized_value& operator=(synchronized_value const& rhs);
Requires: T is Assignable.
Effects: Copies the underlying value within a scope protected by the two mutexes. The mutex is not copied. The locks are acquired avoiding deadlock; for example, there is no problem if one thread assigns a = b while another assigns b = a.
Returns: *this.
Throws: Any exception thrown by value_type& operator=(value_type const&) or mtx_.lock().
synchronized_value& operator=(value_type const& val);
Requires: T is Assignable.
Effects: Copies the value within a scope protected by the mutex.
Returns: *this.
Throws: Any exception thrown by value_type& operator=(value_type const&) or mtx_.lock().
T get() const;
Requires: T is CopyConstructible.
Returns: A copy of the protected value, obtained within a scope protected by the mutex.
Throws: Any exception thrown by value_type(value_type const&) or mtx_.lock().
#if ! defined(BOOST_NO_CXX11_EXPLICIT_CONVERSION_OPERATORS)
explicit operator T() const;
#endif
Requires: T is CopyConstructible.
Returns: A copy of the protected value, obtained within a scope protected by the mutex.
Throws: Any exception thrown by value_type(value_type const&) or mtx_.lock().
void swap(synchronized_value & rhs);
Requires: T is Assignable.
Effects: Swaps the data within a scope protected by both mutexes. Both mutexes are acquired avoiding deadlock. The mutexes are not swapped.
Throws: Any exception thrown by swap(value_, rhs.value_), mtx_.lock() or rhs.mtx_.lock().
void swap(value_type & rhs);
Requires: T is Swappable.
Effects: Swaps the data within a scope protected by the mutex.
Throws: Any exception thrown by swap(value_, rhs) or mtx_.lock().
strict_lock_ptr<T,Lockable> operator->();
Effects: Essentially, calling a method obj->foo(x, y, z) calls the method foo(x, y, z) inside a critical section as long-lived as the call itself.
Returns: A strict_lock_ptr<>.
Throws: Nothing.
const_strict_lock_ptr<T,Lockable> operator->() const;
Effects: If the synchronized_value object involved is const-qualified, then you'll only be able to call const methods through operator->. So, for example, vec->push_back("xyz") won't work if vec were const-qualified. The locking mechanism capitalizes on the assumption that const methods don't modify their underlying data.
Returns: A const_strict_lock_ptr<>.
Throws: Nothing.
strict_lock_ptr<T,Lockable> synchronize();
The synchronize() factory makes it easier to lock over a scope. As discussed, operator-> can only lock over the duration of a call, so it is insufficient for complex operations. With synchronize() you can lock the object over a whole scope and access it directly inside that scope.
Example:
void fun(synchronized_value<vector<int>> & vec)
{
    auto vec2 = vec.synchronize();
    vec2->push_back(42);
    assert(vec2->back() == 42);
}
Returns: A strict_lock_ptr<>.
Throws: Nothing.
const_strict_lock_ptr<T,Lockable> synchronize() const;
Returns: A const_strict_lock_ptr<>.
Throws: Nothing.
deref_value operator*();
Returns: An instance of a class that locks the mutex on construction and unlocks it on destruction, and that provides an implicit conversion to a reference to the protected value.
Throws: Nothing.
const_deref_value operator*() const;
Returns: An instance of a class that locks the mutex on construction and unlocks it on destruction, and that provides an implicit conversion to a constant reference to the protected value.
Throws: Nothing.
#include <boost/thread/synchronized_value.hpp>
namespace boost
{
#if ! defined(BOOST_THREAD_NO_SYNCHRONIZE)
  template <typename ...SV>
  std::tuple<typename synchronized_value_strict_lock_ptr<SV>::type ...> synchronize(SV& ...sv);
#endif
}
Warning: These features are experimental and subject to change in future versions. There are only a few tests so far, so you may well run into some trivial bugs.
Note: These features are based on the N3533 - C++ Concurrent Queues C++1y proposal from Lawrence Crowl and Chris Mysen, and on C++ Concurrency in Action from Anthony Williams.
Queues provide a mechanism for communicating data between components of a system.
The existing deque in the standard library is an inherently sequential data structure. Its reference-returning element access operations cannot synchronize access to those elements with other queue operations. So, concurrent pushes and pops on queues require a different interface to the queue structure.
Moreover, concurrency adds a new dimension for performance and semantics. Different queue implementations must trade off uncontended operation cost, contended operation cost, and element order guarantees. Some of these trade-offs will necessarily result in semantics weaker than those of a serial queue.
Concurrent queues are a well-known mechanism for communicating data between different threads.
Concurrent queues inherently have copy/move semantics for their data-handling operations. Reference-returning interfaces are forbidden, since multiple accesses through such references cannot be made thread-safe.
One of the major features of a concurrent queue is whether it has a bounded or unbounded capacity.
Locking queues can by nature block waiting for the queue to be non-empty or non-full.
Lock-free queues have some trouble waiting for the queue to be non-empty or non-full. Such queues cannot define operations such as push (and pull for bounded queues); that is, they could have blocking operations (presumably emulated with a busy wait) but not waiting operations.
Threads using a queue for communication need some mechanism to signal when the queue is no longer needed. The usual approach is to add an additional out-of-band signal. However, this approach suffers from the flaw that threads waiting on either full or empty queues need to be woken up when the queue is no longer needed. Rather than require an out-of-band signal, we chose to support such a signal directly in the queue itself, which considerably simplifies coding.
To achieve this signal, a thread may close a queue. Once closed, no new elements may be pushed onto the queue. Push operations on a closed queue will either return queue_op_status::closed (when they have a queue_op_status return type), set the closed parameter if they have one, or throw sync_queue_is_closed (when they do not). Elements already on the queue may still be pulled off. When a queue is empty and closed, pull operations will either return queue_op_status::closed (when they have a status return), set the closed parameter if they have one, or throw sync_queue_is_closed (when they do not).
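As an illustrative sketch (not taken from the library documentation), a producer/consumer pair can use close() as the in-band termination signal; the consumer below relies on the fact that a value-returning pull on an empty, closed queue throws the closed-queue exception (named sync_queue_is_closed in the synopses later in this section):

#include <boost/thread/sync_queue.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

// Producer: pushes a fixed number of items, then closes the queue so that
// "no more data" is signalled in-band.
void produce(boost::sync_queue<int>& q)
{
    for (int i = 0; i < 10; ++i)
        q.push_back(i);
    q.close(); // wakes up consumers waiting on an empty queue
}

// Consumer: drains the queue; a value-returning pull on an empty, closed
// queue throws, which we treat as normal termination.
void consume(boost::sync_queue<int>& q)
{
    try
    {
        for (;;)
            std::cout << q.pull_front() << std::endl; // waits while empty but open
    }
    catch (boost::sync_queue_is_closed&)
    {
        // queue is empty and closed: nothing left to do
    }
}

int main()
{
    boost::sync_queue<int> q;
    boost::thread consumer([&q] { consume(q); });
    produce(q);
    consumer.join();
    return 0;
}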
All the functions are defined as if we had, in addition to their specific Throws specification, the following:
Any exception thrown by the internal locking.
All the functions that allocate a resource are defined as if we had, in addition to their specific Throws specification, the following:
Any exception due to allocation errors.
The essential solution to the problem of concurrent queuing is to shift to value-based operations, rather than reference-based operations.
The BasicConcurrentQueue concept models the basic operations of a concurrent queue.
A type Q meets the BasicConcurrentQueue requirements if the following expressions are well-formed and have the specified semantics:
q.push_back(e);
q.push_back(rve);
q.pull_front(lve);
lve = q.pull_front();
b = q.empty();
u = q.size();
where q denotes a value of type Q, e denotes a value of type Q::value_type, u denotes a value of type Q::size_type, lve denotes an lvalue reference of type Q::value_type, rve denotes an rvalue reference of type Q::value_type, and qs denotes a variable of type queue_op_status.
q.push_back(e);
Effects: Waits until the queue is not full (for bounded queues) and then pushes back e onto the queue, copying it (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.empty().
Return type: void.
Throws: If the queue was closed, throws sync_queue_is_closed. Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
q.push_back(rve);
Effects: Waits until the queue is not full (for bounded queues) and then pushes back e onto the queue, moving it (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.empty().
Return type: void.
Throws: If the queue is closed, throws sync_queue_is_closed. Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
q.pull_front(lve);
Effects: Waits until the queue is not empty and then pulls the front element from the queue q, moving the pulled element into lve (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.full().
Return type: void.
Throws: Any exception thrown by the move of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
lve = q.pull_front();
Requires: Q::value_type is nothrow move constructible. This is needed to ensure exception safety.
Effects: Waits until the queue is not empty or is closed. If the queue is empty and closed, throws sync_queue_is_closed; otherwise pulls the front element from the queue q and moves the pulled element.
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.full().
Return type: Q::value_type.
Returns: The pulled element.
Throws: Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
The ConcurrentQueue concept models a queue with Non-waiting operations.
A type Q meets the ConcurrentQueue requirements if it is a model of BasicConcurrentQueue and the following expressions are well-formed and have the specified semantics:
s = q.try_push_back(e);
s = q.try_push_back(rve);
s = q.try_pull_front(lve);
where q denotes a value of type Q, e denotes a value of type Q::value_type, s denotes a value of type queue_op_status, u denotes a value of type Q::size_type, lve denotes an lvalue reference of type Q::value_type, and rve denotes an rvalue reference of type Q::value_type.
s = q.try_push_back(e);
Effects: If the queue q is not full and not closed, pushes back e onto the queue, copying it.
Synchronization: Prior pull-like operations on the same object synchronize with this operation when the operation succeeds.
Return type: queue_op_status.
Returns:
- If the queue is closed, queue_op_status::closed,
- otherwise, if the queue q is full, queue_op_status::full,
- otherwise, queue_op_status::success.
Postcondition: If the call returns queue_op_status::success, ! q.empty().
Throws: If the queue is closed, throws sync_queue_is_closed. Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
s = q.try_push_back(rve);
Effects: If the queue q is not full and not closed, pushes back e onto the queue, moving it.
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Return type: queue_op_status.
Returns:
- If the queue is closed, queue_op_status::closed,
- otherwise, if the queue q is full, queue_op_status::full,
- otherwise, queue_op_status::success.
Postcondition: If the call returns queue_op_status::success, ! q.empty().
Throws: Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
s = q.try_pull_front(lve);
Effects: If the queue is not empty, pulls the front element from the queue q and moves the pulled element into lve (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.full().
Return type: queue_op_status.
Returns:
- If the queue q is empty, queue_op_status::empty,
- otherwise, queue_op_status::success.
Throws: Any exception thrown by the move of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
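A minimal sketch of how the try_ operations might be used, assuming the sync_queue class described later in this section (the function name try_transfer is illustrative only); neither call waits, and the returned queue_op_status says why an operation did not complete:

#include <boost/thread/sync_queue.hpp>

// Moves at most one element from one queue to the other without ever waiting.
void try_transfer(boost::sync_queue<int>& from, boost::sync_queue<int>& to)
{
    int e;
    if (from.try_pull_front(e) == boost::queue_op_status::success)
    {
        // try_push_back may still report closed (or full for a bounded queue)
        if (to.try_push_back(e) != boost::queue_op_status::success)
        {
            // decide what to do with the element that could not be forwarded
        }
    }
}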
For cases when blocking for mutual exclusion is undesirable, we have non-blocking operations. The interface is the same as the try operations but is allowed to also return queue_op_status::busy in case the operation is unable to complete without blocking.
Non-blocking operations are provided only for lock-based queues.
s = q.nonblocking_push_back(e);
s = q.nonblocking_push_back(rve);
s = q.nonblocking_pull_front(lve);
where q denotes a value of type Q, e denotes a value of type Q::value_type, s denotes a value of type queue_op_status, lve denotes an lvalue reference of type Q::value_type, and rve denotes an rvalue reference of type Q::value_type.
s = q.nonblocking_push_back(e);
Effects: If the queue q is not full and not closed, pushes back e onto the queue, copying it.
Synchronization: Prior pull-like operations on the same object synchronize with this operation when the operation succeeds.
Return type: queue_op_status.
Returns:
- If the operation would block, queue_op_status::busy,
- otherwise, if the queue is closed, queue_op_status::closed,
- otherwise, if the queue q is full, queue_op_status::full,
- otherwise, queue_op_status::success.
Postcondition: If the call returns queue_op_status::success, ! q.empty().
Throws: If the queue is closed, throws sync_queue_is_closed. Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
s = q.nonblocking_push_back(rve);
Effects: If the queue q is not full and not closed, pushes back e onto the queue, moving it.
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Return type: queue_op_status.
Returns:
- If the operation would block, queue_op_status::busy,
- otherwise, if the queue is closed, queue_op_status::closed,
- otherwise, if the queue q is full, queue_op_status::full,
- otherwise, queue_op_status::success.
Postcondition: If the call returns queue_op_status::success, ! q.empty().
Throws: Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
s = q.nonblocking_pull_front(lve);
Effects: If the queue is not empty, pulls the front element from the queue q and moves the pulled element into lve (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.full().
Return type: queue_op_status.
Returns:
- If the operation would block, queue_op_status::busy,
- otherwise, if the queue q is empty, queue_op_status::empty,
- otherwise, queue_op_status::success.
Throws: Any exception thrown by the move of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
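A minimal sketch of a polling consumer built on the non-blocking interface, assuming the single-argument nonblocking_pull_front signature from the sync_queue synopsis later in this section (the function name poll_one is illustrative); queue_op_status::busy means the internal lock could not be taken without blocking, so the caller simply retries later:

#include <boost/thread/sync_queue.hpp>

// Attempts to pull one element without ever blocking, not even on the
// queue's internal lock; returns true only when an element was obtained.
bool poll_one(boost::sync_queue<int>& q, int& out)
{
    switch (q.nonblocking_pull_front(out))
    {
    case boost::queue_op_status::success:
        return true;                      // got an element
    case boost::queue_op_status::busy:    // lock currently contended: retry later
    case boost::queue_op_status::empty:   // nothing to pull right now
    case boost::queue_op_status::closed:  // no more data will ever arrive
    default:
        return false;
    }
}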
Bounded queues add the following valid expressions:
Q q(u);
b = q.full();
u = q.capacity();
where q denotes a value of type Q, b denotes a value of type bool, and u denotes a value of type Q::size_type.
b = q.full();
Return type: bool.
Returns: true iff the queue is full.
Remark: Not all queues will have a full state, and these would always return false if the function is provided.
u = q.capacity();
Return type: Q::size_type.
Returns: The capacity of the queue.
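A small sketch of the bounded-queue observers, using the sync_bounded_queue class described later in this section (the capacity of 2 and the function name are arbitrary, illustrative choices):

#include <boost/thread/sync_bounded_queue.hpp>
#include <cassert>

void bounded_demo()
{
    boost::sync_bounded_queue<int> q(2); // capacity of two elements
    assert(q.capacity() == 2);
    q.push_back(1);
    q.push_back(2);
    assert(q.full());                    // a further push_back would wait
    assert(q.try_push_back(3) == boost::queue_op_status::full);
}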
Closed queues add the following valid expressions:
q.close();
b = q.closed();
s = q.wait_push_back(e);
s = q.wait_push_back(rve);
s = q.wait_pull_front(lve);
q.close();
Effects: Closes the queue.
b = q.closed();
Return type: bool.
Returns: true iff the queue is closed.
s = q.wait_push_back(e);
Effects: Waits until the queue is not full (for bounded queues) and then pushes back e onto the queue, copying it (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.empty().
Return type: queue_op_status.
Returns:
- If the queue is closed, queue_op_status::closed,
- otherwise, queue_op_status::success if no exception is thrown.
Throws: Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
s = q.wait_push_back(rve);
Effects: Waits until the queue is not full (for bounded queues) and then pushes back e onto the queue, moving it (this could need an allocation for unbounded queues).
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.empty().
Return type: queue_op_status.
Returns:
- If the queue is closed, queue_op_status::closed,
- otherwise, queue_op_status::success if no exception is thrown.
Throws: Any exception thrown by the copy of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
s = q.wait_pull_front(lve);
Effects: Waits until the queue is not empty or is empty and closed. If the queue is not empty, pulls the front element from the queue q and moves the pulled element into lve.
Synchronization: Prior pull-like operations on the same object synchronize with this operation.
Postcondition: ! q.full().
Return type: queue_op_status.
Returns:
- If the queue is empty and closed, queue_op_status::closed,
- otherwise, queue_op_status::success if no exception is thrown.
Throws: Any exception thrown by the move of e.
Exception safety: If an exception is thrown, the queue state is unmodified.
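A sketch of a consumer written against the wait_ interface, here expressed in terms of the queue_base interface shown below (the function name drain is illustrative only); wait_pull_front blocks while the queue is empty but still open, and reports queue_op_status::closed once the queue is drained and closed:

#include <boost/thread/concurrent_queues/queue_base.hpp>

// Drains any queue exposed through the queue_base interface.
void drain(boost::queue_base<int>& q)
{
    int e;
    while (q.wait_pull_front(e) != boost::queue_op_status::closed)
    {
        // process e
    }
}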
#include <boost/thread/concurrent_queues/queue_op_status.hpp>
namespace boost
{
  enum class queue_op_status { success = 0, empty, full, closed, busy };
}
#include <boost/thread/concurrent_queues/queue_base.hpp>
namespace boost
{
  template <typename ValueType, class SizeType = std::size_t>
  class queue_base
  {
  public:
    typedef ValueType value_type;
    typedef SizeType size_type;

    // Constructors/Assignment/Destructors
    virtual ~queue_base() {};

    // Observers
    virtual bool empty() const = 0;
    virtual bool full() const = 0;
    virtual size_type size() const = 0;
    virtual bool closed() const = 0;

    // Modifiers
    virtual void close() = 0;

    virtual void push_back(const value_type& x) = 0;
    virtual void push_back(BOOST_THREAD_RV_REF(value_type) x) = 0;
    virtual void pull_front(value_type&) = 0;
    virtual value_type pull_front() = 0;

    virtual queue_op_status try_push_back(const value_type& x) = 0;
    virtual queue_op_status try_push_back(BOOST_THREAD_RV_REF(value_type) x) = 0;
    virtual queue_op_status try_pull_front(value_type&) = 0;

    virtual queue_op_status nonblocking_push_back(const value_type& x) = 0;
    virtual queue_op_status nonblocking_push_back(BOOST_THREAD_RV_REF(value_type) x) = 0;
    virtual queue_op_status nonblocking_pull_front(value_type&) = 0;

    virtual queue_op_status wait_push_back(const value_type& x) = 0;
    virtual queue_op_status wait_push_back(BOOST_THREAD_RV_REF(value_type) x) = 0;
    virtual queue_op_status wait_pull_front(value_type& elem) = 0;
  };
}
#include <boost/thread/concurrent_queues/queue_adaptor.hpp>
namespace boost
{
  template <typename Queue>
  class queue_adaptor : public queue_base<typename Queue::value_type, typename Queue::size_type>
  {
  public:
    typedef typename Queue::value_type value_type;
    typedef typename Queue::size_type size_type;

    // Constructors/Assignment/Destructors
    queue_adaptor();

    // Observers
    bool empty() const;
    bool full() const;
    size_type size() const;
    bool closed() const;

    // Modifiers
    void close();

    void push_back(const value_type& x);
    void push_back(BOOST_THREAD_RV_REF(value_type) x);
    void pull_front(value_type& x);
    value_type pull_front();

    queue_op_status try_push_back(const value_type& x);
    queue_op_status try_push_back(BOOST_THREAD_RV_REF(value_type) x);
    queue_op_status try_pull_front(value_type& x);

    queue_op_status nonblocking_push_back(const value_type& x);
    queue_op_status nonblocking_push_back(BOOST_THREAD_RV_REF(value_type) x);
    queue_op_status nonblocking_pull_front(value_type& x);

    queue_op_status wait_push_back(const value_type& x);
    queue_op_status wait_push_back(BOOST_THREAD_RV_REF(value_type) x);
    queue_op_status wait_pull_front(value_type& x);
  };
}
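A brief sketch (the function names fill and adaptor_demo are illustrative only) of how queue_adaptor can wrap a concrete queue so that it is usable through the type-erased queue_base interface:

#include <boost/thread/concurrent_queues/queue_adaptor.hpp>
#include <boost/thread/sync_queue.hpp>

// Works for any queue wrapped in a queue_adaptor, since queue_adaptor derives
// from queue_base<value_type, size_type>.
void fill(boost::queue_base<int>& q)
{
    for (int i = 0; i < 3; ++i)
        q.push_back(i);
    q.close();
}

void adaptor_demo()
{
    boost::queue_adaptor<boost::sync_queue<int> > q; // concrete queue behind the interface
    fill(q);                                         // used through its queue_base face
}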
#include <boost/thread/concurrent_queues/queue_views.hpp>
namespace boost
{
  template <typename Queue>
  class queue_back_view;
  template <typename Queue>
  class queue_front_view;

  template <class T>
  using queue_back = queue_back_view<queue_base<T>>;
  template <class T>
  using queue_front = queue_front_view<queue_base<T>>;
}
template <typename Queue>
class queue_back_view
{
public:
  typedef typename Queue::value_type value_type;
  typedef typename Queue::size_type size_type;

  // Constructors/Assignment/Destructors
  queue_back_view(Queue& q) noexcept;

  // Observers
  bool empty() const;
  bool full() const;
  size_type size() const;
  bool closed() const;

  // Modifiers
  void close();

  void push(const value_type& x);
  void push(BOOST_THREAD_RV_REF(value_type) x);

  queue_op_status try_push(const value_type& x);
  queue_op_status try_push(BOOST_THREAD_RV_REF(value_type) x);

  queue_op_status nonblocking_push(const value_type& x);
  queue_op_status nonblocking_push(BOOST_THREAD_RV_REF(value_type) x);

  queue_op_status wait_push(const value_type& x);
  queue_op_status wait_push(BOOST_THREAD_RV_REF(value_type) x);
};
template <typename Queue>
class queue_front_view
{
public:
  typedef typename Queue::value_type value_type;
  typedef typename Queue::size_type size_type;

  // Constructors/Assignment/Destructors
  queue_front_view(Queue& q) BOOST_NOEXCEPT;

  // Observers
  bool empty() const;
  bool full() const;
  size_type size() const;
  bool closed() const;

  // Modifiers
  void close();

  void pull(value_type& x);
  value_type pull();

  queue_op_status try_pull(value_type& x);
  queue_op_status nonblocking_pull(value_type& x);
  queue_op_status wait_pull(value_type& x);
};
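A sketch of how the views might be used to hand a push-only end to a producer and a pull-only end to a consumer; the queue_back/queue_front aliases require template-alias support, and the functions below are illustrative, not part of the library:

#include <boost/thread/concurrent_queues/queue_adaptor.hpp>
#include <boost/thread/concurrent_queues/queue_views.hpp>
#include <boost/thread/sync_queue.hpp>

void producer(boost::queue_back<int> back)   // can only push and close
{
    for (int i = 0; i < 5; ++i)
        back.push(i);
    back.close();
}

void consumer(boost::queue_front<int> front) // can only pull
{
    int e;
    while (front.wait_pull(e) != boost::queue_op_status::closed)
    {
        // process e
    }
}

void views_demo()
{
    boost::queue_adaptor<boost::sync_queue<int> > q;
    producer(boost::queue_back<int>(q));  // hand each component only the end it needs
    consumer(boost::queue_front<int>(q));
}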
#include <boost/thread/sync_bounded_queue.hpp>
namespace boost
{
  struct sync_queue_is_closed : std::exception {};

  template <typename ValueType>
  class sync_bounded_queue;

  // Stream-like operators
  template <typename ValueType>
  sync_bounded_queue<ValueType>& operator<<(sync_bounded_queue<ValueType>& sbq, ValueType&& elem);
  template <typename ValueType>
  sync_bounded_queue<ValueType>& operator<<(sync_bounded_queue<ValueType>& sbq, ValueType const& elem);
  template <typename ValueType>
  sync_bounded_queue<ValueType>& operator>>(sync_bounded_queue<ValueType>& sbq, ValueType& elem);
}
#include <boost/thread/sync_bounded_queue.hpp>
namespace boost
{
  struct sync_queue_is_closed : std::exception {};
}
#include <boost/thread/sync_bounded_queue.hpp>
namespace boost
{
  template <typename ValueType>
  class sync_bounded_queue
  {
  public:
    typedef ValueType value_type;
    typedef std::size_t size_type;

    sync_bounded_queue(sync_bounded_queue const&) = delete;
    sync_bounded_queue& operator=(sync_bounded_queue const&) = delete;

    explicit sync_bounded_queue(size_type max_elems);
    template <typename Range>
    sync_bounded_queue(size_type max_elems, Range range);
    ~sync_bounded_queue();

    // Observers
    bool empty() const;
    bool full() const;
    size_type capacity() const;
    size_type size() const;
    bool closed() const;

    // Modifiers
    void push_back(const value_type& x);
    void push_back(value_type&& x);
    queue_op_status try_push_back(const value_type& x);
    queue_op_status try_push_back(value_type&& x);
    queue_op_status nonblocking_push_back(const value_type& x);
    queue_op_status nonblocking_push_back(value_type&& x);

    void pull_front(value_type&);
    value_type pull_front();
    queue_op_status try_pull_front(value_type&);
    queue_op_status nonblocking_pull_front(value_type&);

    void close();
  };
}
explicit sync_bounded_queue(size_type max_elems);
Effects: Constructs a sync_bounded_queue with a maximum number of elements given by max_elems.
Throws: Any exception that can be thrown because resources are unavailable.
template <typename Range> sync_bounded_queue(size_type max_elems, Range range);
Effects: Constructs a sync_bounded_queue with a maximum number of elements given by max_elems, and pushes back the elements of the range.
Throws: Any exception that can be thrown because resources are unavailable.
#include <boost/thread/sync_bounded_queue.hpp>
namespace boost
{
  template <typename ValueType>
  sync_bounded_queue<ValueType>& operator<<(sync_bounded_queue<ValueType>& sbq, ValueType&& elem);
  template <typename ValueType>
  sync_bounded_queue<ValueType>& operator<<(sync_bounded_queue<ValueType>& sbq, ValueType const& elem);
}
#include <boost/thread/sync_bounded_queue.hpp>
namespace boost
{
  template <typename ValueType>
  sync_bounded_queue<ValueType>& operator>>(sync_bounded_queue<ValueType>& sbq, ValueType& elem);
}
#include <boost/thread/sync_queue.hpp>
namespace boost
{
  template <typename ValueType>
  class sync_queue;

  // Stream-like operators
  template <typename ValueType>
  sync_queue<ValueType>& operator<<(sync_queue<ValueType>& sbq, ValueType&& elem);
  template <typename ValueType>
  sync_queue<ValueType>& operator<<(sync_queue<ValueType>& sbq, ValueType const& elem);
  template <typename ValueType>
  sync_queue<ValueType>& operator>>(sync_queue<ValueType>& sbq, ValueType& elem);
}
#include <boost/thread/sync_queue.hpp>
namespace boost
{
  template <typename ValueType, class Container = csbl::devector<ValueType>>
  class sync_queue
  {
  public:
    typedef ValueType value_type;
    typedef Container underlying_queue_type;
    typedef typename Container::size_type size_type;

    sync_queue(sync_queue const&) = delete;
    sync_queue& operator=(sync_queue const&) = delete;

    sync_queue();
    template <typename Range>
    explicit sync_queue(Range range); // Not yet implemented
    ~sync_queue();

    // Observers
    bool empty() const;
    bool full() const;
    size_type size() const;
    bool closed() const;

    // Modifiers
    void push_back(const value_type& x);
    void push_back(value_type&& x);
    queue_op_status try_push_back(const value_type& x);
    queue_op_status try_push_back(value_type&& x);
    queue_op_status nonblocking_push_back(const value_type& x);
    queue_op_status nonblocking_push_back(value_type&& x);

    void pull_front(value_type&);
    value_type pull_front();
    queue_op_status try_pull_front(value_type&);
    queue_op_status nonblocking_pull_front(value_type&);

    underlying_queue_type underlying_queue() noexcept;

    void close();
  };
}
explicit sync_queue();
Effects: Constructs an empty sync_queue.
Throws: Any exception that can be thrown because resources are unavailable.
underlying_queue_type underlying_queue() noexcept;
Effects: Moves out and returns the underlying queue.
#include <boost/thread/sync_queue.hpp>
namespace boost
{
  template <typename ValueType>
  sync_queue<ValueType>& operator<<(sync_queue<ValueType>& sbq, ValueType&& elem);
  template <typename ValueType>
  sync_queue<ValueType>& operator<<(sync_queue<ValueType>& sbq, ValueType const& elem);
}
#include <boost/thread/sync_queue.hpp>
namespace boost
{
  template <typename ValueType>
  sync_queue<ValueType>& operator>>(sync_queue<ValueType>& sbq, ValueType& elem);
}
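A short illustrative sketch of the stream-like operators declared above (chaining works because each operator returns a reference to the queue; the function name stream_demo is an arbitrary choice):

#include <boost/thread/sync_queue.hpp>

void stream_demo()
{
    boost::sync_queue<int> q;
    q << 1 << 2;     // pushes 1, then 2
    int x, y;
    q >> x >> y;     // pulls into x, then into y
}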