...one of the most highly regarded and expertly designed C++ library projects in the world.
— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
This file is a top-level convenience header that includes all of the Boost.MPI library headers. Users concerned about compile time may wish to include only specific headers from the Boost.MPI library.
This header provides an STL-compliant allocator that uses the MPI-2 memory allocation facilities.
namespace boost { namespace mpi {
  template<> class allocator<void>;
  template<typename T> class allocator;

  template<typename T1, typename T2>
    bool operator==(const allocator< T1 > &, const allocator< T2 > &);
  template<typename T1, typename T2>
    bool operator!=(const allocator< T1 > &, const allocator< T2 > &);
} }
This header contains MPI collective operations, which implement various parallel algorithms that require the coordination of all processes within a communicator. The header collectives_fwd.hpp provides forward declarations for each of these operations. To include only specific collective algorithms, use the headers boost/mpi/collectives/algorithm_name.hpp.
namespace boost { namespace mpi {
  template<typename T> void all_gather(const communicator &, const T &, std::vector< T > &);
  template<typename T> void all_gather(const communicator &, const T &, T *);
  template<typename T> void all_gather(const communicator &, const T *, int, std::vector< T > &);
  template<typename T> void all_gather(const communicator &, const T *, int, T *);

  template<typename T, typename Op> void all_reduce(const communicator &, const T &, T &, Op);
  template<typename T, typename Op> T all_reduce(const communicator &, const T &, Op);
  template<typename T, typename Op> void all_reduce(const communicator &, const T *, int, T *, Op);

  template<typename T> void all_to_all(const communicator &, const std::vector< T > &, std::vector< T > &);
  template<typename T> void all_to_all(const communicator &, const T *, T *);
  template<typename T> void all_to_all(const communicator &, const std::vector< T > &, int, std::vector< T > &);
  template<typename T> void all_to_all(const communicator &, const T *, int, T *);

  template<typename T> void broadcast(const communicator &, T &, int);
  template<typename T> void broadcast(const communicator &, T *, int, int);
  template<typename T> void broadcast(const communicator &, skeleton_proxy< T > &, int);
  template<typename T> void broadcast(const communicator &, const skeleton_proxy< T > &, int);

  template<typename T> void gather(const communicator &, const T &, std::vector< T > &, int);
  template<typename T> void gather(const communicator &, const T &, T *, int);
  template<typename T> void gather(const communicator &, const T &, int);
  template<typename T> void gather(const communicator &, const T *, int, std::vector< T > &, int);
  template<typename T> void gather(const communicator &, const T *, int, T *, int);
  template<typename T> void gather(const communicator &, const T *, int, int);

  template<typename T> void scatter(const communicator &, const std::vector< T > &, T &, int);
  template<typename T> void scatter(const communicator &, const T *, T &, int);
  template<typename T> void scatter(const communicator &, T &, int);
  template<typename T> void scatter(const communicator &, const std::vector< T > &, T *, int, int);
  template<typename T> void scatter(const communicator &, const T *, T *, int, int);
  template<typename T> void scatter(const communicator &, T *, int, int);

  template<typename T, typename Op> void reduce(const communicator &, const T &, T &, Op, int);
  template<typename T, typename Op> void reduce(const communicator &, const T &, Op, int);
  template<typename T, typename Op> void reduce(const communicator &, const T *, int, T *, Op, int);
  template<typename T, typename Op> void reduce(const communicator &, const T *, int, Op, int);

  template<typename T, typename Op> void scan(const communicator &, const T &, T &, Op);
  template<typename T, typename Op> T scan(const communicator &, const T &, Op);
  template<typename T, typename Op> void scan(const communicator &, const T *, int, T *, Op);
} }
This header provides forward declarations for all of the collective operations contained in the header collectives.hpp.
This header defines the communicator class, which is the basis of all communication within Boost.MPI, and provides point-to-point communication operations.
namespace boost { namespace mpi {
  class communicator;
  enum comm_create_kind;

  const int any_source;  // A constant representing "any process".
  const int any_tag;     // A constant representing "any tag".

  BOOST_MPI_DECL bool operator==(const communicator &, const communicator &);
  bool operator!=(const communicator &, const communicator &);
} }
This header provides MPI configuration details that expose the capabilities of the underlying MPI implementation, and provides auto-linking support on Windows.
MPICH_IGNORE_CXX_SEEK
BOOST_MPI_HAS_MEMORY_ALLOCATION
BOOST_MPI_HAS_NOARG_INITIALIZATION
BOOST_MPI_CALLING_CONVENTION
This header provides the mapping from C++ types to MPI data types.
BOOST_IS_MPI_DATATYPE(T)
namespace boost { namespace mpi {
  template<typename T> struct is_mpi_integer_datatype;
  template<typename T> struct is_mpi_floating_point_datatype;
  template<typename T> struct is_mpi_logical_datatype;
  template<typename T> struct is_mpi_complex_datatype;
  template<typename T> struct is_mpi_byte_datatype;
  template<typename T> struct is_mpi_builtin_datatype;
  template<typename T> struct is_mpi_datatype;

  template<typename T> MPI_Datatype get_mpi_datatype(const T &);
} }
This header provides forward declarations for the contents of the header datatype.hpp. It is expected to be used primarily by user-defined C++ classes that need to specialize is_mpi_datatype.
namespace boost { namespace mpi {
  struct packed;
  template<typename T> MPI_Datatype get_mpi_datatype();
} }
This header provides the environment class, which provides routines to initialize, finalize, and query the status of the Boost.MPI environment.
namespace boost { namespace mpi { class environment; } }
This header provides exception classes that report MPI errors to the user and macros that translate MPI error codes into Boost.MPI exceptions.
BOOST_MPI_CHECK_RESULT(MPIFunc, Args)
namespace boost { namespace mpi { class exception; } }
This header defines facilities to support MPI communicators with graph topologies, using the graph interface defined by the Boost Graph Library. One can construct a communicator whose topology is described by any graph meeting the requirements of the Boost Graph Library's graph concepts. Likewise, any communicator that has a graph topology can be viewed as a graph by the Boost Graph Library, permitting one to use the BGL's graph algorithms on the process topology.
namespace boost {
  template<> struct graph_traits<mpi::graph_communicator>;

  namespace mpi {
    class graph_communicator;

    // Returns the source vertex from an edge in the graph topology of a communicator.
    int source(const std::pair< int, int > & edge, const graph_communicator &);
    // Returns the target vertex from an edge in the graph topology of a communicator.
    int target(const std::pair< int, int > & edge, const graph_communicator &);
    // Returns an iterator range containing all of the edges outgoing from the given vertex in the graph topology of a communicator.
    unspecified out_edges(int vertex, const graph_communicator & comm);
    // Returns the out-degree of a vertex in the graph topology of a communicator.
    int out_degree(int vertex, const graph_communicator & comm);
    // Returns an iterator range containing all of the neighbors of the given vertex in the communicator's graph topology.
    unspecified adjacent_vertices(int vertex, const graph_communicator & comm);
    // Returns an iterator range containing all of the vertices within the communicator's graph topology, i.e., all of the process ranks in the communicator.
    std::pair< counting_iterator< int >, counting_iterator< int > > vertices(const graph_communicator & comm);
    // Returns the number of vertices within the graph topology of the communicator, i.e., the number of processes in the communicator.
    int num_vertices(const graph_communicator & comm);
    // Returns an iterator range containing all of the edges within the communicator's graph topology.
    unspecified edges(const graph_communicator & comm);
    // Returns the number of edges in the communicator's graph topology.
    int num_edges(const graph_communicator & comm);

    identity_property_map get(vertex_index_t, const graph_communicator &);
    int get(vertex_index_t, const graph_communicator &, int);
  }
}
This header defines the group class, which allows one to manipulate and query groups of processes.
namespace boost { namespace mpi {
  class group;

  BOOST_MPI_DECL bool operator==(const group &, const group &);
  bool operator!=(const group &, const group &);
  BOOST_MPI_DECL group operator|(const group &, const group &);
  BOOST_MPI_DECL group operator&(const group &, const group &);
  BOOST_MPI_DECL group operator-(const group &, const group &);
} }
This header defines the intercommunicator class, which permits communication between different process groups.
namespace boost { namespace mpi { class intercommunicator; } }
This header defines operations for completing non-blocking communication requests.
namespace boost { namespace mpi {
  template<typename ForwardIterator>
    std::pair< status, ForwardIterator > wait_any(ForwardIterator, ForwardIterator);
  template<typename ForwardIterator>
    optional< std::pair< status, ForwardIterator > > test_any(ForwardIterator, ForwardIterator);

  template<typename ForwardIterator, typename OutputIterator>
    OutputIterator wait_all(ForwardIterator, ForwardIterator, OutputIterator);
  template<typename ForwardIterator>
    void wait_all(ForwardIterator, ForwardIterator);
  template<typename ForwardIterator, typename OutputIterator>
    optional< OutputIterator > test_all(ForwardIterator, ForwardIterator, OutputIterator);
  template<typename ForwardIterator>
    bool test_all(ForwardIterator, ForwardIterator);

  template<typename BidirectionalIterator, typename OutputIterator>
    std::pair< OutputIterator, BidirectionalIterator > wait_some(BidirectionalIterator, BidirectionalIterator, OutputIterator);
  template<typename BidirectionalIterator>
    BidirectionalIterator wait_some(BidirectionalIterator, BidirectionalIterator);
  template<typename BidirectionalIterator, typename OutputIterator>
    std::pair< OutputIterator, BidirectionalIterator > test_some(BidirectionalIterator, BidirectionalIterator, OutputIterator);
  template<typename BidirectionalIterator>
    BidirectionalIterator test_some(BidirectionalIterator, BidirectionalIterator);
} }
This header provides a mapping from function objects to MPI_Op constants used in MPI collective operations. It also provides several new function object types not present in the standard <functional> header that have direct mappings to MPI_Op.
namespace boost { namespace mpi {
  template<typename Op, typename T> struct is_commutative;
  template<typename T> struct maximum;
  template<typename T> struct minimum;
  template<typename T> struct bitwise_and;
  template<typename T> struct bitwise_or;
  template<typename T> struct logical_xor;
  template<typename T> struct bitwise_xor;
  template<typename Op, typename T> struct is_mpi_op;
} }
This header provides the facilities for unpacking Serializable data types from a buffer using MPI_Unpack. The buffers are typically received via MPI and have been packed either via the facilities in packed_oarchive.hpp or directly with MPI_Pack.
namespace boost { namespace mpi { class packed_iarchive; typedef packed_iprimitive iprimitive; } }
This header provides the facilities for packing Serializable data types into a buffer using MPI_Pack. The buffers can then be transmitted via MPI and unpacked either via the facilities in packed_iarchive.hpp or directly with MPI_Unpack.
namespace boost { namespace mpi { class packed_oarchive; typedef packed_oprimitive oprimitive; } }
This header interacts with the Python bindings for Boost.MPI. The routines in this header can be used to register user-defined and library-defined data types with Boost.MPI for efficient (de-)serialization and separate transmission of skeletons and content.
namespace boost { namespace mpi { namespace python {
  template<typename T>
    void register_serialized(const T & = T(), PyTypeObject * = 0);
  template<typename T>
    void register_skeleton_and_content(const T & = T(), PyTypeObject * = 0);
} } }
This header defines the class request, which contains a request for non-blocking communication.
namespace boost { namespace mpi { class request; } }
This header provides facilities that allow the structure of data types (called the "skeleton") to be transmitted and received separately from the content stored in those data types. These facilities are useful when the data in a stable data structure (e.g., a mesh or a graph) will need to be transmitted repeatedly. In this case, transmitting the skeleton only once saves both communication effort (it need not be sent again) and local computation (serialization need only be performed once for the content).
namespace boost { namespace mpi {
  template<typename T> struct skeleton_proxy;
  class content;
  class packed_skeleton_iarchive;
  class packed_skeleton_oarchive;

  template<typename T> const skeleton_proxy< T > skeleton(T &);
  template<typename T> const content get_content(const T &);
} }
This header contains all of the forward declarations required to transmit skeletons of data structures and the content of data structures separately. To actually transmit skeletons or content, include the header boost/mpi/skeleton_and_content.hpp.
This header defines the class status, which reports on the results of point-to-point communication.
namespace boost { namespace mpi { class status; } }
This header defines the timer class, which provides access to the MPI timers.
namespace boost { namespace mpi { class timer; } }