"...one of the most highly regarded and expertly designed C++ library projects in the world."

— Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
Boost.Redis is a high-level Redis client library built on top of Boost.Asio that implements RESP3, the Redis protocol. The requirements for using Boost.Redis are:
The latest release can be downloaded from https://github.com/boostorg/redis/releases. The library headers can be found in the include subdirectory. To build the examples and tests, cmake is supported. A compilation of the source is also required; the simplest way to do it is to include this header in no more than one source file in your application.
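In current releases that header is boost/redis/src.hpp, so a typical setup is simply:

```cpp
// In exactly one translation unit of the program:
#include <boost/redis/src.hpp>
```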
Let us start with a simple application that uses a short-lived connection to send a ping command to Redis.
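A minimal sketch of such a program, following the pattern of the library's C++20 intro example; the boilerplate main() that spawns the coroutine and runs the io_context is omitted, and the message text is arbitrary:

```cpp
#include <boost/redis/connection.hpp>
#include <boost/redis/config.hpp>
#include <boost/redis/request.hpp>
#include <boost/redis/response.hpp>
#include <boost/asio/awaitable.hpp>
#include <boost/asio/consign.hpp>
#include <boost/asio/deferred.hpp>
#include <boost/asio/detached.hpp>
#include <boost/asio/this_coro.hpp>
#include <iostream>
#include <memory>

namespace asio = boost::asio;
using boost::redis::config;
using boost::redis::connection;
using boost::redis::request;
using boost::redis::response;

// Sends a single PING over a short-lived connection and prints the reply.
auto co_main(config cfg) -> asio::awaitable<void>
{
   auto conn = std::make_shared<connection>(co_await asio::this_coro::executor);

   // Run the connection (resolve, connect, handshakes, ...) in the background.
   conn->async_run(cfg, {}, asio::consign(asio::detached, conn));

   request req;
   req.push("PING", "Hello world");

   response<std::string> resp;

   co_await conn->async_exec(req, resp, asio::deferred);
   conn->cancel();

   std::cout << "PING: " << std::get<0>(resp).value() << "\n";
}
```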
The roles played by the async_run and async_exec functions are:

- async_exec: executes the commands contained in the request and stores the individual responses in the resp object. It can be called from multiple places in your code concurrently.
- async_run: resolve, connect, ssl handshake, resp3 handshake, health checks, reconnection, and coordination of low-level read and write operations (among other things).

Redis servers can also send a variety of pushes to the client, some of them are
The connection class supports server pushes by means of the boost::redis::connection::async_receive function, which can be called on the same connection that is being used to execute commands. The coroutine below shows how to use it.
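A sketch of such a receiver, loosely based on the library's subscriber example; the channel name is a placeholder, reconnection and error handling are omitted, and conn is assumed to be already running via async_run elsewhere:

```cpp
#include <boost/redis/connection.hpp>
#include <boost/redis/ignore.hpp>
#include <boost/redis/request.hpp>
#include <boost/redis/response.hpp>
#include <boost/asio/awaitable.hpp>
#include <boost/asio/deferred.hpp>
#include <iostream>
#include <memory>

namespace asio = boost::asio;
using namespace boost::redis;

// Subscribes to a channel and prints whatever the server pushes on the
// same connection that executes commands.
auto receiver(std::shared_ptr<connection> conn) -> asio::awaitable<void>
{
   request req;
   req.push("SUBSCRIBE", "channel");
   co_await conn->async_exec(req, ignore, asio::deferred);

   generic_response resp;
   conn->set_receive_response(resp);

   for (;;) {
      co_await conn->async_receive(asio::deferred);

      // Print everything that is currently buffered, then clear it.
      // (Alternatively, handle one push at a time and call consume_one(resp).)
      for (auto const& e : resp.value())
         std::cout << e.value << "\n";

      resp.value().clear();
   }
}
```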
Redis requests are composed of one or more commands (in the Redis documentation they are called pipelines), for example:
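A sketch of such a request; the key name is arbitrary:

```cpp
boost::redis::request req;

// Two commands sent together in a single pipeline.
req.push("SET", "mykey", "Some value");
req.push("GET", "mykey");
```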
Sending a request to Redis is performed with boost::redis::connection::async_exec
as already stated.
The boost::redis::request::config
object inside the request dictates how the boost::redis::connection
should handle the request in some important situations. The reader is advised to read it carefully.
Boost.Redis uses the following strategy to support Redis responses:

- boost::redis::request is used for requests whose number of commands is not dynamic.
- For dynamic requests, use boost::redis::generic_response.

For example, the request below has three commands, and its response also has three elements, which can be read in the following response object.
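A sketch of what that could look like; the key name is arbitrary:

```cpp
boost::redis::request req;
req.push("PING");
req.push("INCR", "counter");
req.push("QUIT");

// One tuple element per command, in the same order as the request.
boost::redis::response<std::string, int, std::string> resp;
```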
The response behaves as a tuple and must have as many elements as the request has commands (exceptions below). It is also necessary that each tuple element is capable of storing the response to the command it refers to, otherwise an error will occur. To ignore responses to individual commands in the request use the tag boost::redis::ignore_t, for example:
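For instance, to keep only the INCR result of the request above and discard the rest (a sketch):

```cpp
// PING and QUIT replies are dropped; only INCR's is stored.
boost::redis::response<boost::redis::ignore_t, int, boost::redis::ignore_t> resp;
```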
The following table shows the RESP3 types returned by some Redis commands:
Command | RESP3 type | Documentation |
---|---|---|
lpush | Number | https://redis.io/commands/lpush |
lrange | Array | https://redis.io/commands/lrange |
set | Simple-string, null or blob-string | https://redis.io/commands/set |
get | Blob-string | https://redis.io/commands/get |
smembers | Set | https://redis.io/commands/smembers |
hgetall | Map | https://redis.io/commands/hgetall |
To map these RESP3 types into a C++ data structure use the table below
RESP3 type | Possible C++ type | Category |
---|---|---|
Simple-string | std::string | Simple |
Simple-error | std::string | Simple |
Blob-string | std::string, std::vector | Simple |
Blob-error | std::string, std::vector | Simple |
Number | long long, int, std::size_t, std::string | Simple |
Double | double, std::string | Simple |
Null | std::optional<T> | Simple |
Array | std::vector, std::list, std::array, std::deque | Aggregate |
Map | std::vector, std::map, std::unordered_map | Aggregate |
Set | std::vector, std::set, std::unordered_set | Aggregate |
Push | std::vector, std::map, std::unordered_map | Aggregate |
For example, the response to the request below can be read in the tuple that follows it; both are passed to async_exec as shown elsewhere.
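A sketch using commands from the table above, inside a coroutine; key and field names are arbitrary and conn is assumed to be a running boost::redis::connection:

```cpp
boost::redis::request req;
req.push("HSET", "hkey", "field1", "value1", "field2", "value2");
req.push("HGETALL", "hkey");

// HSET answers with a number, HGETALL with a map.
boost::redis::response<int, std::map<std::string, std::string>> resp;

co_await conn.async_exec(req, resp, boost::asio::deferred);
```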
If the intention is to ignore responses altogether, use ignore, for example:
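A sketch, assuming the same req and conn as above:

```cpp
// Execute the request and discard all responses.
co_await conn.async_exec(req, boost::redis::ignore, boost::asio::deferred);
```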
Responses that contain nested aggregates or heterogeneous data types will be given special treatment later in The general case. As of this writing, not all RESP3 types are used by the Redis server, which means in practice users will be concerned with a reduced subset of the RESP3 specification.
Commands that have no response, like

- "SUBSCRIBE"
- "PSUBSCRIBE"
- "UNSUBSCRIBE"

must NOT be included in the response tuple. For example, the request below must be read in the tuple response<std::string, std::string>, which has static size two.
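A sketch of such a request and its two-element response; the channel name is arbitrary:

```cpp
boost::redis::request req;
req.push("PING");
req.push("SUBSCRIBE", "channel"); // no response, not represented in the tuple
req.push("QUIT");

// Static size two: one element for PING, one for QUIT.
boost::redis::response<std::string, std::string> resp;
```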
It is not uncommon for apps to access keys that do not exist or that have already expired in the Redis server. To deal with these cases Boost.Redis provides support for std::optional. To use it, wrap your type in std::optional like this:
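A sketch, assuming the request issues a GET and an HGET on a key or field that may not exist:

```cpp
boost::redis::response<
   std::optional<std::string>,  // e.g. GET key (null when the key is missing)
   std::optional<std::string>   // e.g. HGET key field
> resp;
```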
Everything else stays pretty much the same.
To read responses to transactions we must first observe that Redis will queue the transaction commands and send their individual responses as elements of an array; the array is itself the response to the EXEC command. For example, to read the response to the request below, use the response type that follows it.
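A sketch; key names are arbitrary and the optionals account for keys that may not exist:

```cpp
boost::redis::request req;
req.push("MULTI");
req.push("GET", "key1");
req.push("LRANGE", "key2", 0, -1);
req.push("EXEC");

// The queued commands answer only with "QUEUED" and are ignored here;
// their real responses arrive as the array that answers EXEC.
boost::redis::response<
   boost::redis::ignore_t, // MULTI
   boost::redis::ignore_t, // GET (QUEUED)
   boost::redis::ignore_t, // LRANGE (QUEUED)
   boost::redis::response< // EXEC
      std::optional<std::string>,
      std::optional<std::vector<std::string>>
   >
> resp;
```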
For a complete example see cpp20_containers.cpp.
There are cases where responses to Redis commands won't fit in the model presented above, some examples are:

- Commands (like set) whose responses don't have a fixed RESP3 type. Expecting an int and receiving a blob-string will result in an error.
- Responses that contain nested aggregates or heterogeneous data types, which can't be read into a static response.

To deal with these cases Boost.Redis provides the boost::redis::resp3::node type abstraction, the most general form of an element in a response, be it a simple RESP3 type or the element of an aggregate. It is defined like this:
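Roughly, it has the shape sketched below; see boost::redis::resp3::basic_node in the reference for the exact definition (node is its std::string instantiation):

```cpp
// Simplified sketch of boost::redis::resp3::basic_node.
template <class String>
struct basic_node {
   type        data_type;       // RESP3 type of this element
   std::size_t aggregate_size;  // number of children when this element is an aggregate
   std::size_t depth;           // depth of this element in the response tree
   String      value;           // the payload itself
};

using node = basic_node<std::string>;
```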
Any response to a Redis command can be received in a boost::redis::generic_response. The vector can be seen as a pre-order view of the response tree. Using it is no different from using other types:
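For instance (a sketch, assuming req and conn as in the earlier examples, inside a coroutine):

```cpp
boost::redis::generic_response resp;

co_await conn.async_exec(req, resp, boost::asio::deferred);

// Walk the flat, pre-order view of the response tree.
for (auto const& e : resp.value())
   std::cout << e.value << " (depth " << e.depth << ")\n";
```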
For example, suppose we want to retrieve a hash data structure from Redis with HGETALL, some of the options are:

- boost::redis::generic_response: works always.
- std::vector<std::string>: efficient and flat, all elements as strings.
- std::map<std::string, std::string>: efficient if you need the data as a std::map.
- std::map<U, V>: efficient if you are storing serialized data. Avoids temporaries and requires boost_redis_from_bulk for U and V.

In addition to the above, users can also use unordered versions of the containers. The same reasoning applies to sets (e.g. SMEMBERS) and other data structures in general.
Boost.Redis supports serialization of user-defined types by means of the following customization points:
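The sketch below shows the general shape of these customization points for a hypothetical user type; the exact signatures expected by your release may differ slightly, so the json and protobuf examples in the repository are the authoritative reference:

```cpp
#include <boost/system/error_code.hpp>
#include <string>
#include <string_view>

// Hypothetical user-defined type.
struct mytype { int a; int b; };

// Called when mytype is passed to request::push: serialize obj and append it
// to the request payload (implementations typically forward the serialized
// string to the library-provided overload so the RESP3 framing is added).
void boost_redis_to_bulk(std::string& payload, mytype const& obj);

// Called when a response element is read into mytype: parse the bulk string
// received from Redis, setting ec on failure.
void boost_redis_from_bulk(mytype& obj, std::string_view sv, boost::system::error_code& ec);
```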
These functions are accessed over ADL and therefore they must be imported in the global namespace by the user. In the Examples section the reader can find examples showing how to serialize using json and protobuf.
The examples below show how to use the features discussed so far. One of them runs async_run in a separate thread and performs synchronous calls to async_exec. The main function used in some async examples has been factored out into the main.cpp file.
This document benchmarks the performance of TCP echo servers I implemented in different languages using different Redis clients. The main motivations for choosing an echo server are
I also imposed some constraints on the implementations
To reproduce these results run one of the echo-server programs in one terminal and the echo-server-client in another.
First I tested a pure TCP echo server, i.e. one that sends the messages directly to the client without interacting with Redis. The result can be seen below
The tests were performed with 1000 concurrent TCP connections on the localhost, where latency is 0.07ms on average on my machine. On higher-latency networks the difference among libraries is expected to decrease.
The code used in the benchmarks can be found at
This is similar to the echo server described above but messages are echoed by Redis and not by the echo-server itself, which acts as a proxy between the client and the Redis server. The results can be seen below
The tests were performed on a network where latency is 35ms on average, otherwise it uses the same number of TCP connections as the previous example.
As the reader can see, the Libuv and Rust tests are not depicted in the graph; the reasons are
The code used in the benchmarks can be found at
Redis clients have to support automatic pipelining to have competitive performance. For updates to this document follow https://github.com/boostorg/redis.
The main reason why I started writing Boost.Redis was to have a client compatible with the Asio asynchronous model. As I made progress I could also address what I considered weaknesses in other libraries. Due to time constraints I won't be able to give a detailed comparison with each client listed in the official list; instead I will focus on the most popular C++ client on GitHub in number of stars, namely
Before we start it is important to mention some of the things redis-plus-plus does not support
The remaining points will be addressed individually. Let us first have a look at what sending a command, a pipeline and a transaction looks like
Some of the problems with this API are
According to the documentation, pipelines in redis-plus-plus have the following characteristics
NOTE: By default, creating a Pipeline object is NOT cheap, since it creates a new connection.
This is clearly a downside of the API, as pipelines should be the default way of communicating and not an exception; paying such a high price for each pipeline imposes a severe performance cost. Transactions also suffer from the very same problem.
NOTE: Creating a Transaction object is NOT cheap, since it creates a new connection.
In Boost.Redis there is no difference between sending one command, a pipeline or a transaction because requests are decoupled from the IO objects.
redis-plus-plus also supports async interface, however, async support for Transaction and Subscriber is still on the way.
The async interface depends on third-party event library, and so far, only libuv is supported.
Async code in redis-plus-plus looks like the following
As the reader can see, the async interface is based on futures, which are also known to have poor performance. The biggest problem with this async design, however, is that it makes it impossible to write asynchronous programs correctly, since it starts an async operation on every command sent instead of enqueueing a message and triggering a write when it can be sent. It is also not clear how pipelines are realised with this design (if at all).
The High-Level page documents all public types.
Acknowledgement to people that helped shape Boost.Redis, for example by explaining how the AUTH and HELLO commands can influence each other and by pointing out that async_exec should fail when the connection is lost.

Also many thanks to all individuals that participated in the Boost review.
The Reviews can be found at: https://lists.boost.org/Archives/boost/2023/01/date.php. The thread with the ACCEPT from the review manager can be found here: https://lists.boost.org/Archives/boost/2023/01/253944.php.
The entries below summarize the changes across releases, most recent first.

- Uses wait_for_one_error instead of wait_for_all. The function connection::async_run was also changed to return EOF to the user when that error is received from the server. That is a breaking change.
- Changes the default log level from disabled to debug.
- Uses "default" as the default value of config::username. This makes it simpler to use the requirepass configuration in Redis.
- Uses std::size_t instead of std::uint64_t for the sizes of bulks and aggregates. The code now relies on std::from_chars returning an error if a value that does not fit is received on platforms on which std::size_t is 32 bits wide.
- The async_receive overload that takes a response was dropped. Users should now first call set_receive_response to avoid constantly and unnecessarily setting the same response.
- Uses std::function to type-erase the response adapter. This change should not influence users in any way but allowed an important simplification of the connection internals, which resulted in a massive performance improvement.
- Adds get_usage(), which returns connection usage information such as the number of bytes written, received etc.
- Server pushes are now delivered through an asio::channel and therefore can be buffered, which avoids blocking the socket read-loop. Batch reads are also supported by means of channel.try_send, and buffered messages can be consumed synchronously with connection::receive. The function boost::redis::cancel_one has been added to simplify processing multiple server pushes contained in the same generic_response. IMPORTANT: these changes may result in more than one push in the response when connection::async_receive resumes. The user must therefore be careful when calling resp.clear(): either ensure that all messages have been processed or just use consume_one.
- Adds boost::redis::config::database_index to make it possible to choose a database before starting to run commands, e.g. after an automatic reconnection.
- The library now lives in the boost::redis namespace.
- The to_bulk and from_bulk names were too generic for ADL customization points. They gained the prefix boost_redis_.
- Renames boost::redis::resp3::request to boost::redis::request.
- Adds boost::redis::response, which should be used instead of std::tuple.
- Adds boost::redis::generic_response, which should be used instead of std::vector<resp3::node<std::string>>.
- Renames redis::ignore to redis::ignore_t.
- Changes async_exec to receive a redis::response instead of an adapter: instead of passing adapt(resp), users should pass resp directly.
- Introduces boost::redis::adapter::result to store responses to commands, including possible RESP3 errors, without losing the error diagnostic part. To access values now use std::get<N>(resp).value() instead of std::get<N>(resp).
- request::coalesce became unnecessary and was removed. I could measure significant performance gains with these changes.
- boost::redis::connection::async_run will now automatically resolve, connect, reconnect and perform health checks.
- Renames retry_on_connection_lost to cancel_if_unresponded. (v1.4.1)
- boost::string_view, Boost.Variant2 and Boost.Spirit are no longer used.
- The HELLO command is no longer sent automatically. This can't be implemented properly without bloating the connection class; it is now a user responsibility to send HELLO. Requests that contain it have priority over other requests and will be moved to the front of the queue, see aedis::request::config.
- Some steps were removed from aedis::connection::async_run; users have to do them manually now. The reason for this change is that having them built in doesn't offer the flexibility Boost users need.
- aedis::connection is now a typedef on a net::ip::tcp::socket and aedis::ssl::connection on net::ssl::stream<net::ip::tcp::socket>. Users that need another stream type must now specialize aedis::basic_connection.
- aedis::adapt now supports tuples created with std::tie.
- aedis::ignore is now an alias to the type of std::ignore.
- Changes to the aedis::connection class.
- async_run was changed to complete with success if asio::error::eof is received. This makes it easier to write composed operations with awaitable operators.
- Improvements to aedis::request (a contribution from Klemens Morgenstern).
- Renames aedis::request::push_range2 to push_range. The suffix 2 was used for disambiguation; Klemens fixed it with SFINAE.
- Renames fail_on_connection_lost to aedis::request::config::cancel_on_connection_lost. Now it will only cause connections to be canceled when async_run completes.
- Adds aedis::request::config::cancel_if_not_connected, which will cause a request to be canceled if async_exec is called before a connection has been established.
- Adds aedis::request::config::retry which, if set to true, will cause the request to not be canceled when it was sent to Redis but remained unresponded after async_run completed. It provides a way to avoid executing commands twice.
- Removes the aedis::connection::async_run overload that takes request and adapter as parameters.
- Changes how aedis::adapt() behaves with std::vector<aedis::resp3::node<T>>: receiving RESP3 simple errors, blob errors or null won't cause an error but will be treated as a normal response. It is the user's responsibility to check the content of the vector.
- connection::cancel(operation::exec) will now only cancel non-written requests.
- Adds cancellation support to aedis::connection::async_exec: for example, co_await (conn.async_exec(...) || timer.async_wait(...)) will cancel the request as long as it has not been written.
- Changes the aedis::connection::async_run completion signature to f(error_code). This is how it was in the past; the second parameter was not helpful.
- Renames operation::receive_push to aedis::operation::receive.
- Removes coalesce_requests from the aedis::connection::config; it became a request property, see aedis::request::config::coalesce.
- Removes max_read_size from the aedis::connection::config. The maximum read size can now be specified as a parameter of the aedis::adapt() function.
- aedis::sync: see intro_sync.cpp for how to perform synchronous and thread-safe calls. This is possible in Boost 1.80 only as it requires boost::asio::deferred.
- Moves from boost::optional to std::optional. This is part of moving to C++17.
- Changes the aedis::connection::async_run overload so that it always returns an error when the connection is lost.
- Adds aedis::connection::timeouts::resp3_handshake_timeout. This is the timeout used to send the HELLO command.
- Adds aedis::endpoint where, in addition to host and port, users can optionally provide username, password and the expected server role (see aedis::error::unexpected_server_role).
- aedis::connection::async_run now checks whether the server role received in the HELLO command is equal to the expected server role specified in aedis::endpoint. To skip this check, leave the role variable empty.
- Removes reconnection support from aedis::connection. It is possible in simple reconnection strategies but bloats the class in more complex scenarios, for example with sentinel, authentication and TLS. This is trivial to implement in a separate coroutine. As a result the enum event and async_receive_event have been removed from the class too.
- Fixes a bug in connection::async_receive_push that prevented passing any response adapter other than adapt(std::vector<node>).
- Fixes a bug in aedis::adapt() that caused RESP3 errors to be ignored. One consequence of it is that connection::async_run would not exit with failure on servers that required authentication.
- Fixes a bug in connection::async_run that would cause it to complete with success when an error in connection::async_exec occurred.
- Adds aedis::sync, which wraps an aedis::connection in a thread-safe and synchronous API. All free functions from sync.hpp are now member functions of aedis::sync.
- Splits aedis::connection::async_receive_event in two functions, one to receive events and another for server-side pushes, see aedis::connection::async_receive_push.
- Removes the name collision between aedis::adapter::adapt and aedis::adapt.
- Adds the connection::operation enum to replace the cancel_* member functions with a single cancel function that gets the operations that should be cancelled as argument.
- Fixes a bug that occurred when the connection object had unsent commands; it could cause async_exec to never complete under certain conditions.
- The adapt() functions were missing from Doxygen.
- Adds the experimental::exec and receive_event functions to offer a thread-safe and synchronous way of executing requests across threads. See intro_sync.cpp and subscriber_sync.cpp for examples.
- connection::async_read_push was renamed to async_receive_event.
- connection::async_receive_event is now being used to communicate internal events to the user, such as resolve, connect, push etc. For examples see cpp20_subscriber.cpp and connection::event.
- The aedis directory has been moved to include to look more similar to Boost libraries. Users should now replace -I/aedis-path with -I/aedis-path/include in the compiler flags.
- The AUTH and HELLO commands are now sent automatically. This change was necessary to implement reconnection. The username and password used in AUTH should be provided by the user on connection::config.
- Adds connection::enable_reconnect.
- Fixes the connection::async_run(host, port) overload that was causing crashes on reconnection.
- Avoids imposing any_io_executor on users.
- connection::async_receive_event is not cancelled anymore when connection::async_run exits. This change makes user code simpler.
- The connection::async_exec overload with host and port has been removed. Use the other connection::async_run overload.
- Some parameters of connection::async_run have been moved to connection::config to better support authentication and failover.
- Updates the chat_room example.
- Updates the echo_server example. (v0.1.2)
- Replaces client::async_wait_for_data with make_parallel_group to launch the operation. (v0.1.2)