Optimizing Performance of the Azure Service Bus .NET Standard SDK
From my archive - originally published on 21 April 2019
Microsoft have long published advice for maximising performance with Azure Service Bus, but there doesn’t appear to be any explicit advice for optimising the newer .NET Standard-based SDK.
Much of the general advice remains the same in terms of designing for high-throughput scenarios. However, there are some subtle changes in the behaviour of the new SDK that can affect performance.
Batching messages
The older SDK offered client-side batching via the MessagingFactory class. This feature batched asynchronous send and complete operations under the hood with a default interval of 20ms. It could deliver significant performance improvements, though it was dependent on the proprietary NetMessaging protocol, which is not supported in the new SDK.
You can batch both send and receive operations with the new SDK, though the implementation requires a little care.
Batches of messages can be sent from any client using an overload of the SendAsync() method. Note that the total size of a batch is bound by the size limit for individual messages, i.e. 256KB for the standard tier and 1MB for premium. An exception is thrown if you exceed this limit, and there is nothing in the SDK to help you accurately predict the size of a batch: each message carries a small amount of metadata over and above the message body.
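The SDK will not build size-limited batches for you, but a rough sketch of doing it by hand might track each message’s Size property against a self-imposed budget. The SendInBatchesAsync helper and the 192KB budget below are illustrative assumptions rather than anything prescribed by the SDK, and Message.Size is only an approximation of the final serialized size:

async Task SendInBatchesAsync(MessageSender sender, IEnumerable<byte[]> payloads)
{
    // Illustrative budget: stay under the 256KB standard tier limit with
    // headroom for metadata that Message.Size does not fully account for
    const long batchSizeBudget = 192 * 1024;

    var batch = new List<Message>();
    long batchSize = 0;

    foreach (var payload in payloads)
    {
        var message = new Message(payload);

        // Flush the current batch before the budget would be exceeded
        if (batch.Count > 0 && batchSize + message.Size > batchSizeBudget)
        {
            await sender.SendAsync(batch);
            batch = new List<Message>();
            batchSize = 0;
        }

        batch.Add(message);
        batchSize += message.Size;
    }

    if (batch.Count > 0)
        await sender.SendAsync(batch);
}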
For incoming messages, the MessageReceiver can be used to request and complete batches of messages using the syntax below:
var client = new MessageReceiver(connection, entityPath);

// Receive up to 50 messages
var messages = await client.ReceiveAsync(maxMessageCount: 50);

// Complete the messages in a batch
var tokens = messages.Select(m => m.SystemProperties.LockToken);
await client.CompleteAsync(tokens);
Note that ReceiveAsync() returns as soon as any messages are available, so you may receive fewer than the requested maximum; there is a trade-off between batch size and latency. You can offset this trade-off and increase the size of batches by setting the PrefetchCount property on a client object, which enables clients to maintain their own local cache of messages. Note that prefetched messages are locked so they cannot be received by another client object. You will need to ensure that the prefetch count takes account of the rate of message processing, or you risk the locks on cached messages expiring before they are processed.
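Prefetch is switched on simply by setting the property. The figure of 100 below is arbitrary and would need tuning against your processing rate and lock duration:

var client = new MessageReceiver(connectionString, entityPath)
{
    // Illustrative figure: size this against your processing rate and lock
    // duration so cached messages are handled before their locks expire
    PrefetchCount = 100
};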
Message pump
Batching sends and receives can provide the highest throughput, though it makes it trickier to manage the transactional characteristics of each individual message. Dealing with a failure in the middle of processing a batch can get messy.
The message pump interface provides a more elegant means of processing individual messages, though this does come at the cost of throughput. Under the hood, it is a wrapper around the ReceiveAsync() operation that uses semaphores to control the rate of message processing. Any incoming messages are routed to a handler of your choice, as shown below:
var client = new MessageReceiver(connectionString, entityPath);

client.RegisterMessageHandler(
    async (message, token) =>
    {
        // Complete the message after processing...
        await client.CompleteAsync(message.SystemProperties.LockToken);
    },
    new MessageHandlerOptions(args =>
    {
        Console.WriteLine(args.Exception);
        return Task.CompletedTask;
    })
    {
        MaxConcurrentCalls = 50,
        AutoComplete = false
    });
The sweet spot for concurrent calls depends very much on your infrastructure, but you will ultimately be bound by the limits of your IO operations. There will be a point where raising the maximum number of concurrent calls ceases to be effective.
Transport type
Where the older SDK supported the proprietary NetMessaging protocol, the new SDK only supports AMQP-based connections. You have a choice of using TCP via port 5671 or web sockets via port 443. The latter makes it easier to punch through firewalls, and there is no discernible difference between the two in terms of performance.
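As a sketch, the transport can be switched to web sockets via the ServiceBusConnectionStringBuilder; plain AMQP over TCP is the default:

var builder = new ServiceBusConnectionStringBuilder(connectionString)
{
    // AMQP over web sockets on port 443 (TransportType.Amqp over TCP is the default)
    TransportType = TransportType.AmqpWebSockets
};

var client = new MessageReceiver(new ServiceBusConnection(builder), entityPath);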
Note that there is also a “pure” HTTP option that uses the REST API. This will always be slower than the web sockets option due to the extra overhead of creating new HTTP calls as opposed to using an open socket. Each request to the REST API also goes through the authentication layer, while a web socket connection only authenticates when you create a sender or receiver.
Connection management
With the previous SDK the advice was to re-use factories and clients where possible. There’s no connection pooling happening under the hood and new connections are relatively expensive to create. The MessagingFactory was the anchor class used to manage the underlying connection to the bus and you were expected to share a single instance between clients.
The principles of connection management remain the same for the new SDK, i.e. re-creating connections is expensive. You can create client objects that connect directly to the bus or share a single connection between clients by creating a ServiceBusConnection instance. This approach makes client object creation relatively inexpensive as they do not actually "own" the underlying connection.
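As a sketch, with hypothetical "orders" and "audit" entities, sharing a connection looks like this:

// One connection shared by several lightweight client objects
var connection = new ServiceBusConnection(connectionString);

var orderSender = new MessageSender(connection, "orders");
var auditSender = new MessageSender(connection, "audit");
var orderReceiver = new MessageReceiver(connection, "orders");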
There are occasions where you may want to use multiple connections. For example, if you want to send as many messages as possible to a single queue then you can increase throughput by spinning up multiple ServiceBusConnection and client objects on separate threads. You can also segment your connections so that they are dedicated towards specific high-volume queues and topics.
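A rough sketch of the first approach, again assuming a hypothetical "orders" queue and a prepared payload, fans out over several independent connections using parallel tasks:

// Four independent connections sending to the same queue in parallel
var tasks = Enumerable.Range(0, 4).Select(async _ =>
{
    var connection = new ServiceBusConnection(connectionString);
    var sender = new MessageSender(connection, "orders");

    for (var i = 0; i < 1000; i++)
        await sender.SendAsync(new Message(payload));

    await connection.CloseAsync();
});

await Task.WhenAll(tasks);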
Plug-ins
The new SDK’s support for plug-ins could also emerge as a means of improving performance. For example, there is already a compression plug-in that uses GZIP to reduce the payload size for larger messages. This is something to keep an eye on as it could be used to drive performance for more specific use cases.
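To give a flavour of the mechanism rather than reproduce that package, a hand-rolled GZIP plug-in only needs to override two methods on the ServiceBusPlugin base class and be registered on a client:

using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

// A hand-rolled sketch, not the published compression plug-in
public class GzipPlugin : ServiceBusPlugin
{
    public override string Name => nameof(GzipPlugin);

    // Compress the body before the message goes onto the wire
    public override async Task<Message> BeforeMessageSend(Message message)
    {
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                await gzip.WriteAsync(message.Body, 0, message.Body.Length);

            message.Body = output.ToArray();
            return message;
        }
    }

    // Decompress the body as soon as the message arrives
    public override async Task<Message> AfterMessageReceive(Message message)
    {
        using (var input = new GZipStream(new MemoryStream(message.Body), CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            await input.CopyToAsync(output);
            message.Body = output.ToArray();
            return message;
        }
    }
}

// Register the plug-in on any sender or receiver:
// client.RegisterPlugin(new GzipPlugin());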