Enterprise Integration Queue-Based Load Leveling Pattern


In a point-to-point integration, a Sender may overwhelm the Receiver by sending more messages than the Receiver can process. One option is to use the Competing Consumers Enterprise Integration Pattern (EIP) to share the workload, but in some cases running multiple Consumers may be cost-prohibitive. The Queue-Based Load Leveling pattern addresses this problem by forcing messages through a single channel. Azure Service Bus uses a pull-based model, so the Receiver only consumes messages from the Service Bus queue when it is ready.

Queue-Based Load Leveling

The Queue-Based Load Leveling pattern is centred around the premise that the Message Queue system will manage the load balancing, without relying on the Sender, the Receiver, or any Middleware to manage the workloads.

Queue-Based Load Leveling should not be confused with the Singleton Consumer or the Throttled Consumer patterns. In a Singleton Consumer pattern, the design is centred around forcing a single Consumer to accept messages so that message sequence/order is maintained. With a Throttled Consumer, the Consumer endpoint is restricted from processing multiple messages at the same time, even though it may have the ability to process tasks in parallel.

The Receiver application still needs to manage the volume of messages it processes; however, in this pattern, the Receiver controls the rate of consumption at the endpoint.
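As a sketch of what receiver-side control can look like with the Azure.Messaging.ServiceBus .NET SDK (the connection string and queue name below are placeholders, not values from this article), a ServiceBusProcessor can be configured to pull one message at a time:

```csharp
using Azure.Messaging.ServiceBus;

// Placeholders - substitute a real connection string and queue name.
await using var client = new ServiceBusClient("<connection-string>");

// MaxConcurrentCalls = 1 means the Receiver handles one message at a time;
// PrefetchCount = 0 means no messages are buffered ahead of processing.
await using ServiceBusProcessor processor = client.CreateProcessor(
    "<queue-name>",
    new ServiceBusProcessorOptions
    {
        MaxConcurrentCalls = 1,
        PrefetchCount = 0,
        AutoCompleteMessages = false
    });

processor.ProcessMessageAsync += async args =>
{
    Console.WriteLine($"Processing: {args.Message.Body}");
    // Settle the message only after the work has completed successfully.
    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =>
{
    Console.WriteLine($"Error: {args.Exception.Message}");
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
```

Because the processor only requests work when it has capacity, the Receiver sets its own pace regardless of how fast the Sender publishes.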

Azure Service Bus Queues

In a Queue-Based Load Leveling scenario, the features of the Queue can be used to manage the workloads. Azure Service Bus Queues provide the following mechanisms for controlling the Queue load:

  1. Max delivery count; Service Bus attempts to deliver a message to a Receiver. If the Receiver fails to settle (complete) the message, Service Bus keeps the message in the Queue and tries again later. Once the maximum number of delivery attempts is reached, Service Bus moves the message to the dead-letter queue (DLQ).
  2. Message time to live; once the time-to-live period elapses, the message expires and is removed from the Queue (and, if configured, moved to the DLQ).
  3. Lock duration; the maximum length of time that a message is locked from consumption by other Receivers.
  4. Enable dead lettering on message expiration; works in conjunction with Message time to live. When a message expires, the message is moved to the DLQ.
  5. Enable partitioning; when a message is sent to a partitioned queue or topic, Service Bus assigns the message to one of the partitions. Each partition is stored in a different messaging store and handled by a different message broker.

These features of Azure Service Bus help to manage the volume the Receiver is subject to.
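As an illustrative sketch (the queue name, connection string, and all values are assumptions, not taken from this article), these settings can be applied when creating the queue with the ServiceBusAdministrationClient:

```csharp
using System;
using Azure.Messaging.ServiceBus.Administration;

// Placeholder connection string and queue name.
var adminClient = new ServiceBusAdministrationClient("<connection-string>");

var options = new CreateQueueOptions("<queue-name>")
{
    MaxDeliveryCount = 10,                            // delivery attempts before dead-lettering
    DefaultMessageTimeToLive = TimeSpan.FromHours(1), // message time to live
    LockDuration = TimeSpan.FromSeconds(30),          // lock window per delivery
    DeadLetteringOnMessageExpiration = true,          // expired messages go to the DLQ
    EnablePartitioning = true                         // spread the queue across partitions
};

await adminClient.CreateQueueAsync(options);
```

Tuning these values lets the Queue absorb bursts from the Sender while protecting the Receiver from redelivery storms and stale messages.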

Workloads & Volume Testing

The code below sends 1000 messages to the Service Bus Queue.

using Azure.Messaging.ServiceBus;

// Assumes "client" is a ServiceBusClient created with a valid connection string;
// the queue name below is a placeholder.
ServiceBusSender sender = client.CreateSender("<queue-name>");

int numOfMessages = 1000;

// Add the messages to a single batch; TryAddMessage fails if the batch is full.
using ServiceBusMessageBatch messageBatch = await sender.CreateMessageBatchAsync();

for (int i = 1; i <= numOfMessages; i++)
{
    if (!messageBatch.TryAddMessage(new ServiceBusMessage($"item {i}")))
        throw new Exception($"The message {i} is too large to fit in the batch.");
}

// Send the whole batch in one operation, then clean up.
await sender.SendMessagesAsync(messageBatch);
Console.WriteLine($"A batch of {numOfMessages} messages has been published to the queue.");
await sender.DisposeAsync();
await client.DisposeAsync();

By forcing messages through a single channel and pulling from the queue only when it is ready, the Receiver application can handle the volume of requests coming into the queue without being overwhelmed.
