This Web Doc describes how to increase the availability of connectivity solutions involving MQ, WebSphere Application Server JMS messaging, Message Broker, and DataPower. It includes hints and best practices and shows how to configure for high availability.
Asynchronous messages arriving in WebSphere Application Server are processed by message-driven beans (MDBs). The beans listen on queues for incoming messages and then kick off processing by invoking the appropriate application. Message sequencing and asynchronous messaging with MDBs might seem like an unlikely pair: message sequencing nullifies the assumption of message independence and demands a singleton MDB to ensure that messages are processed in first-in, first-out (FIFO) order. The operation is still asynchronous, but sequencing places a constraint on scalability and adds complexity to high availability.
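The singleton constraint can be illustrated with a minimal, self-contained sketch (not real MDB or JMS API code): a single listener thread draining a queue necessarily handles messages in FIFO order, which is exactly the property that concurrent listeners cannot guarantee.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SingletonListenerSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 5; i++) {
            queue.put("msg-" + i); // messages arrive in sequence
        }

        List<String> processed = new ArrayList<>();
        // A single listener thread drains the queue, so messages are
        // handled strictly in arrival (FIFO) order.
        Thread listener = new Thread(() -> {
            String m;
            while ((m = queue.poll()) != null) {
                processed.add(m); // stand-in for invoking the application
            }
        });
        listener.start();
        listener.join();
        System.out.println(processed);
    }
}
```

With two or more listener threads pulling from the same queue, the interleaving of poll and processing steps would no longer be deterministic, which is why sequencing requires a single consumer.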
WebSphere Application Server supports several messaging providers for messaging applications, including its own default messaging provider and WebSphere MQ. In this Web Doc, we discuss message sequencing with regard to the default messaging provider. The default messaging provider relies on the service integration bus (SIB) component to provide the mechanisms associated with the transport of messages. The messaging components that are involved when applications use the default messaging provider include, but are not limited to, the following components:
- Bus member
- Messaging engine
- Message store
- Message-driven beans
This Web Doc describes how to configure the messaging-related components in WebSphere Application Server to enforce message sequencing in a highly available system.
Stand-alone application servers that are registered as bus members do not participate in high availability. To take advantage of high availability for failover, application servers must be cluster members registered with an SIB configured for high availability.
The SIB provides high availability and workload management for messaging. To ensure message ordering, a cluster bus member must be configured for highly available messaging only; highly available messaging with workload management will not satisfy the message ordering requirement. In the highly available messaging configuration, a single messaging engine is active on only one application server in the cluster. The queue destination is assigned solely to that messaging engine without partitioning. When the primary server for the messaging engine fails, the messaging engine fails over and activates on a backup server in the cluster. The choice of backup server depends on the configuration of the messaging engine policy. With a single messaging engine handling all the messages routed to the destination, there is no workload management.
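The single-active-engine behavior can be sketched with a toy model (in real deployments, failover is driven by the HAManager and a shared message store, not by application code): two "servers" share one messaging engine, the engine is active on exactly one server at a time, and after a failure the backup activates the same engine against the same destination, so processing continues in order.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of a highly available messaging configuration: one
// messaging engine, one queue destination, active on one server at
// a time. Names and structure are illustrative, not WAS APIs.
public class FailoverSketch {
    static class MessagingEngine {
        final Queue<String> destination = new ArrayDeque<>();
    }

    public static void main(String[] args) {
        MessagingEngine engine = new MessagingEngine();
        for (int i = 1; i <= 4; i++) {
            engine.destination.add("msg-" + i);
        }

        StringBuilder log = new StringBuilder();
        // The primary server processes some messages, then "fails".
        log.append("primary:").append(engine.destination.poll()).append(' ');
        log.append("primary:").append(engine.destination.poll()).append(' ');
        // The backup server activates the same engine and continues
        // from the same destination, preserving FIFO order.
        log.append("backup:").append(engine.destination.poll()).append(' ');
        log.append("backup:").append(engine.destination.poll());
        System.out.println(log);
    }
}
```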
A messaging engine uses a message store to hold in-flight messages and the operational information required for failover. Each messaging engine has one and only one message store, which can be either a file store or a data store. In a highly available messaging engine configuration, all messages that are being processed or queued continue to be processed when the messaging engine fails over and restarts on the backup server. For this to happen, it is critical that the message store be accessible from any of the cluster servers. Localizing the message store and the messaging engine on the same system constitutes a single point of failure: after a hardware failure, the messaging engine fails over but cannot establish connectivity to the message store. The failover attempt continues through the servers in the cluster, per the messaging engine policy, without success, and the failover fails. A file store can be made highly available through a combination of hardware and software, such as a storage area network (SAN). High availability for a data store can be achieved through careful design of the data store location relative to the messaging engine and the use of highly available databases.
For details on achieving high availability for file and data stores, refer to the following article in the WebSphere Application Server Information Center:
Message-driven beans (MDBs)
An MDB application deployed to a cluster starts on all application servers in the cluster, but only the MDB on the application server with the active messaging engine receives messages. If either the active messaging engine or the application server on which it runs fails, the messaging engine fails over to another application server in the cluster. The MDB on the backup server connects to the messaging engine and starts receiving messages. For an ordered queue destination, concurrent MDBs cannot be used, because a race condition occurs that disrupts sequencing. A singleton MDB must be configured by setting the minimum and maximum number of MDB instances that the EJB container runs at a single point in time to a value of 1. These values can be configured with the com.ibm.websphere.ejbcontainer.poolSize system property. For more information, search for “EJB container system properties” in the WebSphere Application Server Information Center at:
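As a hedged example, the property is specified as a generic JVM argument on each application server. The application, module, and bean names below (OrderApp, OrderModule.jar, OrderMDB) are hypothetical placeholders; substitute the J2EE name of your own MDB, and verify the exact value syntax against the "EJB container system properties" topic for your WebSphere Application Server version.

```
# Hypothetical example: pin the MDB pool to exactly one instance
# (minimum 1, maximum 1) so that a single MDB processes all messages.
-Dcom.ibm.websphere.ejbcontainer.poolSize=OrderApp#OrderModule.jar#OrderMDB=1,1
```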
A JMS resource adapter monitors destinations. When messages arrive, the JMS resource adapter retrieves the messages from the message queue and passes them to the MDB. There are several settings in a JMS resource adapter that must be considered for message sequencing:
- Maximum batch size for the JMS activation specification
The batch size configures the maximum number of messages in a single batch delivered serially to a single MDB instance. For message ordering, set the batch size to 1 (default).
- Maximum concurrent endpoints for the JMS activation specification
This setting configures the number of endpoints to which messages are delivered concurrently. Increasing this number increases the number of concurrent threads. To ensure message ordering across failed deliveries, set the maximum concurrent endpoints to 1.
- Maximum EJB container pool size
See the section for Message-Driven Beans (MDBs) earlier in this Web Doc.
- Maximum failed deliveries
When an error occurs during message processing, the MDB rolls back and the JMS resource adapter returns the message to the queue for subsequent attempts. The number of attempts is defined by the "Maximum failed deliveries" setting on each messaging destination. When the maximum number of failed delivery attempts is reached, the message is routed to the exception destination, which results in out-of-sequence delivery on an ordered queue. To circumvent this situation, set maximum failed deliveries to 0 for an ordered message queue so that undelivered messages are redelivered repeatedly.
- Exception destination
An exception destination provides a repository for undeliverable messages, preventing message loss. Every messaging engine has a default exception destination, and custom exception destinations can also be configured. A default exception destination cannot be deleted. However, where message ordering is important, avoid defining an exception destination so that undelivered messages are redelivered repeatedly. Repeated redelivery has a blocking effect and causes messages to build up, so queue depth should be taken into consideration, or mechanisms should be designed to block message producers as the queue approaches its depth limit.
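The interaction between the maximum failed deliveries setting and ordering can be sketched with a self-contained toy model of the semantics described above (not SIB code): with a finite limit, a failing message is eventually rerouted to the exception destination and later messages overtake it; with unlimited retries, the failing message blocks the queue until it succeeds, so order is preserved.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of redelivery semantics. maxFailedDeliveries <= 0 is
// treated here as "retry forever", mirroring the recommendation for
// ordered queues in the text above.
public class RedeliverySketch {
    static List<String> run(int maxFailedDeliveries, int failuresBeforeSuccess) {
        Queue<String> queue = new ArrayDeque<>(List.of("msg-1", "msg-2"));
        List<String> exceptionDestination = new ArrayList<>();
        List<String> processed = new ArrayList<>();
        int attempts = 0;
        while (!queue.isEmpty()) {
            String m = queue.peek();
            boolean fails = m.equals("msg-1") && attempts < failuresBeforeSuccess;
            if (!fails) {
                processed.add(queue.poll()); // delivered in order
            } else {
                attempts++; // rollback; the message stays at the head
                if (maxFailedDeliveries > 0 && attempts >= maxFailedDeliveries) {
                    // Limit reached: reroute to the exception destination,
                    // letting later messages overtake it (order broken).
                    exceptionDestination.add(queue.poll());
                }
            }
        }
        for (String m : exceptionDestination) {
            processed.add("exception:" + m);
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(run(2, 5)); // finite limit: msg-2 overtakes msg-1
        System.out.println(run(0, 5)); // unlimited retries: order preserved
    }
}
```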
An ordered queue destination should be configured with the highest reliability level, "Assured persistent." Lower reliability levels can result in lost or duplicated messages.
This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.