
Building a multi-purpose WebSphere MQ infrastructure with scalability and high availability - Middleware News

This article series describes a WebSphere MQ infrastructure topology designed to meet the following non-functional requirements:
  • Continuous availability to send MQ messages, with no single point of failure
  • Linear horizontal scale of throughput, for both MQ and the attaching applications
  • Exactly once delivery, with high availability of individual persistent messages
  • Three messaging styles: Request/response, fire-and-forget, and publish/subscribe
  • A hub model, with a centralized MQ infrastructure scaled independently from the application
Part 1 of this article series describes the overall infrastructure topology and summarizes how it meets the above non-functional requirements for a wide range of applications. Subsequent parts show you how to configure the various components, including how to code applications that connect to the infrastructure. The topology contains four logical tiers:

Figure 1. Topology overview

Sender
Applications sending the message.
Sender gateway
MQ queue managers that the sending applications connect to.
Receiver gateway
MQ queue managers that the receiving applications connect to. Sending and receiving gateway queue managers can be the same queue managers.
Receiver
Applications receiving the message.
The only MQ installations required are the queue managers acting as the sending and receiving gateways. The sending and receiving applications attach to these queue managers as clients, as described below.
The term gateway in this instance indicates that these queue managers are the way that applications get messages into or out of the MQ network, and that each application is assigned a set of queue managers to use in the sending and receiving gateway roles. A group of queue managers that a set of applications connect to is called an MQ hub.
MQ hub
An individual queue manager in an MQ hub can act as both a sender and receiver gateway. A sender gateway in one MQ hub can communicate with a receiver gateway in another MQ hub. An MQ hub can be the gateway for multiple applications, or dedicated to a single application, depending on the isolation and performance requirements of that application.
The minimum number of queue managers required for the topology is two, in order to avoid a single point of failure. These two queue managers can act as both sending and receiving gateways. If automated recovery of individual persistent messages is required after a hardware failure, then these queue managers should themselves be made recoverable via a high availability (HA) failover technology. Automatic recovery of persistent messages helps prevent stranded messages, and is important in many exactly once delivery scenarios to ensure the timely delivery of messages.
Figure 2 below shows this minimum size topology, or MQ hub, with the MQ multi-instance feature used to provide queue manager HA recovery across two servers. You can use an HA cluster, such as IBM PowerHA, to achieve the same purpose with direct (fiber) attachment to a file system, such as a Storage Area Network (SAN). For more information on choosing a suitable HA failover technology, see Using WebSphere MQ with high availability configurations in the WebSphere MQ information center.

Figure 2. Two-queue-manager MQ hub with HA
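Because applications attach to the gateway queue managers as clients, each client needs to be able to follow a failover of its multi-instance queue manager. Listing 1 is a minimal sketch of this, using the WebSphere MQ classes for JMS with a connection name list that names both servers hosting the multi-instance queue manager, so the client reaches whichever instance is currently active and reconnects automatically after a failover. The queue manager name, host names, port, and channel are placeholders rather than values defined by this article, and automatic client reconnection requires WebSphere MQ V7.0.1 or later at both ends.

Listing 1. Connecting a client to a multi-instance gateway queue manager (illustrative)

import javax.jms.Connection;
import javax.jms.JMSException;

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class GatewayConnector {

    /**
     * Connects as a client to a (possibly multi-instance) gateway queue manager.
     * The connection name list names both servers that can host the active instance.
     */
    public static Connection connect(String queueManager, String connectionNameList)
            throws JMSException {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);   // client (TCP/IP) connection
        cf.setQueueManager(queueManager);                  // for example "GATEWAY1" (placeholder)
        cf.setConnectionNameList(connectionNameList);      // for example "mqhost1(1414),mqhost2(1414)" (placeholder)
        cf.setChannel("APP.SVRCONN");                      // placeholder client channel name

        // Reconnect automatically to whichever instance is active after a failover.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT_Q_MGR);
        cf.setClientReconnectTimeout(300);                 // keep retrying for up to 300 seconds
        return cf.createConnection();
    }
}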

Sending and receiving gateways
If the same set of queue managers is used for both the sending and receiving gateway roles within the MQ hub, why distinguish between the two roles in the topology?
Firstly, because messages that are sent by an application through a particular sending gateway queue manager might be workload balanced by the MQ cluster to a different receiving gateway queue manager in the same MQ hub, or in a different MQ hub somewhere else in the enterprise.
And secondly, because the queue managers provide fundamentally different features to the application when acting in these roles, summarized as follows:
  • Sending gateway role:
    • Provides continuously available store and forward capabilities, so fire-and-forget and publish actions can always be performed
    • Contains response queues for applications performing request/reply actions
  • Receiving gateway role:
    • Contains queues from which applications host a service that needs to be continually available
    • Delivers messages to applications with subscriptions to messages published on a topic
In order to access these features, applications connect differently to a queue manager, depending on whether they need it to act in the sending or receiving gateway role.
Extending the messaging hub
You can place additional messaging infrastructure tiers between the sending and receiving gateways, including using WebSphere Message Broker to perform message filtering, routing, and prioritization based on message content. An example is shown in Figure 3. Again, the sending and receiving gateways can be the same queue managers:

Figure 3. Extending topology to include WebSphere Message Broker

For more information on WebSphere Message Broker, see Resources at the bottom of the article.
Connecting applications
The continuous availability and scalability characteristics of the topology are based on some fundamental principles:
Each application instance connects to exactly two queue managers.
When sending, the application workload-balances its messages across the two queue managers. When receiving, it listens to both queue managers, as illustrated in the connection sketch that follows these principles. The special case of receiving replies in request/reply messaging scenarios is discussed later.
Every receiving gateway configured for an application has at least two application instances attached.
This arrangement prevents messages from becoming stranded if one application instance fails.
There must be at least as many receiving application instances as receiver gateways configured for that application.
If you are building a shared MQ infrastructure for many applications, some applications might have fewer instances than others, and hence be able to connect to fewer receiver gateway instances. As a result, some of your receiver gateways may be configured for different subsets of your applications.
For considerations for scenarios requiring non-durable publish/subscribe or message ordering, see Scenarios below.
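To make the first principle concrete, Listing 2 is a minimal sketch of an application instance opening connections to its two assigned gateway queue managers, reusing the hypothetical GatewayConnector from Listing 1; the queue manager names and connection name lists are again placeholders. The later listings assume that an array of two connections like this has already been created.

Listing 2. An application instance connecting to exactly two gateway queue managers (illustrative)

import javax.jms.Connection;
import javax.jms.JMSException;

/** Each application instance holds connections to exactly two gateway queue managers. */
public class GatewayPair {

    public static Connection[] connectBoth() throws JMSException {
        return new Connection[] {
            // Placeholders: the two gateways assigned to this instance, each of
            // which may itself be a multi-instance (HA) pair of servers.
            GatewayConnector.connect("GATEWAY1", "mqhost1a(1414),mqhost1b(1414)"),
            GatewayConnector.connect("GATEWAY2", "mqhost2a(1414),mqhost2b(1414)")
        };
    }
}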
Figure 4 below shows an example of how these principles are applied. The diagram shows a scenario with five queue managers in an MQ hub, acting as both sending and receiving gateways. A sending application is shown with eight instances, which utilize all five queue managers as sending gateways. A receiving application is shown with only four instances, which can utilize a maximum of four queue managers. One of the queue managers is not configured as a receiving gateway for the application, in order to prevent messages being routed to that queue manager and becoming stranded.

Figure 4. Example MQ hub showing application connections configured to meet the above principles

Connection types in detail
Applications connecting to the MQ hub are likely to be performing one of the following activities:
  • Sending a message to a queue or a topic where no response is expected, such as sending a data update or emitting an event. We shall call this fire and forget.
  • Beginning a long-running listener for messages arriving for processing, on a queue or a durable subscription. We shall call this a message listener.
  • Sending a response message to a request it has processed via a message listener. We shall treat this identically to fire and forget.
  • Sending a request message where the response is required immediately for processing to continue, such as querying some data. We shall call this synchronous request/response.
  • Sending a message that might generate one or more responses, and these responses are able to arrive at any time in the future. We shall treat this two-way asynchronous messaging pattern as a fire and forget of a request combined with a message listener for responses.
Each of these activities has different considerations for how an application connects to an MQ hub, which are described below along with the role that the queue managers in the MQ hub play as a sender or receiver gateway for the application. A future article in this series will show you how to achieve these connection patterns in common programming environments such as Java™ Enterprise Edition (Java EE), Java Standard Edition (Java SE), and Microsoft® .NET®.
Connecting for fire and forget
When an application connects for fire and forget messaging, it can connect to any available sender gateway -- any gateway queue manager in its local MQ hub. This queue manager is then responsible for delivering messages to the target queue, which might be on that same queue manager, workload balanced across the other queue managers in the local MQ hub, or workload balanced across a cluster to another MQ hub where the target application connects.
In order to avoid any single point of failure, and to spread the workload across all of the queue managers in the application's local MQ hub, the application should workload balance its requests across connections to multiple queue managers. WebSphere MQ features such as the Client Channel Definition Table (CCDT) can help, but to fully capitalize on connection caching and pooling, and to be able to use XA transactions for exactly-once delivery, a small amount of custom code that balances messages between the two connections is often preferred. Figure 5 shows an application workload balancing fire and forget messages across gateways:

Figure 5. An application connecting for fire and forget messaging.
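Listing 3 sketches the kind of custom balancing code described above: it alternates fire and forget sends between the two gateway connections and falls back to the other gateway if a send fails. It assumes connections created as in Listing 2; a production application would pool sessions and producers rather than creating them per message, and would use transacted sessions where exactly once delivery is required. The same pattern applies to publishing on a topic: replace createQueue with createTopic.

Listing 3. Workload balancing fire and forget sends across two sender gateways (illustrative)

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

/** Alternates fire and forget sends across the two sender gateway connections. */
public class FireAndForgetSender {

    private final Connection[] gateways;   // the two sender gateway connections
    private int next = 0;

    public FireAndForgetSender(Connection[] gateways) {
        this.gateways = gateways;
    }

    public synchronized void send(String queueName, String body) throws JMSException {
        int first = next;
        next = (next + 1) % gateways.length;            // round-robin between the gateways
        JMSException lastFailure = null;
        for (int i = 0; i < gateways.length; i++) {
            Connection conn = gateways[(first + i) % gateways.length];
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                try {
                    Queue target = session.createQueue(queueName);   // clustered target queue
                    MessageProducer producer = session.createProducer(target);
                    producer.send(session.createTextMessage(body));
                    return;                              // sent via this gateway
                } finally {
                    session.close();
                }
            } catch (JMSException e) {
                lastFailure = e;                         // this gateway is unavailable; try the other
            }
        }
        throw lastFailure;                               // neither gateway accepted the message
    }
}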

Connecting a message listener to a queue
In order to provide continuous availability, there should be more than one clustered target MQ queue, on different queue managers, for each receiving application. Having such multiple queues means that if one queue manager fails, the only requests that are stranded on that queue manager (or lost in the case of non-persistent messages) are those waiting to be processed on that queue manager when it failed. New requests are routed to the queue managers that are still available.
It is also important that messages do not become stranded on a particular queue manager if an instance of the application fails. The approach recommended in this article is to make each instance of the application listen to two receiving gateways, and to configure those connections so that every queue manager has two application instances listening to its queue. The benefit of this dual-listener approach is that the failure of a receiving application instance is handled instantaneously, because messages are already being processed by another instance connected to the same queue. The cluster queue monitoring sample (AMQSCLM) can also provide a solution here, by detecting the failure of the application and rerouting messages to other queues in the cluster. For more information, see The Cluster Queue Monitoring sample program (AMQSCLM) in the WebSphere MQ information center.
Figure 6 shows an application listening for messages against two receiving gateway queue managers:

Figure 6. An application listening for messages
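Listing 4 sketches this dual-listener arrangement, assuming that the two receiving gateway connections already exist (as in Listing 2) and that the application's input queue is defined with the same name on both queue managers. With a second application instance attached in the same way, the failure of either an instance or a gateway leaves no queue unserviced.

Listing 4. Listening to the same queue on both receiving gateways (illustrative)

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

/** Attaches the same listener to the application's input queue on both receiving gateways. */
public class DualGatewayListener {

    public static void listen(Connection[] gateways, String queueName, MessageListener listener)
            throws JMSException {
        for (Connection conn : gateways) {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(queueName);    // the copy of the queue hosted on this gateway
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(listener);           // asynchronous delivery from this gateway
            conn.start();                                    // begin delivery
        }
    }
}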

Connecting a message listener to a durable subscription
Providing the same level of reliability for a durable subscription as described above for a queue is slightly more complex. If an application were to connect to two queue managers and create a durable subscription on each, then it would receive two copies of each message.
Instead, you can get the same level of reliability by administratively creating the subscription on each of the sending gateways to which applications connect to send messages, and pointing that subscription at a clustered queue that is defined on the receiving gateways. To prevent duplication of the messages within a cluster, it is important to set SUBSCOPE to QMGR on the subscriptions. When using this SUBSCOPE(QMGR) approach to durable subscriptions, you do not have to share the topic objects in the cluster -- in fact it is preferable to not cluster any topic objects.
The receiving application then attaches its listeners to the clustered queue, using the procedure described under Connecting a message listener to a queue above. Figure 7 shows the subscriptions and queues configured to allow a single logical durable subscription to exist with no single point of failure:

Figure 7. An application listening for messages on subscriptions on the sending gateways

Connecting for synchronous request/response
In synchronous request/response scenarios, an application sends a request and then blocks waiting for a response or a timeout. Either the request or the response can be delayed (or lost, for non-persistent messages), causing the requester to time out waiting for a response; the requester then cannot determine whether the request succeeded. It is good practice to configure requests and responses with an expiry, to prevent orphaned response messages from building up on queues when the requesting application times out. Alternatively, the application can be coded to search for and handle orphaned response messages.
The simplest coding pattern for achieving request/response messaging with an MQ hub is shown in Figure 8 below, where the requests are workload-balanced across the available sending gateways, and the application looks for the response only on the queue manager to which it was connected when it sent the request. Using this approach, the application must use the same connection to the MQ hub for sending the request and receiving the response. If it were to reconnect before receiving the response, it might connect to a different queue manager, and it would not see the response message sent to the first queue manager.

Figure 8. An application performing simple request/response messaging
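Listing 5 is a minimal sketch of this simple pattern. It assumes that the responding application follows the common MQ convention of copying the request's message ID into the reply's correlation ID, and that a reply queue (APP1.REPLY here is a placeholder name) is defined locally on each sending gateway.

Listing 5. Simple synchronous request/response on a single gateway connection (illustrative)

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

/** Sends a request and waits for the reply on the same gateway connection. */
public class SimpleRequester {

    public static String request(Connection gateway, String requestQueue, String replyQueue,
                                 String body, long timeoutMillis) throws JMSException {
        Session session = gateway.createSession(false, Session.AUTO_ACKNOWLEDGE);
        try {
            Queue target = session.createQueue(requestQueue);
            Queue replyTo = session.createQueue(replyQueue);   // for example "APP1.REPLY", local to this gateway

            TextMessage request = session.createTextMessage(body);
            request.setJMSReplyTo(replyTo);

            MessageProducer producer = session.createProducer(target);
            producer.setTimeToLive(timeoutMillis);             // expire requests the requester has given up on
            producer.send(request);

            // Assumes the responder copies the request message ID into the reply's correlation ID.
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            MessageConsumer consumer = session.createConsumer(replyTo, selector);
            gateway.start();
            Message reply = consumer.receive(timeoutMillis);
            if (reply == null) {
                throw new JMSException("Timed out waiting for a reply");
            }
            return ((TextMessage) reply).getText();
        } finally {
            session.close();
        }
    }
}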

Minimizing timeout failures for synchronous request/response
There is an extension to the synchronous request/response pattern that minimizes the number of failed requests if a queue manager in the environment fails. The extension involves listening to two sending gateways for response messages on a clustered response queue. The clustered queues need to be managed so that a separate clustered response queue (or clustered queue manager alias) exists for each requesting application instance.
The additional complexity of listening to two response queues has the most benefit if the latency of the messaging environment is much smaller than the latency of the business logic (which is most commonly the case), and if there are a large number of parallel receiving instances or threads processing requests. In this scenario, if a sending gateway queue manager fails, most requests are likely to be in the middle of processing within application threads, rather than waiting for delivery within MQ, so the responses can be routed by MQ to the alternative sending gateway queue manager. Figure 9 shows an application performing request/response messaging with a clustered response queue:

Figure 9. An application performing request/response messaging with a clustered queue
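Listing 6 is one possible sketch of this extension. Each application instance owns its own clustered response queue (INSTANCE1.REPLY would be a placeholder name), listens for replies to that queue through both gateway connections, and matches replies to outstanding requests using a correlation ID that the requester generates itself, so it assumes the responder copies the request's correlation ID into the reply. For brevity, requests are sent through one gateway and the class is not tuned for concurrent senders; sends could also be balanced across gateways as in Listing 3.

Listing 6. Request/response with a clustered, per-instance response queue (illustrative)

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

/** Listens for replies on a clustered response queue through both gateways. */
public class ClusteredResponseRequester {

    private final Map<String, CompletableFuture<Message>> pending = new ConcurrentHashMap<>();
    private final Session sendSession;
    private final MessageProducer producer;
    private final Queue replyQueue;

    public ClusteredResponseRequester(Connection[] gateways, String requestQueue,
                                      String instanceReplyQueue) throws JMSException {
        this.sendSession = gateways[0].createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.producer = sendSession.createProducer(sendSession.createQueue(requestQueue));
        this.replyQueue = sendSession.createQueue(instanceReplyQueue);   // for example "INSTANCE1.REPLY" (placeholder)

        // Replies may be routed to this instance's response queue on either gateway, so listen on both.
        for (Connection conn : gateways) {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue(instanceReplyQueue));
            consumer.setMessageListener(reply -> {
                try {
                    String id = reply.getJMSCorrelationID();
                    CompletableFuture<Message> waiter = (id == null) ? null : pending.remove(id);
                    if (waiter != null) {
                        waiter.complete(reply);        // hand the reply to the waiting requester
                    }
                } catch (JMSException e) {
                    // A reply that cannot be correlated is simply discarded in this sketch.
                }
            });
            conn.start();
        }
    }

    public Message request(String body, long timeoutMillis) throws Exception {
        String correlationId = UUID.randomUUID().toString();   // generated before sending, to avoid any race
        CompletableFuture<Message> waiter = new CompletableFuture<>();
        pending.put(correlationId, waiter);
        try {
            TextMessage request = sendSession.createTextMessage(body);
            request.setJMSCorrelationID(correlationId);
            request.setJMSReplyTo(replyQueue);
            producer.send(request);
            return waiter.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(correlationId);                      // clean up whether the request succeeded or timed out
        }
    }
}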

Scenarios requiring non-durable publish/subscribe or message ordering
The above patterns of messaging cover a wide variety of uses of MQ. However, there are some scenarios in which the principles described under Connecting applications above are more complicated to apply. Solutions for some of these scenarios are summarized below:
Non-durable publish/subscribe
For non-durable publish/subscribe, if an application attaches multiple times, it receives multiple copies of each publication. Unlike with durable subscriptions, you cannot work around this in the topology described in this article by redirecting the subscription to cluster queues. Alternative approaches include:
Using durable publish/subscribe
The administrative overhead of using a durable subscription is worthwhile if a principal concern is to avoid loss of messages, or to scale message delivery across multiple queue managers in the MQ hub.
Attaching to only one receiving gateway
Connecting to a single queue manager when receiving messages is a simple approach that is suitable for the majority of non-durable applications. The application does not need to connect to the same queue manager each time it connects, as the MQ cluster can be used to route publications to the application wherever it connects. The limitation of this approach is that the application cannot scale beyond a single receiving instance.
Partitioning your topics
If you need to scale across multiple application instances, you can partition your topics, and embed logic in your publishing applications to workload-balance across the partitions of a topic. With this approach, each application instance attaches to a single gateway, but you can have multiple application instances, each consuming one partition of the topic, as illustrated in the sketch following this list.
Using Multicast publish/subscribe
If you investigate partitioning your topics to scale across multiple application instances, then you might also want to investigate using the MQ Multicast publish/subscribe feature. It is particularly suitable if you have a large fan-out between publishers and subscribers, or if the equality and fairness of the latency between subscribers is important.
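Listing 7 sketches the topic-partitioning approach. The partition naming convention, the base topic string, and hash-based balancing are illustrative choices rather than product features; each subscribing application instance would subscribe to exactly one of the partitioned topic strings.

Listing 7. Publishing across partitioned topic strings (illustrative)

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

/** Publishes each message to one of N partitioned topic strings, chosen by hashing a key. */
public class PartitionedPublisher {

    private final Session session;
    private final MessageProducer[] producers;

    public PartitionedPublisher(Connection senderGateway, String baseTopic, int partitions)
            throws JMSException {
        this.session = senderGateway.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.producers = new MessageProducer[partitions];
        for (int i = 0; i < partitions; i++) {
            // For example PRICES/PARTITION.0, PRICES/PARTITION.1, ... (hypothetical topic strings)
            Topic partition = session.createTopic(baseTopic + "/PARTITION." + i);
            producers[i] = session.createProducer(partition);
        }
    }

    public void publish(String partitionKey, String body) throws JMSException {
        // Messages with the same key are always published to the same partition,
        // preserving per-key affinity across subscriber instances.
        int index = Math.floorMod(partitionKey.hashCode(), producers.length);
        producers[index].send(session.createTextMessage(body));
    }
}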
Message ordering
MQ assures order of delivery only when there is exactly one path between the single sending and receiving application threads within the MQ network. All of the approaches described in this article for providing a continuously available MQ infrastructure create multiple paths that messages might take through the MQ infrastructure.
Alternative approaches include:
Allocate a single, highly available sending and receiving gateway to each ordered application
High availability of individual queue managers is still achieved through MQ multi-instance or an HA cluster, as described above in MQ hub.
Use the logical order feature of MQ
Well-defined groups of messages with a beginning and an end can be sent through the MQ infrastructure as a logical group, and targeted to an individual destination queue manager.
Perform reordering within the application
The most flexible solutions involve the sending application adding sequencing information to the messages, which the receiving application then uses to reorder messages that arrive out of sequence. For example, you could use a database shared between the sending application instances to synchronize updates and generate a sequence number, and then the receiving application instances could maintain a similar sequence in their own database when processing the updates.
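As a sketch of the receiving side of this approach, Listing 8 assumes that the sending application stamps each message with a monotonically increasing sequence number in a message property (APP_SEQUENCE is a hypothetical name) and buffers out-of-order arrivals in memory, releasing them in sequence. As noted above, a production implementation would typically persist its progress in a shared database rather than relying on in-memory state.

Listing 8. Reordering messages in the receiving application (illustrative)

import java.util.SortedMap;
import java.util.TreeMap;
import java.util.function.Consumer;

import javax.jms.JMSException;
import javax.jms.Message;

/** Re-sequences one ordered stream using a sequence number carried in a message property. */
public class Resequencer {

    private final Consumer<Message> processor;              // business logic for in-order messages
    private final SortedMap<Long, Message> buffered = new TreeMap<>();
    private long nextExpected = 1;                           // first sequence number in the stream

    public Resequencer(Consumer<Message> processor) {
        this.processor = processor;
    }

    /** Called from the message listener for every arriving message. */
    public synchronized void onMessage(Message message) throws JMSException {
        long sequence = message.getLongProperty("APP_SEQUENCE");   // hypothetical property set by the sender
        if (sequence < nextExpected) {
            return;                                          // duplicate of a message already processed
        }
        buffered.put(sequence, message);
        // Release every buffered message that is now contiguous with the last one processed.
        while (!buffered.isEmpty() && buffered.firstKey() == nextExpected) {
            processor.accept(buffered.remove(buffered.firstKey()));
            nextExpected++;
        }
    }
}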
