Configuring and administering multi-instance brokers for high availability in IBM WebSphere Message Broker
Introduction
Part 1 in this article series covered the basics of multi-instance queue managers and multi-instance brokers, and described the active-passive technique of high availability and horizontal clustering. This article describes the new multi-instance broker feature in IBM® WebSphere® Message Broker, and shows you how to use it to configure an active-active load-balanced environment. To implement this environment, you need to cluster the WebSphere Message Broker and WebSphere MQ components both horizontally and vertically, as shown in Figure 1.
Vertical clustering
Vertical clustering is achieved by clustering the queue managers using WebSphere MQ clustering, which optimizes processing and provides the following advantages:
- Increased availability of queues, since multiple instances are exposed as cluster queues
- Faster throughput of messages, since messages can be delivered on multiple queues
- Better distribution of workload based on non-functional requirements
Horizontal clustering
Horizontal clustering is achieved by clustering the queue managers and brokers using the multi-instance feature, which provides the following advantages:
- Provides software redundancy similar to vertical clustering
- Provides the additional benefit of hardware redundancy
- Lets you configure multiple instances of the queue manager and broker on separate physical servers, providing a high-availability (HA) solution
- Saves the administrative overhead of a commercial HA solution, such as PowerHA
System information
Examples in this article were run on a system using WebSphere MQ V7.0.1.4 and WebSphere Message Broker V7.0.0.3, with four servers running SUSE Linux 10.0. Here is the topology of active-active configurations on these four servers:
wmbmi1.in.ibm.com hosts the:
- Active instance of the multi-instance queue manager IBMESBQM1
- Passive instance of the multi-instance queue manager IBMESBQM2
- Active instance of the multi-instance broker IBMESBBRK1
- Passive instance of the multi-instance broker IBMESBBRK2
wmbmi2.in.ibm.com hosts the:
- Active instance of the multi-instance queue manager IBMESBQM2
- Passive instance of the multi-instance queue manager IBMESBQM1
- Active instance of the multi-instance broker IBMESBBRK2
- Passive instance of the multi-instance broker IBMESBBRK1
wmbmi3.in.ibm.com hosts queue manager IBMESBQM3, which acts as the gateway queue manager for the cluster.
wmbmi4.in.ibm.com hosts the NFS V4 mount points, which are used by the multi-instance queue managers and multi-instance brokers to store their runtime data.
WebSphere MQ cluster IBMESBCLUSTER has three participating queue managers:
- Queue manager IBMESBQM1 acts as a full repository queue manager
- Queue manager IBMESBQM2 acts as a full repository queue manager
- Queue manager IBMESBQM3 acts as a partial repository queue manager
Configuring a shared file system using NFS
For more information on configuring and exporting the /mqha file system hosted on wmbmi4.in.ibm.com, see Configuring and administering multi-instance brokers for high availability in IBM WebSphere Message Broker - Part 1. Set the uid and gid of the mqm user and group to be identical on all systems. Create log and data directories in a common shared folder named /mqha. Make sure that the /mqha directory is owned by the user and group mqm, and that the access permissions are set to rwx for both user and group. The commands below are executed as the root user on wmbmi4.in.ibm.com:
Creating and setting ownership for directories under the shared folder /mqha
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK1
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK2
[root@wmbmi4.in.ibm.com]$ chown -R mqm:mqm /mqha
[root@wmbmi4.in.ibm.com]$ chmod -R ug+rwx /mqha
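For reference, the export on wmbmi4.in.ibm.com and the corresponding mount on the two broker servers might look like the sketch below. The exact options recommended for multi-instance queue managers are covered in Part 1; the values shown here are illustrative:
# /etc/exports entry on wmbmi4.in.ibm.com (illustrative options; see Part 1)
/mqha *(rw,sync,no_wdelay,fsid=0)
# Mount the share as root on wmbmi1.in.ibm.com and wmbmi2.in.ibm.com
mount -t nfs4 -o hard,intr wmbmi4.in.ibm.com:/ /mqha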
Creating the queue managers
Start by creating the multi-instance queue manager IBMESBQM1 on the first server, wmbmi1.in.ibm.com. Log on as the user mqm and issue the command:
Creating queue manager IBMESBQM1 on wmbmi1.in.ibm.com
[mqm@wmbmi1.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM1/data -ld /mqha/WMQ/IBMESBQM1/logs IBMESBQM1
Displaying the properties of queue manager IBMESBQM1
[mqm@wmbmi1.in.ibm.com]$ dspmqinf -o command IBMESBQM1
addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1
Creating a reference of IBMESBQM1 on wmbmi2.in.ibm.com
[mqm@wmbmi2.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1
WebSphere MQ configuration information added.
Creating queue manager IBMESBQM2 on wmbmi2.in.ibm.com
[mqm@wmbmi2.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM2/data -ld /mqha/WMQ/IBMESBQM2/logs IBMESBQM2
Displaying the properties of queue manager IBMESBQM2
[mqm@wmbmi2.in.ibm.com]$ dspmqinf -o command IBMESBQM2
addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2
Creating a reference of IBMESBQM2 on wmbmi1.in.ibm.com
[mqm@wmbmi1.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2
WebSphere MQ configuration information added.
Displaying the queue managers on both servers
[mqm@wmbmi1.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1)    STATUS(Ended immediately)
QMNAME(IBMESBQM2)    STATUS(Ended immediately)
[mqm@wmbmi2.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1)    STATUS(Ended immediately)
QMNAME(IBMESBQM2)    STATUS(Ended immediately)
Start multi-instance queue managers
Start IBMESBQM1 on wmbmi1.in.ibm.com: strmqm -x IBMESBQM1
Start IBMESBQM2 on wmbmi2.in.ibm.com: strmqm -x IBMESBQM2
Start the standby instance of IBMESBQM2 on wmbmi1.in.ibm.com: strmqm -x IBMESBQM2
Start the standby instance of IBMESBQM1 on wmbmi2.in.ibm.com: strmqm -x IBMESBQM1
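After all four commands have completed, dspmq on each server should report one active and one standby instance for each queue manager. Illustrative output from the first server (exact formatting varies by platform):
[mqm@wmbmi1.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1)    STATUS(Running)
QMNAME(IBMESBQM2)    STATUS(Running as standby)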
Creating a WebSphere MQ Cluster
After the queue managers IBMESBQM1, IBMESBQM2, and IBMESBQM3 are created and started, create listeners on each of these queue managers and then add them to a WebSphere MQ cluster:
Define listeners from the runmqsc console
On IBMESBQM1: define listener(IBMESBLISTENER1) trptype(tcp) port(1414) control(qmgr)
On IBMESBQM2: define listener(IBMESBLISTENER2) trptype(tcp) port(1415) control(qmgr)
On IBMESBQM3: define listener(IBMESBLISTENER3) trptype(tcp) port(1416) control(qmgr)
Start listeners from runmqsc console
On IBMESBQM1: START LISTENER(IBMESBLISTENER1)
On IBMESBQM2: START LISTENER(IBMESBLISTENER2)
On IBMESBQM3: START LISTENER(IBMESBLISTENER3)
Add the multi-instance queue managers to the cluster as full repositories
Issue the following command on both IBMESBQM1 and IBMESBQM2:
ALTER QMGR REPOS(IBMESBCLUSTER)
Create channels between full repository and partial repository queue managers
Commands to be issued on IBMESBQM1:
DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)
DEFINE CHANNEL(TO.IBMESBQM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1415),wmbmi2.in.ibm.com(1415)') CLUSTER(IBMESBCLUSTER)
Commands to be issued on IBMESBQM2:
DEFINE CHANNEL(TO.IBMESBQM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1415),wmbmi2.in.ibm.com(1415)') CLUSTER(IBMESBCLUSTER)
DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)
Channels between partial repository and full repository queue managers
Commands to be issued on IBMESBQM3:
DEFINE CHANNEL(TO.IBMESBQM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('wmbmi3.in.ibm.com(1416)') CLUSTER(IBMESBCLUSTER)
DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)
IBMESBCLUSTER with three queue managers
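To confirm that all three queue managers have joined IBMESBCLUSTER, you can run a standard verification command (not part of the original steps) from runmqsc on one of the full repository queue managers:
DISPLAY CLUSQMGR(*) QMTYPE
Each queue manager should appear once, with QMTYPE(REPOS) for IBMESBQM1 and IBMESBQM2 and QMTYPE(NORMAL) for IBMESBQM3.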
All of the queue managers have now been added to the cluster. Next, define local queues on the full repository queue managers and expose them on the cluster for workload balancing. Execute the following command on the full repository queue managers IBMESBQM1 and IBMESBQM2:
Defining cluster queues
DEFINE QLOCAL (IBM.ESB.IN) DEFBIND (NOTFIXED) CLWLUSEQ (ANY) CLUSTER (IBMESBCLUSTER)
The queue alias INPUT is exposed so that the application can put messages on the cluster queue:
Queue alias for cluster queue
DEFINE QALIAS (INPUT) TARGQ (IBM.ESB.IN)
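Because the local instances of IBM.ESB.IN are defined with DEFBIND(NOTFIXED) and CLWLUSEQ(ANY), messages put through the INPUT alias are distributed across both instances by the cluster workload algorithm. You can check that both instances are advertised in the cluster with a standard runmqsc command (not part of the original steps):
DISPLAY QCLUSTER(IBM.ESB.IN)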
Configuring WebSphere Message Broker
At this point the multi-instance queue managers have been created and added to the WebSphere MQ cluster. Next, create the multi-instance brokers IBMESBBRK1 and IBMESBBRK2, and then add execution groups (DataFlowEngines) to them. Execute the commands below as the mqm user:
Create multi-instance broker IBMESBBRK1 on wmbmi1.in.ibm.com
mqsicreatebroker IBMESBBRK1 -q IBMESBQM1 -e /mqha/WMB/IBMESBBRK1
Create multi-instance broker IBMESBBRK2 on wmbmi2.in.ibm.com
mqsicreatebroker IBMESBBRK2 -q IBMESBQM2 -e /mqha/WMB/IBMESBBRK2
Create additional instance of IBMESBBRK1 on wmbmi2.in.ibm.com
mqsiaddbrokerinstance IBMESBBRK1 -e /mqha/WMB/IBMESBBRK1
Create additional instance of IBMESBBRK2 on wmbmi1.in.ibm.com
mqsiaddbrokerinstance IBMESBBRK2 -e /mqha/WMB/IBMESBBRK2
Start multi-instance brokers
mqsistart IBMESBBRK1
mqsistart IBMESBBRK2
Create DataFlowEngine
mqsicreateexecutiongroup IBMESBBRK1 -e IBMESBEG
mqsicreateexecutiongroup IBMESBBRK2 -e IBMESBEG
Creating a message flow application for WebSphere Message Broker
Below is a simple message flow that you will need to create. The flow reads messages from the WebSphere MQ cluster input queue and processes them. It consists of a JMSInput node followed by a Compute node and a JMSOutput node:
- Input node (JMS.IBM.ESB.IN) reads from queue IBM.ESB.IN.
- Output node (JMS.IBM.ESB.OUT) writes to queue IBM.ESB.OUT.
- Compute node (AddBrokerName) reads and copies the message tree.
Message flow
- The input and output queues (IBM.ESB.IN and IBM.ESB.OUT) have their persistence properties set to Persistent, which means that all messages arriving on these queues are made persistent, to prevent any message loss during failover.
- Input messages are sent through the JmsProducer utility (available in the WebSphere MQ JMS samples). This standalone JMS client is modified to generate messages with a sequence number in the payload. The input message is a simple XML message.
- JmsProducer.java appends the sequence number to the input message payload: Hello World #Seq_Num# (a minimal sketch of this customization follows this list).
- The broker message flow reads the message and adds two more values to it: the name of the broker processing the message, and the timestamp when the message was processed. These two values help in the testing process.
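A minimal Java sketch of the sequence-number customization, assuming a helper method inside the modified sample (the class name SequencedSender, the method sendNumbered, and its parameters are illustrative, not part of the shipped WebSphere MQ sample):
import javax.jms.*;

// Illustrative fragment of a modified JmsProducer: send 'count' messages,
// replacing the #Seq_Num# placeholder in the payload template with a
// running sequence number.
class SequencedSender {
    static void sendNumbered(Session session, MessageProducer producer,
                             String template, int count) throws JMSException {
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // avoid loss on failover
        for (int i = 1; i <= count; i++) {
            TextMessage msg = session.createTextMessage(
                    template.replace("#Seq_Num#", Integer.toString(i)));
            producer.send(msg);
        }
    }
}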
Setting up the message flow
- Create a message flow as shown above in the Message flow graphic. Add the ESQL below to the Compute node:
Compute node ESQL
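A minimal ESQL sketch that copies the message tree and appends the broker name and a processing timestamp, assuming the message is in the XMLNSC domain (the element names BrokerName and ProcessedAt are illustrative):
CREATE COMPUTE MODULE AddBrokerName
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- Copy the incoming message tree to the output
		SET OutputRoot = InputRoot;
		-- Append the broker name and a processing timestamp (illustrative element names)
		SET OutputRoot.XMLNSC.Message.BrokerName = BrokerName;
		SET OutputRoot.XMLNSC.Message.ProcessedAt = CURRENT_TIMESTAMP;
		RETURN TRUE;
	END;
END MODULE;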
- Configure the JMSInput node to have the following properties:
- Source Queue = IBM.ESB.IN
- Local JNDI bindings = file:///home/mqm/qcf/QCF1
- Connection factory name = QCF
- Configure the JMSOutput node to have the following properties:
- Destination queue = IBM.ESB.OUT
- Local JNDI bindings = file:///home/mqm/qcf/QCF1
- Connection factory name = QCF
- Change Local JNDI bindings to file:///home/mqm/qcf/QCF2 in the flow and deploy it to the second broker, IBMESBBRK2. Both brokers will have their own copies of the connection factories.
- Create the bindings for the JMS queues using the JMSAdmin tool for queue manager IBMESBQM1. The JMSInput node and JMSOutput node in the flow use the binding file under the directory /home/mqm/qcf/QCF1/ of the Linux machine used for testing. To generate the binding file, first define the JMS objects in a file called JMSobjectsdef:
JMS objects definition
DEF QCF(QCF1) +
  TRANSPORT(CLIENT) +
  QMANAGER(IBMESBQM1) +
  HOSTNAME(127.0.0.1) +
  PORT(1414)
DEF Q(IBM.ESB.IN) +
  QUEUE(IBM.ESB.IN) +
  QMANAGER(IBMESBQM1)
DEF Q(IBM.ESB.OUT) +
  QUEUE(IBM.ESB.OUT) +
  QMANAGER(IBMESBQM1)
- Edit the JMSAdmin.config file in the /opt/mqm/java/bin directory to have the following entry, which specifies the location where the bindings file will be generated:
Provider URL
PROVIDER_URL=file:/home/mqm/qcf/QCF1
- Run the JMSAdmin command to create the above JMS Objects:
Run JMSAdmin
mqm@wmbmi1:/opt/mqm/java/bin>./JMSAdmin < /home/mqm/JMSobjectsdef
- Repeat the above steps to generate the bindings for queue manager IBMESBQM2, and place them in the directory /home/mqm/qcf/QCF2.
- Deploy the flow on the brokers IBMESBBRK1 and IBMESBBRK2 in the cluster.
- Use the JmsProducer utility as shown below to send the messages to the gateway queue manager, which in turn will send them to the input queues of the message flows:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
JmsConsumer.java is customized to read messages and then log information to a file on the shared file system (a sketch of this customization follows the entry format below). You can use the log entries in test scenarios to evaluate failover results. For every message read by JmsConsumer.java, an entry of the following form is added to ConsumerLog.txt:
<Queue manager name> - <Output queue name> - <Server name on which the multi-instance queue manager is running> - <Message payload> <Timestamp> <Broker name>
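A minimal Java sketch of the JmsConsumer logging customization, assuming a helper method (the class name ConsumerLogger, the method logMessage, and its parameters are illustrative):
import java.io.FileWriter;
import java.io.IOException;
import javax.jms.JMSException;
import javax.jms.TextMessage;

// Illustrative fragment of a modified JmsConsumer: append one entry per
// consumed message to ConsumerLog.txt on the shared file system.
class ConsumerLogger {
    static void logMessage(String qmName, String queueName, String host,
                           TextMessage msg) throws JMSException, IOException {
        try (FileWriter out = new FileWriter("/mqha/logs/ConsumerLog.txt", true)) {
            out.write(qmName + " - " + queueName + " - " + host + " - "
                    + msg.getText() + System.lineSeparator());
        }
    }
}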
Testing the failover scenarios in the MQ cluster
Scenario 1. Controlled failover of WebSphere MQ
In Scenario 1, a large number of messages are sent to the flow and processed by both queue managers in the cluster. Then one of the multi-instance queue managers (IBMESBQM1) is shut down using the endmqm command. When the active instance of queue manager IBMESBQM1 goes down, and before its passive instance comes up on the other machine, the messages are processed by the other queue manager in the cluster, IBMESBQM2. You can verify this processing by checking the timestamp and broker names in the messages in the output queue IBM.ESB.OUT. After the passive instance of IBMESBQM1 comes up, both queue managers in the cluster continue processing the messages.
- Deploy the message flows to both brokers IBMESBBRK1 (wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
- From the third server wmbmi3.in.ibm.com, run the JmsProducer utility as shown below to start publishing the messages to the gateway queue manager IBMESBQM3:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
- As the messages are processed, you can observe the data being written to ConsumerLog.txt in the directory /mqha/logs by the triggered JmsConsumer JMS client:
ConsumerLog.txt sample output
IBMESBQM1 - IBM.ESB.OUT - wmbmi1 - Hello World1 2011-06-02T19:40:49.576381 IBMESBBRK1
IBMESBQM2 - IBM.ESB.OUT - wmbmi2 - Hello World2 2011-06-02T19:39:51.703341 IBMESBBRK2
- Stop the IBMESBQM1 queue manager using the following command on the wmbmi1.in.ibm.com machine:
Stop IBMESBQM1
endmqm -s IBMESBQM1
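The -s flag requests a controlled switchover to the standby instance. To watch the switchover, you can run dspmq with the -x option on either server (a standard check, not part of the original steps); illustrative output after the standby on wmbmi2.in.ibm.com takes over:
[mqm@wmbmi2.in.ibm.com]$ dspmq -x -m IBMESBQM1
QMNAME(IBMESBQM1)    STATUS(Running)
    INSTANCE(wmbmi2) MODE(Active)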
- As the active instance of IBMESBQM1 goes down, the passive instance on the wmbmi2.in.ibm.com machine comes up. Meanwhile, the incoming messages are processed by multi-instance broker IBMESBBRK2 on the queue manager IBMESBQM2 shared in the cluster (the messages highlighted in red in the output below). After the passive instance comes up, the messages are processed by both members of the cluster once again, so that there is no downtime for the system. Results from Scenario 1 are shown below:
Output of controlled failover test
Scenario 2. Immediate failover of WebSphere MQ
Scenario 2 is the same as Scenario 1, but instead of shutting down the queue manager with the endmqm command, the queue manager's process is killed using the kill command.
- Deploy the message flows to both brokers IBMESBBRK1 (wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
- From the third server wmbmi3.in.ibm.com, run the JmsProducer utility as shown below to start publishing the messages to the gateway queue manager IBMESBQM3:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
- As the messages are processed, you can observe the data being written to ConsumerLog.txt in the directory /mqha/logs by the triggered JmsConsumer JMS client:
ConsumerLog.txt sample output
IBMESBQM1 - IBM.ESB.OUT - wmbmi1 - Hello World 1 2011-06-02T19:40:49.576381 IBMESBBRK1
IBMESBQM2 - IBM.ESB.OUT - wmbmi2 - Hello World 2 2011-06-02T19:39:51.703341 IBMESBBRK2
- Stop the IBMESBQM2 queue manager by killing its execution controller process, amqzxma0:
Immediate stop of IBMESBQM2
mqm@wmbmi2:~> ps -ef | grep amqzx
mqm      24632     1  0 18:10 ?        00:00:00 amqzxma0 -m IBMESBQM2 -x
mqm      13112     1  0 19:31 ?        00:00:00 amqzxma0 -m IBMESBQM1 -x
mqm@wmbmi2:~> kill -9 24632
- As the active instance of IBMESBQM2 goes down, the passive instance on the wmbmi1.in.ibm.com machine comes up. Meanwhile, the incoming messages are processed by multi-instance broker IBMESBBRK1 on the queue manager IBMESBQM1 shared in the cluster (the messages highlighted in red in the output below). After the passive instance comes up, the messages are processed by both members of the cluster once again, so that there is no downtime for the system. Results from Scenario 2 are shown below:
Output of immediate failover test
Scenario 3. Shutting down server wmbmi2.in.ibm.com
In Scenario 3, server wmbmi2.in.ibm.com is rebooted. The passive instance of IBMESBQM2 running on wmbmi1.in.ibm.com detects that the active instance has gone down, and comes up. Meanwhile, the incoming messages are processed by the cluster queue manager IBMESBQM1 on wmbmi1.in.ibm.com.
- Deploy the message flows to both brokers IBMESBBRK1 (wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
- From the third server wmbmi3.in.ibm.com, run the JmsProducer utility as shown below to start publishing the messages to the gateway queue manager IBMESBQM3:
Run JMSProducer utility
java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
- As the messages are processed, you can observe the data being written to ConsumerLog.txt in the directory /mqha/logs by the triggered JmsConsumer JMS client:
ConsumerLog.txt sample output
IBMESBQM1 - IBM.ESB.OUT - wmbmi1 - Hello World 2 2011-06-06T17:03:51.838884 IBMESBBRK1
IBMESBQM2 - IBM.ESB.OUT - wmbmi2 - Hello World 1 2011-06-06T17:29:04.264681 IBMESBBRK2
- Reboot the server wmbmi2.in.ibm.com by issuing the following command as the root user:
Reboot wmbmi2.in.ibm.com
wmbmi2:/home/mqm # reboot
Broadcast message from root (pts/2) (Mon Jun  6 17:30:22 2011):
The system is going down for reboot NOW!
wmbmi2:/home/mqm # date
Mon Jun  6 17:30:40 IST 2011
- As the active instance of IBMESBQM2 goes down, the passive instance on the wmbmi1.in.ibm.com machine comes up. Meanwhile, the incoming messages are processed by multi-instance broker IBMESBBRK1 on the queue manager IBMESBQM1 shared in the cluster (the messages highlighted in red in the output below). After the passive instance comes up, the messages are processed by both members of the cluster once again, so that there is no downtime for the system. Results from Scenario 3 are shown below:
Output of system shutdown test