Configuring and administering multi-instance brokers for high availability in WebSphere Message Broker V7, Part 2 - Middleware News




Part 1 in this article series covered the basics of multi-instance queue managers and multi-instance brokers, and described the active-passive technique of high availability and horizontal clustering. This article describes the new multi-instance broker feature in IBM® WebSphere® Message Broker, and shows you how to use it to configure an active-active load-balanced environment. To implement this environment, you need to cluster the WebSphere Message Broker and WebSphere MQ components both horizontally and vertically, as shown in Figure 1:

Figure 1. Overview of horizontal and vertical clustering

Vertical clustering
Vertical clustering is achieved by clustering the queue managers using WebSphere MQ clustering, which optimizes processing and provides the following advantages:
  • Increased availability of queues, since multiple instances are exposed as cluster queues
  • Faster throughput of messages, since messages can be delivered on multiple queues
  • Better distribution of workload based on non-functional requirements
Horizontal clustering
Horizontal clustering is achieved by clustering the queue managers and brokers using the multi-instance feature, which provides the following advantages:
  • Provides software redundancy similar to vertical clustering
  • Provides the additional benefit of hardware redundancy
  • Lets you configure multiple instances of the queue manager and broker on separate physical servers, providing a high-availability (HA) solution
  • Saves the administrative overhead of a commercial HA solution, such as PowerHA
Combining vertical and horizontal clustering leverages the use of individual physical servers for availability, scalability, throughput, and performance. WebSphere MQ and WebSphere Message Broker enable you to use both clustering techniques individually or together.
System information
Examples in this article were run on a system using WebSphere MQ V7.0.1.4 and WebSphere Message Broker V7.0.0.3, with four servers running on SUSE Linux 10.0. Here is the topology of active-active configurations on these four servers:

Active-active HA topology

wmbmi1.in.ibm.com hosts the:
  • Active instance of the multi-instance queue manager IBMESBQM1
  • Passive instance of the multi-instance queue manager IBMESBQM2
  • Active instance of the multi-instance broker IBMESBBRK1
  • Passive instance of the multi-instance broker IBMESBBRK2
wmbmi2.in.ibm.com hosts the:
  • Active instance of the multi-instance queue manager IBMESBQM2
  • Passive instance of the multi-instance queue manager IBMESBQM1
  • Active instance of the multi-instance broker IBMESBBRK2
  • Passive instance of the multi-instance broker IBMESBBRK1
wmbmi3.in.ibm.com hosts the queue manager IBMESBQM3, which acts as a gateway queue manager for the WebSphere MQ cluster IBMESBCLUSTER. This queue manager is used by clients for sending messages.
wmbmi4.in.ibm.com hosts the NFS V4 mount points, which are used by the multi-instance queue managers and multi-instance brokers to store their runtime data.
WebSphere MQ cluster IBMESBCLUSTER has three participating queue managers:
  • Queue manager IBMESBQM1 acts as a full repository queue manager
  • Queue manager IBMESBQM2 acts as a full repository queue manager
  • Queue manager IBMESBQM3 acts as a partial repository queue manager
Using two multi-instance queue managers and two multi-instance brokers, overlapped with a WebSphere MQ cluster, provides a continuously available solution with no downtime. When the active instance of one queue manager goes down, the other queue manager in the cluster takes over the complete load until the passive instance starts up, meeting the goal of enhanced system availability.
Configuring a shared file system using NFS
For more information on configuring and exporting the /mqha file system hosted on wmbmi4.in.ibm.com, see Configuring and administering multi-instance brokers for high availability in WebSphere Message Broker V7, Part 1.
Set the uid and gid of the mqm group to be identical on all systems. Create log and data directories in a common shared folder named /mqha. Make sure that the mqha directory is owned by the user and group mqm, and that the access permissions are set to rwx for both user and group. The commands below are executed as root user on wmbmi4.in.ibm.com:

Creating and setting ownership for directories under the shared folder /mqha
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM1/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK1
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/data
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMQ/IBMESBQM2/logs
[root@wmbmi4.in.ibm.com]$ mkdir -p /mqha/WMB/IBMESBBRK2
[root@wmbmi4.in.ibm.com]$ chown -R mqm:mqm /mqha
[root@wmbmi4.in.ibm.com]$ chmod -R ug+rwx /mqha
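
The shared directories must be visible at the same path on wmbmi1.in.ibm.com and wmbmi2.in.ibm.com. A typical mount invocation on those two servers might look like the sketch below; the exact NFS export and mount options are described in Part 1 of this series, so treat this as an assumption rather than the definitive configuration:

[root@wmbmi1.in.ibm.com]$ mkdir -p /mqha
[root@wmbmi1.in.ibm.com]$ mount -t nfs4 -o hard wmbmi4.in.ibm.com:/mqha /mqha
# repeat the same mount on wmbmi2.in.ibm.com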

Creating a queue manager
Start by creating the multi-instance queue manager IBMESBQM1 on the first server, wmbmi1.in.ibm.com. Log on as the user mqm and issue the command:

Creating queue manager IBMESBQM1 on wmbmi1.in.ibm.com
[mqm@wmbmi1.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM1/data -ld /mqha/WMQ/IBMESBQM1/logs IBMESBQM1

After the queue manager is created, display the properties of this queue manager using the command below:

Displaying the properties of queue manager IBMESBQM1
[mqm@wmbmi1.in.ibm.com]$ dspmqinf -o command IBMESBQM1
addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1

Copy the output from the dspmqinf command and paste it on the command line on wmbmi2.in.ibm.com from the console of user mqm:

Creating a reference of IBMESBQM1 on wmbmi2.in.ibm.com
[mqm@wmbmi2.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM1 -v Directory=IBMESBQM1 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM1/data/IBMESBQM1
WebSphere MQ configuration information added.

The multi-instance queue manager IBMESBQM1 is created. Next create the second multi-instance queue manager IBMESBQM2 on wmbmi2.in.ibm.com. Log on as the user mqm and issue the command:

Creating queue manager IBMESBQM2 on wmbmi2.in.ibm.com
[mqm@wmbmi2.in.ibm.com]$ crtmqm -md /mqha/WMQ/IBMESBQM2/data -ld /mqha/WMQ/IBMESBQM2/logs IBMESBQM2

After the queue manager is created, display the properties of the queue manager using the command below:

Displaying the properties of queue manager IBMESBQM2
[mqm@wmbmi2.in.ibm.com]$ dspmqinf -o command IBMESBQM2
addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2

Copy the output from the dspmqinf command and paste it on the command line on wmbmi1.in.ibm.com from the console of user mqm:

Creating a reference of IBMESBQM2 on wmbmi1.in.ibm.com
[mqm@wmbmi1.in.ibm.com]$ addmqinf -s QueueManager -v Name=IBMESBQM2 -v Directory=IBMESBQM2 -v Prefix=/var/mqm -v DataPath=/mqha/WMQ/IBMESBQM2/data/IBMESBQM2
WebSphere MQ configuration information added.

Next, display the queue managers on both servers using the dspmq command on each. The results should look like this:

Displaying the queue managers on both servers
[mqm@wmbmi1.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1)                                          STATUS(Ended immediately)
QMNAME(IBMESBQM2)                                          STATUS(Ended immediately)

[mqm@wmbmi2.in.ibm.com]$ dspmq
QMNAME(IBMESBQM1)                                          STATUS(Ended immediately)
QMNAME(IBMESBQM2)                                          STATUS(Ended immediately)

The multi-instance queue managers IBMESBQM1 and IBMESBQM2 have been created on the servers wmbmi1.in.ibm.com and wmbmi2.in.ibm.com. Start the multi-instance queue managers in the following sequence:

Start multi-instance queue managers
start IBMESBQM1 on wmbmi1.in.ibm.com using command 'strmqm -x IBMESBQM1'
start IBMESBQM2 on wmbmi2.in.ibm.com using command 'strmqm -x IBMESBQM2'
start IBMESBQM2 on wmbmi1.in.ibm.com using command 'strmqm -x IBMESBQM2'
start IBMESBQM1 on wmbmi2.in.ibm.com using command 'strmqm -x IBMESBQM1'
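
Although not part of the original steps, you can verify which instance of each queue manager is active and which is standby by running dspmq with the -x option. The output should look similar to the following (illustrative example for IBMESBQM1 on wmbmi1.in.ibm.com):

[mqm@wmbmi1.in.ibm.com]$ dspmq -x -m IBMESBQM1
QMNAME(IBMESBQM1)                                          STATUS(Running)
    INSTANCE(wmbmi1.in.ibm.com) MODE(Active)
    INSTANCE(wmbmi2.in.ibm.com) MODE(Standby)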

Create a gateway queue manager IBMESBQM3 on wmbmi3.in.ibm.com. Create the queue manager using the crtmqm command and then start it using the strmqm command; this queue manager is not a multi-instance one. After creating and starting all of the queue managers, add them to a WebSphere MQ cluster, as described below.
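
For example, the gateway queue manager might be created and started as follows (a sketch; these commands are not shown in the original article, and the data and log paths are left at their defaults):

[mqm@wmbmi3.in.ibm.com]$ crtmqm IBMESBQM3
[mqm@wmbmi3.in.ibm.com]$ strmqm IBMESBQM3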
Creating a WebSphere MQ Cluster
After the queue managers IBMESBQM1, IBMESBQM2, and IBMESBQM3 are created and started, create listeners on each of these queue managers and then add them to a WebSphere MQ cluster:

Define Listeners from runmqsc console
'define listener(IBMESBLISTENER1) trptype(tcp) port(1414) control(qmgr)' on IBMESBQM1
'define listener(IBMESBLISTENER2) trptype(tcp) port(1415) control(qmgr)' on IBMESBQM2
'define listener(IBMESBLISTENER3) trptype(tcp) port(1416) control(qmgr)' on IBMESBQM3

After the listeners are created, start them:

Start listeners from runmqsc console
'START LISTENER(IBMESBLISTENER1)' on IBMESBQM1
'START LISTENER(IBMESBLISTENER2)' on IBMESBQM2
'START LISTENER(IBMESBLISTENER3)' on IBMESBQM3

After the listeners are created and started, add the queue managers to the cluster and then create channels between the full repository queue managers. Issue this command on the multi-instance queue managers IBMESBQM1 and IBMESBQM2:

Add multi-instance queue manager in cluster as full repository
ALTER QMGR REPOS (IBMESBCLUSTER)

After completion, create cluster-sender and cluster-receiver channels between the full repository queue managers by issuing the commands below:

Create channels between the full repository queue managers
Commands to be issued on IBMESBQM1

DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)

DEFINE CHANNEL(TO.IBMESBQM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('wmbmi1.in.ibm.com(1415),wmbmi2.in.ibm.com(1415)') CLUSTER(IBMESBCLUSTER)

Commands to be issued on IBMESBQM2

DEFINE CHANNEL(TO.IBMESBQM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('wmbmi1.in.ibm.com(1415),wmbmi2.in.ibm.com(1415)') CLUSTER(IBMESBCLUSTER)

DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)

Channels are set up between the two multi-instance queue managers for sharing MQ cluster repository related information. Next, set up the channels between the partial repository gateway QMGR (IBMESBQM3) and one of the full repository queue managers, such as IBMESBQM1. Execute the commands below on queue manager IBMESBQM3:

Channels between partial repository and full repository queue managers
DEFINE CHANNEL(TO.IBMESBQM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('wmbmi3.in.ibm.com(1416)') CLUSTER(IBMESBCLUSTER)

DEFINE CHANNEL(TO.IBMESBQM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('wmbmi1.in.ibm.com(1414),wmbmi2.in.ibm.com(1414)') CLUSTER(IBMESBCLUSTER)

After the three queue managers are added to the cluster, the WebSphere MQ cluster topology should look like this:

IBMESBCLUSTER with three queue managers
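
To confirm that all three queue managers have joined IBMESBCLUSTER, you can optionally issue the following MQSC command (not part of the original steps) from the runmqsc console of any member, for example IBMESBQM3:

DISPLAY CLUSQMGR(*) QMTYPE STATUS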

All of the queue managers have now been added to the cluster. Next, define local queues on the full repository queue managers and expose them to the cluster for workload balancing. Execute the following command on the full repository queue managers IBMESBQM1 and IBMESBQM2:

Defining cluster queues
DEFINE QLOCAL (IBM.ESB.IN) DEFBIND (NOTFIXED) CLWLUSEQ (ANY) CLUSTER (IBMESBCLUSTER)
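
The article's listings do not show the definition of the output queue used by the flow. Presumably a local queue IBM.ESB.OUT also needs to exist on both IBMESBQM1 and IBMESBQM2, with persistent messages as described later; a minimal sketch would be:

DEFINE QLOCAL (IBM.ESB.OUT) DEFPSIST (YES)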

Queue IBM.ESB.IN will be used by the WebSphere Message Broker flows for processing messages. On the gateway queue manager IBMESBQM3, create a remote queue definition for the cluster queue IBM.ESB.IN, along with an alias queue INPUT so that the application can put messages on the queue. Execute the commands below on IBMESBQM3:

Remote Queue Definition of Cluster Queue
DEFINE QREMOTE (IBM.ESB.IN) RNAME (IBM.ESB.IN) RQMNAME (IBMESBQM1)
DEFINE QALIAS (INPUT) TARGQ (IBM.ESB.IN)

Configuring WebSphere Message Broker
At this point the multi-instance queue managers have been created and added to the WebSphere MQ cluster. Next, create the multi-instance brokers IBMESBBRK1 and IBMESBBRK2, and then add execution groups (DataFlowEngines) to them. Execute the commands below as the mqm user:

Create multi-instance broker IBMESBBRK1 on wmbmi1.in.ibm.com
mqsicreatebroker IBMESBBRK1 -q IBMESBQM1 -e /mqha/WMB/IBMESBBRK1


Create multi-instance broker IBMESBBRK2 on wmbmi2.in.ibm.com
mqsicreatebroker IBMESBBRK2 -q IBMESBQM2 -e /mqha/WMB/IBMESBBRK2


Create additional instance of IBMESBBRK1 on wmbmi2.in.ibm.com
mqsiaddbrokerinstance IBMESBBRK1 -e /mqha/WMB/IBMESBBRK1


Create additional instance of IBMESBBRK2 on wmbmi1.in.ibm.com
mqsiaddbrokerinstance IBMESBBRK2 -e /mqha/WMB/IBMESBBRK2

Start the multi-instance brokers before the next step of creating execution groups on them. Execute the commands below to start the brokers. The active instance of each multi-instance broker is instantiated on the server where the corresponding multi-instance queue manager is running in the active state.

Start multi-instance brokers
mqsistart IBMESBBRK1
mqsistart IBMESBBRK2

Execute the commands below on the servers where the corresponding active instances of the multi-instance brokers are running. The execution group IBMESBEG will be created on each broker:

Create DataFlowEngine
mqsicreateexecutiongroup IBMESBBRK1 -e IBMESBEG
mqsicreateexecutiongroup IBMESBBRK2 -e IBMESBEG
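
Optionally, you can confirm that each broker is running and that the execution group was created by using the mqsilist command (not part of the original steps), for example:

mqsilist IBMESBBRK1
mqsilist IBMESBBRK2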

Creating a message flow application for WebSphere Message Broker
Below is a simple message flow that you will need to create. This flow reads messages from the WebSphere MQ cluster input queue, processes them, and writes the results to the output queue. The flow consists of a JMSInput node followed by a Compute node and a JMSOutput node:
  1. Input node (JMS.IBM.ESB.IN) reads from queue IBM.ESB.IN.
  2. Output node (JMS.IBM.ESB.OUT) writes to queue IBM.ESB.OUT.
  3. Compute node (AddBrokerName) copies the message tree and appends the broker name and a timestamp.
The input queue is marked as persistent, so if messages are already on the input queue (IBM.ESB.IN) but have not yet been picked up by the flow when a failure occurs, they are not lost. The transaction mode on the JMS nodes is set to Local, which means that messages are received under the local sync point of the node; any messages later sent by an output node in the flow are not put under the local sync point, unless an individual output node specifies that the message must be put under the local sync point.

Message flow

  1. The input and output queues (IBM.ESB.IN and IBM.ESB.OUT) have their persistence properties set to Persistent, which means that all messages arriving on these queues are made persistent, to prevent any message loss during failover.
  2. Input messages are sent through the JmsProducer utility (available in the WebSphere MQ JMS samples). This standalone JMS client is modified to generate messages with a sequence number in the payload. The input message is a simple XML message.
  3. JmsProducer.java appends the sequence number to the input message: Hello World#Seq_Num#.
  4. The broker message flow reads the message and adds two more values to the message: the name of the broker processing the message, and the timestamp when the message was processed. These two values are added to the message to help in the testing process.
Setting up the message flow
  1. Create a message flow as shown above in the Message flow graphic. Add the ESQL below to the Compute node:

    Compute node ESQL
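    The original ESQL listing is not reproduced here. A minimal sketch, assuming the JMS text payload is handled as a BLOB and that the broker name and a timestamp are simply appended to it (the module name AddBrokerName is taken from the flow description above), might look like this:

     CREATE COMPUTE MODULE AddBrokerName
       CREATE FUNCTION Main() RETURNS BOOLEAN
       BEGIN
         -- Illustrative only: copy the incoming message, then append a
         -- timestamp and the broker name to the text payload
         DECLARE inText CHARACTER CAST(InputRoot.BLOB.BLOB AS CHARACTER CCSID 1208);
         SET OutputRoot = InputRoot;
         SET OutputRoot.BLOB.BLOB = CAST(inText || ' ' ||
             CAST(CURRENT_TIMESTAMP AS CHARACTER) || ' ' || BrokerName
             AS BLOB CCSID 1208);
         RETURN TRUE;
       END;
     END MODULE;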
  2. Configure the JMSInput node to have the following properties:
    • Source Queue = IBM.ESB.IN
    • Local JNDI bindings = file:///home/mqm/qcf/QCF1
    • Connection factory name = QCF
  3. Configure the JMSOutput node to have the following properties:
    • Destination queue = IBM.ESB.OUT
    • Local JNDI bindings = file:///home/mqm/qcf/QCF1
    • Connection factory name = QCF
  4. Change Local JNDI bindings to file:///home/mqm/qcf/QCF2 in the flow and deploy in the second broker IBMESBBRK2. Both brokers will have their own copies of the connection factories.
  5. Create the bindings for the JMS queues using the JMSAdmin tool for queue manager IBMESBQM1. The JMSInput node and JMSOutput node in the flow use the binding file under the directory /home/mqm/qcf/QCF1/ of the Linux machine used for testing. To generate the binding file, define the JMS objects first in a file called JMSobjectsdef:

    JMS objects definition
        
     DEF QCF(QCF1) TRANSPORT(CLIENT) QMANAGER(IBMESBQM1) +
     HOSTNAME(127.0.0.1) PORT(1414)
     DEF Q(IBM.ESB.IN) QUEUE(IBM.ESB.IN) QMANAGER(IBMESBQM1)
     DEF Q(IBM.ESB.OUT) QUEUE(IBM.ESB.OUT) QMANAGER(IBMESBQM1)
        
  6. Edit the JMSAdmin.config file in the /opt/mqm/java/bin directory to have the following entry, which is the location where the bindings file will be generated:

    Provider URL
        
    PROVIDER_URL=file:/home/mqm/qcf/QCF1
        
  7. Run the JMSAdmin command to create the above JMS Objects:

    Run JMSAdmin
        
    mqm@wmbmi1:/opt/mqm/java/bin>./JMSAdmin < /home/mqm/JMSobjectsdef 
        

    The .bindings file is now available for use in /home/mqm/qcf/QCF1/. You can also do JMS configurations using MQ Explorer. For more information, see Using the WebSphere MQ JMS administration tool in the WebSphere MQ V7 information center.
  8. Repeat the above steps for generating the bindings for queue manager IBMESBQM2 and place it in the directory /home/mqm/qcf/QCF2.
  9. Deploy the flow on the brokers IBMESBBRK1 and IBMESBBRK2 in the cluster.
  10. Use the JmsProducer utility as shown below to send the messages to the gateway queue manager, which in turn will send the messages to the input queues of the message flows:

    Run JMSProducer utility
        
     java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
        
You can use JmsConsumer (available in the WebSphere MQ JMS Samples) to consume the messages generated out of the given message flow. The output queue of the flow (IBM.ESB.OUT) is configured to trigger the JmsConsumer utility whenever the first message is received on the queue. When triggered, this JmsConsumer consumes messages from the IBM.ESB.OUT queue, and writes them to a common flat file called ConsumerLog.txt in the directory /mqha/logs. One instance of this JmsConsumer utility is triggered for each of the queue managers in the cluster. Add runmqtrm as a WebSphere MQ service so that when the queue manager starts, it will start the trigger monitor.
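A possible way to wire up this triggering, sketched in MQSC, is shown below. The process name, service name, and wrapper script path are illustrative assumptions rather than values from the article; run the definitions in runmqsc on each of the multi-instance queue managers:

DEFINE PROCESS(JMSCONSUMER.PROC) APPLTYPE(UNIX) APPLICID('/home/mqm/startJmsConsumer.sh')
ALTER QLOCAL(IBM.ESB.OUT) TRIGGER TRIGTYPE(FIRST) INITQ(SYSTEM.DEFAULT.INITIATION.QUEUE) PROCESS(JMSCONSUMER.PROC)
DEFINE SERVICE(TRIGGER.MONITOR) CONTROL(QMGR) SERVTYPE(SERVER) +
       STARTCMD('/opt/mqm/bin/runmqtrm') +
       STARTARG('-m +QMNAME+ -q SYSTEM.DEFAULT.INITIATION.QUEUE')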
JmsConsumer.java is customized to read messages and then log information to a file on the shared file system. You can use these entries in the test scenarios to evaluate failover results. For every message read by JmsConsumer.java, an entry with the following format is added to ConsumerLog.txt:

<Queue manager name> - <Queue name> - <Server name on which multi-instance QM is running> - <Message payload, including the timestamp and broker name added by the flow>

Testing the failover scenarios in the MQ cluster
Scenario 1. Controlled failover of WebSphere MQ
In Scenario 1, a large number of messages are sent to the flow and processed by both queue managers in the cluster. Then one of the multi-instance queue managers (IBMESBQM1) is shut down using the endmqm command. When the active instance of queue manager IBMESBQM1 goes down and before the passive instance comes up on the other machine, the messages are processed by the other queue manager IBMESBQM2 in the cluster. You can verify this processing by checking the timestamp and broker names in the messages in the output queue IBM.ESB.OUT. After the passive queue manager of IBMESBQM1 comes up, both queue managers in the cluster continue processing the messages.
  1. Deploy the message flows to both brokers IBMESBBRK1(wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
  2. From the third server wmbmi3.in.ibm.com, run the JMSProducer utility as shown below to start publishing the messages to the gateway queue manager IBMESBQM3:

    Run JMSProducer utility
        
     java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
        
  3. As the messages are processed, you can observe the data being written to ConsumerLog.txt in the directory /mqha/logs by the triggered JmsConsumer JMS Client:

    ConsumerLog.txt sample output
        
    IBMESBQM1 - IBM.ESB.OUT - wmbmi1 - Hello World1 2011-06-02T19:40:49.576381 IBMESBBRK1
    IBMESBQM2 - IBM.ESB.OUT - wmbmi2 - Hello World2 2011-06-02T19:39:51.703341 IBMESBBRK2
        
  4. Stop the IBMESBQM1 queue manager using the following command in the wmbmi1.in.ibm.com machine:

    Stop IBMESBQM1
        
    endmqm -s IBMESBQM1
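    # The -s option requests a switchover: the active instance ends and the standby instance on wmbmi2.in.ibm.com takes over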
        
  5. As the active instance of IBMESBQM1 goes down, the passive instance in the wmbmi2.in.ibm.com machine comes up. But meanwhile, the incoming messages are processed by multi-instance broker IBMESBBRK2 on the queue manager IBMESBQM2 shared in the cluster (the messages highlighted in red in the output below). After the passive instance comes up, the messages are processed by both members of the cluster once again, so that there is absolutely no downtime for the systems. Results from Scenario 1 are shown below:

    Output of controlled failover test
Scenario 2. Immediate failover of WebSphere MQ
Scenario 2 is the same as Scenario 1, but instead of shutting down the queue manager using the endmqm command, the MQ process is killed using the kill command.
  1. Deploy the message flows to both brokers IBMESBBRK1(wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
  2. From the third server wmbmi3.in.ibm.com, run the JMSProducer utility as shown below to start publishing the messages to the gateway queue manager IBMESBQM3:

    Run JMSProducer utility
        
     java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
        
  3. As the messages are processed, you can observe the data being written to ConsumerLog.txt in the directory /mqha/logs by the triggered JmsConsumer JMS Client:

    ConsumerLog.txt sample output
        
    IBMESBQM1 - IBM.ESB.OUT - wmbmi1 - Hello World 1 2011-06-02T19:40:49.576381 IBMESBBRK1
    IBMESBQM2 - IBM.ESB.OUT - wmbmi2 - Hello World 2 2011-06-02T19:39:51.703341 IBMESBBRK2
        
  4. Stop the IBMESBQM2 queue manager by killing the execution controller process amqzxma0 of the queue manager IBMESBQM2:

    Immediate stop of IBMESBQM2
        
    mqm@wmbmi2:~> ps -ef | grep amqzx
    mqm      24632     1  0 18:10 ?        00:00:00 amqzxma0 -m IBMESBQM2 -x
    mqm      13112     1  0 19:31 ?        00:00:00 amqzxma0 -m IBMESBQM1 -x
    mqm@wmbmi2:~> kill -9 24632
        
  5. As the active instance of IBMESBQM2 goes down, the passive instance in the wmbmi1.in.ibm.com machine comes up. But meanwhile, the incoming messages are processed by multi-instance broker IBMESBBRK1 on the queue manager IBMESBQM1 shared in the cluster (the messages highlighted in red in the output below). After the passive instance comes up, the messages are processed by both members of the cluster once again, so that there is absolutely no downtime for the systems. Results from Scenario 2 are shown below:

    Output of Immediate failover test
Scenario 3. Shutting down server wmbmi2.in.ibm.com
In Scenario 3, server wmbmi2.in.ibm.com is rebooted, the passive instance of IBMESBQM2 running in wmbmi1.in.ibm.com is notified that the active instance has gone down, and the passive instance comes up. Meanwhile the messages coming in are processed by the cluster queue manager IBMESBQM1 on wmbmi1.in.ibm.com.
  1. Deploy the message flows to both brokers IBMESBBRK1(wmbmi1.in.ibm.com) and IBMESBBRK2 (wmbmi2.in.ibm.com) as described above in Setting up the message flow.
  2. From the third server wmbmi3.in.ibm.com, run the JMSProducer utility as shown below to start publishing the messages to the gateway queue manager IBMESBQM3:

    Run JMSProducer utility
        
    java JmsProducer -m IBMESBQM3 -d IBM.ESB.IN -p 1416 -i "Hello World"
        
  3. As the messages are processed, you can observe the data being written to ConsumerLog.txt in the directory /mqha/logs by the triggered JmsConsumer JMS Client:

    ConsumerLog.txt sample output
        
    IBMESBQM1 - IBM.ESB.OUT - wmbmi1 - Hello World 2 2011-06-06T17:03:51.838884 IBMESBBRK1
    IBMESBQM2 - IBM.ESB.OUT - wmbmi2 - Hello World 1 2011-06-06T17:29:04.264681 IBMESBBRK2
        
  4. Reboot the server wmbmi2.in.ibm.com by issuing the following command as root user:

    Reboot wmbmi2.in.ibm.com
        
    wmbmi2:/home/mqm # reboot
    Broadcast message from root (pts/2) (Mon Jun  6 17:30:22 2011):
    The system is going down for reboot NOW!
    wmbmi2:/home/mqm # date
    Mon Jun  6 17:30:40 IST 2011
        
  5. As the active instance of IBMESBQM2 goes down, the passive instance in the wmbmi1.in.ibm.com machine comes up. But meanwhile, the incoming messages are processed by multi-instance broker IBMESBBRK1 on the queue manager IBMESBQM1 shared in the cluster (the messages highlighted in red in the output below). After the passive instance comes up, the messages are processed by both members of the cluster once again, so that there is absolutely no downtime for the systems. Results from Scenario 3 are shown below:

    Output of system shutdown test
