Multi-instance Queue Managers - Middleware News
The idea behind a multi-instance queue manager is to keep the queue manager data in highly available shared storage (like a SAN or NFS mount) that is accessible from more than one machine. For example, the same queue manager runs as two instances on two different machines, server1 and server2, with its logs and data kept in a shared location such as /shared/QMdata. The instance you start first becomes active, and the second becomes a standby. If the active instance stops or crashes, the standby instance takes over.
When you intend to use a queue manager as a multi-instance queue manager, create a single queue manager on one of the servers using the crtmqm command, placing its queue manager data and logs in shared network storage. On the other server, rather than create the queue manager again, use the addmqinf command to create a reference to the queue manager data and logs on the network storage.
To create and use a multi-instance queue manager, we need:
# Client and channel reconnection to transfer WebSphere MQ connections to the computer that takes over running the active queue manager instance (see the channel sketch after this list).
# A high performance shared network file system that manages locks correctly and provides protection against media and file server failure.
# Resilient networks and power supplies to eliminate single points of failure in the basic infrastructure.
# Applications that tolerate failover. In particular you need to pay close attention to the behavior of transactional applications, and to applications that browse WebSphere MQ queues.
# Monitoring and management of the active and standby instances to ensure that they are running, and to restart active instances that have failed.
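As an illustration of the first point, a reconnectable client can be pointed at both machines by listing them in the client channel's CONNAME, so it finds whichever instance is currently active. A minimal MQSC sketch, assuming listeners on port 1414 on both servers and a channel named QM1.SVRCONN (the channel name and port are illustrative, not from the original article):

On the queue manager:
DEFINE CHANNEL(QM1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP)

In the client channel definition table (CCDT):
DEFINE CHANNEL(QM1.SVRCONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
CONNAME('server1(1414),server2(1414)') QMNAME(QM1)

The client application must also request automatic reconnection, for example by connecting with the MQCNO_RECONNECT option or by setting DefRecon=YES in the CHANNELS stanza of mqclient.ini.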
How to create a multi-instance Queue Manager?
Note: the mqm user and mqm group must have the required access permissions to the shared data file system.
The mqm user and group must exist on all machines that host the queue manager instances, and on Linux their UID and GID must be the same on every machine.
Steps:
I’m assuming that you already have MQ installed on server1 and server2.
Also assuming that you have an NFS/SAN share named /MQHA with full access for the mqm user on both machines. [Otherwise, ask your Linux admin to set this up; a mount sketch follows.]
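For reference, such a share might be mounted like this; a minimal sketch, assuming a hypothetical NFS server named nfsserver exporting /export/MQHA (the server name and export path are illustrative). Note that multi-instance queue managers need the lease-based locking of NFS version 4; version 3 is not sufficient:

mkdir -p /MQHA
mount -t nfs4 -o hard nfsserver:/export/MQHA /MQHA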
On Server1, Which will host QM1:
1. Create the shared directory that will hold the logs and QM data on the shared network/NFS file system (here, /MQHA).
2. Verify locking to ensure that multi-instance queue managers are supported on both machines (example commands follow this list):
# Run amqmfsck, without any options, on each system to check basic locking
# Run amqmfsck on both WebSphere MQ systems simultaneously, using the -c option, to test writing to the directory concurrently.
# Run amqmfsck on both WebSphere MQ systems at the same time, using the -w option, to test waiting for and releasing a lock on the directory concurrently.
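For example, the three checks might be run like this; a sketch assuming the queue manager data will live under /MQHA/qmgrs:

amqmfsck /MQHA/qmgrs (basic locking; run on each machine in turn)
amqmfsck -c /MQHA/qmgrs (run on both machines at the same time; tests concurrent writes)
amqmfsck -w /MQHA/qmgrs (run on both machines at the same time; tests waiting for and releasing locks)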
3. Check the UID/GID of the mqm user (see the example below).
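On a typical Linux system you can check with the id command; the numbers below are only an example, and what matters is that they match on both machines:

id mqm
uid=501(mqm) gid=501(mqm) groups=501(mqm)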
4. Create the logs and data directories in the shared file system, and give mqm ownership of them (see below):
1. mkdir /MQHA
2. mkdir /MQHA/logs
3. mkdir /MQHA/qmgrs
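The mqm user must own and be able to write to these directories from both machines. A minimal sketch, run as root (the exact permission policy is site-specific):

chown -R mqm:mqm /MQHA
chmod -R ug+rwx /MQHA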
5. Create the queue manager, pointing its logs and data at the shared directories:
crtmqm -ld /MQHA/logs -md /MQHA/qmgrs -q QM1
(-ld sets the log path, -md sets the queue manager data path, and -q makes QM1 the default queue manager.)
6. Display the queue manager configuration details that you will need on Server2:
dspmqinf -o command QM1
Copy the resulting command to the clipboard; it will look like this:
addmqinf -s QueueManager \
-v Name=QM1 \
-v Directory=QM1 \
-v Prefix=/var/mqm \
-v DataPath=/MQHA/qmgrs/QM1
On Server2, which will host the second (standby) instance of QM1:
1. Verify access to the shared directory (/MQHA).
2. Verify locking to ensure that multi-instance queue managers are supported [see step 2 for Server1].
3. Check the UID and GID of mqm and ensure they are the same as on Server1.
4. Now run the addmqinf command that dspmqinf produced in step 6 on Server1:
addmqinf -s QueueManager \
-v Name=QM1 \
-v Directory=QM1 \
-v Prefix=/var/mqm \
-v DataPath=/MQHA/qmgrs/QM1
Start the queue manager instances, in either order, with the -x parameter:
strmqm -x QM1
If you start the instance on Server1 first, it becomes the active instance and the instance on Server2 becomes the standby, and vice versa if you start Server2 first.
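You can confirm which instance is active with dspmq; the -x option (available from WebSphere MQ 7.0.1 onward) displays the instances. The output should look something like this, with hostnames varying by site:

dspmq -x -m QM1
QMNAME(QM1) STATUS(Running)
INSTANCE(server1) MODE(Active)
INSTANCE(server2) MODE(Standby)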
Testing
Stop the active queue manager instance using the endmqm command with the -s option, which switches control over to the standby instance. Suitably written client programs reconnect to the standby instance and continue working after a short delay.
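For example, on the machine running the active instance:

endmqm -s QM1

After the switchover, dspmq -x -m QM1 should show the instance on the other machine as Active; restart the stopped instance with strmqm -x QM1 to get a standby back.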