Creating a multi-instance queue manager for IBM WebSphere MQ on UNIX with auto client reconnect - Middleware News

IBM WebSphere MQ V7 can help you increase messaging availability without requiring specialist skills or additional hardware. It provides automatic failover via multi-instance queue managers in the event of an unplanned outage, or controlled switchover for planned outages such as applying software maintenance.

With this new availability option, the messages and data for a multi-instance queue manager are held on networked storage accessed via a network file system (NFS) protocol, such as NFS V4. You can then define and start multiple instances of this queue manager on different machines, with one active instance and one standby instance. The active queue manager instance processes messages and accepts connections from applications and other queue managers. It holds a lock on the queue manager data to ensure that there is only one active instance of the queue manager. The standby queue manager instance periodically checks whether the active queue manager instance is still running. If the active queue manager instance fails or is no longer connected, the standby instance acquires the lock on the queue manager data as soon as it is released, performs queue manager restart processing, and becomes the active queue manager instance.
[Figure: a multi-instance queue manager with client auto-reconnect — an active instance and a standby instance share the queue manager data over NFS, and the client reconnects to whichever instance is currently active.]

Prerequisites

  • Set up WebSphere MQ V7.0.1 on both the server and the client machines according to the guidelines and instructions in the information center.
  • Both machines should have mqm and mqtest users belonging to the mqm group.
  • The user ID and group ID of the mqm and mqtest users must be the same on both machines (one way to create them is sketched after this list). For example:
  • Machine1:
    • id mqm: uid=301(mqm), gid=301(mqm)
    • id mqtest: uid=501(mqtest), gid=301(mqm)
  • Machine2:
    • id mqm: uid=301(mqm), gid=301(mqm)
    • id mqtest: uid=501(mqtest), gid=301(mqm)
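
One way to guarantee matching IDs is to specify them explicitly when creating the group and users, as root, on each machine. This is only a sketch using the IDs shown above, in the groupadd/useradd style of HP-UX, Solaris, and Linux; AIX uses mkgroup and mkuser instead, so adapt the commands to your platform:

    groupadd -g 301 mqm
    useradd -u 301 -g mqm -m mqm
    useradd -u 501 -g mqm -m mqtest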

Setting up NFS on HP-UX

In this example, NFS server = hpate1, exported directory = /HA, and NFS client = hostile.

NFS server configuration on HP-UX

  1. Log in to the server machine as root and configure it as follows.
  2. Edit the file /etc/rc.config.d/nfsconf to change the values for NFS_SERVER and START_MOUNTD to 1:
    #more /etc/rc.config.d/nfsconf
    NFS_SERVER=1
    START_MOUNTD=1
  3. Start the nfs.server script:
    /sbin/init.d/nfs.server start
  4. Edit /etc/exports to add an entry for each directory that is to be exported:
    # more /etc/exports
    /HA
    #
  5. Force the NFS daemon nfsd to reread /etc/exports:
    #/usr/sbin/exportfs -a
  6. Verify the NFS setup using showmount -e:
    # showmount -e
    export list for hpate1:
    /HA (everyone)
    #
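
Before moving on to the client, you can also confirm that the export is visible from the client machine itself (a quick check; hpate1 is the NFS server in this example):

    # showmount -e hpate1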

NFS client configuration on HP-UX

  1. Log in as root.
  2. Check that the mount-point directory on the NFS client machine either is empty or doesn't exist.
  3. Create the directory if it doesn't exist:
    #mkdir /HA
  4. Add an entry to /etc/fstab so the file system will be automatically mounted at boot-up:
    nfs_server:/nfs_server_dir /client_dir  nfs defaults 0 0
    # more /etc/fstab
    hpate1:/HA /HA nfs defaults 0 0
  5. Mount the remote file system:
    #/usr/sbin/mount -a
  6. Verify the NFS setup:
    # mount -v
    hpate1:/HA on /HA type nfs rsize=32768,wsize=32768,NFSv4,dev=4000004 
        on Tue Aug  3 14:15:18 2010
    #
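
For a multi-instance queue manager, the shared file system should be hard-mounted and should use NFS V4. A possible /etc/fstab entry with explicit options is sketched below; the exact option names (hard, intr, vers=4) should be checked against the NFS mount options on your HP-UX level:

    hpate1:/HA /HA nfs hard,intr,vers=4 0 0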

Setting up NFS on AIX

In this example, NFS server = axion, exported directory = /HA, and NFS client = hurlqc.

NFS Server configuration on AIX

  1. Log in as root.
  2. Enter smitty mknfsexp on the command line and specify the directory that has to be exported:
    #smitty mknfsexp
    
    Pathname of directory to export                   [/HA]
    Anonymous UID                                     [-2]
    Public filesystem?                                no
    * Export directory now, system restart, or both?  Both
    Pathname of alternate exports file                []
    Allow access by NFS versions                      []
    External name of directory (NFS V4 access only)   []
    Referral locations (NFS V4 access only)           []
    Replica locations                                 []
    Ensure primary hostname in replica list           Yes
    Allow delegations?                                No
    Scatter                                           None
    * Security method 1                               [sys,krb5p,krb5i,krb5,dh]
    * Mode to export directory                        Read-write
    Hostname list. If exported read-mostly            []
    Hosts and netgroups allowed client access         []
    Hosts allowed root access                         []
    Security method 2                                 []
    Mode to export directory                          []
    If there were no problems, you should see an "OK".
  3. Verify the server setup:
    # showmount -e
    export list for axion:
    /HA (everyone)
    #
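
If you prefer the command line to smitty, the same export (read-write, now and at every system restart) can be created with mknfsexp. This is a sketch: the flags assumed here are -d for the directory, -t for the access mode, and -B to export now and add the entry for restarts; confirm them against mknfsexp on your AIX level.

    #mknfsexp -d /HA -t rw -B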

NFS Client configuration on AIX

  1. Log in as root.
  2. Check that the mount-point directory on the NFS client machine either is empty or doesn't exist.
  3. Create the directory if it doesn't exist:
    #mkdir /HA
  4. Enter smitty mknfsmnt on the command line and fill in the mount details:
    #smitty mknfsmnt
    
    Type or select values in entry fields.
    Press Enter AFTER making all desired changes.
    [TOP]                                                [Entry Fields]
    * Pathname of mount point                            [/HA]
    * Pathname of remote directory                       [/HA]
    * Host where remote directory resides                [axion]
      Mount type name                                    []
    * Security method                                    [sys]
    * Mount now, add entry to /etc/filesystems or both?  Both
    * /etc/filesystems entry will mount the directory    Yes
         on system restart.
    * Mode for this NFS file system                      Read-write
    * Attempt mount in foreground or background          Background
    * Number of times to attempt mount                   []
    * Buffer size for read                               []
    * Buffer size for writes                             []
    [MORE...26]
    
    F1=Help         F2=Refresh        F3=Cancel          F4=List
    F5=Reset        F6=Command        F7=Edit            F8=Image
    F9=Shell        F10=Exit          Enter=Do
    If you received an "OK" message, it worked. You should be able to see and access the mounted NFS.
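
The equivalent command-line form of this smitty panel is mknfsmnt, mounting axion:/HA on /HA read-write in the background, both now and at system restart. Again a sketch: the flags assumed here are -f (mount point), -d (remote directory), -h (remote host), -t (mode), -w (foreground/background), -B (add the /etc/filesystems entry and mount now), and -A (mount at system restart); verify against mknfsmnt on your AIX level.

    #mknfsmnt -f /HA -d /HA -h axion -t rw -w bg -B -A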

Setting up NFS on Solaris

In this example, NFS server = stallion.in.ibm.com, exported directory = /HA, and NFS client = saigon.in.ibm.com.

NFS Server configuration on Solaris

  1. Log in as root.
  2. Check that the directory that you are going to export either is empty or doesn't exist.
  3. Create the directory if it doesn't exist and set its permissions accordingly:
    #mkdir /HA
    #chmod 777 /HA
  4. Edit /etc/dfs/dfstab to add an entry for sharing the HA directory with NFS clients:
    # more /etc/dfs/dfstab
    share -F nfs -o rw /HA
  5. Start the NFS server:
    #/etc/init.d/nfs.server start
  6. Verify the setup using showmount command:
    #showmount -e
    export list for stallion:
    /HA (everyone)
    #
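
If the NFS server daemons are already running, you can also pick up the new /etc/dfs/dfstab entry without restarting them, and then list what is currently shared:

    #shareall
    #share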

NFS client configuration on Solaris

  1. Log in as root on the client machine.
  2. Create a directory and give it the appropriate permissions:
    #mkdir /HA
    #chmod 777 /HA
  3. Mount the remote file system:
    #mount -F nfs stallion.in.ibm.com:/HA /HA
  4. Verify the setup using the mount -v command:
    #mount -v
    stallion.in.ibm.com:/HA on /HA type nfs remote/read/write/setuid/devices/xattr/dev=
        5280002 on Mon Aug 23 15:00:50  2010
    #
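
The mount command above does not survive a reboot. To make the mount persistent on Solaris, add a line to /etc/vfstab; the entry below is a sketch (the fields are: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options), with rw,hard as a reasonable starting point for queue manager data:

    # more /etc/vfstab
    stallion.in.ibm.com:/HA  -  /HA  nfs  -  yes  rw,hard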

Executing amqmfsck to verify that the file system is compliant with POSIX standards

In this example: Server1 = stallion.in.ibm.com and Server2 = saigon.in.ibm.com.
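
Before running the checks, create the data directory under the shared mount and make it writable by the mqm group, since the tests run as the mqtest user. A sketch, run as root on one of the servers (the /HA/mqdata path matches the commands below):

    # mkdir /HA/mqdata
    # chown mqm:mqm /HA/mqdata
    # chmod ug+rwx /HA/mqdata
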
  1. Execute amqmfsck with no options to check the basic locking:
    su - mqtest
    export PATH=/opt/mqm/bin:$PATH
    
    On Server1:
    $ amqmfsck /HA/mqdata
    The tests on the directory completed successfully.
    
    On Server2:
    $ amqmfsck /HA/mqdata
    The tests on the directory completed successfully.
  2. Execute amqmfsck with the -c option to test the writing to a directory:
    On Server1:
    $ amqmfsck -c /HA/mqdata
    Start a second copy of this program with the same parameters on another server. 
    Writing to test file. 
    This will normally complete within about 60 seconds.
    .................
    The tests on the directory completed successfully.
    
    On Server2:
    
    $ amqmfsck -c /HA/mqdata
    Start a second copy of this program with the same parameters on another server.
    Writing to test file. 
    This will normally complete within about 60 seconds.
    .................
    The tests on the directory completed successfully.
  3. Execute amqmfsck with the -w option on both machines simultaneously to test waiting for and releasing a lock on the directory concurrently:
    On Server1:
    $ amqmfsck -wv /HA/mqdata
    System call: stat("/HA/mqdata",&statbuf)
    System call: statvfs("/HA/mqdata")
    System call: fd = open("/HA/mqdata/amqmfsck.lkw",O_CREAT|O_RDWR,0666)
    System call: fchmod(fd,0666)
    System call: fstat(fd,&statbuf)
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Start a second copy of this program with the same parameters on another server.
    File lock acquired.
    Press Enter or terminate this process to release the lock.
    
    On Server2:
    
    $ amqmfsck -wv /HA/mqdata
    System call: stat("/HA/mqdata",&statbuf)
    System call: statvfs("/HA/mqdata")
    System call: fd = open("/HA/mqdata/amqmfsck.lkw",O_CREAT|O_RDWR,0666)
    System call: fchmod(fd,0666)
    System call: fstat(fd,&statbuf)
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    Waiting for the file lock.
    System call: fcntl(fd,F_SETLK,F_WRLCK)
    File lock acquired.
    Press Enter or terminate this process to release the lock.
    
    System call: close(fd)
    File lock released.
    
    The tests on the directory completed successfully.

Setting up a multi-instance queue manager

In this example, Server1 = stallion.in.ibm.com and Server2 = saigon.in.ibm.com.

Server 1

  1. Create the logs and qmgrs directories in the shared file system:
    # mkdir /HA/logs
    # mkdir /HA/qmgrs
    # chown -R mqm:mqm /HA
    # chmod -R ug+rwx /HA
  2. Create the queue manager:
    # crtmqm -ld /HA/logs -md /HA/qmgrs -q QM1
    WebSphere MQ queue manager created.
    Directory '/HA/qmgrs/QM1' created.
    Creating or replacing default objects for QM1.
    Default objects statistics : 65 created. 0 replaced. 0 failed.
    Completing setup.
    Setup completed.
    #
  3. Display the queue manager configuration details on Server1:
    #  dspmqinf -o command QM1
  4. Save the output of the above command; it will be in the following format:
    addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm -v  
        DataPath=/HA/qmgrs/QM1

Server 2

  1. On Server 2, run the addmqinf command that was saved in Step 4:
    # addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm -v DataPath=/HA/qmgrs/QM1
    WebSphere MQ configuration information added.
    #
  2. Start the active instance of the queue manager on Server 1:
    # strmqm -x QM1
    WebSphere MQ queue manager 'QM1' starting.
    5 log records accessed on queue manager 'QM1' during the log replay phase.
    Log replay for queue manager 'QM1' complete.
    Transaction manager state recovered for queue manager 'QM1'.
    WebSphere MQ queue manager 'QM1' started.
    #
  3. Start the standby instance of the queue manager on Server 2:
    # strmqm -x QM1
    WebSphere MQ queue manager QM1 starting.
    A standby instance of queue manager QM1 has been started. 
    The active instance is running elsewhere.
    #
  4. Verify the setup using dspmq -x:
    On Server1 (stallion)
    # dspmq -x
    QMNAME(QM1) STATUS(Running)
        INSTANCE(stallion) MODE(Active)
        INSTANCE(saigon) MODE(Standby)
    #
    
    On Server2 (saigon)
    # dspmq -x
    QMNAME(QM1) STATUS(Running as standby)
        INSTANCE(stallion) MODE(Active)
        INSTANCE(saigon) MODE(Standby)
    #
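
With both instances running, you can also rehearse a planned switchover at this point by ending the active instance and transferring control to the standby; the client reconnection test in the next section uses the same mechanism with the -is option. Run this on the currently active instance (stallion), then re-run dspmq -x on either server to confirm that the Active and Standby roles have swapped:

    # endmqm -s QM1
    # dspmq -x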

Creating a client auto-reconnect setup

In this example, Server1 = lins.in.ibm.com, Server2 = gtstress42.in.ibm.com, and the multi-instance queue manager is QM1. On Server 1:
  1. Create a local queue called Q with defpsist(yes).
  2. Create a svrconn channel called CHL.
  3. Start a listener running at port 9898:
    [root@lins ~]# runmqsc QM1
    5724-H72 
    (C) Copyright IBM Corp. 1994, 2009.  ALL RIGHTS RESERVED.
    Starting MQSC for queue manager QM1.
    
    def ql(Q) defpsist(yes)
        1 : def ql(Q) defpsist(yes)
    AMQ8006: WebSphere MQ queue created.
    define channel(CHL) chltype(SVRCONN) trptype(tcp) MCAUSER('mqm') replace
        2 : define channel(CHL) chltype(SVRCONN) trptype(tcp) MCAUSER('mqm') replace
    AMQ8014: WebSphere MQ channel created.
    end
    
    [root@lins ~]# runmqlsr -m QM1 -t tcp -p 9898 &
    [1] 26866
    [root@lins ~]# 5724-H72 
    (C) Copyright IBM Corp. 1994, 2009.  ALL RIGHTS RESERVED.
  4. Set the MQSERVER environment variable on Server 1, listing both servers' connection names so the client can reconnect to either instance:
    export MQSERVER=ChannelName/TCP/'host1(port),host2(port)'
    
    For example: export MQSERVER=CHL/TCP/'9.122.163.105(9898),9.122.163.77(9898)'
  5. On Server 2, start a listener at port 9898:
    [root@gtstress42 ~]# runmqlsr -m QM1 -t tcp -p 9898 &
    [1] 24467
    [root@gtstress42 ~]# 5724-H72 
    (C) Copyright IBM Corp. 1994, 2009.  ALL RIGHTS RESERVED.
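
Before running the samples, it is worth confirming that the listeners are actually up on both servers and that the SVRCONN channel exists on QM1. A quick check (the port and channel name are the ones used above):

    [root@lins ~]# netstat -an | grep 9898
    [root@lins ~]# echo "DISPLAY CHANNEL(CHL) CHLTYPE" | runmqsc QM1

Run the netstat check on Server 2 as well.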

Executing the client auto-reconnect samples

Server 1

  1. Invoke the amqsphac sample program:
    [root@lins ~]# amqsphac Q QM1
    Sample AMQSPHAC start
    target queue is Q
    message < Message 1 >
    message < Message 2 >
    message < Message 3 >
    message < Message 4 >
    message < Message 5 >
    message < Message 6 >
    message < Message 7 >
    message < Message 8 >
    message < Message 9 >
    message < Message 10 >
  2. In another window on Server 1, end the queue manager with the -is option so that it will switch over to a standby queue manager:
    Server 1(new session):
    
    [root@lins ~]# endmqm -is QM1
    WebSphere MQ queue manager 'QM1' ending.
    WebSphere MQ queue manager 'QM1' ended, permitting switchover to a standby instance.
  3. Verify that a switchover has occurred:
    On Server2:
    
    [root@gtstress42 ~]# dspmq -x -o standby
    QMNAME(QM1)         STANDBY(Permitted)
        INSTANCE(gtstress42.in.ibm.com) MODE(Active)
  4. The client connection breaks and the sample reconnects to the standby instance, which has now become active. On Server 1:
    16:12:28 : EVENT : Connection Reconnecting (Delay: 57ms)
    10/06/2010 04:12:35 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:12:35 PM AMQ9999: Channel program ended abnormally.
    16:12:37 : EVENT : Connection Reconnecting (Delay: 0ms)
    10/06/2010 04:12:37 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:12:37 PM AMQ9999: Channel program ended abnormally.
    16:12:37 : EVENT : Connection Reconnected
    16:12:38 : EVENT : Connection Broken
    message < Message 11 >
    message < Message 12 >
    message < Message 13 >
    message < Message 14 >
    message < Message 15 >
    message < Message 16 >
    message < Message 17 >
    message < Message 18 >
    message < Message 19 >
    message < Message 20 >
    message < Message 21 >
    message < Message 22 >
  5. Run the sample program amqsghac on Server 1 to get the messages:
    [root@lins ~]# amqsghac Q QM1
    Sample AMQSGHAC start
    10/06/2010 04:14:33 PM AMQ9508: Program cannot connect to the queue manager.
    10/06/2010 04:14:33 PM AMQ9999: Channel program ended abnormally.
    message < Message 1 >
    message < Message 2 >
    message < Message 3 >
    message < Message 4 >
    message < Message 5 >
    message < Message 6 >
    message < Message 7 >
    message < Message 8 >
    message < Message 9 >
    message < Message 10 >
    message < Message 11 >
    message < Message 12 >
    message < Message 13 >
    message < Message 14 >
    message < Message 15 >
    message < Message 16 >
    message < Message 17 >
    message < Message 18 >
    message < Message 19 >
    message < Message 20 >
    message < Message 21 >
    message < Message 22 >
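
The amqsphac and amqsghac samples request reconnection themselves when they connect. Client applications that do not can still be made reconnectable through the client configuration file; the stanza below is a sketch of the relevant mqclient.ini setting (DefRecon=YES applies automatic reconnection to client connections that do not specify their own reconnect option):

    CHANNELS:
       DefRecon=YES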

Conclusion

This article showed you how to set up a multi-instance queue manager on various UNIX platforms, including AIX, HP-UX, and Solaris, and how to run the sample programs to verify client auto-reconnection.
