
Planning the Sun Cluster HA for IBM WebSphere MQ Installation and Configuration - Middleware News

  • The Sun Cluster HA for WebSphere MQ data service can be configured only as a failover service – WebSphere MQ cannot operate as a scalable service, so the data service must run as a failover service.
  • Mounting /var/mqm as a Global File System – If you intend to install multiple WebSphere MQ Managers, then you must mount /var/mqm as a Global File System.
    After mounting /var/mqm as a Global File System, you must also create a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System on each node within Sun Cluster that will run WebSphere MQ, for example:

    # mkdir -p /var/mqm_local/qmgrs/@SYSTEM
    # mkdir -p /var/mqm/qmgrs
    # ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
    #
    This restriction is required because WebSphere MQ uses keys to build internal control structures. These keys are derived from the ftok() function call and must be unique on each node. Mounting /var/mqm as a Global File System, with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System, ensures that any derived shared memory segment keys are unique on each node.

    Note – If your Queue Managers were created before you set up the symbolic link for /var/mqm/qmgrs/@SYSTEM, you must first stop all Queue Managers and then copy the contents of /var/mqm/qmgrs/@SYSTEM, preserving permissions, to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link, as sketched below.
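
    The following is a minimal sketch of that copy, assuming all Queue Managers have already been stopped; the use of cp -pr to preserve ownership and permissions, and the removal of the original @SYSTEM directory before the link is created, are illustrative rather than a mandated procedure:

    # mkdir -p /var/mqm_local/qmgrs
    # cp -pr /var/mqm/qmgrs/@SYSTEM /var/mqm_local/qmgrs/@SYSTEM
    # rm -r /var/mqm/qmgrs/@SYSTEM
    # ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
    #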

  • Mounting /var/mqm as a Failover File System – If you intend to install only one WebSphere MQ Manager, then you can mount /var/mqm as a Failover File System, as shown in the vfstab sketch below. However, we recommend that you still mount /var/mqm as a Global File System so that you can install multiple WebSphere MQ Managers in the future.
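
    A minimal /etc/vfstab sketch for this single Queue Manager case follows. The metadevice names are hypothetical, and the file system is not mounted at boot because a SUNW.HAStoragePlus resource mounts it on whichever node currently hosts the Queue Manager:

    # more /etc/vfstab (Subset of the output)
    /dev/md/dg_d3/dsk/d30   /dev/md/dg_d3/rdsk/d30  /var/mqm  ufs  3  no  logging
    #
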
  • Multiple WebSphere MQ Managers with Failover File Systems – If you are installing multiple WebSphere MQ Managers, you must mount /var/mqm as a Global File System, as described earlier. However, the data files for each Queue Manager can be mounted as Failover File Systems through symbolic links from /var/mqm to the Failover File Systems. Refer to Example 1–1.
  • Multiple WebSphere MQ Managers with Global File Systems – If you are installing multiple WebSphere MQ Managers, you must mount /var/mqm as a Global File System, as described earlier. However, the data files for each Queue Manager can be mounted as Global File Systems. Refer to Example 1–2.
  • Installing WebSphere MQ onto Cluster File Systems – Initially, the WebSphere MQ product is installed into /opt/mqm and /var/mqm. When a WebSphere MQ Manager is created, the default directory locations created are /var/mqm/qmgrs/ and /var/mqm/log/. Before you pkgadd mqm on all nodes within Sun Cluster that will run WebSphere MQ, you must mount these locations as either Failover File Systems or Global File Systems.
    Example 1–1 shows two WebSphere MQ Managers with Failover File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown, followed by a sketch of how the symbolic links might be created.

    Example 1–1 WebSphere MQ Managers with Failover File Systems


    # ls -l /var/mqm
    lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
    #
    # ls -l /global/mqm/qmgrs
    total 6
    lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
    lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
    lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
    #
    # ls -l /global/mqm/log
    total 4
    lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
    lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
    #
    # more /etc/vfstab (Subset of the output)
    /dev/md/dg_d3/dsk/d30   /dev/md/dg_d3/rdsk/d30  /global/mqm             ufs  3  yes  logging,global
    /dev/md/dg_d3/dsk/d33   /dev/md/dg_d3/rdsk/d33  /local/mqm/qmgrs/qmgr1  ufs  4  no   logging
    /dev/md/dg_d3/dsk/d36   /dev/md/dg_d3/rdsk/d36  /local/mqm/log/qmgr1    ufs  4  no   logging
    /dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /local/mqm/qmgrs/qmgr2  ufs  4  no   logging
    /dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /local/mqm/log/qmgr2    ufs  4  no   logging
    #
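
    The following sketch shows one way the symbolic links in Example 1–1 might have been created, assuming /var/mqm does not yet exist on the node, the Failover File Systems are already mounted under /local/mqm, and the Queue Managers have not yet been created; the sequence is illustrative only:

    # ln -s /global/mqm /var/mqm
    # mkdir -p /var/mqm_local/qmgrs/@SYSTEM /global/mqm/qmgrs /global/mqm/log
    # ln -s /var/mqm_local/qmgrs/@SYSTEM /global/mqm/qmgrs/@SYSTEM
    # ln -s /local/mqm/qmgrs/qmgr1 /global/mqm/qmgrs/qmgr1
    # ln -s /local/mqm/log/qmgr1 /global/mqm/log/qmgr1
    # ln -s /local/mqm/qmgrs/qmgr2 /global/mqm/qmgrs/qmgr2
    # ln -s /local/mqm/log/qmgr2 /global/mqm/log/qmgr2
    #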

    Example 1–2 shows two WebSphere MQ Managers with Global File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.

    Example 1–2 WebSphere MQ Managers with Global File Systems


    # ls -l /var/mqm
    lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
    #
    # ls -l /global/mqm/qmgrs
    total 6
    lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
    drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
    drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
    #
    # ls -l /global/mqm/log
    total 4
    drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
    drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
    #
    # more /etc/vfstab (Subset of the output)
    /dev/md/dg_d4/dsk/d40   /dev/md/dg_d4/rdsk/d40  /global/mqm              ufs  3  yes  logging,global
    /dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /global/mqm/qmgrs/qmgr1  ufs  4  yes  logging,global
    /dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /global/mqm/log/qmgr1    ufs  4  yes  logging,global
    /dev/md/dg_d5/dsk/d53   /dev/md/dg_d5/rdsk/d53  /global/mqm/qmgrs/qmgr2  ufs  4  yes  logging,global
    /dev/md/dg_d5/dsk/d56   /dev/md/dg_d5/rdsk/d56  /global/mqm/log/qmgr2    ufs  4  yes  logging,global

Configuration Requirements

The requirements in this section apply to Sun Cluster HA for WebSphere MQ only. You must meet these requirements before you proceed with your Sun Cluster HA for WebSphere MQ installation and configuration.

Caution –
Your data service configuration might not be supported if you do not adhere to these requirements.


  • WebSphere MQ components and their dependencies — You can configure the Sun Cluster HA for WebSphere MQ data service to protect a WebSphere MQ instance and its respective components. These components and their dependencies are described in Table 1–3.
    Table 1–3 WebSphere MQ components and their dependencies (via the -> symbol)

    Queue Manager (Mandatory) -> SUNW.HAStoragePlus resource
      The SUNW.HAStoragePlus resource manages the WebSphere MQ file system mount points and ensures that WebSphere MQ is not started until these are mounted.

    Channel Initiator (Optional) -> Queue_Manager and Listener resources
      Dependency on the Listener is required only if runmqlsr is used instead of inetd.
      By default, a channel initiator is started by WebSphere MQ. However, if you want to use a channel initiation queue other than the default (SYSTEM.CHANNEL.INITQ), then you should deploy this component.

    Command Server (Optional) -> Queue_Manager and Listener resources
      Dependency on the Listener is required only if runmqlsr is used instead of inetd.
      Deploy this component if you want WebSphere MQ to process commands sent to the command queue.

    Listener (Optional) -> Queue_Manager resource
      Deploy this component if you want a dedicated listener (runmqlsr) and will not use the inetd listener.

    Trigger Monitor (Optional) -> Queue_Manager and Listener resources
      Dependency on the Listener is required only if runmqlsr is used instead of inetd.
      Deploy this component if you want a trigger monitor.


    Note – For detailed information about these WebSphere MQ components, refer to IBM's WebSphere MQ Application Programming manual.

    Each WebSphere MQ component has a configuration and registration file in /opt/SUNWscmqs/xxx/util, where xxx is a three-character abbreviation for the respective WebSphere MQ component. These files allow you to register the WebSphere MQ components with Sun Cluster.
    Within these files, the appropriate dependencies have already been applied; a usage sketch follows Example 1–3.

    Example 1–3 WebSphere MQ configuration and registration file for Sun Cluster


    # cd /opt/SUNWscmqs
    # 
    # ls -l chi/util
    total 4
    -rwxr-xr-x   1 root     sys          720 Dec 20 14:44 chi_config
    -rwxr-xr-x   1 root     sys          586 Dec 20 14:44 chi_register
    # 
    # ls -l csv/util
    total 4
    -rwxr-xr-x   1 root     sys          645 Dec 20 14:44 csv_config
    -rwxr-xr-x   1 root     sys          562 Dec 20 14:44 csv_register
    # 
    # ls -l lsr/util
    total 4
    -rwxr-xr-x   1 root     sys          640 Dec 20 14:44 lsr_config
    -rwxr-xr-x   1 root     sys          624 Dec 20 14:44 lsr_register
    # 
    # ls -l mgr/util
    total 4
    -rwxr-xr-x   1 root     sys          603 Dec 20 14:44 mgr_config
    -rwxr-xr-x   1 root     sys          515 Dec 20 14:44 mgr_register
    # 
    # ls -l trm/util
    total 4
    -rwxr-xr-x   1 root     sys          717 Dec 20 14:44 trm_config
    -rwxr-xr-x   1 root     sys          586 Dec 20 14:44 trm_register
    # 
    # 
    # more mgr/util/*
    ::::::::::::::
    mgr/util/mgr_config
    ::::::::::::::
    #
    # Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    # 
    # This file will be sourced in by mgr_register and the parameters
    # listed below will be used.
    #
    # These parameters can be customized in (key=value) form
    #
    #      RS - name of the resource for the application
    #      RG - name of the resource group containing RS
    #    QMGR - name of the Queue Manager
    #    PORT - name of the Queue Manager port number
    #      LH - name of the LogicalHostname SC resource
    #  HAS_RS - name of the Queue Manager HAStoragePlus SC resource
    # CLEANUP - Cleanup IPC entries YES or NO (Default CLEANUP=YES)
    #
    #       Under normal shutdown and startup WebSphere MQ manages its
    #       cleanup of IPC resources with the following fix packs.
    #
    #       MQSeries v5.2 Fix Pack 07 (CSD07)
    #       WebSphere MQ v5.3 Fix Pack 04 (CSD04)
    #
    #       Please refer to APAR number IY38428.
    #
    #       However, while running in a failover environment, the IPC keys
    #       that get generated will be different between nodes. As a result
    #       after a failover of a Queue Manager, some shared memory segments
    #       can remain allocated on the node although not used. 
    #
    #       Although this does not cause WebSphere MQ a problem when starting
    #       or stopping (with the above fix packs applied), it can deplete
    #       the available swap space and in extreme situations a node may 
    #       run out of swap space. 
    #
    #       To resolve this issue, setting CLEANUP=YES will ensure that 
    #       IPC shared memory segments for WebSphere MQ are removed whenever
    #       a Queue Manager is stopped. However IPC shared memory segments 
    #       are only removed under strict conditions, namely
    #
    #       - The shared memory segment(s) are owned by
    #               CREATOR=mqm and CGROUP=mqm
    #       - The shared memory segment has no attached processes
    #       - The CPID and LPID process ids are not running
    #       - The shared memory removal is performed by userid mqm
    #
    #       Setting CLEANUP=NO will not remove any shared memory segments.
    #
    #       Setting CLEANUP=YES will cleanup shared memory segments under the
    #       conditions described above.
    #
    
    RS=
    RG=
    QMGR=
    PORT=
    LH=
    HAS_RS=
    CLEANUP=YES
    ::::::::::::::
    mgr/util/mgr_register
    ::::::::::::::
    #
    # Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    #
    
    . `dirname $0`/mgr_config
    
    scrgadm -a -j $RS -g $RG -t SUNW.gds \
    -x Start_command="/opt/SUNWscmqs/mgr/bin/start-qmgr \
    -R $RS -G $RG -Q $QMGR -C $CLEANUP " \
    -x Stop_command="/opt/SUNWscmqs/mgr/bin/stop-qmgr \
    -R $RS -G $RG -Q $QMGR -C $CLEANUP " \
    -x Probe_command="/opt/SUNWscmqs/mgr/bin/test-qmgr \
    -R $RS -G $RG -Q $QMGR -C $CLEANUP " \
    -y Port_list=$PORT/tcp -y Network_resources_used=$LH \
    -x Stop_signal=9 \
    -y Resource_dependencies=$HAS_RS
    # 
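
    As a usage illustration, the following hedged sketch shows a completed mgr_config and the Queue Manager resource being registered; the values MQ-RS, MQ-RG, qmgr1, 1414, qmgr1-lh and qmgr1-has are hypothetical, and the resource group, LogicalHostname resource and HAStoragePlus resource are assumed to exist already:

    # grep -v '^#' /opt/SUNWscmqs/mgr/util/mgr_config

    RS=MQ-RS
    RG=MQ-RG
    QMGR=qmgr1
    PORT=1414
    LH=qmgr1-lh
    HAS_RS=qmgr1-has
    CLEANUP=YES
    #
    # /opt/SUNWscmqs/mgr/util/mgr_register
    #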

  • WebSphere MQ Manager protection —
    WebSphere MQ is unable to determine whether a Queue Manager is already running on another node within Sun Cluster if Global File Systems are being used for the WebSphere MQ instance, that is, /global/mqm/qmgrs/ and /global/mqm/log/.
    Under normal conditions, the Sun Cluster HA for WebSphere MQ data service manages the startup and shutdown of the Queue Manager, regardless of which type of file system is used (Failover File System or Global File System).
    However, someone could manually start the Queue Manager on another node within Sun Cluster if the WebSphere MQ instance is running on a Global File System.

    Note – This has been reported to IBM and a fix is being worked on.

    To protect against this happening, two options are available.
    1. Use Failover File Systems for the WebSphere MQ instance
      This is the recommended approach because the WebSphere MQ instance files would be mounted only on one node at a time. With this configuration, WebSphere MQ is able to determine whether the Queue Manager is running.
    2. Create symbolic links from strmqm and endmqm to check-start (a provided script).
      The script /opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent the WebSphere MQ Manager from being started or stopped manually, outside of the control of the Sun Cluster HA for WebSphere MQ data service.
      The check-start script verifies that the WebSphere MQ Manager is being started or stopped by Sun Cluster and reports an error if an attempt is made to start or stop the WebSphere MQ Manager manually.
      Example 1–4 shows a manual attempt to start the WebSphere MQ Manager. The response was generated by the check-start script.

      Example 1–4 Manual attempt to start the WebSphere MQ Manager by mistake.


      # strmqm qmgr1
      # Request to run within SC3.0 has been refused
      #

      This solution is required only if you require a Global File System for the WebSphere MQ instance. Example 1–5 details the steps that you must take to achieve this.



Example 1–5 Create a symbolic link for strmqm and endmqm to check-start


# cd /opt/mqm/bin
#
# mv strmqm strmqm_sc3
# mv endmqm endmqm_sc3
#
# ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
# ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
#
Edit the /opt/SUNWscmqs/mgr/etc/config file and change the START_COMMAND and STOP_COMMAND entries as shown below. In this example we have chosen to append the suffix _sc3 to the command names; you can choose another name.

# cat /opt/SUNWscmqs/mgr/etc/config
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG= or ALL
#       START_COMMAND=/opt/mqm/bin/
#       STOP_COMMAND=/opt/mqm/bin/
#
DEBUG=
START_COMMAND=/opt/mqm/bin/strmqm_sc3
STOP_COMMAND=/opt/mqm/bin/endmqm_sc3
#


Caution – These steps must be performed on each node within the cluster that will host the Sun Cluster HA for WebSphere MQ data service. Do not perform this procedure until you have created your Queue Manager(s), because crtmqm will call strmqm and endmqm on your behalf.


Note – If you implement this workaround, you must back it out whenever you need to apply any maintenance to WebSphere MQ, and then reapply it afterwards; a sketch of backing it out follows. The recommended approach is to use Failover File Systems for the WebSphere MQ instance until a fix has been made to WebSphere MQ.
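
A hedged sketch of backing the workaround out before applying WebSphere MQ maintenance, assuming the _sc3 suffix chosen in Example 1–5, follows; after maintenance completes, repeat Example 1–5 and restore the START_COMMAND and STOP_COMMAND entries to reapply the workaround.

# cd /opt/mqm/bin
#
# rm strmqm endmqm
# mv strmqm_sc3 strmqm
# mv endmqm_sc3 endmqm
#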
