-
Configuring Sun Cluster HA for WebSphere MQ as a failover service only – WebSphere MQ cannot operate as a scalable service. Therefore, the Sun Cluster HA for WebSphere MQ data service can be configured to run only as a failover service.
-
Mounting /var/mqm as a Global File
System – If you intend to install multiple WebSphere MQ
Managers, then you must mount /var/mqm as a Global File
System.
After mounting /var/mqm as a Global File System, you must also create a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System on each node within Sun Cluster that will run WebSphere MQ, for example:
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# mkdir -p /var/mqm/qmgrs
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
#
Note – If your Queue Managers were created before you set up a symbolic link for /var/mqm/qmgrs/@SYSTEM, you must first stop all Queue Managers, then copy the contents of /var/mqm/qmgrs/@SYSTEM, preserving permissions, to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.
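The following is a sketch of that sequence only; the Queue Manager name qmgr1 is assumed for illustration, and endmqm should be repeated for each Queue Manager in your installation:
# endmqm qmgr1
# mkdir -p /var/mqm_local/qmgrs
# cp -pr /var/mqm/qmgrs/@SYSTEM /var/mqm_local/qmgrs
# rm -r /var/mqm/qmgrs/@SYSTEM
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM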
-
Mounting /var/mqm as a Failover File
System – If you intend to install only one WebSphere MQ Manager,
then you can mount /var/mqm as a Failover File System.
However, we recommend that you still mount /var/mqm as
a Global File System so that you can install multiple WebSphere MQ Managers
in the future.
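For illustration only, a Failover File System for /var/mqm might have an /etc/vfstab entry such as the following, with mount at boot set to no and without the global option (the metadevice names are placeholders):
/dev/md/dg_d3/dsk/d30 /dev/md/dg_d3/rdsk/d30 /var/mqm ufs 3 no logging
The file system would then typically be managed by a SUNW.HAStoragePlus resource within the WebSphere MQ resource group.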
-
Multiple WebSphere MQ Managers with
Failover File Systems – Because you are installing multiple WebSphere
MQ Managers, you must mount /var/mqm as a Global File System,
as described earlier. However, the data files for each Queue Manager can be
mounted as Failover File Systems through symbolic links from /var/mqm/qmgrs/<qmgr> and /var/mqm/log/<qmgr> to the Failover File Systems. Refer to Example 1–1.
-
Multiple WebSphere MQ Managers with
Global File Systems – Because you are installing multiple WebSphere
MQ Managers, you must mount /var/mqm as a Global File System,
as described earlier. However, the data files for each Queue Manager can also be
mounted as Global File Systems. Refer to Example 1–2.
-
Installing WebSphere MQ onto Cluster
File Systems – Initially, the WebSphere MQ product is installed
into /opt/mqm and /var/mqm. When a WebSphere MQ
Manager is created, the default directory locations created are /var/mqm/qmgrs/
and /var/mqm/log/. Before you run pkgadd for the mqm package on all nodes within Sun Cluster that will run WebSphere MQ, you must mount these locations as either Failover File Systems or Global File Systems.
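Before running pkgadd on a node, you might confirm that these locations resolve to the intended cluster file systems. The following is a sketch only; the package directory path is a placeholder:
# df -k /var/mqm /var/mqm/qmgrs /var/mqm/log
# cd <websphere-mq-package-directory>
# pkgadd -d . mqm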
Example 1–1 shows two WebSphere MQ Managers with Failover File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.
Example 1–1 WebSphere MQ Managers with Failover File Systems
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d3/dsk/d30 /dev/md/dg_d3/rdsk/d30 /global/mqm            ufs 3 yes logging,global
/dev/md/dg_d3/dsk/d33 /dev/md/dg_d3/rdsk/d33 /local/mqm/qmgrs/qmgr1 ufs 4 no  logging
/dev/md/dg_d3/dsk/d36 /dev/md/dg_d3/rdsk/d36 /local/mqm/log/qmgr1   ufs 4 no  logging
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /local/mqm/qmgrs/qmgr2 ufs 4 no  logging
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /local/mqm/log/qmgr2   ufs 4 no  logging
#
Example 1–2 shows two WebSphere MQ Managers with Global File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.
Example 1–2 WebSphere MQ Managers with Global File Systems
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm             ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /global/mqm/qmgrs/qmgr1 ufs 4 yes logging,global
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /global/mqm/log/qmgr1   ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /global/mqm/qmgrs/qmgr2 ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /global/mqm/log/qmgr2   ufs 4 yes logging,global
Configuration Requirements
The requirements in this section apply to Sun Cluster HA for WebSphere MQ only. You must meet these requirements before you proceed with your Sun Cluster HA for WebSphere MQ installation and configuration.
Caution – Your data service configuration might not be supported if you do not adhere to these requirements.
-
WebSphere MQ components
and their dependencies — You can configure the Sun Cluster HA for WebSphere MQ data
service to protect a WebSphere MQ instance and its respective components.
These components and their dependencies are described in Table 1–3.
Table 1–3 WebSphere MQ components and their dependencies (dependencies shown via the -> symbol)

Queue Manager (Mandatory) -> SUNW.HAStoragePlus resource
The SUNW.HAStoragePlus resource manages the WebSphere MQ file system mount points and ensures that WebSphere MQ is not started until these are mounted.

Channel Initiator (Optional) -> Queue_Manager and Listener resources
Dependency on the Listener is required only if runmqlsr is used instead of inetd.
By default, a channel initiator is started by WebSphere MQ. However, if you want a different or an additional channel initiation queue, other than the default (SYSTEM.CHANNEL.INITQ), then you should deploy this component.

Command Server (Optional) -> Queue_Manager and Listener resources
Dependency on the Listener is required only if runmqlsr is used instead of inetd.
Deploy this component if you want WebSphere MQ to process commands sent to the command queue.

Listener (Optional) -> Queue_Manager resource
Deploy this component if you want a dedicated listener (runmqlsr) and will not use the inetd listener.

Trigger Monitor (Optional) -> Queue_Manager and Listener resources
Dependency on the Listener is required only if runmqlsr is used instead of inetd.
Deploy this component if you want a trigger monitor.
Note – For detailed information about these WebSphere MQ components, refer to IBM's WebSphere MQ Application Programming manual.
Each WebSphere MQ component has a configuration and registration file in /opt/SUNWscmqs/xxx/util, where xxx is a three-character abbreviation for the respective WebSphere MQ component. These files allow you to register the WebSphere MQ components with Sun Cluster.
Within these files, the appropriate dependencies have been applied.
Example 1–3 WebSphere MQ configuration and registration file for Sun Cluster
# cd /opt/SUNWscmqs
#
# ls -l chi/util
total 4
-rwxr-xr-x   1 root     sys          720 Dec 20 14:44 chi_config
-rwxr-xr-x   1 root     sys          586 Dec 20 14:44 chi_register
#
# ls -l csv/util
total 4
-rwxr-xr-x   1 root     sys          645 Dec 20 14:44 csv_config
-rwxr-xr-x   1 root     sys          562 Dec 20 14:44 csv_register
#
# ls -l lsr/util
total 4
-rwxr-xr-x   1 root     sys          640 Dec 20 14:44 lsr_config
-rwxr-xr-x   1 root     sys          624 Dec 20 14:44 lsr_register
#
# ls -l mgr/util
total 4
-rwxr-xr-x   1 root     sys          603 Dec 20 14:44 mgr_config
-rwxr-xr-x   1 root     sys          515 Dec 20 14:44 mgr_register
#
# ls -l trm/util
total 4
-rwxr-xr-x   1 root     sys          717 Dec 20 14:44 trm_config
-rwxr-xr-x   1 root     sys          586 Dec 20 14:44 trm_register
#
# more mgr/util/*
::::::::::::::
mgr/util/mgr_config
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# This file will be sourced in by mgr_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#       RS - name of the resource for the application
#       RG - name of the resource group containing RS
#       QMGR - name of the Queue Manager
#       PORT - name of the Queue Manager port number
#       LH - name of the LogicalHostname SC resource
#       HAS_RS - name of the Queue Manager HAStoragePlus SC resource
#       CLEANUP - Cleanup IPC entries YES or NO (Default CLEANUP=YES)
#
# Under normal shutdown and startup WebSphere MQ manages it's
# cleanup of IPC resources with the following fix packs.
#
#       MQSeries v5.2 Fix Pack 07 (CSD07)
#       WebSphere MQ v5.3 Fix Pack 04 (CSD04)
#
# Please refer to APAR number IY38428.
#
# However, while running in a failover environment, the IPC keys
# that get generated will be different between nodes. As a result
# after a failover of a Queue Manager, some shared memory segments
# can remain allocated on the node although not used.
#
# Although this does not cause WebSphere MQ a problem when starting
# or stopping (with the above fix packs applied), it can deplete
# the available swap space and in extreme situations a node may
# run out of swap space.
#
# To resolve this issue, setting CLEANUP=YES will ensure that
# IPC shared memory segments for WebSphere MQ are removed whenever
# a Queue Manager is stopped. However IPC shared memory segments
# are only removed under strict conditions, namely
#
#       - The shared memory segment(s) are owned by
#               CREATOR=mqm and CGROUP=mqm
#       - The shared memory segment has no attached processes
#       - The CPID and LPID process ids are not running
#       - The shared memory removal is performed by userid mqm
#
# Setting CLEANUP=NO will not remove any shared memory segments.
#
# Setting CLEANUP=YES will cleanup shared memory segments under the
# conditions described above.
#
RS=
RG=
QMGR=
PORT=
LH=
HAS_RS=
CLEANUP=YES
::::::::::::::
mgr/util/mgr_register
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
. `dirname $0`/mgr_config

scrgadm -a -j $RS -g $RG -t SUNW.gds \
-x Start_command="/opt/SUNWscmqs/mgr/bin/start-qmgr \
-R $RS -G $RG -Q $QMGR -C $CLEANUP " \
-x Stop_command="/opt/SUNWscmqs/mgr/bin/stop-qmgr \
-R $RS -G $RG -Q $QMGR -C $CLEANUP " \
-x Probe_command="/opt/SUNWscmqs/mgr/bin/test-qmgr \
-R $RS -G $RG -Q $QMGR -C $CLEANUP " \
-y Port_list=$PORT/tcp -y Network_resources_used=$LH \
-x Stop_signal=9 \
-y Resource_dependencies=$HAS_RS
#
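As an illustration only, registering the Queue Manager component might proceed as follows, assuming hypothetical values such as a resource named wmq-rs in a resource group named wmq-rg for Queue Manager qmgr1 (the real values are set in mgr_config for your configuration):
# vi /opt/SUNWscmqs/mgr/util/mgr_config
(set RS=wmq-rs, RG=wmq-rg, QMGR=qmgr1, plus the PORT, LH and HAS_RS values for your configuration)
# /opt/SUNWscmqs/mgr/util/mgr_register
# scswitch -e -j wmq-rs
The mgr_register script runs scrgadm with the appropriate dependencies already applied, so the new resource only needs to be enabled once registration succeeds.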
-
WebSphere MQ Manager protection —
WebSphere MQ is unable to determine whether a Queue Manager is already running on another node within Sun Cluster if Global File Systems are being used for the WebSphere MQ instance, that is, /global/mqm/qmgrs/ and /global/mqm/log/.
Under normal conditions, the Sun Cluster HA for WebSphere MQ data service manages the startup and shutdown of the Queue Manager, regardless of which type of Cluster File System is being used (Failover File System or Global File System).
However, it is possible that someone could manually start the Queue Manager on another node within Sun Cluster if the WebSphere MQ instance is running on a Global File System.
Note – This has been reported to IBM and a fix is being worked on.
To protect against this happening, two options are available.
-
Use Failover File Systems for the WebSphere MQ instance
This is the recommended approach because the WebSphere MQ instance files would be mounted only on one node at a time. With this configuration, WebSphere MQ is able to determine whether the Queue Manager is running.
-
Create a symbolic link for strmqm/endmqm
to check-start (provided script).
The script /opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent the WebSphere MQ Manager from being started or stopped outside of the control of Sun Cluster.
The check-start script verifies that the WebSphere MQ Manager is being started or stopped by Sun Cluster and reports an error if an attempt is made to start or stop the WebSphere MQ Manager manually.
Example 1–4 shows a manual attempt to start the WebSphere MQ Manager. The response was generated by the check-start script.
Example 1–4 Manual attempt to start the WebSphere MQ Manager by mistake.
# strmqm qmgr1
Request to run
#
Example 1–5 Create a symbolic link for strmqm and endmqm to check-start
# cd /opt/mqm/bin
#
# mv strmqm strmqm_sc3
# mv endmqm endmqm_sc3
#
# ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
# ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
#
# cat /opt/SUNWscmqs/mgr/etc/config
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Usage:
#
DEBUG=
Caution – The above steps must be performed on each node within the cluster that will host the Sun Cluster HA for WebSphere MQ data service. Do not perform this procedure until you have created your Queue Manager(s), because crtmqm calls strmqm and endmqm on your behalf.
Note – If you implement this workaround, then you must back it out whenever you need to apply any maintenance to WebSphere MQ, and reapply it afterwards. The recommended approach is to use Failover File Systems for the WebSphere MQ instance, until a fix has been made to WebSphere MQ.
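For example, backing the workaround out before applying maintenance might look like the following sketch, assuming the original binaries were renamed to strmqm_sc3 and endmqm_sc3 as shown in Example 1–5:
# cd /opt/mqm/bin
# rm strmqm endmqm
# mv strmqm_sc3 strmqm
# mv endmqm_sc3 endmqm
Repeat this on each node, and reverse the steps in Example 1–5 to reapply the workaround after the maintenance is complete.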