Example---Setting Up VVR in a VCS Environment
Configuring VVR with VCS requires completing several tasks, each of which must be performed in the order presented below.
Before setting up the VVR configuration, verify that all nodes in the cluster on which VVR is installed use the same port number for replication. To verify and, if necessary, change the port numbers, use the vrport command. For instructions on using the vrport command, see the VERITAS Volume Replicator Administrator's Guide. After confirming that the port number is the same on all nodes, add the VVR agents to the VCS configuration.
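For example, the replication heartbeat port could be displayed and, if necessary, reset on each node as shown below. The port value 4145 is the usual VVR default and is shown only for illustration; confirm the exact vrport syntax and the port value for your environment against the vrport manual page.
# vrport heartbeat
# vrport heartbeat 4145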
Setting Up the VVR Configuration
The example in this section refers to the sample configuration shown in Example VVR Configuration in a VCS Environment. Note that the VVR configuration set up in this example applies to the RVG agent; that is, it uses the names used in the RVG agent's sample configuration file.
The procedure to configure VVR is the same for all the VVR agents. Use the sample configuration files located in the /etc/VRTSvcs/conf/sample_vvr/RVG directory to configure the other agents. For more information on configuring VVR, refer to the VERITAS Volume Replicator Administrator's Guide. The example uses the names listed in the following tables.
Name of Cluster: Seattle

    Disk group                     hrdg
    Primary RVG                    hr_rvg
    Primary RLINK to london1       rlk_london_hr_rvg
    Primary data volume #1         hr_dv01
    Primary data volume #2         hr_dv02
    Primary SRL for hr_rvg         hr_srl
    Cluster IP                     10.216.144.160

Name of Cluster: London

    Disk group                     hrdg
    Secondary RVG                  hr_rvg
    Secondary RLINK to seattle     rlk_seattle_hr_rvg
    Secondary data volume #1       hr_dv01
    Secondary data volume #2       hr_dv02
    Secondary SRL for hr_rvg       hr_srl
    Cluster IP                     10.216.144.162
This example assumes that each of the hosts seattle1 and london1 has a disk group named hrdg with enough free space to create the VVR objects mentioned in the example. Set up the VVR configuration on seattle1 and london1 to include the objects used in the sample configuration files, main.cf.seattle and main.cf.london, located in the /etc/VRTSvcs/conf/sample_vvr/RVG directory.
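Before creating the volumes, you can confirm that hrdg has enough free space by asking Volume Manager for the largest volume it could create in the disk group. This is a standard vxassist query; the exact output format varies by release.
# vxassist -g hrdg maxsize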
1. On london1:
- Create the Secondary data volumes.
# vxassist -g hrdg make hr_dv01 100M \
layout=mirror logtype=dcm mirror=2
# vxassist -g hrdg make hr_dv02 100M \
layout=mirror logtype=dcm mirror=2
- Create the Secondary SRL.
# vxassist -g hrdg make hr_srl 200M mirror=2
2. On seattle1:
- Create the Primary data volumes.
# vxassist -g hrdg make hr_dv01 100M \
layout=mirror logtype=dcm mirror=2
# vxassist -g hrdg make hr_dv02 100M \
layout=mirror logtype=dcm mirror=2
- Create the Primary SRL.
# vxassist -g hrdg make hr_srl 200M mirror=2
- Create the Primary RVG.
# vradmin -g hrdg createpri hr_rvg \
hr_dv01,hr_dv02 hr_srl
- Determine the virtual IP address to be used for replication, and then verify that the device interface for this IP is plumbed. If it is not plumbed, plumb the device and bring the IP up using the appropriate OS-specific command. The IP address used for replication must be configured as the IP resource for this RVG service group.
- Create the Secondary RVG.
# vradmin -g hrdg addsec hr_rvg \
10.216.144.160 10.216.144.162 prlink=rlk_london_hr_rvg \
srlink=rlk_seattle_hr_rvg
Note
The RLINKs must point to the virtual IP address for failovers to succeed. The virtual IP address 10.216.144.160 must be able to ping virtual IP address 10.216.144.162 and vice versa.
- Start Replication.
# vradmin -g hrdg -f startrep hr_rvg
3. Create the following directories on seattle1 and seattle2. These directories will be used as mount points for the volumes hr_dv01 and hr_dv02 at the Seattle site.
# mkdir /hr_mount01
# mkdir /hr_mount02
4. On seattle1 and seattle2, create file systems on the volumes hr_dv01 and hr_dv02.
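For example, on a platform that takes the -F option to specify the file system type (Linux uses -t instead), VxFS file systems could be created on the raw volume devices as follows:
# mkfs -F vxfs /dev/vx/rdsk/hrdg/hr_dv01
# mkfs -F vxfs /dev/vx/rdsk/hrdg/hr_dv02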
Verifying the VVR Replication State
Test the replication state between seattle1 and london1 to verify that VVR is configured correctly. Type the following command on each of these nodes:
# vxprint -g hrdg hr_rvg
Verify that the state of the RVG is ENABLED/ACTIVE.
Verify that the state of the RLINK is CONNECT/ACTIVE.
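For a more detailed view of the replication state, the long listings of the RVG and RLINK records can be displayed with the standard vxprint -l option. The object names below are taken from the tables above; on london1, use rlk_seattle_hr_rvg as the RLINK name.
# vxprint -l -g hrdg hr_rvg
# vxprint -l -g hrdg rlk_london_hr_rvg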
Configuring the Agents
This section explains how to configure the VVR agents.
Configuration Tasks
This section gives instructions on how to configure the RVG agent and RVGPrimary agent when VCS is stopped and when VCS is running. Sample configuration files for these agents, main.cf.seattle and main.cf.london, are located in the /etc/VRTSvcs/conf/sample_vvr/RVG and /etc/VRTSvcs/conf/sample_vvr/RVGPrimary directories and can be used for reference.
You can add the RVG resource to your existing VCS configuration using either of the following procedures:
- Configuring the Agents When VCS is Running
- Configuring the Agents When VCS is Stopped
Configuring the Agents When VCS is Running
The example in this section explains how to configure the RVG and RVGPrimary agents when VCS is running. For details about the example configuration, see Example Configuration for a Failover Application.
Note
Use this example as a reference when creating or changing your resources and attributes.
Perform the following steps on the system seattle1 in the Primary cluster Seattle:
1. Log in as root.
2. Set the VCS configuration mode to read/write by issuing the following command:
# haconf -makerw
3. Create the replication service group, VVRGrp. This group contains all the storage and replication resources.
a. Add a service group, VVRGrp, to the cluster Seattle and populate the SystemList and AutoStartList attributes of the service group:
# hagrp -add VVRGrp
# hagrp -modify VVRGrp SystemList seattle1 0 seattle2 1
# hagrp -modify VVRGrp AutoStartList seattle1 seattle2
b. Add the DiskGroup resource Hr_Dg to the service group VVRGrp and modify the attributes of the resource:
# hares -add Hr_Dg DiskGroup VVRGrp
# hares -modify Hr_Dg DiskGroup hrdg
c. Add the RVG resource Hr_Rvg to the service group VVRGrp and modify the attributes of the resource:
# hares -add Hr_Rvg RVG VVRGrp
# hares -modify Hr_Rvg RVG hr_rvg
# hares -modify Hr_Rvg DiskGroup hrdg
d. Add a NIC resource vvrnic to the service group VVRGrp and modify the attributes of the resource:
# hares -add vvrnic NIC VVRGrp
# hares -modify vvrnic Device lan3
e. Add the IP resource vvrip to the service group VVRGrp and modify the attributes of the resource:
# hares -add vvrip IP VVRGrp
# hares -modify vvrip Device lan3
# hares -modify vvrip Address 192.2.40.20
# hares -modify vvrip NetMask "255.255.248.0"
f. Specify resource dependencies for the resources you added in the previous steps:
# hares -link Hr_Rvg vvrip
# hares -link Hr_Rvg Hr_Dg
# hares -link vvrip vvrnic
g. Enable all resources in VVRGrp:
# hagrp -enableresources VVRGrp
4. Create the application service group, ORAGrp. This group contains all the application-specific resources.
a. Add a service group, ORAGrp, to the cluster Seattle and populate the SystemList, AutoStartList, and ClusterList attributes of the service group:
# hagrp -add ORAGrp
# hagrp -modify ORAGrp SystemList seattle1 0 seattle2 1
# hagrp -modify ORAGrp AutoStartList seattle1 seattle2
# hagrp -modify ORAGrp ClusterList Seattle 0 London 1
b. Add a NIC resource oranic to the service group ORAGrp and modify the attributes of the resource:
# hares -add oranic NIC ORAGrp
# hares -modify oranic Device lan0
c. Add an IP resource oraip to the service group ORAGrp and modify the attributes of the resource:
# hares -add oraip IP ORAGrp
# hares -modify oraip Device lan0
# hares -modify oraip Address 192.2.40.1
# hares -modify oraip NetMask "255.255.248.0"
d. Add the Mount resource Hr_Mount01 to mount the volume hr_dv01 in the RVG resource Hr_Rvg:
# hares -add Hr_Mount01 Mount ORAGrp
# hares -modify Hr_Mount01 MountPoint /hr_mount01
# hares -modify Hr_Mount01 BlockDevice \
/dev/vx/dsk/Hr_Dg/hr_dv01
# hares -modify Hr_Mount01 FSType vxfs
# hares -modify Hr_Mount01 FsckOpt %-n
# hares -modify Hr_Mount01 MountOpt rw
e. Add the Mount resource Hr_Mount02 to mount the volume hr_dv02 in the RVG resource Hr_Rvg:
# hares -add Hr_Mount02 Mount ORAGrp
# hares -modify Hr_Mount02 MountPoint /hr_mount02
# hares -modify Hr_Mount02 BlockDevice \
/dev/vx/dsk/Hr_Dg/hr_dv02
# hares -modify Hr_Mount02 FSType vxfs
# hares -modify Hr_Mount02 FsckOpt %-n
# hares -modify Hr_Mount02 MountOpt rw
f. Add the Oracle resource Hr_Oracle:
# hares -add Hr_Oracle Oracle ORAGrp
# hares -modify Hr_Oracle Sid hr1
# hares -modify Hr_Oracle Owner oracle
# hares -modify Hr_Oracle Home "/hr_mount01/OraHome1"
# hares -modify Hr_Oracle Pfile "inithr1.ora"
# hares -modify Hr_Oracle User dbtest
# hares -modify Hr_Oracle Pword dbtest
# hares -modify Hr_Oracle Table oratest
# hares -modify Hr_Oracle MonScript "./bin/Oracle/SqlTest.pl"
# hares -modify Hr_Oracle StartUpOpt STARTUP
# hares -modify Hr_Oracle ShutDownOpt IMMEDIATE
# hares -modify Hr_Oracle AutoEndBkup 1
g. Add the Oracle listener resource LISTENER:
# hares -add LISTENER Netlsnr ORAGrp
# hares -modify LISTENER Owner oracle
# hares -modify LISTENER Home "/hr_mount01/OraHome1"
# hares -modify LISTENER Listener LISTENER
# hares -modify LISTENER EnvFile "/oracle/.profile"
# hares -modify LISTENER MonScript "./bin/Netlsnr/LsnrTest.pl"
h. Add the RVGPrimary resource Hr_RvgPri:
# hares -add Hr_RvgPri RVGPrimary ORAGrp
# hares -modify Hr_RvgPri RvgResourceName Hr_Rvg
i. Specify resource dependencies for the resources you added in the previous steps:
# hares -link LISTENER Hr_Oracle
# hares -link LISTENER oraip
# hares -link Hr_Oracle Hr_Mount01
# hares -link Hr_Oracle Hr_Mount02
# hares -link Hr_Mount01 Hr_RvgPri
# hares -link Hr_Mount02 Hr_RvgPri
# hares -link oraip oranic
j. Specify an online local hard group dependency between ORAGrp and VVRGrp:
# hagrp -link ORAGrp VVRGrp online local hard
k. Enable all resources in ORAGrp:
# hagrp -enableresources ORAGrp
l. Save and close the VCS configuration:
# haconf -dump -makero
5. Repeat steps 1 to 4 on the system london1 in the Secondary cluster London, with the changes described below:
- Repeat steps 1 and 2.
- At step 3a, replace seattle1 and seattle2 with london1 and london2, as follows:
Add a service group, VVRGrp, to the cluster London and populate the SystemList and AutoStartList attributes of the service group:
# hagrp -add VVRGrp
# hagrp -modify VVRGrp SystemList london1 0 london2 1
# hagrp -modify VVRGrp AutoStartList london1 london2
- Repeat steps 3b, 3c, 3d.
- At step 3e, modify the Address attribute for the IP resource appropriately.
- Repeat steps 3f and 3g.
- At step 4a, replace seattle1 and seattle2 with london1 and london2, as follows:
Add a service group, ORAGrp, to the cluster London and populate the SystemList, AutoStartList, and ClusterList attributes of the service group:
# hagrp -add ORAGrp
# hagrp -modify ORAGrp SystemList london1 0 london2 1
# hagrp -modify ORAGrp AutoStartList london1 london2
# hagrp -modify ORAGrp ClusterList Seattle 0 London 1
- Repeat step 4b.
- At step 4c, modify the Address attribute for the IP resource appropriately.
- Repeat steps 4d through 4l.
6. Bring the service groups online, if they are not already online:
# hagrp -online VVRGrp -sys seattle1
# hagrp -online ORAGrp -sys seattle1
7. Verify that the service group ORAGrp is ONLINE on the system seattle1 by issuing the following command:
# hagrp -state ORAGrp
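As an additional check, the standard VCS summary and group-dependency displays can be used to confirm that all resources are online and that the online local hard dependency between ORAGrp and VVRGrp is in place:
# hastatus -summary
# hagrp -dep ORAGrp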
Configuring the Agents When VCS is Stopped
Perform the following steps to configure the RVG agent using the sample configuration file on the first node in the Primary cluster and Secondary cluster. In the example in this guide, seattle1 is the first Primary node and london1 is the first Secondary node.
1. Log in as root.
2. Ensure that all changes to the existing configuration have been saved and that further changes are prevented while you modify main.cf:
If the VCS cluster is currently writeable, run the following command:
# haconf -dump -makero
If the VCS cluster is already read only, run the following command:
# haconf -dump
3. Do not edit the configuration files while VCS is started. The following command stops the had daemon on all systems and leaves the resources available:
# hastop -all -force
4. Make a backup copy of the main.cf file:
# cd /etc/VRTSvcs/conf/config
# cp main.cf main.cf.orig
5. Edit the main.cf files for the Primary and Secondary clusters. The files main.cf.seattle and main.cf.london, located in the /etc/VRTSvcs/conf/sample_vvr/RVGPrimary directory, can be used as references for the Primary and Secondary clusters, respectively. (A sample RVG resource stanza is shown after this procedure.)
6. Save and close the file.
7. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
# cd /etc/VRTSvcs/conf/config/
# hacf -verify .
8. Start the VCS engine:
# hastart
9. Go to Administering the Service Groups.
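For reference, the RVG resource and its dependencies in the edited main.cf would look similar to the fragment below. The attribute values and dependency links match the hares commands shown earlier in this example; the surrounding group definition and the other resources are omitted.
RVG Hr_Rvg (
    RVG = hr_rvg
    DiskGroup = hrdg
    )
Hr_Rvg requires Hr_Dg
Hr_Rvg requires vvrip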