Administering DMP Using vxdmpadm

The vxdmpadm utility is a command-line administrative interface to the DMP feature of VxVM. You can use the vxdmpadm utility to perform a number of DMP administration tasks.
The following sections cover these tasks in detail, along with sample output. For more information, see the vxdmpadm(1M) manual page.

Retrieving Information About a DMP Node

The following command displays the DMP node that controls a particular physical path:

# vxdmpadm getdmpnode nodename=c3t2d1

The physical path is specified by the argument to the nodename attribute, which must be a valid path listed in the /dev/rdsk directory. The above command displays output such as the following:

NAME       STATE     ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
===============================================================
c3t2d1     ENABLED   ACME        2      2     0     enc0

Use the enclosure attribute with getdmpnode to obtain a list of all DMP nodes for the specified enclosure:

# vxdmpadm getdmpnode enclosure=enc0

NAME       STATE     ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
===============================================================
c2t1d0     ENABLED   ACME        2      2     0     enc0
c2t1d1     ENABLED   ACME        2      2     0     enc0
c2t1d2     ENABLED   ACME        2      2     0     enc0
c2t1d3     ENABLED   ACME        2      2     0     enc0

Displaying the Members of a LUN Group

The following command displays the DMP nodes that are in the same LUN group as a specified DMP node:

# vxdmpadm getlungroup dmpnodename=c11t0d10

The above command displays output such as the following:

NAME       STATE     ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
================================================================
c11t0d8    ENABLED   ACME        2      2     0     enc1
c11t0d9    ENABLED   ACME        2      2     0     enc1
c11t0d10   ENABLED   ACME        2      2     0     enc1
c11t0d11   ENABLED   ACME        2      2     0     enc1

Displaying All Paths Controlled by a DMP Node

The following command displays the paths controlled by the specified DMP node:

# vxdmpadm getsubpaths dmpnodename=c2t1d0

NAME     STATE[-]   PATH-TYPE[-]  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME  ATTRS
===========================================================================
c2t1d0   ENABLED    PRIMARY       c2         ACME        enc0        -
c3t2d0   ENABLED    SECONDARY     c3         ACME        enc0        -

The specified DMP node must be a valid node in the /dev/vx/rdmp directory.

The state of a path that is currently enabled and available for I/O is shown as ENABLED(A):

# vxdmpadm getsubpaths dmpnodename=c2t66d0

NAME      STATE[A]     PATH-TYPE[M]  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME  ATTRS
==============================================================================
c2t66d0   ENABLED(A)   PRIMARY       c2         ACME        enc0        -
c1t66d0   ENABLED      PRIMARY       c1         ACME        enc0        -

For A/A arrays, all enabled paths that are available for I/O are shown as ENABLED(A). For A/P arrays in which the I/O policy is set to singleactive, only one path is shown as ENABLED(A). The other paths are enabled but not available for I/O. If the I/O policy is not set to singleactive, DMP can use a group of paths (all primary or all secondary) for I/O, which are shown as ENABLED(A). See Specifying the I/O Policy for more information.
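Because the path state appears in the second column of the getsubpaths output, it is easy to summarize how many paths of a DMP node are in each state. The following is a minimal sketch, not part of the product; it assumes the output layout shown above (two header lines, state in the second column), and c2t66d0 is just the example node name from the sample output:

#!/bin/sh
# Sketch: summarize path states for one DMP node.
# Assumes the getsubpaths output layout shown above
# (two header lines, path state in the second column).
node=${1:-c2t66d0}    # example DMP node name from the output above

vxdmpadm getsubpaths dmpnodename=$node |
    awk 'NR > 2 { state[$2]++ }
         END { for (s in state) printf "%-12s %d path(s)\n", s, state[s] }'

Run against the sample output for c2t66d0, this would report one ENABLED(A) path and one ENABLED path.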
You can also use getsubpaths to obtain all paths through a particular host disk controller:

# vxdmpadm getsubpaths ctlr=c2

NAME     STATE[-]   PATH-TYPE[-]  CTLR-NAME  ENCLR-TYPE  ENCLR-NAME  ATTRS
===========================================================================
c2t1d0   ENABLED    PRIMARY       c2t1d0     ACME        enc0        -
c2t2d0   ENABLED    PRIMARY       c2t2d0     ACME        enc0        -
c2t3d0   ENABLED    SECONDARY     c2t3d0     ACME        enc0        -
c2t4d0   ENABLED    SECONDARY     c2t4d0     ACME        enc0        -

Listing Information About Host I/O Controllers

The following command lists attributes of all host I/O controllers on the system:

# vxdmpadm listctlr all

CTLR-NAME   ENCLR-TYPE   STATE     ENCLR-NAME
=====================================================
c1          OTHER        ENABLED   other0
c2          X1           ENABLED   jbod0
c3          ACME         ENABLED   enc0
c4          ACME         ENABLED   enc0

This form of the command lists controllers belonging to a specified enclosure and enclosure type:

# vxdmpadm listctlr enclosure=enc0 type=ACME

CTLR-NAME   ENCLR-TYPE   STATE     ENCLR-NAME
=====================================================
c2          ACME         ENABLED   enc0
c3          ACME         ENABLED   enc0

Listing Information About Enclosures

To display the attributes of a specified enclosure, use the following command:

# vxdmpadm listenclosure enc0

ENCLR_NAME   ENCLR_TYPE   ENCLR_SNO              STATUS      ARRAY_TYPE
==========================================================================
enc0         A3           60020f20000001a90000   CONNECTED   A/P

The following command lists attributes for all enclosures in a system:

# vxdmpadm listenclosure all

The following is example output from this command:

ENCLR_NAME   ENCLR_TYPE   ENCLR_SNO              STATUS      ARRAY_TYPE
==========================================================================
Disk         Disk         DISKS                  CONNECTED   Disk
ANA0         ACME         508002000001d660       CONNECTED   A/A
enc0         A3           60020f20000001a90000   CONNECTED   A/P
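The listctlr output shown above can also be summarized per enclosure, which is a quick way to spot enclosures that are reachable through only a single controller. The following is a minimal sketch, not part of the product; it assumes the listctlr output layout shown above (two header lines, enclosure name in the fourth column):

#!/bin/sh
# Sketch: count how many host I/O controllers serve each enclosure.
# Assumes the listctlr output layout shown above (two header lines,
# enclosure name in the fourth column).
vxdmpadm listctlr all |
    awk 'NR > 2 { count[$4]++ }
         END { for (e in count) printf "%-15s %d controller(s)\n", e, count[e] }'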
Displaying Information About TPD-Controlled Devices

The third-party driver (TPD) coexistence feature allows I/O that is controlled by third-party multipathing drivers to bypass DMP while retaining the monitoring capabilities of DMP. The following commands allow you to display the paths that DMP has discovered for a given TPD device, and the TPD device that corresponds to a given TPD-controlled node discovered by DMP:

# vxdmpadm getsubpaths tpdnodename=TPD_node_name
# vxdmpadm gettpdnode nodename=DMP_node_name

See Changing Device Naming for TPD-Controlled Enclosures for information on how to select whether OS or TPD-based device names are displayed.

For example, consider the following disks in an EMC Symmetrix array controlled by PowerPath, which are known to DMP:

# vxdisk list

DEVICE       TYPE          DISK     GROUP   STATUS
emcpower10   auto:sliced   disk1    ppdg    online
emcpower11   auto:sliced   disk2    ppdg    online
emcpower12   auto:sliced   disk3    ppdg    online
emcpower13   auto:sliced   disk4    ppdg    online
emcpower14   auto:sliced   disk5    ppdg    online
emcpower15   auto:sliced   disk6    ppdg    online
emcpower16   auto:sliced   disk7    ppdg    online
emcpower17   auto:sliced   disk8    ppdg    online
emcpower18   auto:sliced   disk9    ppdg    online
emcpower19   auto:sliced   disk10   ppdg    online

The following command displays the paths that DMP has discovered, and which correspond to the PowerPath-controlled node, emcpower10:

# vxdmpadm getsubpaths tpdnodename=emcpower10

NAME      TPDNODENAME    PATH-TYPE[-]  DMP-NODENAME  ENCLR-TYPE  ENCLR-NAME
=====================================================================
c7t0d10   emcpower10s2   -             emcpower10    EMC         EMC0
c6t0d10   emcpower10s2   -             emcpower10    EMC         EMC0

Conversely, the next command displays information about the PowerPath node that corresponds to the path, c7t0d10, discovered by DMP:

# vxdmpadm gettpdnode nodename=c7t0d10

NAME           STATE     PATHS   ENCLR-TYPE   ENCLR-NAME
=====================================================================
emcpower10s2   ENABLED   2       EMC          EMC0
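When an array has many PowerPath metadevices, it can be convenient to dump this mapping for all of them at once. The following is a minimal sketch, not part of the product; it assumes that the TPD-controlled devices appear in the vxdisk list output with names beginning with emcpower, as in the example above:

#!/bin/sh
# Sketch: show the DMP-discovered paths for every PowerPath
# metadevice known to VxVM. Assumes TPD device names begin with
# "emcpower", as in the vxdisk list output above.
vxdisk list | awk 'NR > 1 && $1 ~ /^emcpower/ { print $1 }' |
while read tpdnode; do
    echo "==== $tpdnode ===="
    vxdmpadm getsubpaths tpdnodename=$tpdnode
done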
Gathering and Displaying I/O Statistics

You can use the vxdmpadm iostat command to gather and display I/O statistics for a specified DMP node, enclosure or path.

To enable the gathering of statistics, enter this command:

# vxdmpadm iostat start [memory=size]

To reset the I/O counters to zero, use this command:

# vxdmpadm iostat reset

The memory attribute can be used to limit the maximum amount of memory that is used to record I/O statistics for each CPU. The default limit is 32k (32 kilobytes) per CPU.

To display the accumulated statistics at regular intervals, use the following command:

# vxdmpadm iostat show {all | dmpnodename=dmp-node | \
  enclosure=enclr-name | pathname=path_name} [interval=seconds [count=N]]

This command displays I/O statistics for all controllers (all), or for a specified DMP node, enclosure or path. The statistics displayed are the CPU usage and amount of memory per CPU used to accumulate statistics, the number of read and write operations, the number of blocks read and written, and the average time in milliseconds per read and write operation.

The interval and count attributes may be used to specify the interval in seconds between displays of the I/O statistics, and the number of lines to be displayed. The actual interval may be smaller than the value specified if insufficient memory is available to record the statistics.

To disable the gathering of statistics, enter this command:

# vxdmpadm iostat stop

Examples of Using the vxdmpadm iostat Command

The following is an example session using the vxdmpadm iostat command. The first command enables the gathering of I/O statistics:

# vxdmpadm iostat start

The next command displays the current statistics, including the accumulated total numbers of read and write operations and kilobytes read and written, on all paths:

# vxdmpadm iostat show all

cpu usage = 7952us    per cpu memory = 8192b
           OPERATIONS          KBYTES           AVG TIME(ms)
PATHNAME   READS   WRITES   READS    WRITES   READS      WRITES
c0t0d0     1088    0        557056   0        0.009542   0.000000
c2t118d0   87      0        44544    0        0.001194   0.000000
c3t118d0   0       0        0        0        0.000000   0.000000
c2t122d0   87      0        44544    0        0.007265   0.000000
c3t122d0   0       0        0        0        0.000000   0.000000
c2t115d0   87      0        44544    0        0.001200   0.000000
c3t115d0   0       0        0        0        0.000000   0.000000
c2t103d0   87      0        44544    0        0.007315   0.000000
c3t103d0   0       0        0        0        0.000000   0.000000
c2t102d0   87      0        44544    0        0.001132   0.000000
c3t102d0   0       0        0        0        0.000000   0.000000
c2t121d0   87      0        44544    0        0.000997   0.000000
c3t121d0   0       0        0        0        0.000000   0.000000
c2t112d0   87      0        44544    0        0.001559   0.000000
c3t112d0   0       0        0        0        0.000000   0.000000
c2t96d0    87      0        44544    0        0.007057   0.000000
c3t96d0    0       0        0        0        0.000000   0.000000
c2t106d0   87      0        44544    0        0.007247   0.000000
c3t106d0   0       0        0        0        0.000000   0.000000
c2t113d0   87      0        44544    0        0.007235   0.000000
c3t113d0   0       0        0        0        0.000000   0.000000
c2t119d0   87      0        44544    0        0.001390   0.000000
c3t119d0   0       0        0        0        0.000000   0.000000

The following command changes the amount of memory that vxdmpadm can use to accumulate the statistics:

# vxdmpadm iostat start memory=4096

The displayed statistics can be filtered by path name, DMP node name, and enclosure name (note that the per-CPU memory has changed following the previous command):

# vxdmpadm iostat show pathname=c3t115d0

cpu usage = 8132us    per cpu memory = 4096b
           OPERATIONS          BYTES            AVG TIME(ms)
PATHNAME   READS   WRITES   READS    WRITES   READS      WRITES
c3t115d0   0       0        0        0        0.000000   0.000000

# vxdmpadm iostat show dmpnodename=c0t0d0

cpu usage = 8501us    per cpu memory = 4096b
           OPERATIONS          BYTES            AVG TIME(ms)
PATHNAME   READS   WRITES   READS    WRITES   READS      WRITES
c0t0d0     1088    0        557056   0        0.009542   0.000000

# vxdmpadm iostat show enclosure=Disk

cpu usage = 8626us    per cpu memory = 4096b
           OPERATIONS          BYTES            AVG TIME(ms)
PATHNAME   READS   WRITES   READS    WRITES   READS      WRITES
c0t0d0     1088    0        557056   0        0.009542   0.000000

You can also specify the number of times to display the statistics and the time interval. Here the incremental statistics for a path are displayed twice with a 2-second interval:

# vxdmpadm iostat show pathname=c3t115d0 interval=2 count=2

cpu usage = 8195us    per cpu memory = 4096b
           OPERATIONS          BYTES            AVG TIME(ms)
PATHNAME   READS   WRITES   READS    WRITES   READS      WRITES
c3t115d0   0       0        0        0        0.000000   0.000000

cpu usage = 59us    per cpu memory = 4096b
           OPERATIONS          BYTES            AVG TIME(ms)
PATHNAME   READS   WRITES   READS    WRITES   READS      WRITES
c3t115d0   0       0        0        0        0.000000   0.000000
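The start, show and stop operations can be combined into a small monitoring helper for a single DMP node. The following is a minimal sketch built only from the commands shown above; the node name, interval and sample count are placeholders to adjust for your configuration:

#!/bin/sh
# Sketch: collect DMP I/O statistics for one DMP node for a while,
# then stop gathering. Only commands documented above are used.
node=${1:-c2t1d0}     # DMP node to monitor (placeholder)
secs=${2:-10}         # seconds between samples (placeholder)
samples=${3:-6}       # number of samples to display (placeholder)

vxdmpadm iostat start                 # begin gathering statistics
vxdmpadm iostat reset                 # start the counters from zero
vxdmpadm iostat show dmpnodename=$node interval=$secs count=$samples
vxdmpadm iostat stop                  # disable gathering when done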
Setting the Attributes of the Paths to an Enclosure

You can use the vxdmpadm setattr command to set the following attributes of the paths to an enclosure or disk array:

active: Changes a standby (failover) path to an active path. The example below specifies an active path for an A/P-C disk array:

# vxdmpadm setattr path c2t10d0 pathtype=active

nomanual: Restores the original primary or secondary attributes of a path. This example restores the attributes for a path to an A/P disk array:

# vxdmpadm setattr path c3t10d0 pathtype=nomanual

nopreferred: Restores the normal priority of a path. The following example restores the default priority to a path:

# vxdmpadm setattr path c1t20d0 pathtype=nopreferred

preferred [priority=N]: Specifies a path as preferred, and optionally assigns a priority number to it. If specified, the priority number must be an integer that is greater than or equal to one. Higher priority numbers indicate that a path is able to carry a greater I/O load.

Note: Setting a priority for a path does not change the I/O policy. The I/O policy must be set independently as described in Specifying the I/O Policy.

This example first sets the I/O policy to priority for an Active/Active disk array, and then specifies a preferred path with an assigned priority of 2:

# vxdmpadm setattr enclosure enc0 iopolicy=priority
# vxdmpadm setattr path c1t20d0 pathtype=preferred \
  priority=2

primary: Defines a path as being the primary path for an Active/Passive disk array. The following example specifies a primary path for an A/P disk array:

# vxdmpadm setattr path c3t10d0 pathtype=primary

secondary: Defines a path as being the secondary path for an Active/Passive disk array. This example specifies a secondary path for an A/P disk array:

# vxdmpadm setattr path c4t10d0 pathtype=secondary

standby: Marks a path as a standby (failover) path, which is not used for normal I/O scheduling. Such a path is used only if there are no active paths available for I/O. The next example specifies a standby path for an A/P-C disk array:

# vxdmpadm setattr path c2t10d0 pathtype=standby

Displaying the I/O Policy

To display the current and default settings of the I/O policy for an enclosure, array or array type, use the vxdmpadm getattr command.

The following example displays the default and current setting of iopolicy for JBOD disks:

# vxdmpadm getattr enclosure Disk iopolicy

ENCLR_NAME    DEFAULT    CURRENT
---------------------------------------
Disk          balanced   MinimumQ

The next example displays the setting of partitionsize for the enclosure enc0, on which the balanced I/O policy with a partition size of 2MB has been set:

# vxdmpadm getattr enclosure enc0 partitionsize

ENCLR_NAME    DEFAULT    CURRENT
---------------------------------------
enc0          1024       2048
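Before changing I/O policies, it can be useful to record the current settings for every enclosure so that they can be restored later. The following is a minimal sketch, not part of the product; it assumes that the enclosure name appears in the first column of the listenclosure all output, after two header lines, as in the example shown earlier:

#!/bin/sh
# Sketch: report the iopolicy and partitionsize settings for every
# enclosure on the system. Assumes the listenclosure output layout
# shown earlier (two header lines, enclosure name in column one).
vxdmpadm listenclosure all | awk 'NR > 2 { print $1 }' |
while read enclr; do
    echo "==== $enclr ===="
    vxdmpadm getattr enclosure $enclr iopolicy
    vxdmpadm getattr enclosure $enclr partitionsize   # meaningful for the balanced policy
done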
Specifying the I/O Policy

You can use the vxdmpadm setattr command to change the I/O policy for distributing I/O load across multiple paths to a disk array or enclosure. You can set policies for an enclosure (for example, HDS01), for all enclosures of a particular type (such as HDS), or for all enclosures of a particular array type (A/A for Active/Active, or A/P for Active/Passive).

Note: Starting with release 4.1 of VxVM, I/O policies are recorded in the file /etc/vx/dmppolicy.info, and are persistent across reboots of the system. Do not edit this file yourself.

The following policies may be set:

adaptive: This policy attempts to maximize overall I/O throughput from/to the disks by dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can vary over time. For example, I/O from/to a database may exhibit both long transfers (table scans) and short transfers (random lookups). The policy is also useful for a SAN environment where different paths may have different numbers of hops. No further configuration is possible as this policy is automatically managed by DMP.

In this example, the adaptive I/O policy is set for the enclosure enc1:

# vxdmpadm setattr enclosure enc1 iopolicy=adaptive

balanced [partitionsize=size]: This policy is designed to optimize the use of caching in disk drives and RAID controllers, and is the default policy for A/A arrays. The size of the cache typically ranges from 120KB to 500KB or more, depending on the characteristics of the particular hardware. During normal operation, the disks (or LUNs) are logically divided into a number of regions (or partitions), and I/O from/to a given region is sent on only one of the active paths. Should that path fail, the workload is automatically redistributed across the remaining paths.

You can use the size argument to the partitionsize attribute to specify the partition size. The partition size in blocks is adjustable in powers of 2 from 2 up to 2^31. The default value for the partition size is 1024 blocks (1MB). A value that is not a power of 2 is silently rounded down to the nearest acceptable value. Specifying a partition size of 0 is equivalent to the default partition size of 1024 blocks (1MB).

For example, the suggested partition size for a Hitachi HDS 9960 A/A array is from 16,384 to 65,536 blocks (16MB to 64MB) for an I/O activity pattern that consists mostly of sequential reads or writes.

Note: The benefit of this policy is lost if the value is set larger than the cache size. The default value can be changed by adjusting the value of a tunable parameter (see dmp_pathswitch_blks_shift) and rebooting the system.

The next example sets the balanced I/O policy with a partition size of 2048 blocks (2MB) on the enclosure enc0:

# vxdmpadm setattr enclosure enc0 iopolicy=balanced \
  partitionsize=2048

minimumq: This policy sends I/O on paths that have the minimum number of outstanding I/O requests in the queue for a LUN. This is suitable for low-end disks or JBODs where a significant track cache does not exist. No further configuration is possible as DMP automatically determines the path with the shortest queue.

The following example sets the I/O policy to minimumq for a JBOD:

# vxdmpadm setattr enclosure Disk iopolicy=minimumq

priority: This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually. You can assign priorities to each path based on your knowledge of the configuration and performance characteristics of the available paths, and of other aspects of your system. See Setting the Attributes of the Paths to an Enclosure for details of how to assign priority values to individual paths.

In this example, the I/O policy is set to priority for all SENA arrays:

# vxdmpadm setattr arrayname SENA iopolicy=priority

round-robin: This policy shares I/O equally between the paths in a round-robin sequence. For example, if there are three paths, the first I/O request would use one path, the second would use a different path, the third would be sent down the remaining path, the fourth would go down the first path, and so on.
No further configuration is possible as this policy is automatically managed by DMP. This is the default policy for A/P-C configurations with multiple active paths per controller.

The next example sets the I/O policy to round-robin for all Active/Active arrays:

# vxdmpadm setattr arraytype A/A iopolicy=round-robin

singleactive: This policy routes I/O down one single active path. This is the default policy for A/P arrays with one active path per controller, where the other paths are used in case of failover. If configured for A/A arrays, there is no load balancing across the paths, and the alternate paths are only used to provide high availability (HA). If the currently active path fails, I/O is switched to an alternate active path. No further configuration is possible as the single active path is selected by DMP.

The following example sets the I/O policy to singleactive for JBOD disks:

# vxdmpadm setattr arrayname DISK iopolicy=singleactive

Example of Applying Load Balancing in a SAN

This example describes how to configure load balancing in a SAN environment where there are multiple primary paths to an Active/Passive device through several SAN switches. As can be seen in this sample output from the vxdisk list command, the device c3t2d15 has eight primary paths:

# vxdisk list c3t2d15
Device: c3t2d15
...
numpaths: 8
c2t0d15  state=enabled  type=primary
c2t1d15  state=enabled  type=primary
c3t1d15  state=enabled  type=primary
c3t2d15  state=enabled  type=primary
c4t2d15  state=enabled  type=primary
c4t3d15  state=enabled  type=primary
c5t3d15  state=enabled  type=primary
c5t4d15  state=enabled  type=primary

In addition, the device is in the enclosure ENC0, belongs to the disk group mydg, and contains a simple concatenated volume myvol1.

The first step is to enable the gathering of DMP statistics:

# vxdmpadm iostat start

Next, the dd command is used to apply an input workload from the volume:

# dd if=/dev/vx/rdsk/mydg/myvol1 of=/dev/null &

By running the vxdmpadm iostat command to display the DMP statistics for the device, it can be seen that all I/O is being directed to one path, c5t4d15:

# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
...
cpu usage = 11294us    per cpu memory = 32768b
           OPERATIONS          KBYTES          AVG TIME(ms)
PATHNAME   READS   WRITES   READS   WRITES   READS      WRITES
c2t0d15    0       0        0       0        0.000000   0.000000
c2t1d15    0       0        0       0        0.000000   0.000000
c3t1d15    0       0        0       0        0.000000   0.000000
c3t2d15    0       0        0       0        0.000000   0.000000
c4t2d15    0       0        0       0        0.000000   0.000000
c4t3d15    0       0        0       0        0.000000   0.000000
c5t3d15    0       0        0       0        0.000000   0.000000
c5t4d15    5493    0        5493    0        0.411069   0.000000

The vxdmpadm command is used to display the I/O policy for the enclosure that contains the device:

# vxdmpadm getattr enclosure ENC0 iopolicy

ENCLR_NAME    DEFAULT         CURRENT
============================================
ENC0          Single-Active   Single-Active

This shows that the policy for the enclosure is set to singleactive, which explains why all the I/O is taking place on one path.

To balance the I/O load across the multiple primary paths, the policy is set to round-robin as shown here:

# vxdmpadm setattr enclosure ENC0 iopolicy=round-robin
# vxdmpadm getattr enclosure ENC0 iopolicy

ENCLR_NAME    DEFAULT         CURRENT
============================================
ENC0          Single-Active   Round-Robin

The DMP statistics are now reset:

# vxdmpadm iostat reset

With the workload still running, the effect of changing the I/O policy to balance the load across the primary paths can now be seen:

# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
...
cpu usage = 14403us    per cpu memory = 32768b
           OPERATIONS          KBYTES          AVG TIME(ms)
PATHNAME   READS   WRITES   READS   WRITES   READS      WRITES
c2t0d15    1021    0        1021    0        0.396670   0.000000
c2t1d15    947     0        947     0        0.391763   0.000000
c3t1d15    1004    0        1004    0        0.393426   0.000000
c3t2d15    1027    0        1027    0        0.402142   0.000000
c4t2d15    1086    0        1086    0        0.390424   0.000000
c4t3d15    1048    0        1048    0        0.391221   0.000000
c5t3d15    1036    0        1036    0        0.390927   0.000000
c5t4d15    1021    0        1021    0        0.392752   0.000000

The enclosure can be returned to the single active I/O policy by entering the following command:

# vxdmpadm setattr enclosure ENC0 iopolicy=singleactive

Disabling a Controller

Note: This operation is not supported for controllers that are used to access disk arrays on which cluster-shareable disk groups are configured.

Disabling I/O to a host disk controller prevents DMP from issuing I/O through the specified controller. The command blocks until all pending I/O issued through the specified disk controller has completed.

To disable a controller, use the following command:

# vxdmpadm [-f] disable ctlr=ctlr_name

The disable operation fails if it is issued to a controller that is connected to the root disk through a single path. Similarly, if there is only a single path connected to a disk, the disable command fails with an error message. Use the -f option to forcibly disable the controller.

Enabling a Controller

Note: This operation is not supported for controllers that are used to access disk arrays on which cluster-shareable disk groups are configured.

Enabling a controller allows a previously disabled host disk controller to accept I/O. This operation succeeds only if the controller is accessible to the host and I/O can be performed on it. When connecting Active/Passive disk arrays in a non-clustered environment, the enable operation results in failback of I/O to the primary path. The enable operation can also be used to allow I/O to the controllers on a system board that was previously detached.

To enable a controller, use the following command:

# vxdmpadm enable ctlr=ctlr_name
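For planned maintenance on a controller, the disable and enable operations above are typically wrapped with a check of the controller state. The following is a minimal sketch, not part of the product, that uses only the commands documented in this section; the controller name c3 is a placeholder, and the grep on the listctlr output is an assumed convenience check:

#!/bin/sh
# Sketch: take a controller offline for maintenance and bring it back.
# Uses only commands documented above; c3 is a placeholder name.
ctlr=${1:-c3}

vxdmpadm disable ctlr=$ctlr              # stop DMP issuing I/O through it
vxdmpadm listctlr all | grep "^$ctlr "   # confirm the reported STATE

# ... perform the maintenance on the controller ...

vxdmpadm enable ctlr=$ctlr               # allow I/O through it again
vxdmpadm listctlr all | grep "^$ctlr "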
Renaming an Enclosure

The vxdmpadm setattr command can be used to assign a meaningful name to an existing enclosure, for example:

# vxdmpadm setattr enclosure enc0 name=GRP1

This example changes the name of an enclosure from enc0 to GRP1.

Note: The maximum length of the enclosure name prefix is 25 characters. The name must not contain an underbar character (_).

The following command shows the changed name:

# vxdmpadm listenclosure all

ENCLR_NAME   ENCLR_TYPE   ENCLR_SNO              STATUS
============================================================
other0       OTHER        OTHER_DISKS            CONNECTED
jbod0        X1           X1_DISKS               CONNECTED
GRP1         ACME         60020f20000001a90000   CONNECTED

Starting the DMP Restore Daemon

The DMP restore daemon re-examines the condition of paths at a specified interval. The type of analysis it performs on the paths depends on the specified checking policy.

Note: The DMP restore daemon does not change the disabled state of the path through a controller that you have disabled using vxdmpadm disable.

Use the start restore command to start the restore daemon and specify one of the following policies:

check_all: The restore daemon analyzes all paths in the system, revives the paths that are back online, and disables the paths that are inaccessible. The command to start the restore daemon with this policy is:

# vxdmpadm start restore policy=check_all [interval=seconds]

check_alternate: The restore daemon checks that at least one alternate path is healthy. It generates a notification if this condition is not met. This policy avoids inquiry commands on all healthy paths, and is less costly than check_all in cases where a large number of paths are available. This policy is the same as check_all if there are only two paths per DMP node. The command to start the restore daemon with this policy is:

# vxdmpadm start restore policy=check_alternate [interval=seconds]

check_disabled: This is the default policy. The restore daemon checks the condition of paths that were previously disabled due to hardware failures, and revives them if they are back online. The command to start the restore daemon with this policy is:

# vxdmpadm start restore policy=check_disabled [interval=seconds]

check_periodic: The restore daemon performs check_all once in a given number of cycles, and check_disabled in the remainder of the cycles. This policy may lead to periodic slowing down (due to check_all) if there are a large number of paths available. The command to start the restore daemon with this policy is:

# vxdmpadm start restore policy=check_periodic interval=seconds \
  [period=number]

The interval attribute must be specified for this policy. The default number of cycles between running the check_all policy is 10.

The interval attribute specifies how often the restore daemon examines the paths. For example, after stopping the restore daemon, the polling interval can be set to 400 seconds using the following command:

# vxdmpadm start restore interval=400

Note: The default interval is 300 seconds. Decreasing this interval can adversely affect system performance.

To change the interval or policy, you must first stop the restore daemon, and then restart it with the new attributes.

See the vxdmpadm(1M) manual page for more information about DMP restore policies.

Stopping the DMP Restore Daemon

Use the following command to stop the DMP restore daemon:

# vxdmpadm stop restore

Note: Automatic path failback stops if the restore daemon is stopped.

Displaying the Status of the DMP Restore Daemon

Use the following command to display the status of the automatic path restoration daemon, its polling interval, and the policy that it uses to check the condition of paths:

# vxdmpadm stat restored

This produces output such as the following:

The number of daemons running : 1
The interval of daemon: 300
The policy of daemon: check_disabled

Displaying Information About the DMP Error Daemons

To display the number of error daemons that are running, use the following command:

# vxdmpadm stat errord
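Because the interval and policy can only be changed by stopping and restarting the restore daemon, the two steps are commonly scripted together. The following is a minimal sketch using only the commands shown above; the policy and interval values are placeholders:

#!/bin/sh
# Sketch: change the restore daemon's checking policy and polling
# interval. The daemon must be stopped before it can be restarted
# with new attributes, as described above.
policy=${1:-check_disabled}   # placeholder policy
secs=${2:-300}                # placeholder polling interval in seconds

vxdmpadm stop restore
vxdmpadm start restore policy=$policy interval=$secs
vxdmpadm stat restored        # confirm the new interval and policy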
Configuring Array Policy Modules

An array policy module (APM) is a dynamically loadable kernel module that may be provided by some vendors for use in conjunction with an array. An APM defines procedures that DMP uses for the array, such as how I/O paths are selected and how path failover is handled. DMP supplies default procedures for these functions when an array is registered. An APM may modify some or all of the existing procedures that are provided by DMP or by another version of the APM.

You can use the following command to display all the APMs that are configured for a system:

# vxdmpadm listapm all

The output from this command includes the file name of each module, the supported array type, the APM name, the APM version, and whether the module is currently in use (loaded).

To see detailed information for an individual module, specify the module name as the argument to the command:

# vxdmpadm listapm module_name

To add and configure an APM, use the following command:

# vxdmpadm -a cfgapm module_name [attr1=value1 [attr2=value2 ...]]

The optional configuration attributes and their values are specific to the APM for an array. Consult the documentation that is provided by the array vendor for details.

Note: By default, DMP uses the most recent APM that is available. Specify the -u option instead of the -a option if you want to force DMP to use an earlier version of the APM. The current version of an APM is replaced only if it is not in use.

Specifying the -r option allows you to remove an APM that is not currently loaded:

# vxdmpadm -r cfgapm module_name

For more information about configuring APMs, see the vxdmpadm(1M) manual page.
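If several APMs are configured, the summary and detailed forms of listapm can be combined to produce a full report. The following is a minimal sketch, not part of the product; the assumption that the module name appears in the first field of each data line of the listapm all output, after a single header line, may need adjusting to match the actual layout on your system:

#!/bin/sh
# Sketch: print detailed information for every configured APM.
# Assumes (this is an assumption, not documented output) that the
# module name is the first field of each data line from
# "vxdmpadm listapm all" and that the first line is a header.
vxdmpadm listapm all | awk 'NR > 1 { print $1 }' |
while read module; do
    echo "==== $module ===="
    vxdmpadm listapm $module
done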