Tunable Parameters

The following sections describe specific tunable parameters.

dmp_enable_restore_daemon

Set to 1 to enable the DMP restore daemon; set to 0 to disable it.

dmp_failed_io_threshold

The time limit for an I/O request in DMP. If the time exceeds this value, the usual result is to mark the disk as bad.

dmp_pathswitch_blks_shift

The default number of contiguous I/O blocks (expressed as the integer exponent of a power of 2; for example, 10 represents 1024 blocks) that are sent along a DMP path to an Active/Active array before switching to the next available path. The default value of this parameter is set to 10 so that 1024 blocks (1MB) of contiguous I/O are sent over a DMP path before switching. For intelligent disk arrays with internal data caches, better throughput may be obtained by increasing the value of this tunable. For example, for the HDS 9960 A/A array, the optimal value is between 14 and 16 for an I/O activity pattern that consists mostly of sequential reads or writes.

Note: This parameter only affects the behavior of the balanced I/O policy. A value of 0 disables multipathing for the policy unless the vxdmpadm command is used to specify a different partition size as described in Specifying the I/O Policy.

dmp_restore_daemon_cycles

If the DMP restore policy is CHECK_PERIODIC, the number of cycles after which the CHECK_ALL policy is called.

dmp_restore_daemon_interval

The time in seconds between two invocations of the DMP restore daemon.

dmp_restore_daemon_policy

The DMP restore policy, which can be set to 0 (CHECK_ALL), 1 (CHECK_DISABLED), 2 (CHECK_PERIODIC), or 3 (CHECK_ALTERNATE).

dmp_retry_count

If an inquiry succeeds on a path, but there is an I/O error, the number of retries to attempt on the path.

vol_checkpt_default

The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint. A system failure during such operations does not require a full recovery, but can continue from the last reached checkpoint. The default value of the checkpoint is 10240 sectors (10MB). Increasing this size reduces the overhead of checkpointing on recovery operations at the expense of additional recovery following a system failure during a recovery.

vol_default_iodelay

The count in clock ticks for which utilities pause if they have been directed to reduce the frequency of issuing I/O requests, but have not been given a specific delay time. This tunable is used by utilities performing operations such as resynchronizing mirrors or rebuilding RAID-5 columns. The default for this tunable is 50 ticks. Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed.

vol_fmr_logsz

The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed. For example, if the volume size is 1 gigabyte and the system block size is 1024 bytes, a vol_fmr_logsz value of 4 yields a map that contains 32,768 bits, each bit representing one region of 32 blocks. The larger the bitmap size, the fewer the number of blocks that are mapped to each bit. This can reduce the amount of reading and writing required on resynchronization, at the expense of requiring more non-pageable kernel memory for the bitmap.

Additionally, on clustered systems, a larger bitmap size increases I/O latency, and it also increases the load on the private network between the cluster members. This is because every other member of the cluster must be informed each time a bit in the map is marked. Since the region size must be the same on all nodes in a cluster for a shared volume, the value of the vol_fmr_logsz tunable on the master node overrides the tunable values on the slave nodes, if these values are different. Because the value of a shared volume can change, the value of vol_fmr_logsz is retained for the life of the volume. In configurations that have thousands of mirrors with attached snapshot plexes, the total memory overhead can represent a significantly higher overhead in memory consumption than is usual for VxVM. The default value of this tunable is 4KB. The minimum and maximum permitted values are 1KB and 8KB.

Note: The value of this tunable does not have any effect on Persistent FastResync.
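The relationship between volume size, map size, and region size described above can be checked with a short calculation. The following Python sketch is purely illustrative; the function name and inputs are invented for this example and are not part of VxVM:

    # Illustrative only: reproduces the worked example above
    # (1GB volume, 1024-byte blocks, vol_fmr_logsz = 4KB).
    def fmr_region_blocks(volume_bytes, block_size, vol_fmr_logsz_kb):
        """Blocks tracked by each bit of the Non-Persistent FastResync map."""
        map_bits = vol_fmr_logsz_kb * 1024 * 8      # 4KB map -> 32,768 bits
        volume_blocks = volume_bytes // block_size  # 1GB -> 1,048,576 blocks
        return volume_blocks // map_bits            # -> 32 blocks per region

    print(fmr_region_blocks(1 << 30, 1024, 4))      # prints 32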
vol_max_vol

The maximum number of volumes that can be created on the system. This value can be set to between 1 and the maximum number of minor numbers representable on the system. The default value for this tunable is 16777215.

vol_maxio

The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit. The default value for this tunable is 256 sectors (256KB).

Note: The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio. If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.
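The note above ties vol_maxio to two tunables that are described later in this section. As an informal illustration only (the function and parameter names are invented for this sketch, and expressing voldrl_min_regionsz in kilobytes assumes the 1024-byte sector size implied by the figures in this section), the relationships can be written as:

    # Illustrative sanity check of the sizing rules quoted above.
    # The values passed in are the defaults quoted in this section,
    # not recommendations.
    def check_vol_maxio(vol_maxio_kb, voliomem_maxpool_sz_kb,
                        voldrl_min_regionsz_kb, sequential_drl=False):
        assert voliomem_maxpool_sz_kb >= 10 * vol_maxio_kb, \
            "voliomem_maxpool_sz must be at least 10 x vol_maxio"
        if sequential_drl:
            assert voldrl_min_regionsz_kb >= vol_maxio_kb / 2, \
                "voldrl_min_regionsz must be at least half of vol_maxio"

    # Defaults: vol_maxio 256KB, voliomem_maxpool_sz 4MB (4096KB),
    # voldrl_min_regionsz 512 sectors (512KB assuming 1024-byte sectors).
    check_vol_maxio(256, 4096, 512, sequential_drl=True)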
vol_maxioctl

The maximum size of data that can be passed into VxVM via an ioctl call. Increasing this limit allows larger operations to be performed. Decreasing the limit is not generally recommended, because some utilities depend upon performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests. The default value for this tunable is 32768 bytes (32KB).

vol_maxkiocount

The maximum number of I/O operations that can be performed by VxVM in parallel. Additional I/O requests that attempt to use a volume device are queued until the current activity count drops below this value. The default value for this tunable is 2048. Because most process threads can only issue a single I/O request at a time, reaching the limit of active I/O requests in the kernel requires 2048 I/O operations to be performed in parallel. Raising this limit is unlikely to provide much benefit except on the largest of systems.

vol_maxparallelio

The number of I/O operations that the vxconfigd(1M) daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ or VOL_VOLDIO_WRITE ioctl call. The default value for this tunable is 256. Changing this value is not recommended.

vol_maxspecialio

The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request that a large I/O operation be performed. This tunable limits the size of these I/O requests. If necessary, a request that exceeds this value can be failed, or the request can be broken up and performed synchronously. The default value for this tunable is 256 sectors (256KB).

Raising this limit can cause problems if an I/O request requires more memory or kernel virtual mapping space than is available, which can result in deadlock. The maximum limit for vol_maxspecialio is 20% of the smaller of physical memory or kernel virtual memory. It is inadvisable to go over this limit, because deadlock is likely to occur.

If stripes are larger than vol_maxspecialio, full-stripe I/O requests are broken up, which prevents full-stripe reads and writes. This throttles the volume I/O throughput for sequential I/O or larger I/O requests. This tunable limits the size of an I/O request at a higher level in VxVM than the level of an individual disk. For example, for an 8 by 64KB stripe, a value of 256KB only allows I/O requests that use half the disks in the stripe; thus, it cuts potential throughput in half. If you have more columns or have used a larger interleave factor, your relative performance is worse. This tunable must be set, as a minimum, to the size of your largest stripe (RAID-0 or RAID-5).
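The stripe example above generalizes easily: the full-stripe width is the number of columns multiplied by the stripe unit size, and vol_maxspecialio should be at least that large. A minimal Python sketch of this arithmetic (the function name and inputs are illustrative, not part of VxVM):

    # Illustrative: minimum vol_maxspecialio for full-stripe I/O.
    def min_vol_maxspecialio_kb(columns, stripe_unit_kb):
        """Full-stripe width in KB; vol_maxspecialio should be at least this."""
        return columns * stripe_unit_kb

    # The 8 x 64KB stripe from the example needs 512KB; with the 256KB
    # default, a single request reaches only half of the columns.
    print(min_vol_maxspecialio_kb(8, 64))   # prints 512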
vol_subdisk_num

The maximum number of subdisks that can be attached to a single plex. There is no theoretical limit to this number, but it has been limited to a default value of 4096. This default can be changed, if required.

volcvm_smartsync

If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups. See SmartSync Recovery Accelerator for more information.

voldrl_max_drtregs

The maximum number of dirty regions that can exist on the system for non-sequential DRL on volumes. A larger value may result in improved system performance at the expense of recovery time. This tunable can be used to regulate the worst-case recovery time for the system following a failure. The default value for this tunable is 2048.

voldrl_max_seq_dirty

The maximum number of dirty regions allowed for sequential DRL. This is useful for volumes that are usually written to sequentially, such as database logs. Limiting the number of dirty regions allows for faster recovery if a crash occurs. The default value for this tunable is 3.

voldrl_min_regionsz

The minimum number of sectors for a dirty region logging (DRL) volume region. With DRL, VxVM logically divides a volume into a set of consecutive regions. Larger region sizes tend to improve the cache hit ratio for regions, which improves write performance, but they also prolong recovery time. The VxVM kernel currently sets the default value for this tunable to 512 sectors.

Note: If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio.

voliomem_chunk_size

The granularity of memory chunks used by VxVM when allocating or releasing system memory. A larger granularity reduces the CPU overhead of memory allocation by allowing VxVM to retain a larger amount of memory. The default size for this tunable is 64KB.

voliomem_maxpool_sz

The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM, as it prevents one I/O operation from using all the memory in the system. VxVM allocates two pools that can grow up to voliomem_maxpool_sz, one for RAID-5 and one for mirrored volumes. A write request to a RAID-5 volume that is greater than voliomem_maxpool_sz/10 is broken up and performed in chunks of size voliomem_maxpool_sz/10. A write request to a mirrored volume that is greater than voliomem_maxpool_sz/2 is broken up and performed in chunks of size voliomem_maxpool_sz/2. The default value for this tunable is 4MB.

Note: The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio.
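The chunking rules above can be illustrated with a short calculation. This is only a sketch of the arithmetic the text describes; the function is invented for this example and is not part of VxVM:

    # Illustrative: how a large write is divided according to the rules above.
    def write_chunks_kb(request_kb, voliomem_maxpool_sz_kb, raid5=False):
        """Return the chunk sizes a write request is split into."""
        limit = voliomem_maxpool_sz_kb // (10 if raid5 else 2)
        if request_kb <= limit:
            return [request_kb]
        full, rest = divmod(request_kb, limit)
        return [limit] * full + ([rest] if rest else [])

    # With the 4MB (4096KB) default, a mirrored write larger than 2048KB is
    # split: a 6MB (6144KB) request becomes three 2048KB chunks.
    print(write_chunks_kb(6144, 4096))   # [2048, 2048, 2048]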
voliot_errbuf_dflt

The default size of the buffer maintained for error tracing events. This buffer is allocated at driver load time and its size cannot be adjusted while VxVM is running. The default size for this buffer is 16384 bytes (16KB). Increasing this buffer can provide storage for more error events at the expense of system memory. Decreasing the size of the buffer can result in an error not being detected via the tracing device. Applications that depend on error tracing to perform some responsive action are dependent on this buffer.

voliot_iobuf_default

The default size of a tracing buffer that is created when no kernel buffer size is specified as part of the trace ioctl. The default size of this tunable is 8192 bytes (8KB). If trace data is often being lost because this buffer size is too small, this value can be increased.

voliot_iobuf_limit

The upper limit on the amount of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool. Increasing this size allows additional tracing to be performed at the expense of system memory usage. Setting this value to a size greater than can readily be accommodated on the system is inadvisable. The default value for this tunable is 131072 bytes (128KB).

voliot_iobuf_max

The maximum buffer size that can be used for a single trace buffer. Requests for a buffer larger than this size are silently truncated to this size. A request for a maximal buffer size from the tracing interface results (subject to limits of usage) in a buffer of this size. The default size for this buffer is 65536 bytes (64KB). Increasing this buffer allows larger traces to be taken without loss for very heavily used volumes. Take care not to increase this value above the value of the voliot_iobuf_limit tunable.

voliot_max_open

The maximum number of tracing channels that can be open simultaneously. Tracing channels are clone entry points into the tracing device driver. Each vxtrace process running on a system consumes a single trace channel. The default number of channels is 32. The allocation of each channel takes up approximately 20 bytes even when not in use.

volpagemod_max_memsz

The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata. This tunable has a default value of 6144KB (6MB) of physical memory.

Note: The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.

Setting the value of volpagemod_max_memsz below 512KB fails if cache objects or volumes that have been prepared for instant snapshot operations are present on the system. If you do not use the FastResync or DRL features that are implemented using a version 20 DCO volume, the value of volpagemod_max_memsz can be set to 0. However, if you subsequently decide to enable these features, you can use the vxtune command to change the value to a more appropriate one:

# vxtune volpagemod_max_memsz value

where the new value is specified in kilobytes. A change made with the vxtune command does not persist across system reboots unless you also adjust the value that is configured in the /stand/system file.

volraid_minpool_sz

The initial amount of memory that is requested from the system by VxVM for RAID-5 operations. The maximum size of this memory pool is limited by the value of voliomem_maxpool_sz. The default value for this tunable is 16384 sectors (16MB).

volraid_rsrtransmax

The maximum number of transient reconstruct operations that can be performed in parallel for RAID-5. A transient reconstruct operation is one that occurs on a non-degraded RAID-5 volume and was not predicted. Limiting the number of these operations that can occur simultaneously removes the possibility of flooding the system with many reconstruct operations, and so reduces the risk of memory starvation. The default number of transient reconstruct operations that can be performed in parallel is 1. Increasing this value improves the initial performance on the system when a failure first occurs and before a detach of a failing object is performed, but can lead to memory starvation.
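Several of the defaults in this section are quoted both in sectors and in bytes; the paired figures (for example, 256 sectors = 256KB and 16384 sectors = 16MB) imply a 1024-byte sector size on this platform. A small, purely illustrative conversion helper under that assumption:

    # Illustrative only: convert the sector-based defaults quoted above,
    # assuming the 1024-byte sector size implied by the figures in the text.
    SECTOR_BYTES = 1024

    def sectors_to_kb(sectors):
        return sectors * SECTOR_BYTES // 1024

    print(sectors_to_kb(10240))   # vol_checkpt_default: 10240KB (10MB)
    print(sectors_to_kb(256))     # vol_maxio / vol_maxspecialio: 256KB
    print(sectors_to_kb(16384))   # volraid_minpool_sz: 16384KB (16MB)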