Oracle® Clusterware Installation Guide 11g Release 1 (11.1) for Linux Part Number B28263-01 |
This chapter includes storage administration tasks that you should complete if you intend to use Oracle Clusterware with Oracle Real Application Clusters (Oracle RAC).
This chapter contains the following topics:
Reviewing Storage Options for Oracle Database and Recovery Files
Configuring Storage for Oracle Database Files on a Supported Shared File System
Configuring Storage for Oracle Database Files on Shared Storage Devices
This section describes supported options for storing Oracle Database files and recovery files.
See Also:
The Oracle Certify site for a list of supported vendors for Network Attached Storage options: http://www.oracle.com/technology/support/metalink/
Refer also to the Certify site on OracleMetalink for the most current information about certified storage options:
https://metalink.oracle.com/
There are three ways of storing Oracle Database and recovery files:
Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files. It performs striping and mirroring of database files automatically.
A supported shared file system: Supported file systems include the following:
A supported cluster file system: Note that if you intend to use a cluster file system for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware. If you intend to store Oracle Clusterware files on OCFS2, then you must ensure that OCFS2 volume sizes are at least 500 MB each.
See Also:
The Certify page on OracleMetaLink for supported cluster file systems

NAS Network File System (NFS) listed on Oracle Certify: Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
See Also:
The Certify page on OracleMetaLink for supported Network Attached Storage (NAS) devices, and supported cluster file systems

Block or Raw Devices: A partition is required for each database file. If you do not use ASM, then for new installations on raw devices, you must use a custom installation.
Note:
On Linux, Oracle recommends using block devices for new installations.

For all installations, you must choose the storage option that you want to use for Oracle Database files, or for Oracle Clusterware with Oracle RAC. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.
For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use OCFS2, ASM, block devices (on Linux), or shared raw disks if you do not want the failover processing to include dismounting and remounting of local file systems.
The following table shows the storage options supported for storing Oracle Database files and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.
Note:
For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certify pages on the OracleMetaLink Web site: https://metalink.oracle.com
Table 4-1 Supported Storage Options for Oracle Database and Recovery Files
Storage Option | Database | Recovery
---|---|---
Automatic Storage Management | Yes | Yes
OCFS2 | Yes | Yes
Red Hat Global File System (GFS); for Red Hat Enterprise Linux and Oracle Enterprise Linux | Yes | Yes
Local storage | No | No
NFS file system (Note: requires a certified NAS device) | Yes | Yes
Shared raw devices | Yes | No
Shared block devices | Yes | No
Use the following guidelines when choosing the storage options that you want to use for each file type:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.
For Standard Edition Oracle RAC installations, ASM is the only supported storage option for database or recovery files.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.
If you intend to use ASM with Oracle RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:
All nodes on the cluster have the 11g release 1 (11.1) version of Oracle Clusterware installed.
Any existing ASM instance on any node in the cluster is shut down.
If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with ASM instances, then you must ensure that your system meets the following conditions:
Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the Oracle RAC database or Oracle RAC database with ASM instance is located.
The Oracle RAC database or Oracle RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing Oracle RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only 2 nodes of the cluster, removing the third instance in the upgrade.
See Also:
Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database

If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
After you have installed and configured Oracle Clusterware storage, and after you have reviewed your disk storage options for Oracle Database files, you must perform the following tasks in the order listed:
1: Check for available shared storage with CVU
Refer to Checking for Available Shared Storage with CVU.
2: Configure storage for Oracle Database files and recovery files
To use a shared file system for database or recovery file storage, refer to Configuring Storage for Oracle Database Files on a Supported Shared File System, and ensure that in addition to the volumes you create for Oracle Clusterware files, you also create additional volumes with sizes sufficient to store database files.
To use Automatic Storage Management for database or recovery file storage, refer to "Configuring Disks for Automatic Storage Management with ASMLIB"
To use shared devices for database file storage, refer to "Configuring Storage for Oracle Database Files on Shared Storage Devices".
Note:
If you choose to configure database files on raw devices, note that you must complete database software installation first, and then configure storage after installation. You cannot use OUI to configure a database that uses raw devices for storage. In a future release, the option to use raw and block devices for database storage will become unavailable.
To check for all shared file systems available across all nodes on the cluster on a supported shared file system, log in as the installation owner user (oracle or crs), and use the following syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/sdb and /dev/sdc, and your mountpoint is /mnt/dvdrom/, then enter the following command:
$ /mnt/dvdrom/runcluvfy.sh comp ssa -n node1,node2 -s /dev/sdb,/dev/sdc
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
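The syntax above can be sketched as a small helper that assembles the runcluvfy.sh invocation from a node list and an optional device list. This is an illustrative sketch (the function name is not part of the CVU tooling); it only builds and prints the command line shown in the examples:

```shell
# Sketch: assemble the runcluvfy.sh command line for a shared storage
# accessibility (ssa) check. Mountpoint, node names, and device IDs are
# example values, not fixed requirements.
cluvfy_ssa_cmd() {
  mountpoint=$1
  nodes=$2
  devices=$3
  if [ -n "$devices" ]; then
    # Restrict the check to specific storage device IDs
    echo "$mountpoint/runcluvfy.sh comp ssa -n $nodes -s $devices"
  else
    # No -s option: CVU searches all storage devices visible to the nodes
    echo "$mountpoint/runcluvfy.sh comp ssa -n $nodes"
  fi
}

cluvfy_ssa_cmd /mnt/dvdrom node1,node2 /dev/sdb,/dev/sdc
```

Omitting the third argument reproduces the device-free form, which searches all available storage devices connected to the listed nodes.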
Database files consist of the files that make up the database, and the recovery area files. There are five options for storing database files:
Oracle Cluster File System (OCFS2) or Red Hat Global File System (GFS)
Network File System (NFS)
Automatic Storage Management (ASM)
Block devices (Database files only--not for the recovery area)
Raw devices (Database files only--not for the recovery area)
During configuration of Oracle Clusterware, if you selected OCFS2 or NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required preinstallation steps. You can proceed to Chapter 5, "Installing Oracle Clusterware".
If you want to place your database files on ASM, then proceed to Configuring Disks for Automatic Storage Management.
If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Storage for Oracle Database Files on Shared Storage Devices".
Note:
Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM. For OCFS2 certification status, refer to the Certify page on OracleMetaLink.

Review the following sections to complete storage requirements for Oracle Database files:
Requirements for Using a File System for Oracle Database Files
Enabling Direct NFS Client Oracle Disk Manager Control of NFS
Disabling Direct NFS Client Oracle Disk Management Control of NFS
Creating Required Directories for Oracle Database Files on Shared File Systems
To use a file system for Oracle Database files, the file system must comply with the following requirements:
To use a cluster file system, it must be a supported cluster file system, as listed in the section "Deciding to Use a Cluster File System for Data Files".
To use an NFS file system, it must be on a certified NAS device.
If you choose to place your database files on a shared file system, then one of the following must be true:
The disks used for the file system are on a highly available storage device, (for example, a RAID device that implements file redundancy).
The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
The oracle user must have write permissions to create the files in the path that you specify.
Use Table 4-2 to determine the partition size for shared file systems.
Table 4-2 Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size
---|---|---
Oracle Database files | 1 | At least 1.5 GB for each volume
Recovery files (Note: recovery files must be on a different volume than database files) | 1 | At least 2 GB for each volume
In Table 4-2, the total required volume size is cumulative. For example, to store all database files on the shared file system, you should have at least 3.5 GB of storage available over a minimum of two volumes.
For Linux x86 (32-bit) and x86 (64-bit) platforms, Oracle provides Oracle Cluster File System 2 (OCFS2), which is designed for the Linux 2.6 kernel. You can have a shared Oracle home on OCFS2.
If you have an existing Oracle installation, then use the following command to determine if OCFS2 is installed:
# rpm -qa | grep ocfs
To ensure that OCFS2 is loaded, enter the following command:
/etc/init.d/ocfs status
If you want to install the Oracle Database files on an OCFS2 file system, and the packages are not installed, then download them from the following Web site. Follow the instructions listed with the kit to install the packages and configure the file system:
OCFS2:
http://oss.oracle.com/projects/ocfs2/
Note:
For OCFS2 certification status, refer to the Certify page on OracleMetaLink: https://metalink.oracle.com
Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.
NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.
This section contains the following information about Direct NFS:
With Oracle Database 11g release 1 (11.1), instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS V3 servers directly using an Oracle internal Direct NFS client.
To enable Oracle Database to use Direct NFS, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. The mount options used in mounting the file systems are not relevant, as Direct NFS manages settings after installation. Refer to your vendor documentation to complete NFS configuration and mounting.
Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS to operate. To disable reserved port checking, consult your NFS file server documentation.
If you use Direct NFS, then you can choose to use a new file specific for Oracle datafile management, oranfstab, to specify additional options specific for Oracle Database to Direct NFS. For example, you can use oranfstab to specify additional paths for a mount point. You can add the oranfstab file either to /etc or to $ORACLE_HOME/dbs. The oranfstab file is not required to use NFS or Direct NFS.
With Oracle RAC installations, if you want to use Direct NFS, then you must replicate the file /etc/oranfstab on all nodes, and keep each /etc/oranfstab file synchronized on all nodes.

When the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file.
When the oranfstab file is placed in /etc, it is globally available to all Oracle databases, and can contain mount points used by all Oracle databases running on nodes in the cluster, including single-instance databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, then you must replicate the /etc/oranfstab file on all nodes, and keep each /etc/oranfstab file synchronized on all nodes, just as you must with the /etc/fstab file.
In all cases, mount points must be mounted by the kernel NFS system, even when they are being served using Direct NFS.
Direct NFS determines mount point settings to NFS storage devices based on the configurations in /etc/mtab, which are changed by configuring the /etc/fstab file.
Direct NFS searches for mount entries in the following order:
$ORACLE_HOME/dbs/oranfstab
/etc/oranfstab
/etc/mtab
Direct NFS uses the first matching entry found.
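The search order above can be sketched as a small shell function that returns the first existing configuration file (the function name is illustrative; Direct NFS itself performs this lookup internally):

```shell
# Sketch of the Direct NFS configuration lookup order: the first existing
# file among $ORACLE_HOME/dbs/oranfstab, /etc/oranfstab, and /etc/mtab wins.
dnfs_config_path() {
  for f in "$ORACLE_HOME/dbs/oranfstab" /etc/oranfstab /etc/mtab; do
    if [ -f "$f" ]; then
      echo "$f"
      return 0
    fi
  done
  return 1
}
```

For example, on a node where $ORACLE_HOME/dbs/oranfstab exists, that file shadows any entries in /etc/oranfstab.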
Note:
You can have only one active Direct NFS implementation for each instance. Using Direct NFS on an instance prevents the use of another Direct NFS implementation.

If Oracle Database uses Direct NFS mount points configured using oranfstab, then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS logs an informational message, and does not serve the NFS server. If Oracle Database is unable to open an NFS server using Direct NFS, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up as defined in "Checking NFS Mount Buffer Size Parameters for Oracle RAC". Additionally, an informational message is logged in the Oracle alert and trace files indicating that Direct NFS could not be established. The Oracle files resident on the NFS server that are served by the Direct NFS client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation.
Direct NFS can use up to four network paths defined in the oranfstab file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/O commands over any remaining paths.
Use the following views for Direct NFS management:
v$dnfs_servers: Shows a table of servers accessed using Direct NFS.
v$dnfs_files: Shows a table of files currently open using Direct NFS.
v$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files.
v$dnfs_stats: Shows a table of performance statistics for Direct NFS.
Complete the following procedure to enable Direct NFS:
Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS:

Server: The NFS server name.

Path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command.

Export: The exported path from the NFS server.

Mount: The local mount point for the NFS server.
Note:
On Linux and UNIX platforms, the location of the oranfstab file is $ORACLE_HOME/dbs.

The following is an example of an oranfstab file with two NFS server entries:

server: MyDataServer1
path: 132.34.35.12
path: 132.34.35.13
export: /vol/oradata1 mount: /mnt/oradata1

server: MyDataServer2
path: NfsPath1
path: NfsPath2
path: NfsPath3
path: NfsPath4
export: /vol/oradata2 mount: /mnt/oradata2
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
Oracle Database uses an ODM library, libnfsodm10.so, to enable Direct NFS. To replace the standard ODM library, $ORACLE_HOME/lib/libodm10.so, with the ODM NFS library, libnfsodm10.so, complete the following steps:
Use one of the following methods to disable the Direct NFS client:

Remove the oranfstab file.

Restore the stub libodm10.so file by reversing the process you completed in step 2b of "Enabling Direct NFS Client Oracle Disk Manager Control of NFS".

Remove the specific NFS server or export paths in the oranfstab file.
Note:
If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective.

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.
If you are using Direct NFS, then set the rsize and wsize values to 32768. Direct NFS will not serve an NFS server with write size values (wtmax) less than 32768.

For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/fstab file on each node with an entry similar to the following:
nfs_server:/vol/DATA/oradata /home/oracle/netapp nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
Note:
Refer to your storage vendor documentation for additional information about mount options.

Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for a RAC database).
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note:
The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.

Use the df -h command to determine the free disk space on each mounted file system.
From the display, identify the file systems that you want to use:
File Type | File System Requirements
---|---
Database files | Choose a file system with at least 1.5 GB of free disk space.
Recovery files | Choose a file system with at least 2 GB of free disk space.
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory and the recovery file directory.

If the user performing installation does not have write access, then you must create these directories manually. Use commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:
Database file directory:
# mkdir /mount_point/oradata
# chown oracle:oinstall /mount_point/oradata
# chmod 775 /mount_point/oradata
Recovery file directory (flash recovery area):
# mkdir /mount_point/flash_recovery_area
# chown oracle:oinstall /mount_point/flash_recovery_area
# chmod 775 /mount_point/flash_recovery_area
Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.
When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle Database shared storage.
This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks:
Identifying Storage Requirements for Automatic Storage Management
Configuring Disks for Automatic Storage Management with ASMLIB
Note:
For Automatic Storage Management installations: Although this section refers to disks, you can also use zero-padded files on a certified NAS storage device in an Automatic Storage Management disk group. Refer to Oracle Database Installation Guide for Linux for information about creating and configuring NAS-based files for use in an Automatic Storage Management disk group.
You can run ASM using ASMLIB, or run ASM using raw devices. Oracle recommends that you use raw devices only with upgrades, and migrate to other storage systems for Oracle Database files.
To identify the storage requirements for using Automatic Storage Management, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:
Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.
Note:
You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other.

If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.
If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option) then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or use different disk groups for each file type.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.
Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.
The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:
External redundancy
An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.
Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you select external redundancy only if you use RAID or similar devices that provide their own data protection mechanisms for disk devices.
Normal redundancy
In a normal redundancy disk group, to increase performance and reliability, Automatic Storage Management by default uses two-way mirroring. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.
For most installations, Oracle recommends that you select normal redundancy disk groups.
High redundancy
In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.
While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.
Determine the total amount of disk space that you require for the database files and recovery files.
Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:
Redundancy Level | Minimum Number of Disks | Database Files | Recovery Files | Both File Types |
---|---|---|---|---|
External | 1 | 1.15 GB | 2.3 GB | 3.45 GB |
Normal | 2 | 2.3 GB | 4.6 GB | 6.9 GB |
High | 3 | 3.45 GB | 6.9 GB | 10.35 GB |
For Oracle RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):
15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)
For example, for a four-node Oracle RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:
15 + (2 * 3) + (126 * 4) = 525
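The metadata formula above can be expressed as a small helper, useful for checking several cluster sizes quickly (the function name is illustrative, not part of any Oracle tool):

```shell
# Additional ASM metadata disk space in MB, per the formula:
#   15 + (2 * number_of_disks) + (126 * number_of_ASM_instances)
asm_metadata_mb() {
  disks=$1
  instances=$2
  echo $(( 15 + 2 * disks + 126 * instances ))
}

asm_metadata_mb 3 4   # four-node RAC, three disks: prints 525
```

The printed value matches the worked example in the text: 15 + 6 + 504 = 525 MB.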
If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.
The following section describes how to identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Automatic Storage Management disk group devices.
Note:
You need to complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.

If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.
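A disk group like the one described, with one failure group per controller, could be created with SQL similar to the following sketch. The disk group name, failure group names, and device paths are examples only; the FAILGROUP clause is standard ASM CREATE DISKGROUP syntax:

```sql
-- Two controllers, two disks each: a controller failure loses at most
-- one failure group, so the normal redundancy disk group survives.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/sda1', '/dev/sdb1'
  FAILGROUP controller2 DISK '/dev/sdc1', '/dev/sdd1';
```

ASM then mirrors each extent across the two failure groups rather than across arbitrary disks.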
Note:
If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.
Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. Logical volumes are not supported with Oracle RAC.
If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or to use an existing one.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.
Note:
The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine if an existing Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:
View the contents of the oratab file to determine if an Automatic Storage Management instance is configured on the system:

$ more /etc/oratab

If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

+ASM2:oracle_home_path
In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.
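Because ASM SIDs begin with a plus sign, a quick scan of an oratab-style file for ASM entries can be sketched as follows (the function name is illustrative; the file path is passed in so the sketch works against any copy of the file):

```shell
# Sketch: list ASM instance SIDs from an oratab-style file.
# oratab lines have the form SID:oracle_home_path:startup_flag,
# and ASM SIDs begin with "+ASM" by convention.
asm_sids() {
  grep '^+ASM' "$1" | cut -d: -f1
}
```

For example, run against /etc/oratab on a node with a second ASM instance, this would print +ASM2.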
Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.

Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege, and start the instance if necessary:
$ $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
SQL> STARTUP
Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.
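If you spool the query output to a file, the free-space check can be automated. The sketch below uses hypothetical spooled data; the column order follows the query above (NAME, TYPE, TOTAL_MB, FREE_MB), and the 2 GB threshold is only an example.

```shell
# Hypothetical spooled output of the V$ASM_DISKGROUP query; on a real
# system generate this file with the SQL*Plus SPOOL command.
cat > /tmp/diskgroups.txt <<'EOF'
DATA NORMAL 40960 12288
FRA EXTERN 20480 512
EOF

# Print the names of disk groups with at least 2 GB (2048 MB) free.
awk '$4 >= 2048 { print $1 }' /tmp/diskgroups.txt
```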
If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.
Note:
If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.

The Automatic Storage Management library driver (ASMLIB) simplifies the configuration and management of the disk devices by eliminating the need to rebind raw devices used with ASM each time the system is restarted.
A disk that is configured for use with Automatic Storage Management is known as a candidate disk.
If you intend to use Automatic Storage Management for database storage for Linux, then Oracle recommends that you install the ASMLIB driver and associated utilities, and use them to configure candidate disks.
To use the Automatic Storage Management library driver (ASMLIB) to configure Automatic Storage Management devices, complete the following tasks.
Note:
To create a database during the installation using the ASM library driver, you must choose an installation method that runs DBCA in interactive mode. For example, you can run DBCA in an interactive mode by choosing the Custom installation type, or the Advanced database configuration option. You must also change the default disk discovery string to ORCL:*.
If you are a member of the Unbreakable Linux Network, then you can install the ASMLIB rpms by subscribing to the Oracle Software for Enterprise Linux channel, and using up2date to retrieve the most current package for your system and kernel. For additional information, refer to the following URL:
http://www.oracle.com/technology/tech/linux/asmlib/uln.html
To install and configure the ASMLIB driver software manually, follow these steps:
Enter the following command to determine the kernel version and architecture of the system:
# uname -rm
Download the required ASMLIB packages from the OTN Web site:
http://www.oracle.com/technology/tech/linux/asmlib/index.html
Note:
You must install the oracleasm-support package version 2.0.1 or later to use ASMLIB on Red Hat Enterprise Linux Advanced Server, or SUSE Linux Enterprise Server.

You must install the following packages, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:
oracleasm-support-version.arch.rpm
oracleasm-kernel-version.arch.rpm
oracleasmlib-version.arch.rpm
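As a sketch, the three package file names can be composed from the kernel and architecture values. The kernel, architecture, and version values below are hypothetical; on a real system derive them with uname as shown in the comment.

```shell
# Hypothetical values; on a real system derive them with:
#   kernel=$(uname -r); arch=$(uname -m)
kernel="2.6.9-11.EL"
arch="x86_64"
version="2.0.1"

# Compose the three package file names in the pattern shown above.
support_pkg="oracleasm-support-${version}.${arch}.rpm"
driver_pkg="oracleasm-${kernel}-${version}.${arch}.rpm"
lib_pkg="oracleasmlib-${version}.${arch}.rpm"
printf '%s\n' "$support_pkg" "$driver_pkg" "$lib_pkg"
```

Matching the driver package name to the running kernel in this way avoids downloading an ASMLIB kernel module built for a different kernel.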
Switch user to the root user:
$ su -
Enter a command similar to the following to install the packages:
# rpm -Uvh oracleasm-support-version.arch.rpm \
      oracleasm-kernel-version.arch.rpm \
      oracleasmlib-version.arch.rpm
For example, if you are using the Red Hat Enterprise Linux AS 4 enterprise kernel on an AMD64 system, then enter a command similar to the following:
# rpm -Uvh oracleasm-support-2.0.1.x86_64.rpm \
      oracleasmlib-2.0.1.x86_64.rpm \
      oracleasm-2.6.9-11.EL-2.0.1.x86_64.rpm
Enter the following command to run the oracleasm initialization script with the configure option:
# /etc/init.d/oracleasm configure
Enter the following information in response to the prompts that the script displays:
Prompt | Suggested Response |
---|---|
Default user to own the driver interface: | Standard groups and users configuration: Specify the Oracle software owner user (typically, oracle). Job role separation groups and users configuration: Specify the ASM software owner user. |
Default group to own the driver interface: | Standard groups and users configuration: Specify the OSDBA group (typically dba). Job role separation groups and users configuration: Specify the OSASM group. |
Start Oracle Automatic Storage Management Library driver on boot (y/n): | Enter y to start the Oracle Automatic Storage Management library driver when the system starts. |
Fix permissions of Oracle ASM disks on boot? (y/n) | Specify the Oracle software owner (the owner of the RDBMS installation). |
The script completes the following tasks:
Creates the /etc/sysconfig/oracleasm configuration file
Creates the /dev/oracleasm mount point
Loads the oracleasm kernel module
Mounts the ASMLIB driver file system
Note:
The ASMLIB driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.

Repeat this procedure on all nodes in the cluster where you want to install Oracle RAC.
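Repeating the installation on each node can be scripted from one node. The loop below is a hedged sketch: the node names are hypothetical, and the loop only echoes each command (a dry run). Remove the echo to execute the commands over ssh as root.

```shell
# Hypothetical node names; substitute your own cluster node names.
nodes="node1 node2"

# Dry run: print the per-node command instead of executing it.
for node in $nodes; do
  echo ssh root@"$node" "rpm -Uvh oracleasm*.rpm && /etc/init.d/oracleasm configure"
done
```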
To configure the disk devices that you want to use in an Automatic Storage Management disk group, follow these steps:
If you intend to use IDE, SCSI, or RAID devices in the Automatic Storage Management disk group, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.
To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l
Depending on the type of disk, the device name can vary:
To include devices in a disk group, you can specify either whole-drive device names or partition device names.
Note:
Oracle recommends that you create a single whole-disk partition on each disk that you want to use.

Use either fdisk or parted to create a single whole-disk partition on the disk devices that you want to use.
Enter a command similar to the following to mark a disk as an Automatic Storage Management disk:
# /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
In this example, DISK1 is a name that you want to assign to the disk.
Note:
The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

If you are using a multi-pathing disk driver with Automatic Storage Management, then make sure that you specify the correct logical device name for the disk.
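The naming rule above can be checked in advance. The following helper function is a sketch (the function name is an illustration, not an Oracle utility): it accepts only names made of uppercase letters, digits, and underscores that start with an uppercase letter.

```shell
# Check a proposed ASM disk name against the naming rules: uppercase
# letters, digits, and underscores only, starting with an uppercase letter.
valid_asm_disk_name() {
  echo "$1" | grep -Eq '^[A-Z][A-Z0-9_]*$'
}

valid_asm_disk_name DISK1 && echo "DISK1: ok"
valid_asm_disk_name 1disk || echo "1disk: rejected"
```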
To make the disk available on the other nodes in the cluster, enter the following command as root on each node:
# /etc/init.d/oracleasm scandisks
This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks.
To administer the Automatic Storage Management library driver and disks, use the oracleasm initialization script with different options, as described in Table 4-3.
Table 4-3 ORACLEASM Script Options
Option | Description |
---|---|
configure | Use the configure option to reconfigure the ASMLIB driver, if necessary: # /etc/init.d/oracleasm configure |
enable disable | Use the enable and disable options to change the behavior of the ASMLIB driver when the system starts. The enable option causes the driver to load when the system starts: # /etc/init.d/oracleasm enable |
start stop restart | Use the start, stop, and restart options to load or unload the ASMLIB driver without restarting the system: # /etc/init.d/oracleasm restart |
createdisk | Use the createdisk option to mark a disk device for use with the ASMLIB driver and give it a name: # /etc/init.d/oracleasm createdisk DISKNAME devicename |
deletedisk | Use the deletedisk option to unmark a named disk device: # /etc/init.d/oracleasm deletedisk DISKNAME Caution: Do not use this command to unmark disks that are being used by an Automatic Storage Management disk group. You must delete the disk from the Automatic Storage Management disk group before you unmark it. |
querydisk | Use the querydisk option to determine if a disk device or disk name is being used by the ASMLIB driver: # /etc/init.d/oracleasm querydisk {DISKNAME | devicename} |
listdisks | Use the listdisks option to list the disk names of marked ASMLIB disks: # /etc/init.d/oracleasm listdisks |
scandisks | Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as ASMLIB disks on another node: # /etc/init.d/oracleasm scandisks |
When you have completed creating and configuring Automatic Storage Management with ASMLIB, proceed to Chapter 5, "Installing Oracle Clusterware".
The following subsections describe how to configure Oracle Database files on raw devices.
Before installing the Oracle Database 11g release 1 (11.1) software with Oracle RAC, create enough partitions of specific sizes to support your database, and also leave a few spare partitions of the same size for future expansion. For example, if you have space on your shared disk array, then select a limited set of standard partition sizes for your entire database. Partition sizes of 50 MB, 100 MB, 500 MB, and 1 GB are suitable for most databases. Also, create a few very small and a few very large spare partitions that are (for example) 1 MB and perhaps 5 GB or greater in size. Based on your plans for using each partition, determine the placement of these spare partitions by combining different sizes on one disk, or by segmenting each disk into same-sized partitions.
Note:
Be aware that each instance has its own redo log files, but all instances in a cluster share the control files and data files. In addition, each instance's online redo log files must be readable by all other instances to enable recovery.

In addition to the minimum required number of partitions, you should configure spare partitions. Doing this enables you to perform emergency file relocations or additions if a tablespace data file becomes full.
Note:
For new installations, Oracle recommends that you do not use raw devices for database files.

Table 4-4 lists the number and size of the shared partitions that you must configure for database files.
Table 4-4 Shared Devices or Logical Volumes Required for Database Files on Linux
Number | Partition Size (MB) | Purpose |
---|---|---|
1 | 500 | SYSTEM tablespace |
1 | 300 + (Number of instances * 250) | SYSAUX tablespace |
Number of instances | 500 | UNDOTBSn tablespace (one for each instance) |
1 | 250 | TEMP tablespace |
1 | 160 | EXAMPLE tablespace |
1 | 120 | USERS tablespace |
2 * number of instances | 120 | Two online redo log files for each instance |
2 | 110 | First and second control files |
1 | 5 | Server parameter file (SPFILE) |
1 | 5 | Password file |
Note:
If you prefer to use manual undo management, instead of automatic undo management, then, instead of the UNDOTBSn shared storage devices, you must create a single rollback segment tablespace (RBS) on a shared storage device partition that is at least 500 MB in size.

Use the following procedure to create block device partitions:
Use fdisk to create disk partitions on block devices for database files.
If you intend to manage files manually, then create partitions at least the size of those in Table 4-4.
If you intend to configure block devices and use ASM to manage data files, then create one partition for each disk comprising the whole disk, and go to the procedure in the section "Configuring Disks for Automatic Storage Management with ASMLIB".
On each node, create or modify a permissions file in /etc/udev/permissions.d to change the permissions of the data files from the default root ownership. On Asianux 2, Enterprise Linux 4, and Red Hat Enterprise Linux 4, this file should be called 49-oracle.permissions, so that udev loads it before 50-udev.permissions. On Asianux 3, Enterprise Linux 5, Red Hat Enterprise Linux 5, and SUSE Enterprise Server 10, this file should be called 51-oracle.permissions, so that udev loads it after 50-udev.permissions.
For each partition, the contents of the xx-oracle.permissions file are as follows:
devicepartition:oracle_db_install_owner:OSDBA:0660
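As an illustration, a permissions file covering two partitions might look as follows. The partition names (sdb1, sdc1), owner (oracle), and group (dba) are hypothetical; the snippet writes to a temporary path rather than /etc/udev/permissions.d so it can be inspected safely first.

```shell
# Hypothetical partitions, owner, and group; adjust to your system, then
# place the file in /etc/udev/permissions.d under the name required by
# your distribution (49-oracle.permissions or 51-oracle.permissions).
cat > /tmp/49-oracle.permissions <<'EOF'
sdb1:oracle:dba:0660
sdc1:oracle:dba:0660
EOF

# Each line follows the devicepartition:owner:group:mode pattern above.
grep -c ':oracle:dba:0660$' /tmp/49-oracle.permissions
```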
This section contains information about logical volume managers for Linux.
Alternatively, you can use the -d ldl option to format the DASD using the Linux disk layout if you require only a single partition (for example, if you want to create a partition for ASM file management). If you use this disk layout, then the partition device name for the DASD is /dev/dasdxxxx1.
On zSeries Linux, you can use raw logical volume manager (LVM) volumes for Oracle Clusterware and Automatic Storage Management files. You can create the required raw logical volumes in a volume group on either direct access storage devices (DASDs) or on SCSI devices. To configure the required raw logical volumes, follow these steps:
Note:
You do not have to format FBA-type DASDs in Linux. The device name for the single whole-disk partition for FBA-type DASDs is /dev/dasdxxxx1.

If necessary, install or configure the shared DASDs that you intend to use for the disk group and restart the system.
Enter the following command to identify the DASDs configured on the system:
# more /proc/dasd/devices
The output from this command contains lines similar to the following:
0302(ECKD) at ( 94: 48) is dasdm : active at blocksize: 4096, 540000 blocks, 2109 MB
These lines display the following information for each DASD:
The device number (0302)
The device type (ECKD or FBA)
The Linux device major and minor numbers (94: 48)
The Linux device file name (dasdm)
In general, DASDs have device names in the form dasdxxxx, where xxxx is between one and four letters that identify the device.
The block size and size of the device
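The fields described above can also be extracted with a short script. The sketch below works on a hypothetical copy of one /proc/dasd/devices line; on a real system you would read /proc/dasd/devices directly.

```shell
# Hypothetical copy of one /proc/dasd/devices line (the sample above);
# on a real system read /proc/dasd/devices directly.
echo '0302(ECKD) at ( 94: 48) is dasdm : active at blocksize: 4096, 540000 blocks, 2109 MB' > /tmp/dasd.sample

# The device file name is the word that follows "is" in each line.
dev=$(awk '{ for (i = 1; i < NF; i++) if ($i == "is") print $(i + 1) }' /tmp/dasd.sample)
echo "/dev/$dev"
```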
From the display, identify the devices that you want to use.
If the devices displayed are FBA-type DASDs, then you do not have to configure them. You can proceed to bind them for Oracle Database files as described in the section "Binding Partitions to Raw Devices for Oracle ASM Files" .
If you want to use ECKD-type DASDs, then enter a command similar to the following to format the DASD, if it is not already formatted:
# /sbin/dasdfmt -b 4096 -f /dev/dasdxxxx
Caution:
Formatting a DASD destroys all existing data on the device. Make sure that:
You specify the correct DASD device name
The DASD does not contain existing data that you want to preserve
This command formats the DASD with a block size of 4 KB and the compatible disk layout (default), which enables you to create up to three partitions on the DASD.
If you intend to create raw logical volumes on SCSI devices, then proceed to step 5.
If you intend to create raw logical volumes on DASDs, and you formatted the DASD with the compatible disk layout, then determine how you want to create partitions.
To create a single whole-disk partition on the device (for example, if you want to create a partition on an entire raw logical volume for database files), enter a command similar to the following:
# /sbin/fdasd -a /dev/dasdxxxx
This command creates one partition across the entire disk. You are then ready to mark devices as physical volumes. Proceed to Step 6.
To create up to three partitions on the device (for example, if you want to create partitions for individual tablespaces), enter a command similar to the following:
# /sbin/fdasd /dev/dasdxxxx
Use the following guidelines when creating partitions:
Use the p command to list the partition table of the device.
Use the n command to create a new partition.
After you have created the required partitions on this device, use the w command to write the modified partition table to the device.
See the fdasd man page for more information about creating partitions.
The partitions on a DASD have device names similar to the following, where n is the partition number, between 1 and 3:
/dev/dasdxxxxn
When you have completed creating partitions, you are then ready to mark devices as physical volumes. Proceed to Step 6.
If you intend to use SCSI devices in the volume group, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the volume group and restart the system.
To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l
SCSI devices have device names similar to the following:
/dev/sdxn
In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.
If necessary, use fdisk to create partitions on the devices that you want to use.
Use the t command in fdisk to change the system ID for the partitions that you want to use to 0x8e.
Enter a command similar to the following to mark each device that you want to use in the volume group as a physical volume:
# pvcreate /dev/sda1 /dev/sdb1
To create a volume group named oracle_vg using the devices that you marked, enter a command similar to the following:
# vgcreate oracle_vg /dev/dasda1 /dev/dasdb1
To create the required logical volumes in the volume group that you created, enter commands similar to the following:
# lvcreate -L size -n lv_name vg_name
In this example:
size is the size of the logical volume, for example 500M
lv_name is the name of the logical volume, for example orcl_system_raw_500m
vg_name is the name of the volume group, for example oracle_vg
For example, to create a 500 MB logical volume for the SYSTEM tablespace for a database named rac in the oracle_vg volume group, enter the following command:
# lvcreate -L 500M -n rac_system_raw_500m oracle_vg
Note:
These commands create a device name similar to the following for each logical volume:
/dev/vg_name/lv_name
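Because a database needs many logical volumes, the lvcreate commands can be driven from a list. The sizes and names below are hypothetical examples for a database named rac, and the loop only echoes each command (a dry run); remove the echo to create the volumes as root.

```shell
# Hypothetical volume group, sizes, and names; the loop is a dry run.
vg=oracle_vg
while read size name; do
  echo lvcreate -L "$size" -n "$name" "$vg"
done <<'EOF'
500M rac_system_raw_500m
500M rac_sysaux_raw_500m
250M rac_temp_raw_250m
EOF
```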
On the other cluster nodes, enter the following commands to configure the volume group and logical volumes on those nodes:
# vgscan
# vgchange -a y
After you have created the required partitions, you must bind the partitions to raw devices on every node. However, you must first determine what raw devices are already bound to other devices. The procedure that you must follow to complete this task varies, depending on the Linux distribution that you are using:
Note:
If the nodes are configured differently, then the disk device names might be different on some nodes. In the following procedure, be sure to specify the correct disk device names on each node.

After you configure block or raw devices, you can choose to configure ASM to use the devices and manage database file storage.
To determine what raw devices are already bound to other devices, enter the following command on every node:
# /usr/bin/raw -qa
Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.
For each device that you want to use, identify a raw device name that is unused on all nodes.
Open the /etc/sysconfig/rawdevices file in any text editor and add a line similar to the following for each partition that you created:
/dev/raw/raw1 /dev/sdb1
Specify an unused raw device for each partition.
To bind the partitions to the raw devices, enter the following command:
# /sbin/service rawdevices restart
The system automatically binds the devices listed in the rawdevices file when it restarts.
Repeat step 2 through step 3 on each node in the cluster.
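As an illustration of the rawdevices file format used in the procedure above, the following writes a sample with three bindings. The partition names are hypothetical; on a real system you would edit /etc/sysconfig/rawdevices itself and choose raw devices that are unused on all nodes.

```shell
# Hypothetical partitions mapped to unused raw devices; on a real system
# edit /etc/sysconfig/rawdevices instead of a temporary file.
cat > /tmp/rawdevices.sample <<'EOF'
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
EOF

# List the raw device side of each binding.
awk '{ print $1 }' /tmp/rawdevices.sample
```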
To determine what raw devices are already bound to other devices, enter the following command on every node:
# /usr/sbin/raw -qa
Raw devices have device names in the form /dev/raw/rawn, where n is a number that identifies the raw device.
For each device that you want to use, identify a raw device name that is unused on all nodes.
Open the /etc/raw file in any text editor and add a line similar to the following to associate each partition with an unused raw device:
raw1:sdb1
To bind the partitions to the raw devices, enter the following command:
# /etc/init.d/raw start
To ensure that the raw devices are bound when the system restarts, enter the following command:
# /sbin/chkconfig raw on
Repeat step 2 through step 5 on the other nodes in the cluster.
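For comparison with the Red Hat format, the /etc/raw entries used in this procedure take the shorter rawN:partition form. The sample below uses hypothetical partitions and writes to a temporary file; on a real system you would edit /etc/raw itself.

```shell
# Hypothetical /etc/raw entries in the rawN:partition format shown above;
# on a real system edit /etc/raw itself.
cat > /tmp/raw.sample <<'EOF'
raw1:sdb1
raw2:sdc1
EOF

# Extract the partition bound to each raw device.
cut -d: -f2 /tmp/raw.sample
```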
If you intend to use IDE or SCSI devices for the raw devices, then follow these steps:
If necessary, install or configure the shared disk devices that you intend to use for the raw devices and restart the system.
Note:
Because the number of partitions that you can create on a single device is limited, you might need to create the required partitions on more than one device.

To identify the device name for the disks that you want to use, enter the following command:
# /sbin/fdisk -l
Depending on the type of disk, the device name can vary:
You can create the required partitions either on new devices that you added or on previously partitioned devices that have unpartitioned free space. To identify devices that have unpartitioned free space, examine the start and end cylinder numbers of the existing partitions and determine whether the device contains unused cylinders.
To create partitions on a shared storage device, enter a command similar to the following:
# /sbin/fdisk devicename
When creating partitions:
Use the p command to list the partition table of the device.
Use the n command to create a partition.
After you have created the required partitions on this device, use the w command to write the modified partition table to the device.
Refer to the fdisk man page for more information about creating partitions.
As the oracle user, use the following command syntax to start Cluster Verification Utility (CVU) stage verification to check hardware, operating system, and storage setup:
/mountpoint/runcluvfy.sh stage -post hwos -n node_list [-verbose]
In the preceding syntax example, replace the variable node_list with the names of the nodes in your cluster, separated by commas. For example, to check the hardware and operating system of a two-node cluster with nodes node1 and node2, with the mountpoint /mnt/dvdrom/, and with the option to limit the output to the test results, enter the following command:
$ /mnt/dvdrom/runcluvfy.sh stage -post hwos -n node1,node2
Select the option -verbose to receive detailed reports of the test results, and progress updates about the system checks performed by Cluster Verification Utility.