Oracle9i Database Performance Tuning Guide and Reference Release 2 (9.2) Part Number A96533-02
Although performance modifications can be made to both the database and the Oracle instance at a later time, much can be gained by carefully designing the database for its intended needs.
This chapter contains the following sections:
Note: This chapter is an overview of Oracle's new methodology for designing a database for performance. Before reading this chapter, it is important to read the information in the Oracle9i Database Performance Planning manual. For detailed information on memory and I/O configuration, see the other chapters in Part III.
One of the first stages in managing a database is the initial database creation. This section describes important steps in the creation of an Oracle database.
The Oracle Installer lets you create a database during software installation or at a later time using the Database Configuration Assistant. This is an efficient way of creating databases for small to medium-sized environments, and it provides a straightforward graphical user interface. However, this procedure sets some limits on the possibilities for various options, and it is therefore not recommended for database creation in larger environments.
A manual approach provides full freedom for different configuration options. This is especially important for larger environments. A manual approach typically involves designing multiple parameter files for use during the various stages, running SQL scripts for the initial CREATE DATABASE and subsequent CREATE TABLESPACE statements, running necessary data dictionary scripts, and so on.
The initialization parameter file is read whenever an Oracle instance is started, including the very first start before the database is created. There are very few parameters that cannot be modified at a later time. The most important parameters to set correctly at database creation time are listed in Table 13-1.
The first SQL statement that is executed after startup of the initial instance is the CREATE DATABASE statement. This creates the initial system tablespace, creates the initial redo log files, and sets certain database options. The options listed in Table 13-2 cannot be changed, or can only be changed with difficulty, at a later time.
| Database Options | Description |
|---|---|
| Character set | The character set specified by this option identifies the character set used for SQL text, for the internal data dictionary, and most importantly for text stored as the CHAR, VARCHAR2, CLOB, and LONG datatypes. |
| National character set | This character set is used for the NCHAR, NVARCHAR2, and NCLOB datatypes. |
| SQL.BSQ file | This creates the internal data dictionary. For information on modifying this file, see Chapter 15, "I/O Configuration and Design". |
| Location of initial datafile | The initial datafile(s) that will make up the system tablespace should be set with care. They can be modified later, but the procedure involves a complete shutdown of the instance. |
| Extent management | Use the EXTENT MANAGEMENT LOCAL clause of the CREATE DATABASE statement to create a locally managed system tablespace. |
| Default temporary tablespace | If you specify DEFAULT TEMPORARY TABLESPACE at database creation, users who are not explicitly assigned a temporary tablespace use this tablespace for sorts and other temporary operations. |
| MAXDATAFILES | The maximum number of datafiles. |
| MAXLOGFILES | The maximum number of log files. |
CONNECT SYS/ORACLE AS SYSDBA
STARTUP NOMOUNT pfile=/u01/admin/init_create.ora
CREATE DATABASE "dbname"
   DATAFILE '/u01/oradata/system01.dbf' SIZE 200M
   LOGFILE '/u02/oradata/redo01.dbf' SIZE 100M,
           '/u02/oradata/redo02.dbf' SIZE 100M
   CHARACTER SET "WE8ISO8859P1"
   NATIONAL CHARACTER SET "UTF8"
   EXTENT MANAGEMENT LOCAL
   DEFAULT TEMPORARY TABLESPACE mytemp TEMPFILE 'temp.f' SIZE 1000M
   MAXDATAFILES = 50
   MAXLOGFILES = 5;
See Also:
Oracle9i SQL Reference for detailed information about the CREATE DATABASE statement
After running the CREATE DATABASE statement, certain catalog scripts must be executed. They are found in the rdbms/admin directory on UNIX, or the rdbms\admin directory on Windows, under the ORACLE_HOME directory. The following scripts must be executed:
- CATALOG.SQL - Needed for all normal data dictionary views
- CATPROC.SQL - Needed to load the initial PL/SQL environment
CONNECT SYS/ORACLE AS SYSDBA
@@CATALOG
@@CATPROC
The use of the double at-sign forces execution of these scripts from the proper directory.
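As an alternative sketch, assuming a standard ORACLE_HOME layout: SQL*Plus expands a question mark to the ORACLE_HOME directory, so the scripts can also be run with explicit paths from any working directory:

CONNECT SYS/ORACLE AS SYSDBA
-- "?" is expanded by SQL*Plus to the ORACLE_HOME directory
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql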
The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance. Small log files can increase checkpoint activity and reduce performance. Because the recommendation on I/O distribution for high performance is to use separate disks for the redo log files, there is no reason not to make them large. A potential problem with large redo log files is that they are a single point of failure if redo log mirroring is not in effect.
It is not possible to provide a specific size recommendation for redo log files, but redo log files in the range of a hundred megabytes to a few gigabytes are considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty minutes.
The complete set of required redo log files can be created during database creation. After they are created, the size of a redo log file cannot be changed. However, new, larger files can be added later, and the original (smaller) ones can subsequently be dropped.
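A minimal sketch of this grow-and-drop approach; the group numbers, file names, and sizes below are hypothetical examples, not recommendations:

-- Add new, larger redo log groups
ALTER DATABASE ADD LOGFILE GROUP 3 ('/u02/oradata/redo03.dbf') SIZE 500M;
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u02/oradata/redo04.dbf') SIZE 500M;
-- Switch logs until the original groups become inactive, then drop them
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;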
Not much can be done to speed up the creation of the initial database and the loading of necessary data dictionary views from catalog SQL files. These steps must be run serially, one after the other.
Note: Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior.
After creating the initial database, several extra tablespaces must be created. All databases should have at least three tablespaces in addition to the system tablespace: a temporary tablespace, which is used for things like sorting; a rollback tablespace, which is used to store rollback segments or is designated as the automatic undo management tablespace; and at least one tablespace for actual application use. In most cases, applications require several tablespaces. For extremely large tablespaces with many datafiles, multiple ALTER TABLESPACE x ADD DATAFILE y statements can also be run in parallel.
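For example, with hypothetical tablespace and file names, datafiles can be added from two concurrent sessions:

-- Session 1
ALTER TABLESPACE appdata ADD DATAFILE '/u02/oradata/appdata02.dbf' SIZE 1000M;

-- Session 2, running at the same time
ALTER TABLESPACE appdata ADD DATAFILE '/u03/oradata/appdata03.dbf' SIZE 1000M;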
During tablespace creation, the datafiles that make up the tablespace are initialized with special "empty" block images. TEMPFILES are not initialized.
Oracle does this to ensure that all datafiles can be written in their entirety, but this can obviously be a lengthy process if done serially. Therefore, run multiple CREATE TABLESPACE statements concurrently to speed up the tablespace creation process. See the SQL statements in Example 13-3. The most efficient way to do this is to run one SQL script for each set of available disks.
For permanent tables, the choice between local and dictionary extent management on tablespace creation can have a large effect on performance. For any permanent tablespace that has moderate to large insert, modify, or delete operations compared to reads, choose local extent management.
For permanent tablespaces, Oracle Corporation recommends using automatic segment-space management. Such tablespaces (often referred to as bitmapped tablespaces) are locally managed tablespaces with bitmap segment space management. They are available with Oracle9i Release 1 (9.0.1) and later.
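A sketch of creating such a tablespace follows; the tablespace and file names are hypothetical:

CREATE TABLESPACE appdata_auto
   DATAFILE '/u02/oradata/appdata_auto01.dbf' SIZE 1000M
   EXTENT MANAGEMENT LOCAL
   SEGMENT SPACE MANAGEMENT AUTO;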
See Also:
Oracle9i Database Administrator's Guide for more information on creating and using automatic segment-space management for tablespaces
Temporary tablespaces can be dictionary-managed or locally managed. With Oracle9i Release 1 (9.0.1) and later, Oracle Corporation recommends use of locally managed temporary tablespaces. Example 13-3 shows how you can create a temporary tablespace with local extent management:
CONNECT SYSTEM/MANAGER
CREATE TABLESPACE appdata DATAFILE '/u02/oradata/appdata01.dbf' SIZE 1000M;
CREATE TEMPORARY TABLESPACE mytemp TEMPFILE 'temp.f' SIZE 1000M;
In another session:
CONNECT SYSTEM/MANAGER
CREATE TABLESPACE appindex DATAFILE '/u03/oradata/appindex01.dbf' SIZE 1000M;
When installing applications, an initial step is to create all necessary tables and indexes. When you create a segment, such as a table, Oracle allocates space in the database for the data. If subsequent database operations cause the data volume to increase and exceed the space allocated, then Oracle extends the segment.
When creating tables and indexes, note the following:

- Create tables and indexes in tablespaces with automatic segment-space management. This allows Oracle to automatically manage segment space for best performance.
- Set storage options carefully. Applications should carefully set storage options for the intended use of the table or index. This includes setting the value for PCTFREE. Note that using automatic segment-space management eliminates the need to specify PCTUSED.
- Set the INITRANS value higher if necessary. Each datablock has a number of transaction entries that are used for row locking purposes. Initially, this number is specified by the INITRANS parameter, and the default value (1 for tables, 2 for indexes) is generally sufficient. However, if a table or index is known to have many rows for each block with a high possibility of many concurrent updates, then it is beneficial to set a higher value. This must be done at CREATE TABLE/CREATE INDEX time to ensure that it is set for all blocks of the object; a sketch follows this list.
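The following sketch shows where these options are specified; the table, index, and values are hypothetical examples, not recommendations:

-- Hypothetical table expected to have many concurrently updated rows per block
CREATE TABLE orders
( order_id NUMBER,
  status   VARCHAR2(10),
  amount   NUMBER )
PCTFREE 20    -- leave room in each block for rows to grow on update
INITRANS 4    -- pre-allocate transaction entries in every block
TABLESPACE appdata;

CREATE INDEX orders_status_idx ON orders (status)
INITRANS 4
TABLESPACE appindex;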
Data segment compression reduces disk use and memory use (specifically, the buffer cache), often leading to a better scaleup for read-only operations. Data segment compression can also speed up query execution.
Oracle9i Release 2 (9.2) achieves a good compression ratio in many cases with no special tuning. However, if you need a better compression ratio, tuning can improve it slightly in some cases and very substantially in other cases.
Heap-organized block-level compression works by eliminating column value repetitions within individual blocks. This is done by moving such repeated column values into a shared block-level symbol table and replacing occurrences with references into the symbol table. Therefore, the compression ratio is higher in blocks that have more repeated values. As a database administrator or application developer, you can take advantage of this fact by reorganizing rows in the segments that you want to compress, to increase the likelihood of such repetitions.
For a single-column table, you can order table rows by the column value, using a CREATE TABLE AS SELECT statement with an ORDER BY clause.
You can also apply this method to a table in which one column has low cardinality and other columns have high cardinalities: Order the table rows by the low cardinality column.
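For example, assuming a hypothetical sales table in which region has low cardinality, the reorganization might look like this sketch:

-- Ordering by the low-cardinality column clusters repeated values
-- into the same blocks, improving the block-level compression ratio
CREATE TABLE sales_compressed COMPRESS AS
SELECT * FROM sales ORDER BY region;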
Data dictionary views such as DBA_TAB_COLUMNS contain information about column cardinalities (the NUM_DISTINCT column) within segments.
You can estimate the compression ratio for a table table_t by using the following procedure, which allows automatic cleanup:
ALTER SESSION SET EVENTS '10193 trace name context forever, level 1';

LOCK TABLE table_t$a1 IN EXCLUSIVE MODE;
DROP TABLE table_t$a1;
DROP TABLE table_t$a2;

CREATE TABLE table_t$a1 COMPRESS AS SELECT * FROM table_t WHERE ROWNUM < 1;
LOCK TABLE table_t$a1 IN SHARE MODE;

CREATE TABLE table_t$a2 NOCOMPRESS AS SELECT * FROM table_t WHERE ROWNUM < 1;
INSERT /*+ APPEND */ INTO table_t$a1 SELECT * FROM table_t SAMPLE BLOCK(x,y);
INSERT /*+ APPEND */ INTO table_t$a2 SELECT * FROM table_t SAMPLE BLOCK(x,y);
The data segment compression ratio is the number of blocks in table table_t$a1 divided by the number of blocks in table table_t$a2.
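One way to read off the block counts is sketched below with DBMS_STATS and USER_TABLES; this assumes the SAMPLE BLOCK(x,y) placeholders above have been replaced with actual percentages:

-- Gather statistics on both auxiliary tables, then compare block counts
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLE_T$A1');
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLE_T$A2');

SELECT c.blocks / n.blocks AS compression_ratio
  FROM user_tables c, user_tables n
 WHERE c.table_name = 'TABLE_T$A1'
   AND n.table_name = 'TABLE_T$A2';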
See Also:
Oracle9i SQL Reference for block group sampling syntax
Many applications need to load data as part of the initial application installation process. This can be fixed data, such as postal codes or other types of lookup data, or it can be actual data originating in older systems. Oracle's SQL*Loader tool is the most efficient way to load a substantial amount of data.
When running SQL*Loader, you specify whether to use the conventional path or the direct path. The conventional path uses ordinary SQL INSERT statements to load data into the tables, which means that the loading can be done concurrently with other database operations. However, the loading is then also limited by the normal INSERT performance. For quick loading of large amounts of data, choose the direct path. With the direct path, the loading process bypasses SQL and loads directly into the database blocks. During this type of load, normal operations on the table (or partition for partitioned tables) cannot be performed.
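As an illustrative sketch (the table, file names, and credentials are hypothetical), a direct path load consists of a control file and a sqlldr invocation with direct=true:

-- load.ctl (hypothetical control file)
LOAD DATA
INFILE 'zipcodes.dat'
INTO TABLE zipcodes
FIELDS TERMINATED BY ','
(zip_code, city, state)

Then, on the command line:

sqlldr userid=appowner/app_pwd control=load.ctl direct=true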
The following tips could help speed up the data loading process using SQL*Loader:
See Also:
Oracle9i Database Utilities for detailed information on SQL*Loader
The most efficient way to create indexes is to create them after data has been loaded. By doing this, space management becomes much simpler, and no index maintenance takes place for each row inserted. SQL*Loader automatically does this, but if you are using other methods to do initial data load, you might need to do this manually. Additionally, index creation can be done in parallel using the PARALLEL clause of the CREATE INDEX statement. However, SQL*Loader is not able to do this, so you must manually create indexes in parallel after loading data.
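A brief sketch, reusing the hypothetical orders table from the earlier example (the degree of parallelism is an example value):

-- Build the index with four parallel slaves; NOLOGGING reduces redo generation
CREATE INDEX orders_amount_idx ON orders (amount)
   PARALLEL 4
   NOLOGGING
   TABLESPACE appindex;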
During index creation on tables that contain data, the data must be sorted. This sorting is done in the fastest possible way if all available memory is used for sorting. Oracle recommends that you enable automatic sizing of SQL working areas by setting the PGA_AGGREGATE_TARGET initialization parameter.
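In the initialization file, this might look like the following sketch; the target value is an example and should be sized to the memory available for all server processes combined:

WORKAREA_SIZE_POLICY = AUTO    # automatic sizing of SQL work areas
PGA_AGGREGATE_TARGET = 200M    # example target for all work areas combined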
The memory for the SQL work area can also be controlled with the SORT_AREA_SIZE initialization parameter.
The value of the SORT_AREA_SIZE parameter should be set using the following rules:

- Subtract the memory required by the operating system and the SGA from the total available memory.
- Divide the remainder by the number of processes expected to sort concurrently.
- Subtract memory for general process overhead; the result is the value for SORT_AREA_SIZE.
Example 13-5 is an example of setting the SORT_AREA_SIZE parameter.
A system with 512 MB of memory runs an Oracle instance with a 100 MB SGA, and the operating system uses 50 MB. The memory available for sorting is 362 MB, which equals 512 minus 50 minus 100. If the system has four CPUs running with four parallel slaves, then each of these has about 90 MB available. With 10 MB set aside for process overhead, SORT_AREA_SIZE should be set to 80 MB. This can be done either in the initialization file or for each session with the following statement:
ALTER SESSION SET SORT_AREA_SIZE = 80000000;
A running Oracle instance is configured using startup parameters, which are set in the initialization parameter file. These parameters influence the behavior of the running instance, including its performance. In general, a very simple initialization file with few relevant settings covers most situations, and the initialization file is not the first place to look for performance tuning, except for the few parameters shown in Table 13-4.
Table 13-3 describes the parameters necessary in a minimal initialization file. Although these parameters are necessary, they have no performance impact.
Table 13-4 includes the most important parameters to set with performance implications:
| Parameter | Description |
|---|---|
| DB_BLOCK_SIZE | Sets the database block size. |
| DB_CACHE_SIZE | Size of the buffer cache in the SGA. There are no good and simple rules to set a value, which is very application dependent, but typical values are in the range of twenty to fifty for each user session. More often, this value is set too high than too low. |
| SHARED_POOL_SIZE | Sets the size of the shared pool in the SGA. The setting is application-dependent, but it is typically in the range of a few to a few tens of megabytes for each user session. |
| PROCESSES | Sets the maximum number of processes that can be started by the instance. This is the most important primary parameter to set, because many other parameter values are deduced from it. |
| SESSIONS | This is set by default from the value of PROCESSES. However, if you are using the shared server, then the deduced value is likely to be insufficient. |
| JAVA_POOL_SIZE | If you are using Java stored procedures, then this parameter should be set depending on the actual memory requirements of the Java environment. |
| LOG_ARCHIVE_START | Enables redo log archiving. See Oracle9i User-Managed Backup and Recovery Guide. |
| ROLLBACK_SEGMENTS | Allocates one or more rollback segments by name to this instance. If you set this parameter, the instance acquires all of the rollback segments named in this parameter, even if the number of rollback segments exceeds the minimum number required by the instance (calculated as TRANSACTIONS / TRANSACTIONS_PER_ROLLBACK_SEGMENT). |
In many cases, only the parameters mentioned in the following example need to be set to appropriate values for the Oracle instance to be reasonably well-tuned. Here is an example of such an initialization file:
DB_NAME = finance
DB_DOMAIN = hq.company.com
CONTROL_FILES = ('/u01/database/control1.dbf', '/u02/database/control2.dbf')
DB_BLOCK_SIZE = 8192
DB_BLOCK_BUFFERS = 12000      # this is approximately 100 Mb
DB_FILES = 200                # Maximum 200 files in the database
SHARED_POOL_SIZE = 100000000  # 100 Mb
PROCESSES = 80                # Would be good for approximately 70
                              # directly connected users
# log_archive_XXX             # Set various archiving parameters
Oracle needs undo space to keep information for read consistency, for recovery, and for actual rollback statements. This is kept either in rollback segments or in one or more automatic undo management tablespaces.
The V$UNDOSTAT view contains statistics for monitoring and tuning undo space. Using this view, you can better estimate the amount of undo space required for the current workload. Oracle also uses this information to help tune undo usage in the system. The V$ROLLSTAT view contains information about the behavior of the undo segments in the undo tablespace.
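For example, a monitoring query against V$UNDOSTAT might look like this sketch; each row covers a ten-minute interval:

-- Undo blocks consumed, transactions, and longest query per interval
SELECT begin_time, end_time, undoblks, txncount, maxquerylen
  FROM v$undostat
 ORDER BY begin_time;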
To effectively diagnose performance problems, it is vital to have an established performance baseline for later comparison when the system is running poorly. Without a baseline data point, it can be very difficult to identify new problems. For example, perhaps the volume of transactions on the system has increased, or the transaction profile or application has changed, or the number of users has increased.
After the database is created, tables are created, data is loaded and indexed, and the instance is configured, it is time to set up monitoring procedures.
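One common way to capture such a baseline in Oracle9i is Statspack. A minimal sketch, assuming Statspack has already been installed with spcreate.sql and the default PERFSTAT account:

CONNECT perfstat/perfstat
-- Take a snapshot while the system is performing well
EXECUTE STATSPACK.SNAP;
-- Later, produce a report for the interval between two snapshots
@?/rdbms/admin/spreport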