This chapter describes how redo logs are applied to a standby database. It includes the following topics:
Log apply services automatically apply archived redo logs to maintain synchronization with the primary database and allow transactionally consistent access to the data. Archived redo data is not available for log apply services until a log switch occurs on the primary database.
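For example, to make the current redo available to the standby database without waiting for a natural log switch, you can force a log switch on the primary database with a standard statement such as the following:

SQL> ALTER SYSTEM SWITCH LOGFILE;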
The main difference between physical and logical standby databases is the manner in which log apply services apply the archived redo logs. For physical standby databases, log apply services maintain the standby database by performing managed recovery operations. For logical standby databases, log apply services maintain the standby database by executing SQL statements. The following list summarizes these operations:
Physical standby databases: Log apply services maintain the standby database in managed recovery mode. In this mode, log transport services transmit redo data to the standby site, and log apply services automatically apply the redo logs.
Caution: You can also open a physical standby database for read-only operations to allow users to query the standby database for reporting purposes. However, while a standby database is open for read-only access, it is not kept transactionally current with the primary database, which can prolong a failover or switchover operation if one is required for disaster recovery. See Section 8.2, "Using a Standby Database That Is Open for Read-Only Access" for more information.
Logical standby databases: Log apply services manage logical standby databases by executing SQL statements. Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical standby database are opened in read-only mode for reporting purposes. The SQL apply mode allows you to use the logical standby database for reporting activities even while SQL statements are being applied.
The sections in this chapter describe the managed recovery and SQL apply operations, and log apply services in more detail.
The physical standby database uses several processes to automate archiving redo data and recovering redo logs on the standby database. On the standby database, log apply services use the following processes:
The remote file server (RFS) process receives redo data from the primary database either in the form of archived redo logs or standby redo logs.
If standby redo logs are being used, the ARCn process archives the standby redo logs that are to be applied by the managed recovery process (MRP).
The managed recovery process (MRP) applies information from the archived redo logs to the standby database. When performing managed recovery operations, log apply services automatically apply archived redo logs to maintain transactional synchronization with the primary database.
Log apply services can apply logs to a physical standby database when the database is performing recovery, but not when it is open for read-only operations. At any given time, a physical standby database can be either performing managed recovery operations or open for read-only access.
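The following SQL*Plus sketch, based on the statements described in this chapter and the read-only procedure referenced in Section 8.2, outlines one way to move a mounted physical standby database from managed recovery to read-only access and back again; treat it as an outline rather than a complete procedure.

SQL> -- Stop managed recovery so the standby database can be opened read-only
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;

SQL> -- When reporting is finished, resume managed recovery in the background
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;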
Table 6-1 summarizes the basic tasks for configuring and monitoring log apply services.
Step | Task | See ...
---|---|---
1 | Start the standby instance and mount the standby database. |
2 | Enable managed recovery or read-only operations. | Section 6.2.2.1 or Section 8.2, respectively
3 | If performing managed recovery operations, set initialization parameters to automatically resolve archive gaps. | Section 6.4 and the Oracle9i Net Services Administrator's Guide
4 | Monitor log apply services. |
After all necessary parameter and network files are configured, you can start the standby instance. If the standby instance is not started and mounted, the standby database cannot receive redo data from the primary database.
To start the physical standby database instance, perform the following steps:
SQL> STARTUP NOMOUNT;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Log apply services keep the standby database synchronized with the primary database by automatically applying archived redo logs to the standby database, as shown in Figure 6-1.
[Figure 6-1 (redoapply.gif): Automatic application of archived redo logs to the standby database]
You can specify that log apply services run as a foreground session or as a background process.

To start a foreground session, issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

If you start a foreground session, by default, control is not returned to the command prompt. To start managed recovery as a background process instead, include the DISCONNECT keyword on the SQL statement. For example:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
This statement starts a detached server process and immediately returns control to the user. While the managed recovery process is performing recovery in the background, the foreground process that issued the RECOVER
statement can continue performing other tasks. This does not disconnect the current SQL session.
To stop managed recovery operations, issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
You can query views to monitor log apply services as follows:
To verify that managed recovery is progressing, query the V$MANAGED_STANDBY fixed view on the standby database. This view monitors the progress of a standby database in managed recovery mode. For example:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
  2> FROM V$MANAGED_STANDBY;

PROCESS STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
------- ------------ ---------- ---------- ---------- ----------
MRP0    APPLYING_LOG          1        946         10       1001
If you did not start a detached server process, you need to execute this query from another SQL session.
You can also monitor the progress of log apply services by querying the V$ARCHIVE_DEST_STATUS fixed view.
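For example, a minimal sketch of such a query (using the ARCHIVED_SEQ# and APPLIED_SEQ# columns shown in the monitoring examples later in this chapter) is:

SQL> SELECT ARCHIVED_SEQ#, APPLIED_SEQ# FROM V$ARCHIVE_DEST_STATUS;

A growing difference between the two columns indicates that the standby database is falling behind in applying the redo it has received.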
Although the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement does not require any additional clauses, it provides many keywords to help you control the redo apply process.
See Also:
Section 13.12 and Oracle9i SQL Reference for complete information about the SQL statement syntax
To enable the automatic creation of new datafiles on a physical standby database when datafiles are created on the primary database, you must define the STANDBY_FILE_MANAGEMENT
initialization parameter.
If the directory structures on the primary and standby databases are different, you must also set the DB_FILE_NAME_CONVERT
initialization parameter to convert the filenames of one or more sets of datafiles on the primary database to filenames on the standby database.
When you set the STANDBY_FILE_MANAGEMENT initialization parameter to AUTO, any datafiles that are newly created on the primary database are automatically created on the standby database as well, using the same names that you specified on the primary database.
The STANDBY_FILE_MANAGEMENT
initialization parameter works with the DB_FILE_NAME_CONVERT
parameter to convert the datafile locations from the primary site to standby site.
When a new datafile is added on the primary database, the same datafile is created on the standby database. The DB_FILE_NAME_CONVERT
parameter is used to convert the datafile name on the primary database to a datafile name on the standby database. This parameter works the same if the STANDBY_FILE_MANAGEMENT
initialization parameter is set to AUTO
or MANUAL
.
The DB_FILE_NAME_CONVERT
initialization parameter must specify paired strings. The first string is a sequence of characters to be looked for in a primary database filename. If that sequence of characters is matched, it is replaced by the second string to construct the standby database filename. You can specify multiple pairs of filenames. For example:
DB_FILE_NAME_CONVERT= "/disk1/oracle/oradata/payroll/df1", \
                      "/disk1/oracle/oradata/payroll/standby/df1", \
                      "/disk1/oracle/oradata/payroll", "/disk1/oracle/oradata/payroll/standby/"
STANDBY_FILE_MANAGEMENT=AUTO
Note: When you specify pairs of files, be sure to specify the most restrictive path names before the least restrictive, as shown in the example.
You cannot rename the datafile on the standby site when the STANDBY_FILE_MANAGEMENT
initialization parameter is set to AUTO
. When you set the STANDBY_FILE_MANAGEMENT
initialization parameter to AUTO
, use of the following SQL statements is not allowed:
ALTER DATABASE RENAME
ALTER DATABASE ADD/DROP LOGFILE
ALTER DATABASE ADD/DROP STANDBY LOGFILE MEMBER
ALTER DATABASE CREATE DATAFILE AS
If you attempt to use any of these statements on the standby database, an error is returned. For example:
SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/t_db2.log' to 'dummy';
alter database rename file '/disk1/oracle/oradata/payroll/t_db2.log' to 'dummy'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01270: RENAME operation is not allowed if STANDBY_FILE_MANAGEMENT is auto
See Also:
Section 8.4.1 to learn how to add datafiles to a database
Log apply services convert the data from the redo logs into SQL statements and then execute these SQL statements on the logical standby database. Because the logical standby database remains open, tables that are maintained can be used simultaneously for other tasks such as reporting, summations, and queries. Figure 6-2 shows log apply services applying redo data to a logical standby database.
[Figure 6-2 (sqlapply.gif): Log apply services applying redo data to a logical standby database]
The logical standby database uses the following processes:
The remote file server process receives redo data from the primary database. The RFS process communicates with the logical standby process (LSP) to coordinate and record which files arrived.
The logical standby process is the coordinator process for a set of processes that concurrently read, prepare, build, analyze, and apply completed SQL transactions from the archived redo logs. The LSP also maintains metadata in the database.
Table 6-2 summarizes the basic tasks for configuring log apply services.
Step | Task | See ...
---|---|---
1 | Start log apply services. |
2 | Ensure that redo logs are being applied. |
3 | Manage SQL apply operations. |
In addition to providing detailed information about the tasks presented in Table 6-2, the following sections also describe how to delay the application of archived redo logs.
To start log apply services, start the logical standby database, and then use the following statement. (Starting a logical standby database is done in the same manner as starting a primary database.)
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
To stop log apply services, use the following statement:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Redo logs are read and applied to a logical standby database when a log switch occurs, not as they arrive on the standby site. You can verify the status of archived redo log apply operations by querying the following views:
V$LOGSTDBY
Use this view to verify that the archived redo logs are being applied to the standby database. This view provides information about the processes that are reading redo data and applying archived redo logs to logical standby databases. For example, the following query shows typical output during the initialization phase:
SQL> COLUMN STATUS FORMAT A50
SQL> COLUMN TYPE FORMAT A12
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE         HIGH_SCN   STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16115: loading Log Miner dictionary data
READER                  ORA-16127: stalled waiting for additional transact
                        ions to be applied
BUILDER                 ORA-16117: processing
PREPARER                ORA-16116: no work available

SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE         HIGH_SCN   STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16126: loading table or sequence object number
READER                  ORA-16116: no work available
BUILDER                 ORA-16116: no work available
PREPARER                ORA-16116: no work available
DBA_LOGSTDBY_PROGRESS
Use this view for information about the progress of the log apply services. This view shows the state of the LSP and information about the SQL transactions that were executed on the logical standby database. For example:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     180702     180702
When the numbers in the APPLIED_SCN
and NEWEST_SCN
columns are equal (as shown in the query example), it means that all of the available data in the redo log was applied. These values can be compared to the values in the FIRST_CHANGE#
column in the DBA_LOGSTDBY_LOG
view to see how much log information has to be applied and how much remains.
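For example, the following query is one possible sketch of such a comparison; it uses only the APPLIED_SCN column of DBA_LOGSTDBY_PROGRESS and the SEQUENCE#, FIRST_CHANGE#, and NEXT_CHANGE# columns of DBA_LOGSTDBY_LOG shown elsewhere in this chapter, and it lists the registered logs whose redo has not yet been completely applied:

SQL> SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
  2> FROM DBA_LOGSTDBY_LOG
  3> WHERE NEXT_CHANGE# > (SELECT APPLIED_SCN FROM DBA_LOGSTDBY_PROGRESS)
  4> ORDER BY THREAD#, SEQUENCE#;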
See Also:
Chapter 9 for information about managing a logical standby database and Chapter 14 for more information about views that are used in a Data Guard environment
Data Guard offers automatic archive redo log gap detection and resolution to handle network connectivity problems that might temporarily disconnect one or more standby databases from the primary database. Once properly configured, Data Guard requires no manual intervention by the DBA to detect and resolve such gaps.
The following sections describe gap detection and resolution.
An archive gap is a range of archived redo logs created whenever the standby system is unable to receive the next archived redo log generated by the primary database. For example, an archive gap occurs when the network becomes unavailable and automatic archiving from the primary database to the standby database stops. When the network is available again, automatic transmission of the redo data from the primary database to the failed standby database resumes.
The missing archived redo logs are the gap. The gap is automatically detected and resolved.
An archive gap can occur whenever the primary database archives a log, but the log is not archived to the standby site. Every minute, the primary database polls its standby databases to see if there is a gap in the sequence of archived redo logs. The polling between the primary and standby databases is sometimes referred to as a heartbeat. The primary database polls the standby databases serially.
The following sections describe how to query the appropriate views to determine which logs are missing on the standby database.
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP
view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1             7             10
The output from the previous example indicates your physical standby database is currently missing logs from sequence 7 to sequence 10 for thread 1. After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo logs on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1
):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND
2> SEQUENCE# BETWEEN 7 AND 10;
NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
Copy these logs to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE
statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
  :
  :
After you register these logs on the physical standby database, you can restart managed recovery operations.
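For example, using the background-recovery statement shown earlier in this chapter:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;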
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG
view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo logs because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;

   THREAD#  SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
         1          6 /disk1/oracle/dbs/log-1292880008_6.arc
         1         10 /disk1/oracle/dbs/log-1292880008_10.arc
Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE
statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';
After you register these logs on the logical standby database, you can restart log apply services.
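For example, using the statement shown in the section on starting and stopping log apply services:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;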
For both physical and logical standby databases, Data Guard performs gap detection and resolution automatically. No extra configuration settings are required. However, for physical standby databases, you can set initialization parameters so that log apply services also automatically resolve archive gaps as they occur on a physical standby database.
The following sections describe how to set initialization parameters to facilitate gap recovery for a physical standby database, and how gap recovery is handled on a logical standby database.
You can set initialization parameters so that log apply services automatically identify and resolve archive gaps as they occur on a physical standby database.
Define the FAL_CLIENT
and FAL_SERVER
initialization parameters only for physical standby databases in the initialization parameter file:
The FAL server is a background Oracle process that services the incoming requests from the FAL client. In most cases, the FAL server is located on a primary database. However, it can be located on another standby database.
For log apply services to automatically identify and resolve archive gaps, you must:

1. Create a network service name on the standby system that can be used to connect to the standby database (the FAL client).
2. Create a network service name on the standby system that can be used to connect to the FAL server.
3. Assign the network service name that you created in step 1 to the FAL_CLIENT initialization parameter, and assign the network service name that you created in step 2 to the FAL_SERVER initialization parameter.

Log apply services automatically detect, and the FAL server process running on the primary database attempts to resolve, any gaps that may exist when you enable managed recovery with the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement.
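The following initialization parameter fragment is a minimal sketch of how these two parameters might look on the physical standby database; the service names standby1 and primary1 are hypothetical Oracle Net service names that you would replace with entries from your own tnsnames.ora file.

# Standby-side sketch for automatic gap resolution (hypothetical service names)
FAL_CLIENT=standby1      # net service name that resolves to this standby database
FAL_SERVER=primary1      # net service name that resolves to the primary database (the FAL server)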
See Also:
Section B.3 for a description of the manual steps and Oracle9i Net Services Administrator's Guide for information about Oracle Net
Gap recovery on a logical standby database is handled through the heartbeat mechanism. The important consideration here is that automatic gap recovery is contingent on the availability of the primary database. If the primary database is not available, as would be the case in a failover scenario, automatic gap recovery will not take place.
To monitor the status of archived redo logs and obtain information on log apply services on a physical standby database, query the fixed views described in this section. You can also monitor the standby database using Data Guard Manager.
This section contains the following topics:
See Also:
Chapter 14 for complete reference information on the views named in the preceding list
To monitor log apply and log transport services activity at the standby site, query the V$MANAGED_STANDBY fixed view on the physical standby database. For example:
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS
  2> FROM V$MANAGED_STANDBY;

PROCESS STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
------- ------------ ---------- ---------- ---------- ----------
RFS     ATTACHED              1        947         72         72
MRP0    APPLYING_LOG          1        946         10         72
The previous query output shows that an RFS process has completed the archiving of redo log file sequence number 947. The output also shows a managed recovery operation that is actively applying archived redo log sequence number 946. The recovery operation is currently recovering block number 10 of the 72-block archived redo log.
To quickly determine the level of synchronization for the standby database, issue the following query on the physical standby database:
SQL> SELECT ARCHIVED_THREAD#, ARCHIVED_SEQ#, APPLIED_THREAD#, APPLIED_SEQ#
  2> FROM V$ARCHIVE_DEST_STATUS;

ARCHIVED_THREAD# ARCHIVED_SEQ# APPLIED_THREAD# APPLIED_SEQ#
---------------- ------------- --------------- ------------
               1           947               1          945
The previous query output shows that the standby database is two archived logs behind in applying the redo logs received from the primary database. This might indicate that a single recovery process is unable to keep up with the volume of archived redo logs being received. Using the PARALLEL
option might be a solution.
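For example, assuming the PARALLEL and DISCONNECT FROM SESSION keywords documented in Section 13.12, a statement such as the following (the degree of parallelism shown is arbitrary) restarts managed recovery with multiple parallel recovery processes:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE PARALLEL 4 DISCONNECT FROM SESSION;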
The V$ARCHIVED_LOG
fixed view on the physical standby database shows all the archived redo logs received from the primary database. This view is only useful after the standby site starts receiving logs, because before that time the view is populated by old archived log records generated from the primary control file. For example, you can execute the following SQL*Plus statement:
SQL> SELECT REGISTRAR, CREATOR, THREAD#, SEQUENCE#, FIRST_CHANGE#,
  2> NEXT_CHANGE# FROM V$ARCHIVED_LOG;

REGISTRAR CREATOR    THREAD#  SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
--------- ------- ---------- ---------- ------------- ------------
RFS       ARCH             1        945         74651        74739
RFS       ARCH             1        946         74739        74772
RFS       ARCH             1        947         74772        74774
The previous query output shows three archived redo logs received from the primary database.
Query the V$LOG_HISTORY
fixed view on the physical standby database to show all the archived redo logs that were applied:
SQL> SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#
  2> FROM V$LOG_HISTORY;

   THREAD#  SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
---------- ---------- ------------- ------------
         1        945         74651        74739
The previous query output shows that the most recently applied archived redo log was sequence number 945.
The V$DATAGUARD_STATUS
fixed view displays events that would typically be triggered by any message to the alert log or server process trace files.
The following example shows output from the V$DATAGUARD_STATUS
view on a primary database:
SQL> SELECT MESSAGE FROM V$DATAGUARD_STATUS;

MESSAGE
--------------------------------------------------------------------------------
ARC0: Archival started
ARC1: Archival started
Archivelog destination LOG_ARCHIVE_DEST_2 validated for no-data-loss recovery
Creating archive destination LOG_ARCHIVE_DEST_2: 'dest2'
ARCH: Transmitting activation ID 0
LGWR: Completed archiving log 3 thread 1 sequence 11
Creating archive destination LOG_ARCHIVE_DEST_2: 'dest2'
LGWR: Transmitting activation ID 6877c1fe
LGWR: Beginning to archive log 4 thread 1 sequence 12
ARC0: Evaluating archive   log 3 thread 1 sequence 11
ARC0: Archive destination LOG_ARCHIVE_DEST_2: Previously completed
ARC0: Beginning to archive log 3 thread 1 sequence 11
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/arch/arch_1_11.arc'
ARC0: Completed archiving  log 3 thread 1 sequence 11
ARC1: Transmitting activation ID 6877c1fe

15 rows selected.
The following example shows the contents of the V$DATAGUARD_STATUS
view on a physical standby database:
SQL> SELECT MESSAGE FROM V$DATAGUARD_STATUS;

MESSAGE
--------------------------------------------------------------------------------
ARC0: Archival started
ARC1: Archival started
RFS: Successfully opened standby logfile 6: '/oracle/dbs/sorl2.log'
ARC1: Evaluating archive   log 6 thread 1 sequence 11
ARC1: Beginning to archive log 6 thread 1 sequence 11
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/arch/arch_1_11.arc'
ARC1: Completed archiving  log 6 thread 1 sequence 11
RFS: Successfully opened standby logfile 5: '/oracle/dbs/sorl1.log'
Attempt to start background Managed Standby Recovery process
Media Recovery Log /oracle/arch/arch_1_9.arc

10 rows selected.
To monitor the status of archived redo logs and obtain information on log apply services on a logical standby database, query the fixed views described in this section. You can also monitor the standby database using Data Guard Manager.
This section contains the following topics:
If log apply services stop unexpectedly, the reason for the problem is shown in the DBA_LOGSTDBY_EVENTS view.
The view also contains other information, such as which DDL statements were applied and which were skipped. For example:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN STATUS FORMAT A60
SQL> SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS
  2  ORDER BY EVENT_TIME, COMMIT_SCN;

EVENT_TIME         STATUS
------------------------------------------------------------------------------
EVENT
-------------------------------------------------------------------------------
23-JUL-02 18:20:12 ORA-16111: log mining and apply setting up

23-JUL-02 18:20:12 ORA-16128: User initiated shut down successfully completed

23-JUL-02 18:20:12 ORA-16112: log mining and apply stopping

23-JUL-02 18:20:23 ORA-16111: log mining and apply setting up

23-JUL-02 18:55:12 ORA-16128: User initiated shut down successfully completed

23-JUL-02 18:57:09 ORA-16111: log mining and apply setting up

23-JUL-02 20:21:47 ORA-16204: DDL successfully applied
create table mytable (one number, two varchar(30))

23-JUL-02 20:22:55 ORA-16205: DDL skipped due to skip setting
create database link mydblink

8 rows selected.
This query shows that log apply services were started and stopped a few times. It also shows what DDL was applied and skipped. If log apply services had stopped, the last record in the query would have shown the cause of the problem.
The DBA_LOGSTDBY_LOG
view provides dynamic information about what is happening to log apply services. This view is helpful when you are diagnosing performance problems with log apply services applying archived redo logs to the logical standby database, and it can be helpful for other problems.
For example:
SQL> SELECT FILE_NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
  2> TIMESTAMP, DICT_BEGIN, DICT_END, THREAD# FROM DBA_LOGSTDBY_LOG
  3> ORDER BY SEQUENCE#;

FILE_NAME                 SEQ# FIRST_CHANGE# NEXT_CHANGE# TIMESTAM BEG END THR#
------------------------- ---- ------------- ------------ -------- --- --- ----
/oracle/dbs/hq_nyc_2.log     2        101579       101588 11:02:58 NO  NO     1
/oracle/dbs/hq_nyc_3.log     3        101588       142065 11:02:02 NO  NO     1
/oracle/dbs/hq_nyc_4.log     4        142065       142307 11:02:10 NO  NO     1
/oracle/dbs/hq_nyc_5.log     5        142307       142739 11:02:48 YES YES    1
/oracle/dbs/hq_nyc_6.log     6        142739       143973 12:02:10 NO  NO     1
/oracle/dbs/hq_nyc_7.log     7        143973       144042 01:02:11 NO  NO     1
/oracle/dbs/hq_nyc_8.log     8        144042       144051 01:02:01 NO  NO     1
/oracle/dbs/hq_nyc_9.log     9        144051       144054 01:02:16 NO  NO     1
/oracle/dbs/hq_nyc_10.log   10        144054       144057 01:02:21 NO  NO     1
/oracle/dbs/hq_nyc_11.log   11        144057       144060 01:02:26 NO  NO     1
/oracle/dbs/hq_nyc_12.log   12        144060       144089 01:02:30 NO  NO     1
/oracle/dbs/hq_nyc_13.log   13        144089       144147 01:02:41 NO  NO     1
The output from this query shows that a LogMiner dictionary build starts at log file sequence number 5. The most recent archive log file is sequence number 13 and it was received at the logical standby database at 01:02:41.
To quickly determine if all log file information was applied, issue the following query on the logical standby database:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     211301     211357
If the APPLIED_SCN
matches the NEWEST_SCN,
then all available log information was applied. To determine how much progress was made through the available logs, join the DBA_LOGSTDBY_PROGRESS
view with the DBA_LOGSTDBY_LOG
view, as shown in the following example:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> SELECT L.SEQUENCE#, L.FIRST_TIME,
  2    (CASE WHEN L.NEXT_CHANGE# < P.READ_SCN THEN 'YES'
  3          WHEN L.FIRST_CHANGE# < P.APPLIED_SCN THEN 'CURRENT'
  4          ELSE 'NO' END) APPLIED
  5  FROM DBA_LOGSTDBY_LOG L, DBA_LOGSTDBY_PROGRESS P
  6  ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         APPLIED
---------- ------------------ -------
        24 23-JUL-02 18:19:05 YES
        25 23-JUL-02 18:19:48 YES
        26 23-JUL-02 18:19:51 YES
        27 23-JUL-02 18:19:54 YES
        28 23-JUL-02 18:19:59 YES
        29 23-JUL-02 18:20:03 YES
        30 23-JUL-02 18:20:13 YES
        31 23-JUL-02 18:20:18 YES
        32 23-JUL-02 18:20:21 YES
        33 23-JUL-02 18:32:11 YES
        34 23-JUL-02 18:32:19 CURRENT
        35 23-JUL-02 19:13:20 CURRENT
        36 23-JUL-02 19:13:43 CURRENT
        37 23-JUL-02 19:13:46 CURRENT
        38 23-JUL-02 19:13:50 CURRENT
        39 23-JUL-02 19:13:54 CURRENT
        40 23-JUL-02 19:14:01 CURRENT
        41 23-JUL-02 19:15:11 NO
        42 23-JUL-02 19:15:54 NO

19 rows selected.
In the previous query, the computed APPLIED column displays YES, CURRENT, or NO. The logs marked YES were completely applied, and those files are no longer needed by the logical standby database. The logs marked CURRENT contain information that is currently being worked on. Because logical standby applies transactions, and because transactions span logs, it is common for log apply services to be applying changes from multiple logs. For logs marked NO, information from those files is not yet being applied, although it is possible that the files have been opened and read.
To inspect the process activity for SQL apply operations, query the V$LOGSTDBY
fixed view on the logical standby database. For example:
SQL> COLUMN STATUS FORMAT A50
SQL> COLUMN TYPE FORMAT A12
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE         HIGH_SCN   STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16117: processing
READER                  ORA-16127: stalled waiting for additional transact
                        ions to be applied
BUILDER          191896 ORA-16116: no work available
PREPARER         191902 ORA-16117: processing
ANALYZER         191820 ORA-16120: dependencies being computed for transac
                        tion at SCN 0x0000.0002ed4e
APPLIER          191209 ORA-16124: transaction 1 16 1598 is waiting on ano
                        ther transaction
APPLIER          191205 ORA-16116: no work available
APPLIER          191206 ORA-16124: transaction 1 5 1603 is waiting on anot
                        her transaction
APPLIER          191213 ORA-16117: processing
APPLIER          191212 ORA-16124: transaction 1 20 1601 is waiting on ano
                        ther transaction
APPLIER          191216 ORA-16124: transaction 1 4 1602 is waiting on anot
                        her transaction

11 rows selected.
The previous query displays one row for each process involved in reading and applying redo logs. The different processes perform different functions as described by the TYPE
column. The HIGH_SCN
column is a progress indicator. As long as it keeps changing, from query to query, you know progress is being made. The STATUS
column gives a text description of activity.
The V$LOGSTDBY_STATS
fixed view provides a collection of state and statistical information for log apply services. Most options have default values, and this view displays what values are currently in use. It also provides statistical information that helps indicate progress. Issue the following query to view database state information:
SQL> COLUMN NAME FORMAT A35
SQL> COLUMN VALUE FORMAT A35
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS
  2> WHERE NAME LIKE 'coordinator%' or NAME LIKE 'transactions%';

NAME                                VALUE
----------------------------------- -----------------------------------
coordinator state                   APPLYING
transactions ready                  7821
transactions applied                7802
coordinator uptime                  73
This query shows how long SQL apply operations have been running and how many transactions have been applied in that time. It also shows how many transactions are available to be applied, indicating that more work is necessary.
To see the progression of the archiving of redo logs to the standby site, set the LOG_ARCHIVE_TRACE parameter in the primary and standby initialization parameter files. Setting this parameter causes the Oracle database server to write an audit trail to a trace file as follows:

On the primary database, it causes the Oracle database server to write an audit trail of archiving process activity (ARCn and foreground processes) in a trace file whose filename is specified in the USER_DUMP_DEST initialization parameter.

On the standby database, it causes the Oracle database server to write an audit trail of the RFS process and the ARCn process activity relating to archived redo logs in a trace file whose filename is specified in the USER_DUMP_DEST initialization parameter.
The trace files for a database are located in the directory specified by the USER_DUMP_DEST
parameter in the initialization parameter file. Connect to the primary and standby instances using SQL*Plus, and issue a SHOW
statement to determine the location, for example:
SQL> SHOW PARAMETER user_dump_dest

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
user_dump_dest                       string  ?/rdbms/log
The format for the archiving trace parameter is as follows, where trace_level
is an integer:
LOG_ARCHIVE_TRACE=trace_level
To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter in a primary database, do one of the following:

Shut down the primary database, modify the initialization parameter file, and restart the database.

Issue an ALTER SYSTEM SET LOG_ARCHIVE_TRACE=trace_level statement while the database is open or mounted.

To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter for a physical standby database that is performing read-only or managed recovery operations, issue a SQL statement similar to the following:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=15;
In the previous example, setting the LOG_ARCHIVE_TRACE
parameter to a value of 15 sets trace levels 1, 2, 4, and 8 as described in Section 6.7.3.
Issue the ALTER SYSTEM
statement from a different standby session so that it affects trace output generated by the remote file service (RFS) and ARCn processes when the next archived log is received from the primary database. For example, enter:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=32;
The integer values for the LOG_ARCHIVE_TRACE
parameter represent levels of tracing data. In general, the higher the level, the more detailed the information. The following integer levels are available:
You can combine tracing levels by setting the value of the LOG_ARCHIVE_TRACE
parameter to the sum of the individual levels. For example, setting the parameter to 6 generates level 2 and level 4 trace output.
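For example, the following statement (using the ALTER SYSTEM form shown earlier in this section) sets that combined value of 6 dynamically:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=6;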
The following are examples of the ARC0 trace data generated on the primary site by the archiving of redo log 387 to two different destinations: the service standby1
and the local directory /oracle/dbs
.
Note: The level numbers do not appear in the actual trace output; they are shown here for clarification only.
Level Corresponding entry content (sample)
----- --------------------------------
( 1)  ARC0: Begin archiving log# 1 seq# 387 thrd# 1
( 4)  ARC0: VALIDATE
( 4)  ARC0: PREPARE
( 4)  ARC0: INITIALIZE
( 4)  ARC0: SPOOL
( 8)  ARC0: Creating archive destination 2 : 'standby1'
(16)  ARC0: Issuing standby Create archive destination at 'standby1'
( 8)  ARC0: Creating archive destination 1 : '/oracle/dbs/d1arc1_387.log'
(16)  ARC0: Archiving block 1 count 1 to : 'standby1'
(16)  ARC0: Issuing standby Archive of block 1 count 1 to 'standby1'
(16)  ARC0: Archiving block 1 count 1 to : '/oracle/dbs/d1arc1_387.log'
( 8)  ARC0: Closing archive destination 2 : standby1
(16)  ARC0: Issuing standby Close archive destination at 'standby1'
( 8)  ARC0: Closing archive destination 1 : /oracle/dbs/d1arc1_387.log
( 4)  ARC0: FINISH
( 2)  ARC0: Archival success destination 2 : 'standby1'
( 2)  ARC0: Archival success destination 1 : '/oracle/dbs/d1arc1_387.log'
( 4)  ARC0: COMPLETE, all destinations archived
(16)  ARC0: ArchivedLog entry added: /oracle/dbs/d1arc1_387.log
(16)  ARC0: ArchivedLog entry added: standby1
( 4)  ARC0: ARCHIVED
( 1)  ARC0: Completed archiving log# 1 seq# 387 thrd# 1
(32)  Propagating archive 0 destination version 0 to version 2
         Propagating archive 0 state version 0 to version 2
         Propagating archive 1 destination version 0 to version 2
         Propagating archive 1 state version 0 to version 2
         Propagating archive 2 destination version 0 to version 1
         Propagating archive 2 state version 0 to version 1
         Propagating archive 3 destination version 0 to version 1
         Propagating archive 3 state version 0 to version 1
         Propagating archive 4 destination version 0 to version 1
         Propagating archive 4 state version 0 to version 1
(64)  ARCH: changing ARC0 KCRRNOARCH->KCRRSCHED
         ARCH: STARTING ARCH PROCESSES
         ARCH: changing ARC0 KCRRSCHED->KCRRSTART
         ARCH: invoking ARC0
         ARC0: changing ARC0 KCRRSTART->KCRRACTIVE
         ARCH: Initializing ARC0
         ARCH: ARC0 invoked
         ARCH: STARTING ARCH PROCESSES COMPLETE
         ARC0 started with pid=8
         ARC0: Archival started
The following is the trace data generated by the RFS process on the standby site as it receives archived log 387 in directory /stby
and applies it to the standby database:
Level Trace output (sample)
----- ------------------
( 4)  RFS: Startup received from ARCH pid 9272
( 4)  RFS: Notifier
( 4)  RFS: Attaching to standby instance
( 1)  RFS: Begin archive log# 2 seq# 387 thrd# 1
(32)  Propagating archive 5 destination version 0 to version 2
(32)  Propagating archive 5 state version 0 to version 1
( 8)  RFS: Creating archive destination file: /stby/parc1_387.log
(16)  RFS: Archiving block 1 count 11
( 1)  RFS: Completed archive log# 2 seq# 387 thrd# 1
( 8)  RFS: Closing archive destination file: /stby/parc1_387.log
(16)  RFS: ArchivedLog entry added: /stby/parc1_387.log
( 1)  RFS: Archivelog seq# 387 thrd# 1 available 04/02/99 09:40:53
( 4)  RFS: Detaching from standby instance
( 4)  RFS: Shutdown received from ARCH pid 9272