Oracle9i Database Administrator's Guide Release 2 (9.2) Part Number A96521-01
The Oracle LogMiner utility enables you to query redo logs through a SQL interface. Redo logs contain information about the history of activity on a database.
This chapter describes LogMiner functionality as it is used from the command line. You also have the option of accessing LogMiner functionality through the Oracle LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.
All changes made to user data or to the data dictionary are recorded in the Oracle redo logs. Therefore, redo logs contain all the necessary information to perform recovery operations. Because redo log data is often kept in archived files, the data is already available. To ensure that redo logs contain useful information, you should enable at least minimal supplemental logging.
The following are some of the potential uses for data contained in redo logs:

- Pinpointing when a logical corruption to the database may have begun: for example, deleting the wrong rows because of an error in a WHERE clause, updating rows with incorrect values, dropping the wrong index, and so forth.

See Also:
See Extracting Actual Data Values from Redo Logs for details about how you can use LogMiner to accomplish this.

Oracle Corporation provides SQL access to the redo logs through LogMiner, which is part of the Oracle database server. LogMiner presents the information in the redo logs through the V$LOGMNR_CONTENTS fixed view. This view contains historical information about changes made to the database including, but not limited to, the following:
- The type of change made to the database (INSERT, UPDATE, DELETE, or DDL).
- The SCN at which a change was made (SCN column).
- The SCN at which a change was committed (COMMIT_SCN column).
- The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN columns).
- The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).
- The name of the user who issued the DDL or DML statement to make the change (USERNAME column).
- Reconstructed SQL statements showing SQL that is equivalent (but not necessarily identical) to the SQL used to generate the redo records (SQL_REDO column). If a password is part of the statement in a SQL_REDO column, the password is encrypted.
- Reconstructed SQL statements showing the SQL needed to undo the change (SQL_UNDO column). SQL_UNDO columns that correspond to DDL statements are always NULL. Similarly, the SQL_UNDO column may be NULL for some datatypes and for rolled back operations.

The redo logs contain internally generated numerical identifiers to identify tables and their associated columns. To reconstruct SQL statements, LogMiner needs to know how the internal identifiers map to user-defined names. This mapping information is stored in the data dictionary for the database. LogMiner provides a procedure (DBMS_LOGMNR_D.BUILD) that lets you extract the data dictionary.
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for a complete description of the DBMS_LOGMNR_D.BUILD procedure
The following section describes redo logs and dictionary files in further detail.
Before you begin using LogMiner, it is important to understand how LogMiner works with redo logs and dictionary files. Understanding these concepts will help you get accurate results and plan the use of your system resources.
When you run LogMiner, you specify the names of the redo logs that you want to analyze. LogMiner retrieves information from those redo logs and returns it through the V$LOGMNR_CONTENTS view. To ensure that the redo logs contain information of value to you, you must enable at least minimal supplemental logging. See Supplemental Logging.
You can then use SQL to query the V$LOGMNR_CONTENTS view, as you would any other view. Each select operation that you perform against the V$LOGMNR_CONTENTS view causes the redo logs to be read sequentially.
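For instance, a minimal query might look like the following (the choice of columns here is illustrative only; any columns of V$LOGMNR_CONTENTS can be selected):

SQL> SELECT OPERATION, SQL_REDO FROM V$LOGMNR_CONTENTS;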
Keep the following things in mind about redo logs:

- Support for LOB and LONG datatypes is available as of release 9.2, but only for redo logs generated on a release 9.2 Oracle database.
- The information that LogMiner retrieves from the redo logs is returned through the V$LOGMNR_CONTENTS view.

To determine which redo logs are being analyzed in the current LogMiner session, you can look at the V$LOGMNR_LOGS view, which contains one row for each redo log.
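A minimal sketch of such a query (FILENAME is one of the columns of V$LOGMNR_LOGS):

SQL> SELECT FILENAME FROM V$LOGMNR_LOGS;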
To fully translate the contents of redo logs, LogMiner requires access to a database dictionary.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as hex bytes.
For example, instead of the SQL statement:
INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);
LogMiner will display:
insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'), hextoraw('c306'));
A LogMiner dictionary file contains information that identifies the database it was created from and the time it was created. This information is used to validate the dictionary against the selected redo logs, automatically detecting any mismatch between LogMiner's internal dictionary and the redo logs.
The dictionary file must have the same database character set and be created from the same database as the redo logs being analyzed. However, once the dictionary is extracted, you can use it to mine the redo logs of that database in a separate database instance without being connected to the source database.
Extracting a dictionary file also prevents problems that can occur when the current data dictionary contains only the newest table definitions. For instance, if a table you are searching for was dropped sometime in the past, the current dictionary will not contain any references to it.
LogMiner gives you three choices for your source dictionary:

- Extracting the dictionary to a flat file
- Extracting the dictionary to the redo logs
- Using the online catalog

Each of these choices is described in the sections that follow.
When the dictionary is in a flat file, fewer system resources are used than when it is contained in the redo logs. It is recommended that you regularly back up the dictionary extracts to ensure correct analysis of older redo logs.
To extract database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_FLAT_FILE option.
Be sure that no DDL operations occur while the dictionary is being built.
The following steps describe how to extract a dictionary to a flat file (including extra steps you must take if you are using Oracle8). Steps 1 through 4 are preparation steps. You only need to do them once, and then you can extract a dictionary to a flat file as many times as you wish.
1. The DBMS_LOGMNR_D.BUILD procedure requires access to a directory where it can place the dictionary file. Because PL/SQL procedures do not normally access user directories, you must specify a directory for use by the DBMS_LOGMNR_D.BUILD procedure or the procedure will fail. To specify a directory, set the initialization parameter UTL_FILE_DIR in the init.ora file.
See Also:
Oracle9i Database Reference for more information about the UTL_FILE_DIR initialization parameter
For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the dictionary file is placed, enter the following in the init.ora file:

UTL_FILE_DIR = /oracle/database

Remember that for the changes to the init.ora file to take effect, you must stop and restart the database.
2. If you are using Oracle8, copy the dbmslmd.sql script, which is contained in the $ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same directory in the Oracle8 database. For example, enter:

% cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
3. If the database is not already running, start it. The STARTUP command mounts and opens the database:

SQL> STARTUP
4. If you are using Oracle8, execute the dbmslmd.sql script on the 8.0 database to install the DBMS_LOGMNR_D package. For example, enter:

@dbmslmd.sql

You may need to enter the complete path to the script.
5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify a filename for the dictionary and a directory path name for the file. This procedure creates the dictionary file. For example, enter the following to create the file dictionary.ora in /oracle/database:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
  2  '/oracle/database/', -
  3  OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
You could also specify a filename and location without specifying the STORE_IN_FLAT_FILE option. The result would be the same.
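For instance, assuming the same filename and directory as in the previous step, the following call produces an identical dictionary file:

SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
  2  '/oracle/database/');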
To extract a dictionary to the redo logs, the database must be open and in ARCHIVELOG
mode and archiving must be enabled. While the dictionary is being extracted to the redo log stream, no DDL statements can be executed. Therefore, the dictionary snapshot extracted to the redo logs is guaranteed to be consistent, whereas the dictionary extracted to a flat file is not.
To extract database dictionary information to the redo logs, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option. Do not specify a filename or location.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( -
  2  OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
To ensure that the redo logs contain information of value to you, you must enable at least minimal supplemental logging. See Supplemental Logging.
See Also:
Oracle9i Recovery Manager User's Guide for more information about ARCHIVELOG mode and archiving the redo logs
The process of extracting the dictionary to the redo logs does consume database resources, but if you limit the extraction to off-peak hours, this should not be a problem, and it is faster than extracting to a flat file. Depending on the size of the dictionary, it may be contained in multiple redo logs. Provided the relevant redo logs have been archived, you can find out which redo logs contain the start and end of an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as follows:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';
The names of the start and end redo logs, and possibly other logs in between them, are specified with the ADD_LOGFILE procedure when you are preparing to start a LogMiner session.
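For instance, a sketch of adding the dictionary-bearing logs (the log names here are hypothetical; substitute the names returned by the V$ARCHIVED_LOG queries above):

-- arch1_100.arc and arch1_101.arc are illustrative names only
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/arch1_100.arc', -
  3  OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/arch1_101.arc', -
  3  OPTIONS => DBMS_LOGMNR.ADDFILE);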
It is recommended that you periodically back up the redo logs so that the information is saved and available at a later date. Ideally, this will not involve any extra steps because if your database is being properly managed, there should already be a process in place for backing up and restoring archived redo logs. Again, because of the time required, it is good practice to do this during off-peak hours.
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Using the online catalog means that you do not have to bother extracting a dictionary to a flat file or to the redo logs. In addition to using the online catalog to analyze online redo logs, you can use it to analyze archived redo logs provided you are on the same system that generated the archived redo logs.
The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you need for your analysis.
Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as the table is altered, the online catalog no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table. Instead, LogMiner generates nonexecutable SQL in the SQL_REDO column (including hex-to-raw formatting of binary values) similar to the following example:

insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'), hextoraw('c306'));
The online catalog option requires that the database be open. It is not valid with the DDL_DICT_TRACKING option.
LogMiner automatically builds its own internal dictionary from the source dictionary that you specify when you start LogMiner (either a flat file dictionary, a dictionary in the redo logs, or an online catalog).
If your source dictionary is a flat file dictionary or a dictionary in the redo logs, you can use the DDL_DICT_TRACKING option to direct LogMiner to track data definition language (DDL) statements. DDL tracking is disabled by default. To enable it, use the OPTIONS parameter to specify DDL_DICT_TRACKING when you start LogMiner. For example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DDL_DICT_TRACKING);
With this option set, LogMiner applies any DDL statements seen in the redo logs to its internal dictionary. For example, to see all the DDL statements executed by user SYS, you could issue the following query:

SQL> SELECT USERNAME, SQL_REDO
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE USERNAME = 'SYS' AND OPERATION = 'DDL';
The information returned might be similar to the following, although the actual information and how it is displayed will be different on your screen.
USERNAME   SQL_REDO
SYS        ALTER TABLE SCOTT.ADDRESS ADD CODE NUMBER;
SYS        CREATE USER KATHY IDENTIFIED BY VALUES 'E4C8B920449B4C32'
           DEFAULT TABLESPACE TS1;
Keep the following in mind when you use the DDL_DICT_TRACKING option:

- The DDL_DICT_TRACKING option is not valid with the DICT_FROM_ONLINE_CATALOG option.
- The DDL_DICT_TRACKING option requires that the database be open.

The ability to track DDL statements helps you monitor schema evolution because the SQL statements used to change the logical structure of a table (DDL operations such as adding or dropping columns) can be reconstructed. In addition, data manipulation language (DML) operations performed on new tables created after the dictionary was extracted can also be shown.
Because LogMiner automatically assigns versions to the database metadata, it will detect and notify you of any mismatch between its internal dictionary and the redo logs.
When you are using LogMiner, keep the recommendations and restrictions described in the following sections in mind.
Oracle Corporation recommends that you take the following into consideration when you are using LogMiner:
- By default, all LogMiner tables are created in the SYSTEM tablespace. Use the DBMS_LOGMNR_D.SET_TABLESPACE routine to re-create all LogMiner tables in an alternate tablespace. For example, the following statement will re-create all LogMiner tables to use the logmnrts$ tablespace:
SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for a full description of the DBMS_LOGMNR_D.SET_TABLESPACE routine
The following restrictions apply when you are using LogMiner:

Several features require that supplemental logging be turned on. (Note that in Oracle9i release 9.0.1, supplemental logging was always on; it was not available at all in releases prior to 9.0.1. But in release 9.2, you must specifically turn on supplemental logging; otherwise, it will not be enabled.) For example, the following features require supplemental logging:

- Support for direct path inserts (which also requires that ARCHIVELOG mode be enabled).
- Generating SQL_REDO and SQL_UNDO with primary key information for updates.
- LONG and LOB datatypes are supported only if supplemental logging is enabled.
LogMiner can potentially be dealing with large amounts of information. There are several methods you can use to limit the information that is returned to the V$LOGMNR_CONTENTS view, as well as the speed at which it is returned. These options are specified when you start LogMiner.
When you use the COMMITTED_DATA_ONLY option, only rows belonging to committed transactions are shown in the V$LOGMNR_CONTENTS view. This enables you to filter out rolled back transactions, transactions that are in progress, and internal operations.
To enable this option, you specify it when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.COMMITTED_DATA_ONLY);
When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML operations that belong to the same transaction. Transactions are returned in the order in which they were committed.
If long-running transactions are present in the redo logs being analyzed, use of this option may cause an "Out of Memory" error.
The default is for LogMiner to show rows corresponding to all transactions and to return them in the order in which they are encountered in the redo logs.
For example, suppose you start LogMiner without specifying COMMITTED_DATA_ONLY and you execute the following query:

SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
  2         USERNAME AS "USER",
  3         SQL_REDO
  4  FROM V$LOGMNR_CONTENTS;
The output would be as follows. Both committed and uncommitted transactions are returned and rows from different transactions are interwoven.
XID      USER    SQL_REDO
1.5.123  SCOTT   SET TRANSACTION READ WRITE;
1.5.123  SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME") VALUES (8782, 'Frost');
1.6.124  KATHY   SET TRANSACTION READ WRITE;
1.6.124  KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY") VALUES (8839, 'Cummings', '415-321-1234');
1.6.124  KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY") VALUES (7934, 'Yeats', '033-334-1234');
1.5.123  SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME") VALUES (8566, 'Browning');
1.6.124  KATHY   COMMIT;
1.7.234  GOUTAM  SET TRANSACTION READ WRITE;
1.5.123  SCOTT   COMMIT;
1.7.234  GOUTAM  INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY") VALUES (8499, 'Emerson', '202-334-1234');
Now suppose you start LogMiner, but this time you specify the COMMITTED_DATA_ONLY option. If you executed the previous query again, the output would look as follows:

XID      USER    SQL_REDO
1.6.124  KATHY   SET TRANSACTION READ WRITE;
1.6.124  KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY") VALUES (8839, 'Cummings', '415-321-1234');
1.6.124  KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY") VALUES (7934, 'Yeats', '033-334-1234');
1.6.124  KATHY   COMMIT;
1.5.123  SCOTT   SET TRANSACTION READ WRITE;
1.5.123  SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME") VALUES (8566, 'Browning');
1.5.123  SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME") VALUES (8782, 'Frost');
1.5.123  SCOTT   COMMIT;
Because the commit for the 1.6.124 transaction happened before the commit for the 1.5.123 transaction, the entire 1.6.124 transaction is returned first. This is true even though the 1.5.123 transaction started before the 1.6.124 transaction. None of the 1.7.234 transaction is returned because a commit was never issued for it.
When you use the SKIP_CORRUPTION option, any corruptions in the redo logs are skipped during select operations from the V$LOGMNR_CONTENTS view. Rows that are retrieved after the corruption are flagged with a "Log File Corruption Encountered" message. Additionally, for every corrupt redo record encountered, an informational row is returned that indicates how many blocks were skipped.
The default is for the select operation to terminate at the first corruption it encounters in the redo log.
To enable this option, you specify it when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.SKIP_CORRUPTION);
To filter data by time, set the STARTTIME and ENDTIME parameters. The procedure expects date values. Use the TO_DATE function to specify date and time, as in this example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => '/oracle/dictionary.ora', -
  3  STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
  4  ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read from start to end for each SELECT statement issued.
The timestamps should not be used to infer ordering of redo records. You can infer the order of redo records by using the SCN.
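For instance, a sketch of a query that selects the SCN alongside each reconstructed statement so that ordering can be judged by SCN rather than by timestamp (the SEG_NAME filter is illustrative only):

-- The filter on SEG_NAME = 'EMP' is an illustrative assumption
SQL> SELECT SCN, TIMESTAMP, SQL_REDO
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE SEG_NAME = 'EMP';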
To filter data by SCN (system change number), use the STARTSCN and ENDSCN parameters, as in this example:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => '/oracle/dictionary.ora', -
  3  STARTSCN => 100, -
  4  ENDSCN => 150);
The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters in situations where all are specified.
If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read from start to end for each SELECT statement issued.
LogMiner information is contained in the following views. You can use SQL to query them as you would any other view.
- V$LOGMNR_CONTENTS -- Shows changes made to user and table information.
- V$LOGMNR_DICTIONARY -- Shows information about the LogMiner dictionary file, provided the dictionary was created using the STORE_IN_FLAT_FILE option. The information shown includes the database name and status information.
- V$LOGMNR_LOGS -- Shows information about specified redo logs. There is one row for each redo log.
- V$LOGMNR_PARAMETERS -- Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.
See Also:
Oracle9i Database Reference for detailed information about the contents of these views
The rest of this section discusses the following topics with regard to accessing LogMiner information:
LogMiner output is contained in the V$LOGMNR_CONTENTS view. After LogMiner is started, you can issue SQL statements at the command line to query the data contained in V$LOGMNR_CONTENTS.
When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo logs are read sequentially. Translated information from the redo logs is returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup are met or the end of the redo log is reached.
LogMiner returns all the rows in SCN order unless you have used the COMMITTED_DATA_ONLY option to specify that only committed transactions should be retrieved. SCN order is the order normally applied in media recovery.
For example, suppose you wanted to find out about any delete operations that a user named Ron had performed on the scott.orders table. You could issue a query similar to the following:
SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'ORDERS' AND
  4  OPERATION = 'DELETE' AND USERNAME = 'RON';
The following output would be produced. The formatting may be different on your display than that shown here.
OPERATION  SQL_REDO                        SQL_UNDO
DELETE     delete from "SCOTT"."ORDERS"    insert into "SCOTT"."ORDERS"
           where "ORDER_NO" = 2 and        ("ORDER_NO","QTY","EXPR_SHIP")
           "QTY" = 3 and                   values (2, 3, 'Y');
           "EXPR_SHIP" = 'Y' and
           ROWID = 'AAABM8AABAAALm/AAA';

DELETE     delete from "SCOTT"."ORDERS"    insert into "SCOTT"."ORDERS"
           where "ORDER_NO" = 4 and        ("ORDER_NO","QTY","EXPR_SHIP")
           "QTY" = 7 and                   values (4, 7, 'Y');
           "EXPR_SHIP" = 'Y' and
           ROWID = 'AAABM8AABAAALm/AAC';
This output shows that user Ron deleted two rows from the scott.orders table. The reconstructed SQL statements are equivalent, but not necessarily identical, to the actual statement that Ron issued. The reason for this is that the original WHERE clause is not logged in the redo logs, so LogMiner can only show deleted (or updated or inserted) rows individually.

Therefore, even though a single DELETE statement may have been responsible for the deletion of both rows, the output in V$LOGMNR_CONTENTS does not reflect that. Thus, the actual DELETE statement may have been DELETE FROM SCOTT.ORDERS WHERE EXPR_SHIP = 'Y' or it might have been DELETE FROM SCOTT.ORDERS WHERE QTY < 8.
By default, SQL_REDO and SQL_UNDO statements are ended with a semicolon. Depending on how you plan to use the reconstructed statements, you may or may not want them to include the semicolon. To suppress the semicolon, specify the DBMS_LOGMNR.NO_SQL_DELIMITER option when you start LogMiner.
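For instance, a minimal sketch of starting LogMiner with the delimiter suppressed:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.NO_SQL_DELIMITER);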
Note that if the STATUS field of V$LOGMNR_CONTENTS contains dbms_logmnr.invalid_sql, then the SQL cannot be executed.
Sometimes a query can result in a large number of columns containing reconstructed SQL statements, which can be visually busy and hard to read. LogMiner provides the DBMS_LOGMNR.PRINT_PRETTY_SQL option to address this problem. The PRINT_PRETTY_SQL option formats the reconstructed SQL statements as follows, which makes them easier to read:
insert into "SCOTT"."EMP" values "EMPNO": 5505, "ENAME": "Parker", "SAL": 9000 "DEPTNO": NULL; update "SCOTT"."EMP" set "EMPNO" = 5505 and "SAL" = 9000 where "EMPNO" = 5505 and "SAL" = 9000 and "ROWID" = AABBCEXFGHA;
SQL statements that are reconstructed when the PRINT_PRETTY_SQL option is enabled are not executable because they do not use standard SQL syntax.
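For instance, a minimal sketch of starting LogMiner with this formatting enabled:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.PRINT_PRETTY_SQL);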
LogMiner lets you make queries based on actual data values. For instance, you could perform a query to show all updates to scott.emp that increased sal more than a certain amount. Data such as this can be used to analyze system behavior and to perform auditing tasks.
LogMiner data extraction from redo logs is performed using two mine functions: DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT. These functions are part of the DBMS_LOGMNR package. Support for these mine functions is provided by the REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view.
The following is an example of how you could use the MINE_VALUE function to select all updates to scott.emp that increased the sal column to more than twice its original value:
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
  2  WHERE
  3  SEG_NAME = 'emp' AND
  4  SEG_OWNER = 'SCOTT' AND
  5  OPERATION = 'UPDATE' AND
  6  DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') >
  7  2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL');
As shown in this example, the MINE_VALUE function takes two arguments. The first specifies whether to mine the redo (REDO_VALUE) or undo (UNDO_VALUE) portion of the data. The second argument is a string that specifies the fully qualified name of the column to be mined (in this case, SCOTT.EMP.SAL). The MINE_VALUE function always returns a string that can be converted back to the original datatype.
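For instance, because SCOTT.EMP.SAL is a NUMBER column, the returned string can be converted back with TO_NUMBER. This query is a sketch, not from the original text:

SQL> SELECT TO_NUMBER(DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL')) AS NEW_SAL
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE OPERATION = 'UPDATE' AND
  4  DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'SCOTT.EMP.SAL') = 1;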
If the MINE_VALUE function returns a NULL value, it can mean either:

- The specified column is not present in the redo or undo portion of the data, or
- The specified column is present and has a null value.
To distinguish between these two cases, use the DBMS_LOGMNR.COLUMN_PRESENT function, which returns a 1 if the column is present in the redo or undo portion of the data. Otherwise, it returns a 0. For example, suppose you wanted to find out the increment by which the values in the sal column were modified and the corresponding transaction identifier. You could issue the following query:
SQL> SELECT
  2  (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
  3  (DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') -
  4  DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL')) AS INCR_SAL
  5  FROM V$LOGMNR_CONTENTS
  6  WHERE
  7  DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
  8  DBMS_LOGMNR.COLUMN_PRESENT(UNDO_VALUE, 'SCOTT.EMP.SAL') = 1 AND
  9  OPERATION = 'UPDATE';
The following usage rules apply to the MINE_VALUE and COLUMN_PRESENT functions:

- They must be invoked in the context of a select operation on the V$LOGMNR_CONTENTS view.
- They cannot be used with the LONG, LOB, ADT, or COLLECTION datatypes.
- If the column is of type DATE, the string that is returned is formatted in canonical form (DD-MON-YYYY HH24:MI:SS.SS) regardless of the date format of the current session.
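For instance, a sketch that mines a DATE column (the HIREDATE column is assumed here for illustration); the string comes back in the canonical format shown above even if the session's NLS_DATE_FORMAT differs:

-- SCOTT.EMP.HIREDATE is assumed for illustration
SQL> SELECT DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.HIREDATE') AS NEW_HIREDATE
  2  FROM V$LOGMNR_CONTENTS
  3  WHERE OPERATION = 'UPDATE' AND
  4  DBMS_LOGMNR.COLUMN_PRESENT(REDO_VALUE, 'SCOTT.EMP.HIREDATE') = 1;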
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for a description of the DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT functions
Redo logs are generally used for instance recovery and media recovery. The data needed for such operations is automatically recorded in the redo logs. However, a redo-based application may require that additional information be logged in the redo logs. For example:

- An application that applies reconstructed SQL statements to a different database must identify an updated row by its primary key, not by its ROWID, which is the usual method used by LogMiner. (Primary keys are not, by default, logged in the redo logs unless the key itself is changed by the update.)

The default behavior of the Oracle database server is to not provide any supplemental logging at all, which means that certain features will not be supported (see Restrictions). If you want to make full use of LogMiner support, you must enable supplemental logging.
The use of LogMiner with minimal supplemental logging enabled does not have any significant performance impact on the instance generating the redo logs. However, the use of LogMiner with database-wide supplemental logging enabled does impose significant overhead and affects performance.
There are two types of supplemental logging: database supplemental logging and table supplemental logging. Each of these is described in the following sections.
There are two types of database supplemental logging: minimal and identification key logging.
Minimal supplemental logging logs the minimal amount of information needed for LogMiner to identify, group, and merge the REDO operations associated with DML changes. It ensures that LogMiner (and any products building on LogMiner technology) have sufficient information to support chained rows and various storage arrangements such as cluster tables. In most situations, you should at least enable minimal supplemental logging. To do so, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Note: In LogMiner release 9.0.1, minimal supplemental logging was the default behavior. In release 9.2, the default is no supplemental logging. It must be specifically enabled.
Identification key logging enables database-wide before-image logging of primary keys or unique indexes (in the absence of primary keys) for all updates. With this type of logging, an application can identify updated rows logically rather than resorting to ROWIDs.
Identification key logging is necessary when supplemental log data will be the source of change in another database, such as a logical standby.
To enable identification key logging, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
This statement results in all primary key values, database-wide, being logged regardless of whether or not any of them are modified.
If a table does not have a primary key, but has one or more non-null unique key constraints, one of the constraints is chosen arbitrarily for logging as a means of identifying the row getting updated.
If the table has neither a primary key nor a unique index, then all columns except LONG and LOB are supplementally logged. Therefore, Oracle Corporation recommends that when you use supplemental logging, all or most tables be defined to have primary or unique keys.
To disable either minimal or identification key logging, execute the following statement.
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
Keep the following in mind when you use identification key logging:

- Identification key logging applies to UPDATE statements. DELETE statements contain all the column values required to identify a row.

Table supplemental logging uses log groups to log supplemental information. There are two types of log groups:

- Unconditional log groups -- The before-images of the specified columns are logged any time the table is updated, regardless of whether the update affected any of the specified columns. An unconditional log group is created with the ALWAYS clause.
- Conditional log groups -- The before-images of the specified columns are logged only if at least one of the columns in the log group is updated.
To enable supplemental logging that uses unconditional log groups, use the ALWAYS clause as shown in the following example:
SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_parttime (empno, ename, deptno) ALWAYS;
This creates a log group named emp_parttime on scott.emp that consists of the columns empno, ename, and deptno. These columns will be logged every time an UPDATE statement is executed on scott.emp, regardless of whether or not the update affected them. If you wanted to have the entire row image logged any time an update was made, you could create a log group that contained all the columns in the table.
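For instance, a sketch of a whole-row log group for scott.emp, assuming the standard demo table's column list (the group name emp_allcols and the column list are illustrative; adjust them to your table's actual columns):

-- emp_allcols and the column list below are illustrative assumptions
SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_allcols
  3  (empno, ename, job, mgr, hiredate, sal, comm, deptno) ALWAYS;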
To enable supplemental logging that uses conditional log groups, omit the ALWAYS clause from your ALTER TABLE statement, as shown in the following example:
SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_fulltime (empno, ename, deptno);
This creates a log group named emp_fulltime on scott.emp. Just like the previous example, it consists of the columns empno, ename, and deptno. But because the ALWAYS clause was omitted, before-images of the columns will be logged only if at least one of the columns is updated.
This section describes the steps in a typical LogMiner session. Each step is described in its own subsection.
To run LogMiner, you use the DBMS_LOGMNR PL/SQL package. Additionally, you might also use the DBMS_LOGMNR_D package if you choose to extract a dictionary rather than use the online catalog.

The DBMS_LOGMNR package contains the procedures used to initialize and run LogMiner, including interfaces to specify names of redo logs, filter criteria, and session characteristics. The DBMS_LOGMNR_D package queries the dictionary tables of the current database to create a LogMiner dictionary file.
The LogMiner packages are owned by the SYS schema. Therefore, if you are not connected as user SYS, you must include SYS in your call. For example:
EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR
There are initial setup activities that you must perform before using LogMiner for the first time. You only need to perform these activities once, not every time you use LogMiner:

1. Enable at least minimal supplemental logging:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

See Supplemental Logging for more information.

2. Optionally, use the DBMS_LOGMNR_D.SET_TABLESPACE routine to re-create all LogMiner tables in an alternate tablespace. For example:

SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');

See Recommendations for more information.
To use LogMiner, you must supply it with a dictionary by doing one of the following:

- Extract the database dictionary to a flat file. See Extracting the Dictionary to a Flat File.
- Extract the database dictionary to the redo logs. See Extracting a Dictionary to the Redo Logs.
- Specify use of the online catalog with the DICT_FROM_ONLINE_CATALOG option when you start LogMiner. See Using the Online Catalog.

Before you can start LogMiner, you must specify the redo logs that you want to analyze. To do so, execute the DBMS_LOGMNR.ADD_LOGFILE procedure, as demonstrated in the following steps. You can add and remove redo logs in any order.
Note: If you will be mining in the same instance that is generating the redo logs, you only need to specify one archived redo log and the CONTINUOUS_MINE option; LogMiner automatically adds and mines subsequent archived redo logs. See Continuous Mining.
1. If the database is not already running, start it:

SQL> STARTUP

2. Create a list of redo logs. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE procedure to signal that this is the beginning of a new list. For example, enter the following to specify /oracle/logs/log1.f:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log1.f', -
  3  OPTIONS => DBMS_LOGMNR.NEW);

3. If desired, add more redo logs by specifying the ADDFILE option of the DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add /oracle/logs/log2.f:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log2.f', -
  3  OPTIONS => DBMS_LOGMNR.ADDFILE);

The OPTIONS parameter is optional when you are adding additional redo logs. For example, you could simply enter the following:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME=>'/oracle/logs/log2.f');

4. If desired, remove redo logs by specifying the REMOVEFILE option of the DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to remove /oracle/logs/log2.f:

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => '/oracle/logs/log2.f', -
  3  OPTIONS => DBMS_LOGMNR.REMOVEFILE);
The continuous mining option is useful if you are mining in the same instance that is generating the redo logs. When you plan to use the continuous mining option, you only need to specify one archived redo log before starting LogMiner. Then, when you start LogMiner, specify the DBMS_LOGMNR.CONTINUOUS_MINE option, which directs LogMiner to automatically add and mine subsequent archived redo logs and to use the online catalog.
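For instance, a sketch of starting LogMiner with continuous mining (combined here with the online catalog option, per the description above):

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
  3  DBMS_LOGMNR.CONTINUOUS_MINE);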
After you have created a dictionary file and specified which redo logs to analyze, you can start a LogMiner session. Take the following steps:
1. Execute the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner.

It is recommended that you specify a dictionary option. If you do not, LogMiner cannot translate internal object identifiers and datatypes to object names and external data formats. Therefore, it would return internal object IDs and present data as hex bytes. Additionally, the MINE_VALUE and COLUMN_PRESENT functions cannot be used without a dictionary.
If you are specifying the name of a flat file dictionary, you must supply a fully qualified filename for the dictionary file. For example, to start LogMiner using /oracle/database/dictionary.ora, issue the following command:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME =>'/oracle/database/dictionary.ora');
If you are not specifying a flat file dictionary name, then use the OPTIONS parameter to specify either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option.
If you specify DICT_FROM_REDO_LOGS, LogMiner expects to find a dictionary in the redo logs that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To determine which redo logs contain a dictionary, look at the V$ARCHIVED_LOG view. See Extracting a Dictionary to the Redo Logs for an example.
For more information on using the online catalog, see Using the Online Catalog.
2. Optionally, use the OPTIONS parameter to specify additional characteristics of your LogMiner session. For example, you might decide to use the online catalog as your dictionary and to have only committed transactions shown in the V$LOGMNR_CONTENTS view, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => -
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
  3  DBMS_LOGMNR.COMMITTED_DATA_ONLY);
The following list is a summary of LogMiner settings that you can specify with the OPTIONS parameter and where to find more information about them:

- DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG -- See Using the Online Catalog
- DBMS_LOGMNR.DICT_FROM_REDO_LOGS -- See step 1 in this list
- DBMS_LOGMNR.COMMITTED_DATA_ONLY -- See Showing Only Committed Transactions
- DBMS_LOGMNR.SKIP_CORRUPTION -- See Skipping Redo Corruptions
- DBMS_LOGMNR.DDL_DICT_TRACKING -- See Tracking DDL Statements
- DBMS_LOGMNR.NEW, DBMS_LOGMNR.ADDFILE, and DBMS_LOGMNR.REMOVEFILE -- See Specify Redo Logs for Analysis
- DBMS_LOGMNR.NO_SQL_DELIMITER -- See Formatting of Returned Data
- DBMS_LOGMNR.PRINT_PRETTY_SQL -- See Formatting of Returned Data
- DBMS_LOGMNR.CONTINUOUS_MINE -- See Continuous Mining

You can execute the DBMS_LOGMNR.START_LOGMNR procedure multiple times, specifying different options each time. This can be useful, for example, if you did not get the desired results from a query of V$LOGMNR_CONTENTS and want to restart LogMiner with different options. You do not need to re-add redo logs that were already added for a previous session.
At this point, LogMiner is started and you can perform queries against the V$LOGMNR_CONTENTS view. See Querying V$LOGMNR_CONTENTS for examples.
To properly end a LogMiner session, use the DBMS_LOGMNR.END_LOGMNR procedure, as follows:
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
This procedure closes all the redo logs and allows all the database and system resources allocated by LogMiner to be released.
If this procedure is not executed, LogMiner retains all its allocated resources until the end of the Oracle session in which it was invoked. It is particularly important to use this procedure to end the LogMiner session if either the DDL_DICT_TRACKING option or the DICT_FROM_REDO_LOGS option was used.
This section provides the following example uses of LogMiner.
This example shows how to see all changes made to the database in a specific time range by one of your users: joedevo.
Connect to the database and then take the following steps:
To use LogMiner to analyze joedevo's data, you must either create a dictionary file before joedevo makes any changes or specify use of the online catalog at LogMiner startup. See Extract a Dictionary for examples of creating dictionaries.

Assume that joedevo has made some changes to the database. You can now specify the names of the redo logs that you want to analyze, as follows:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => 'log1orc1.ora', -
  3  OPTIONS => DBMS_LOGMNR.NEW);
If desired, add additional redo logs, as follows:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  2  LOGFILENAME => 'log2orc1.ora', -
  3  OPTIONS => DBMS_LOGMNR.ADDFILE);
Start LogMiner and limit the search to the specified time range:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  DICTFILENAME => 'orcldict.ora', -
  3  STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
  4  ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
At this point, the V$LOGMNR_CONTENTS view is available for queries. You decide to find all of the changes made by user joedevo to the salary table. Execute the following SELECT statement:
SQL> SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS
  2  WHERE USERNAME = 'joedevo' AND SEG_NAME = 'salary';
For both the SQL_REDO and SQL_UNDO columns, two rows are returned (the format of the data display will be different on your screen). You discover that joedevo requested two operations: he deleted his old salary and then inserted a new, higher salary. You now have the data necessary to undo this operation.
SQL_REDO                                SQL_UNDO
--------                                --------
delete from SALARY                      insert into SALARY(NAME, EMPNO, SAL)
where EMPNO = 12345                     values ('JOEDEVO', 12345, 500)
and ROWID = 'AAABOOAABAAEPCABA';

insert into SALARY(NAME, EMPNO, SAL)    delete from SALARY
values ('JOEDEVO', 12345, 2500)         where EMPNO = 12345
                                        and ROWID = 'AAABOOAABAAEPCABA';

2 rows selected
In this example, assume you manage a direct marketing database and want to determine how productive the customer contacts have been in generating revenue for a two-week period in August. Assume that you have already created the dictionary and added the redo logs you want to search (as demonstrated in the previous example). Take the following steps:

1. Start LogMiner and specify the time range of interest:

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  2  STARTTIME => TO_DATE('07-Aug-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
  3  ENDTIME => TO_DATE('21-Aug-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'), -
  4  DICTFILENAME => '/usr/local/dict.ora');

2. Query the V$LOGMNR_CONTENTS view to determine which tables were modified in the time range you specified, as shown in the following example. (This query filters out system tables that traditionally have a $ in their name.)

SQL> SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits FROM
  2  V$LOGMNR_CONTENTS WHERE SEG_NAME NOT LIKE '%$' GROUP BY
  3  SEG_OWNER, SEG_NAME;
SEG_OWNER  SEG_NAME   Hits
---------  ---------  ----
CUST       ACCOUNT     384
SCOTT      EMP          12
SYS        DONOR        12
UNIV       DONOR       234
UNIV       EXECDONOR   325
UNIV       MEGADONOR    32
The values in the Hits column show the number of times that the named table had an insert, delete, or update operation performed on it during the two-week period specified in the query.
Copyright © 2001, 2002 Oracle Corporation. All Rights Reserved.