This chapter contains information about identifying and resolving common problems in a Streams environment.
If a capture process is not capturing changes as expected or if you are having other problems with a capture process, then use the following checklist to identify and resolve capture problems:
A capture process captures changes only when it is enabled. You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE
data dictionary view.
For example, to check whether a capture process named CAPTURE
is enabled, run the following query:
SELECT STATUS FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'CAPTURE';
If the capture process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the capture process is disabled or aborted, then try restarting it. If you do not know why the capture process was disabled or aborted, then check the trace file for the capture process.
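For example, assuming the capture process is named capture, a procedure similar to the following restarts it:

BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture');
END;
/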
If a capture process has not captured recent changes, then the cause may be that the capture process has fallen behind. To check, you can query the V$STREAMS_CAPTURE
dynamic performance view. If capture process latency is high, then you may be able to improve performance by adjusting the setting of the parallelism
capture process parameter.
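For example, assuming a capture process named capture, a query similar to the following estimates capture latency in seconds; it assumes the ENQUEUE_TIME and ENQUEUE_MESSAGE_CREATE_TIME columns of the view:

SELECT CAPTURE_NAME,
       (ENQUEUE_TIME - ENQUEUE_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS
  FROM V$STREAMS_CAPTURE
  WHERE CAPTURE_NAME = 'CAPTURE';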
The LOG_PARALLELISM initialization parameter specifies the level of concurrency for redo allocation within Oracle. If you plan to run one or more capture processes on a database, then this parameter must be set to 1. If this initialization parameter is set higher than one, you may encounter error ORA-01374.
Setting this parameter to 1
does not affect the parallelism of capture. You can set parallelism for a capture process using the SET_PARAMETER
procedure in the DBMS_CAPTURE_ADM
package.
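For example, to set the parallelism parameter to 3 for a capture process named capture, run a procedure similar to the following:

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'parallelism',
    value        => '3');
END;
/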
The LOGMNR_MAX_PERSISTENT_SESSIONS
initialization parameter specifies the maximum number of persistent LogMiner mining sessions that are concurrently active when all sessions are mining redo logs generated by instances. If you plan to run multiple Streams capture processes on a single database, then set this parameter equal to or higher than the number of planned capture processes.
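For example, if you plan to run three capture processes and the database uses a server parameter file, then a statement similar to the following sets the parameter; because it is a static parameter, the new value takes effect after the instance is restarted:

ALTER SYSTEM SET LOGMNR_MAX_PERSISTENT_SESSIONS = 3 SCOPE = SPFILE;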
If you cannot drop a capture process, and you are using multiple capture processes, then it may be because the LOGMNR_MAX_PERSISTENT_SESSIONS
initialization parameter is not set high enough. Try increasing this initialization parameter and retrying the drop capture process operation.
Alternatively, if you do not want to increase the size of this initialization parameter, try stopping at least one of the running capture processes and then retrying the drop capture process operation. If the drop operation succeeds, then restart any capture process you stopped.
If a propagation is not propagating changes as expected, then use the following checklist to identify and resolve propagation problems:
If events are not appearing in the destination queue for a propagation as expected, then the propagation may not be configured to propagate events from the correct source queue to the correct destination queue.
For example, to check the source queue and destination queue for a propagation named dbs1_to_dbs2
, run the following query:
COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A35
COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A35

SELECT p.SOURCE_QUEUE_OWNER||'.'||
       p.SOURCE_QUEUE_NAME||'@'||
       g.GLOBAL_NAME SOURCE_QUEUE,
       p.DESTINATION_QUEUE_OWNER||'.'||
       p.DESTINATION_QUEUE_NAME||'@'||
       p.DESTINATION_DBLINK DESTINATION_QUEUE
  FROM DBA_PROPAGATION p, GLOBAL_NAME g
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2';
Your output looks similar to the following:
Source Queue                        Destination Queue
----------------------------------- -----------------------------------
STRMADMIN.STREAMS_QUEUE@DBS1.NET    STRMADMIN.STREAMS_QUEUE@DBS2.NET
If the propagation is not using the correct queues, then create a new propagation. You may need to remove the existing propagation if it is not appropriate for your environment.
For a propagation job to propagate events, the propagation schedule for the propagation job must be enabled. If events are not being propagated by a propagation as expected, then the schedule for its propagation job may not be enabled.
By running the query in this section, you can find the following information about the schedule for a propagation job: whether the schedule is enabled or disabled, the name of the job queue process used by the propagation job, the number of failures, and the time and message of the last error, if any.
For example, to check whether a propagation job used by a propagation named dbs1_to_dbs2
is enabled, run the following query:
COLUMN DESTINATION_DBLINK HEADING 'Destination|DB Link' FORMAT A15
COLUMN SCHEDULE_DISABLED HEADING 'Schedule' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A7
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 9999
COLUMN LAST_ERROR_TIME HEADING 'Last Error Time' FORMAT A15
COLUMN LAST_ERROR_MSG HEADING 'Last Error Message' FORMAT A18

SELECT p.DESTINATION_DBLINK,
       DECODE(s.SCHEDULE_DISABLED,
              'Y', 'Disabled',
              'N', 'Enabled') SCHEDULE_DISABLED,
       s.PROCESS_NAME,
       s.FAILURES,
       s.LAST_ERROR_TIME,
       s.LAST_ERROR_MSG
  FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2'
  AND p.DESTINATION_DBLINK = s.DESTINATION
  AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
  AND s.QNAME = p.SOURCE_QUEUE_NAME;
If the schedule is enabled currently for the propagation job, then your output looks similar to the following:
Destination                        Number of
DB Link         Schedule Process   Failures Last Error Time Last Error Message
--------------- -------- ------- --------- --------------- ------------------
DBS2.NET        Enabled  J001            0
Try the following actions to correct a problem:

Enable the propagation schedule using the ENABLE_PROPAGATION_SCHEDULE procedure in the DBMS_AQADM package, if you have not done so already, as shown in the example after this list.

Check the trace file for the job queue process used by the propagation job, which is J001 in the previous example.
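For example, assuming the source queue is strmadmin.streams_queue and the destination database link is dbs2.net, a call similar to the following enables the propagation schedule:

BEGIN
  DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE(
    queue_name  => 'strmadmin.streams_queue',
    destination => 'dbs2.net');
END;
/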
Propagation jobs use job queue processes to propagate events. Make sure the JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance that does propagation. It should be set to a value that is high enough to accommodate all of the jobs that run simultaneously.
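For example, a statement similar to the following sets the JOB_QUEUE_PROCESSES initialization parameter to 4 dynamically; choose a value that is high enough for all of the jobs at your site:

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 4;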
Streams queues are secure queues, and security must be configured properly for users to be able to perform operations on them. You may encounter one of the following errors if security is not configured properly for a Streams queue:
Secure queue access must be granted to an agent explicitly for both enqueue and dequeue operations. You grant the agent these privileges using the ENABLE_DB_ACCESS
procedure in the DBMS_AQADM
package.
For example, to grant an agent named explicit_dq the privileges of the database user oe, run the following procedure:
BEGIN
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'explicit_dq',
    db_username => 'oe');
END;
/
To check the privileges of the agents in a database, run the following query:
SELECT AGENT_NAME "Agent", DB_USERNAME "User" FROM DBA_AQ_AGENT_PRIVS;
Your output looks similar to the following:
Agent                          User
------------------------------ ------------------------------
EXPLICIT_ENQ                   OE
APPLY_OE                       OE
EXPLICIT_DQ                    OE
See Also:
"Enabling a User to Perform Operations on a Secure Queue" for a detailed example that grants privileges to an agent |
To enqueue into a secure queue, the SENDER_ID in the message properties must be set to an agent that has secure queue privileges for the queue.
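For example, the following procedure enqueues a non-LCR event and sets the SENDER_ID to an agent named explicit_dq; the queue name, agent name, and payload shown here are only illustrations, so substitute your own values:

DECLARE
  enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
  message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  message_handle     RAW(16);
  event              SYS.AnyData;
BEGIN
  -- The agent must already have secure queue privileges (ENABLE_DB_ACCESS)
  message_properties.SENDER_ID := SYS.AQ$_AGENT('explicit_dq', NULL, NULL);
  event := SYS.AnyData.ConvertVarchar2('example payload');
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enqueue_options,
    message_properties => message_properties,
    payload            => event,
    msgid              => message_handle);
  COMMIT;
END;
/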
See Also:
"Create the Procedure to Enqueue Non-LCR Events" for an example that sets the |
If an apply process is not applying changes as expected, then use the following checklist to identify and resolve apply problems:
An apply process applies changes only when it is enabled. You can check whether an apply process is enabled, disabled, or aborted by querying the DBA_APPLY
data dictionary view.
For example, to check whether an apply process named APPLY
is enabled, run the following query:
SELECT STATUS FROM DBA_APPLY WHERE APPLY_NAME = 'APPLY';
If the apply process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the apply process is disabled or aborted, then try restarting it. If you do not know why the apply process was disabled or aborted, then check the trace file for the apply process.
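For example, assuming the apply process is named apply, a procedure similar to the following restarts it:

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply');
END;
/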
If an apply process has not applied recent changes, then the cause may be that the apply process has fallen behind. You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR
dynamic performance view. If apply process latency is high, then you may be able to improve performance by adjusting the setting of the parallelism
apply process parameter.
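For example, assuming an apply process named apply, a query similar to the following estimates latency in seconds; it assumes the HWM_TIME and HWM_MESSAGE_CREATE_TIME columns of the view:

SELECT APPLY_NAME,
       (HWM_TIME - HWM_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS
  FROM V$STREAMS_APPLY_COORDINATOR
  WHERE APPLY_NAME = 'APPLY';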
An apply process can apply either captured events or user-enqueued events, but not both types of events. If an apply process is not applying events of a certain type, then it may be because the apply process was configured to apply the other type of events. You can check the type of events applied by an apply process by querying the DBA_APPLY
data dictionary view.
For example, to check whether an apply process named APPLY
applies captured or user-enqueued events, run the following query:
COLUMN APPLY_CAPTURED HEADING 'Type of Events Applied' FORMAT A25

SELECT DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE APPLY_NAME = 'APPLY';
If the apply process applies captured events, then your output looks similar to the following:
Type of Events Applied
-------------------------
Captured
If an apply process is not applying the expected type of events, then you may need to create a new apply process to apply the events.
You can use PL/SQL procedures to handle events dequeued by an apply process in a customized way. These handlers include DML handlers, DDL handlers, and message handlers. If an apply process is not behaving as expected, then check the handler procedures used by the apply process, and correct any flaws. You can find the names of these procedures by querying the DBA_APPLY_DML_HANDLERS
and DBA_APPLY
data dictionary views. You may need to modify a handler procedure or remove it to correct an apply problem.
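For example, queries similar to the following list the DML handlers and any DDL or message handlers:

SELECT OBJECT_OWNER, OBJECT_NAME, OPERATION_NAME, USER_PROCEDURE
  FROM DBA_APPLY_DML_HANDLERS;

SELECT APPLY_NAME, DDL_HANDLER, MESSAGE_HANDLER
  FROM DBA_APPLY;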
If you set the parallelism
parameter for an apply process to a value greater than 1
and you set the commit_serialization
parameter of the apply process to full
, then the apply process may detect interested transaction list (ITL) contention if there is a transaction that is dependent on another transaction with a higher SCN. ITL contention occurs if the session that created the transaction waited for an ITL slot in a block. This happens when the session wants to lock a row in the block but one or more other sessions have rows locked in the same block, and there is no free ITL slot in the block.
ITL contention is also possible if the session is waiting because of a shared bitmap index fragment. Bitmap indexes index key values and a range of rowids. Each entry in a bitmap index can cover many rows in the actual table. If two sessions want to update rows covered by the same bitmap index fragment, then the second session waits for the first transaction to either COMMIT or ROLLBACK.
When an apply process detects such a dependency, it resolves the ITL contention automatically and records information about it in the alert log and apply process trace file for the database. ITL contention may negatively affect the performance of an apply process because there may not be any progress while it is detecting the deadlock.
To avoid the problem in the future, perform one of the following actions:
Increase the INITRANS or MAXTRANS settings for the table using the ALTER TABLE statement, as shown in the example after this list.

Set the commit_serialization parameter to none for the apply process.

Set the parallelism apply process parameter to 1 for the apply process.
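For example, a statement similar to the following raises the settings for the hr.departments table, which is used here only as an illustration; the new INITRANS value applies to blocks allocated after the change:

ALTER TABLE hr.departments INITRANS 10 MAXTRANS 255;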
When an apply process cannot apply an event, it moves the event and all of the other events in the same transaction into its queue's exception queue. You should check for apply errors periodically to see whether there are any transactions that could not be applied. You can check for apply errors by querying the DBA_APPLY_ERROR data dictionary view.
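For example, a query similar to the following shows the error transactions for each apply process:

COLUMN ERROR_MESSAGE FORMAT A40

SELECT APPLY_NAME, LOCAL_TRANSACTION_ID, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY_ERROR;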
You may encounter the following types of apply process errors for LCR events:
Typically, this error occurs when an update is attempted on an existing row and the OLD_VALUES
in the row LCR do not match the current values at this target site.
To correct this problem, you can update the current values in the row so that the row LCR can be applied successfully. If changes to the row are captured by a capture process at the destination database, then you probably do not want to apply this manual change at destination sites. In this case, complete the following steps:

1. Set a tag in your session so that the manual change is not captured. For example:

   EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));

2. Update the row in the table so that its values match the old values in the row LCR.

3. Execute the error transaction using the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction identifier for the transaction that caused the error. For example:

   EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');

   Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS procedure:

   EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');

4. Set the session tag back to NULL:

   EXEC DBMS_STREAMS.SET_TAG(tag => NULL);
Typically, this error occurs because the instantiation SCN is not set on an object for which an apply process is attempting to apply changes. You can query the DBA_APPLY_INSTANTIATED_OBJECTS
data dictionary view to list the objects that have an instantiation SCN.
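For example, a query similar to the following lists the objects that have an instantiation SCN:

SELECT SOURCE_DATABASE, SOURCE_OBJECT_OWNER, SOURCE_OBJECT_NAME, INSTANTIATION_SCN
  FROM DBA_APPLY_INSTANTIATED_OBJECTS;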
You can set an instantiation SCN for one or more objects by exporting the objects at the source database, and then importing them at the destination database. If you do not want to use Export/Import, then you can run one or more of the following procedures in the DBMS_APPLY_ADM package:

SET_TABLE_INSTANTIATION_SCN

SET_SCHEMA_INSTANTIATION_SCN

SET_GLOBAL_INSTANTIATION_SCN
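For example, to set the instantiation SCN for the hr.departments table from a source database named dbs1.net, run a procedure similar to the following; the object name, source database name, and SCN value shown here are only illustrations:

BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'hr.departments',
    source_database_name => 'dbs1.net',
    instantiation_scn    => 3050);
END;
/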
Typically, this error occurs when an LCR does not contain enough key information for the apply process to apply it, for example, when the table at the destination database has no primary key and no substitute key columns are defined. Check for a primary key on the table by querying the DBA_CONSTRAINTS data dictionary view. If no primary key exists for the table, then specify substitute key columns using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. You may also encounter error ORA-23416 if a table being applied does not have a primary key.
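For example, a call similar to the following specifies substitute key columns for the hr.employees table; the table and column names shown here are only illustrations:

BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'employee_id,email');
END;
/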
Typically, this error occurs because one or more columns in a table at the source database do not match the corresponding columns at the destination database. The LCRs from the source database may contain more columns than the table at the destination database, or there may be a type mismatch for one or more columns. If the columns differ between the databases, then you can use rule-based transformations to avoid errors.
This error may also occur because supplemental logging is not specified where it is required for nonkey columns at the source database. In this case, LCRs from the source database may not contain needed values for these nonkey columns.
If a capture process, a propagation, or an apply process is not behaving as expected, then the problem may be that rules or rule-based transformations for the capture process, propagation, or apply process are not configured properly. Use the following checklist to identify and resolve problems with rules and rule-based transformations:
If a Streams capture process, propagation, or apply process is behaving in an unexpected way, then the problem may be that the rules in the rule set for the capture process, propagation, or apply process are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause may be that the rule set for the capture process does not contain a rule that evaluates to TRUE
when a change is made to the object.
You can check the rules for a particular Streams capture process, propagation, or apply process by querying the following data dictionary views:
DBA_STREAMS_TABLE_RULES
DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_GLOBAL_RULES
A rule set with no rules is not the same as no rule set. For example, if you use a rule set with no rules for a propagation, then the propagation will not propagate anything. If you do not use a rule set at all for a propagation, then the propagation propagates everything in its source queue.
This section includes the following subsections:
For example, suppose a database is running a capture process named CAPTURE,
and you want to check for table rules that evaluate to TRUE
for this capture process when there are changes to the hr.departments
table. To determine whether there are any such rules in the rule set for the capture process, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A30

SELECT RULE_NAME, RULE_TYPE
  FROM DBA_STREAMS_TABLE_RULES
  WHERE STREAMS_NAME = 'CAPTURE'
  AND STREAMS_TYPE = 'CAPTURE'
  AND TABLE_OWNER = 'HR'
  AND TABLE_NAME = 'DEPARTMENTS';
Your output looks similar to the following:
Rule Name                      Rule Type
------------------------------ ------------------------------
DEPARTMENTS1                   DML
Based on these results, the capture process named CAPTURE
should capture DML changes to the hr.departments
table. In other words, a rule exists in the capture process rule set that evaluates to TRUE
when the capture process finds a DML change to the hr.departments
table in the redo log.
A rule of type DDL
for the table in the query results means that the capture process should capture DDL changes to the table. If a capture process should capture both DML and DDL changes to the table, then a rule of each type would appear for the table in the query results.
If you expect an apply process to apply changes to a particular table, but the apply process is not applying these changes, then the cause may be that the rule set for the apply process does not contain a rule that evaluates to TRUE
when an LCR is in the apply process queue. As with capture rules and propagation rules, you can check the rules that were created using the procedures in the DBMS_STREAMS_ADM
package by querying the following data dictionary views:
In addition, an apply process must receive events in its queue before it can apply these events. Therefore, if an apply process is applying captured events, then the rule set for the capture process that captures these events must be configured properly. Similarly, if events are propagated from one or more databases before reaching the apply process, then the rules for each propagation must be configured properly. If the rules for a capture process or a propagation on which the apply process depends are not configured properly, then the events may never reach the apply process queue.
In an environment where a capture process captures changes that are propagated and applied at multiple databases, you can use the following guidelines to determine whether a problem is caused by the capture process or propagations on which an apply process depends, or a problem is caused by the apply process itself:
Also, when you are checking for apply rules, there is the possibility that subset rules exist for one or more tables. A subset rule evaluates to TRUE
only if an LCR contains a change to a particular subset of rows in the table. For example, to check for table rules that evaluate to TRUE
for an apply process named APPLY
when there are changes to the hr.departments
table, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A20
COLUMN DML_CONDITION HEADING 'Subset Condition' FORMAT A30

SELECT RULE_NAME, RULE_TYPE, DML_CONDITION
  FROM DBA_STREAMS_TABLE_RULES
  WHERE STREAMS_NAME = 'APPLY'
  AND STREAMS_TYPE = 'APPLY'
  AND TABLE_OWNER = 'HR'
  AND TABLE_NAME = 'DEPARTMENTS';
Your output looks similar to the following:

Rule Name            Rule Type            Subset Condition
-------------------- -------------------- ------------------------------
DEPARTMENTS5         DML                  location_id=1700
DEPARTMENTS6         DML                  location_id=1700
DEPARTMENTS7         DML                  location_id=1700
Notice that this query returns any subset condition for the table in the DML_CONDITION
column. In this example, subset rules are specified for the hr.departments table. These subset rules evaluate to TRUE
only if an LCR contains a change that involves a row where the location_id is 1700. So, if you expected the apply process to apply all changes to the table, then the subset rules cause the apply process to discard changes that involve rows where the location_id is not 1700.
Schema rules or global rules may also be used to capture changes to all of the database objects in a particular schema or database, respectively. For example, to check for schema rules that evaluate to TRUE
for a capture process named CAPTURE
when there are changes to the hr
schema, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A20

SELECT RULE_NAME, RULE_TYPE
  FROM DBA_STREAMS_SCHEMA_RULES
  WHERE STREAMS_NAME = 'CAPTURE'
  AND SCHEMA_NAME = 'HR';
Your output looks similar to the following:
Rule Name                      Rule Type
------------------------------ ------------------------------
HR7                            DML
HR8                            DDL
Based on these results, the capture process named CAPTURE
should capture DML and DDL changes to all objects in the hr
schema.
If the DBA_STREAMS_GLOBAL_RULES
data dictionary view returns any rows when you query it for a capture process, then the capture process captures all changes in the database, except for unsupported changes and changes made to the SYS
and SYSTEM
schemas.
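For example, a query similar to the following checks for global rules used by a capture process named CAPTURE:

SELECT RULE_NAME, RULE_TYPE
  FROM DBA_STREAMS_GLOBAL_RULES
  WHERE STREAMS_NAME = 'CAPTURE'
  AND STREAMS_TYPE = 'CAPTURE';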
See Also:
"Datatypes Captured" and "Types of Changes Captured" for information about the types of changes that are captured and are not captured by a capture process |
If you determine that a Streams capture process, propagation, or apply process is not behaving as expected because one or more rules must be added to the rule set for the capture process, propagation, or apply process, then you can use one of the following procedures in the DBMS_STREAMS_ADM
package to add appropriate rules:
ADD_GLOBAL_PROPAGATION_RULES
ADD_GLOBAL_RULES
ADD_SCHEMA_PROPAGATION_RULES
ADD_SCHEMA_RULES
ADD_SUBSET_RULES
ADD_TABLE_PROPAGATION_RULES
ADD_TABLE_RULES
You can use the DBMS_RULE_ADM
package to add customized rules, if necessary.
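For example, to add DML table rules for the hr.departments table to the rule set for a capture process named capture, run a procedure similar to the following; the queue name shown here is only an illustration:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'capture',
    streams_name => 'capture',
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => false);
END;
/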
It is also possible that the Streams capture process, propagation, or apply process is not behaving as expected because one or more rules should be altered or removed from a rule set.
If you have the correct rules, and the changes for the relevant objects are still filtered out by a Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is a Streams data dictionary. The following information may be included in such messages:
gdbnm: Global Name of the source database of the missing object

scn: SCN for the transaction that has been missed

If you find such messages and you are using custom capture rules or reusing existing capture rules for a new destination database, then make sure you run the appropriate procedure to prepare for instantiation:
PREPARE_TABLE_INSTANTIATION
PREPARE_SCHEMA_INSTANTIATION
PREPARE_GLOBAL_INSTANTIATION
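For example, if the rules are table rules for the hr.departments table, then run a procedure similar to the following at the source database:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.departments');
END;
/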
A rule-based transformation is any modification to an event that results when a rule evaluates to TRUE
. A rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION
for the name and a user-created procedure for the value. This user-created procedure performs the transformation. If the user-defined procedure contains any flaws, then unexpected behavior may result.
If a Streams capture process, propagation, or apply process is not behaving as expected, then check the rule-based transformation procedures specified for the capture process, propagation, or apply process, and correct any flaws. You can find the names of these procedures by querying the action context of the rules in the rule set used by the Streams capture process, propagation, or apply process. You may need to modify a transformation procedure or remove a rule-based transformation to correct the problem. Make sure the name portion of the name-value pair in the action context is spelled correctly.
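For example, a query similar to the following lists the transformation procedures named in rule action contexts; it assumes the ACTX_LIST attribute of the action context and the NVN_NAME and NVN_VALUE attributes of each name-value pair:

COLUMN ACTION_CONTEXT_VALUE FORMAT A40

SELECT r.RULE_NAME,
       ac.NVN_VALUE.ACCESSVARCHAR2() ACTION_CONTEXT_VALUE
  FROM DBA_RULES r, TABLE(r.RULE_ACTION_CONTEXT.ACTX_LIST) ac
  WHERE ac.NVN_NAME = 'STREAMS$_TRANSFORM_FUNCTION';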
Rule evaluation is done before a rule-based transformation. For example, if you have a transformation that changes the name of a table from emps
to employees
, then make sure each rule using the transformation specifies the table name emps
rather than employees
in its rule condition.
Messages about each capture process, propagation job, and apply process are recorded in trace files for the database in which the process or propagation job is running. A capture process runs on a source database, a propagation job runs on the database containing the source queue in the propagation, and an apply process runs on a destination database. These trace file messages can help you to identify and resolve problems in a Streams environment.
All trace files for background processes are written to the destination directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.
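For example, to find the trace file directory, run a query similar to the following:

SELECT VALUE
  FROM V$PARAMETER
  WHERE NAME = 'background_dump_dest';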
For example, on some operating systems, the trace file name for a process is sid_xxxxx_iiiii.trc, where:
sid is the system identifier for the database

xxxxx is the name of the process

iiiii is the operating system process number

Also, you can set the write_alert_log
parameter to y
for both the capture process and apply process. When this parameter is set to y
, which is the default setting, the alert log for the database contains messages about why the capture process or apply process stopped.
You can control the information in the trace files by setting the trace_file
capture process or apply process parameter using the SET_PARAMETER
procedure in the DBMS_CAPTURE_ADM
and DBMS_APPLY_ADM
packages.
Use the following checklist to check the trace files related to Streams:
A capture process is an Oracle background process named cpnn, where nn is the capture process number. For example, on some operating systems, if the system identifier for a database running a capture process is hqdb and the capture process number is 01, then the trace file for the capture process starts with hqdb_cp01.
See Also:
"Displaying General Information About a Capture Process" for a query that displays the capture process number of a capture process |
Each propagation uses a propagation job that depends on the job queue coordinator process and a job queue process. The job queue coordinator process is named cjqnn, where nn is the job queue coordinator process number, and a job queue process is named jnnn, where nnn is the job queue process number.
For example, on some operating systems, if the system identifier for a database running a propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a job queue process is 001, then the trace file for the job queue process starts with hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.
See Also:
"Is the Propagation Job Used by a Propagation Enabled?" for a query that displays the job queue process used by a propagation job |
An apply process is an Oracle background process named apnn, where nn is the apply process number. For example, on some operating systems, if the system identifier for a database running an apply process is hqdb and the apply process number is 01, then the trace file for the apply process starts with hqdb_ap01.
An apply process also uses parallel execution servers. Information about an apply process may be recorded in the trace file for one or more parallel execution servers. The process name of a parallel execution server is pnnn, where nnn is the process number. So, on some operating systems, if the system identifier for a database running an apply process is hqdb and the process number is 001, then the trace file that may contain information about an apply process starts with hqdb_p001.