Oracle® Streams Concepts and Administration 11g Release 1 (11.1) Part Number B28321-01
Implicit capture means that database changes are captured and enqueued automatically. Implicit capture can be accomplished with a capture process or with a synchronous capture. A capture process captures changes in the redo log, while a synchronous capture captures DML changes with an internal mechanism. Both capture processes and synchronous captures reformat the captured changes into logical change records (LCRs) and enqueue the LCRs into an ANYDATA queue.
The following topics describe configuring implicit capture:
Each task described in this chapter should be completed by an Oracle Streams administrator who has been granted the appropriate privileges, unless specified otherwise.
You can create a capture process that captures changes either locally at the source database or remotely at a downstream database. If a capture process runs on a downstream database, then redo data from the source database is copied to the downstream database, and the capture process captures changes in redo data at the downstream database.
You can use any of the following procedures to create a local capture process:
Each of the procedures in the DBMS_STREAMS_ADM package creates a capture process with the specified name if it does not already exist, creates either a positive or negative rule set for the capture process if the capture process does not have such a rule set, and can add table rules, schema rules, or global rules to the rule set.
The CREATE_CAPTURE procedure creates a capture process, but does not create a rule set or rules for the capture process. However, the CREATE_CAPTURE procedure enables you to specify an existing rule set to associate with the capture process, either as a positive or a negative rule set, a first SCN, and a start SCN for the capture process. To create a capture process that performs downstream capture, you must use the CREATE_CAPTURE procedure.
The following sections describe configuring a capture process:
Preparing for and Configuring a Real-Time Downstream Capture Process
Configuring an Archived-Log Downstream Capture Process that Assigns Logs Implicitly
Configuring an Archived-Log Downstream Capture Process that Assigns Logs Explicitly
Caution:
When a capture process is started or restarted, it might need to scan redo log files with a FIRST_CHANGE# value that is lower than the start SCN. Removing required redo log files before they are scanned by a capture process causes the capture process to abort. You can query the DBA_CAPTURE data dictionary view to determine the first SCN, start SCN, and required checkpoint SCN for a capture process. A capture process needs the redo log file that includes the required checkpoint SCN, and all subsequent redo log files. See "Capture Process Creation" for more information about the first SCN and start SCN for a capture process.
You can configure an entire Oracle Streams environment, including capture processes, using procedures in the DBMS_STREAMS_ADM package or Oracle Enterprise Manager.
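For example, the following query of the DBA_CAPTURE data dictionary view shows these SCN values for each capture process (run it as a user with access to DBA_CAPTURE, such as the Oracle Streams administrator):

```sql
SELECT CAPTURE_NAME, FIRST_SCN, START_SCN, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;
```

Redo log files containing SCNs at or above the REQUIRED_CHECKPOINT_SCN value must be retained.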
After creating a capture process, avoid changing the DBID or global name of the source database for the capture process. If you change either the DBID or global name of the source database, then the capture process must be dropped and re-created.
To create a capture process, a user must be granted the DBA role.
If Oracle Database Vault is installed, then the user who creates the capture process must be granted the BECOME USER system privilege. Granting this privilege to the user is not required if Oracle Database Vault is not installed. You can revoke the BECOME USER system privilege from the user after the capture process is created, if necessary.
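For example, assuming the Oracle Streams administrator is strmadmin (as in the rest of this chapter), the grant and the optional later revoke are sketched here:

```sql
GRANT BECOME USER TO strmadmin;

-- After the capture process is created, the privilege can be revoked:
REVOKE BECOME USER FROM strmadmin;
```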
To configure downstream capture, the source database must be an Oracle Database 10g Release 1 or later database.
See Also:
Oracle Streams Replication Administrator's Guide for information about changing the DBID or global name of a source database
The following tasks must be completed before you create a capture process:
Configure any source database that generates redo data that will be captured by a capture process to run in ARCHIVELOG mode. See "ARCHIVELOG Mode and a Capture Process" and Oracle Database Administrator's Guide. For downstream capture processes, the downstream database also must run in ARCHIVELOG mode if you plan to configure a real-time downstream capture process. The downstream database does not need to run in ARCHIVELOG mode if you plan to run only archived-log downstream capture processes on it.
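A quick way to check the current mode is to query V$DATABASE; the statements below are a sketch for enabling archiving, which requires restarting the database to the MOUNT state:

```sql
SELECT LOG_MODE FROM V$DATABASE;

-- If the result is NOARCHIVELOG, enable archiving:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```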
Ensure that the initialization parameters are set properly on any database that will run a capture process. See "Setting Initialization Parameters Relevant to Oracle Streams".
Create an Oracle Streams administrator on each database involved in the Oracle Streams configuration. See "Configuring an Oracle Streams Administrator". The examples in this chapter assume that the Oracle Streams administrator is strmadmin.
Create an ANYDATA queue to associate with the capture process, if one does not exist. See "Configuring an ANYDATA Queue" for instructions. The examples in this chapter assume that the queue used by the capture process is strmadmin.streams_queue. Create the queue on the same database that will run the capture process.
Specify supplemental logging at a source database for any columns whose changes must be applied successfully at a destination database. Typically, supplemental logging is required in Oracle Streams replication environments, but it might also be required in any environment that processes captured LCRs with an apply process.
The procedures listed in "Configuring a Capture Process", such as the ADD_TABLE_RULES procedure, automatically configure supplemental logging of any primary key, unique key, bitmap index, and foreign key columns in the replicated tables. In addition, you can use the ALTER DATABASE statement to specify supplemental logging for all tables in a database, and you can use the ALTER TABLE statement to specify supplemental logging for a particular table. See the Oracle Streams Replication Administrator's Guide for more information about specifying supplemental logging.
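The following statements sketch both forms; the hr.employees table is used purely as an illustration:

```sql
-- Database-wide: log primary key and unique key columns for all tables
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Table-level: log primary key columns for a single table
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```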
The following sections describe using the DBMS_STREAMS_ADM package and the DBMS_CAPTURE_ADM package to create a local capture process.
To configure a local capture process using the DBMS_STREAMS_ADM package, complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process".
Run the ADD_TABLE_RULES procedure in the DBMS_STREAMS_ADM package to create a local capture process:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.employees',
    streams_type       => 'capture',
    streams_name       => 'strm01_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => FALSE,
    source_database    => NULL,
    inclusion_rule     => TRUE);
END;
/
Running this procedure performs the following actions:
Creates a capture process named strm01_capture. The capture process is created only if it does not already exist. If a new capture process is created, then this procedure also sets the start SCN to the point in time of creation.
Associates the capture process with an existing queue named streams_queue.
Creates a positive rule set and associates it with the capture process, if the capture process does not have a positive rule set, because the inclusion_rule parameter is set to TRUE. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is system generated.
Creates two rules. One rule evaluates to TRUE for DML changes to the hr.employees table, and the other rule evaluates to TRUE for DDL changes to the hr.employees table. The rule names are system generated.
Adds the two rules to the positive rule set associated with the capture process. The rules are added to the positive rule set because the inclusion_rule parameter is set to TRUE.
Specifies that the capture process captures a change in the redo log only if the change has a NULL tag, because the include_tagged_lcr parameter is set to FALSE. This behavior is accomplished through the system-created rules for the capture process.
Creates a capture process that captures local changes to the source database because the source_database parameter is set to NULL. For a local capture process, you can also specify the global name of the local database for this parameter.
Prepares the hr.employees table for instantiation.
Enables supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the table.
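To confirm these actions, a query such as the following against the DBA_CAPTURE view shows the new capture process, its queue, its system-generated positive rule set, and its status:

```sql
SELECT CAPTURE_NAME, QUEUE_NAME, RULE_SET_NAME, STATUS
  FROM DBA_CAPTURE
 WHERE CAPTURE_NAME = 'STRM01_CAPTURE';
```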
If necessary, complete the steps described in "After Configuring a Capture Process".
See Also:
Oracle Streams Replication Administrator's Guide for more information about Oracle Streams tags
To configure a local capture process using the DBMS_CAPTURE_ADM package, complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process".
Run the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a local capture process:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name      => 'strmadmin.streams_queue',
    capture_name    => 'strm02_capture',
    rule_set_name   => 'strmadmin.strm01_rule_set',
    start_scn       => NULL,
    source_database => NULL,
    first_scn       => NULL);
END;
/
Running this procedure performs the following actions:
Creates a capture process named strm02_capture. A capture process with the same name must not exist.
Associates the capture process with an existing queue named streams_queue.
Associates the capture process with an existing rule set named strm01_rule_set. This rule set is the positive rule set for the capture process.
Creates a capture process that captures local changes to the source database because the source_database parameter is set to NULL. For a local capture process, you can also specify the global name of the local database for this parameter.
Specifies that the Oracle database determines the start SCN and first SCN for the capture process because both the start_scn parameter and the first_scn parameter are set to NULL.
If no other capture processes that capture local changes are running on the local database, then the BUILD procedure in the DBMS_CAPTURE_ADM package is run automatically. Running this procedure extracts the data dictionary to the redo log, and a LogMiner data dictionary is created when the capture process is started for the first time.
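If you prefer to run the dictionary build yourself before creating the capture process, the BUILD procedure can be called directly; its OUT parameter returns the first SCN of the build, as in this sketch:

```sql
SET SERVEROUTPUT ON
DECLARE
  build_first_scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => build_first_scn);
  DBMS_OUTPUT.PUT_LINE('First SCN for the dictionary build: ' || build_first_scn);
END;
/
```

The returned value can then be passed as the first_scn parameter of CREATE_CAPTURE.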
If necessary, complete the steps described in "After Configuring a Capture Process".
To create a capture process that performs downstream capture, you must use the CREATE_CAPTURE procedure. The example in this section describes creating a real-time downstream capture process that uses a database link to the source database. However, a real-time downstream capture process might not use a database link.
This example assumes the following:
The source database is dbs1.net and the downstream database is dbs2.net.
The capture process that will be created at dbs2.net uses the streams_queue.
The capture process will capture DML changes to the hr.departments table.
See Also:
"Downstream Capture" for conceptual information about real-time downstream capture
Complete the following steps to prepare the source database to copy its redo data to the downstream database, and to prepare the downstream database to accept the redo data:
Complete the tasks in "Preparing to Configure a Capture Process".
Configure Oracle Net so that the source database can communicate with the downstream database.
Configure authentication at both databases to support the transfer of redo data.
Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If the source database has a remote login password file, then copy it to the appropriate directory on the downstream capture database system.
See Also:
Oracle Data Guard Concepts and Administration for detailed information about authentication requirements for redo transport
At the source database, set the following initialization parameters to configure redo transport services to use the log writer process (LGWR) to copy redo data from the online redo log at the source database to the standby redo log at the downstream database:
Set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to the computer system running the downstream database. To do this, set the following attributes of this parameter:
SERVICE - Specify the network service name of the downstream database.
LGWR ASYNC or LGWR SYNC - Specify this attribute so that the log writer process (LGWR) will send redo data to the downstream database.
When you specify LGWR ASYNC, network I/O is performed asynchronously for the destination. Therefore, the LGWR process submits the network I/O request for the destination and continues processing the next request without waiting for the I/O to complete and without checking the completion status of the I/O. The advantage of specifying LGWR ASYNC is that it results in little or no effect on the performance of the source database. If the source database is running Oracle Database 10g Release 1 or later, then LGWR ASYNC is recommended to avoid affecting source database performance if the downstream database or network is performing poorly.
When you specify LGWR SYNC, network I/O is performed synchronously for the destination, which means that once the I/O is initiated, the LGWR process waits for the I/O to complete before continuing. The advantage of specifying the LGWR SYNC attribute is that redo log files are sent to the downstream database faster than when LGWR ASYNC is specified. Also, specifying LGWR SYNC AFFIRM results in behavior that is similar to MAXIMUM AVAILABILITY standby protection mode. Note that specifying an ALTER DATABASE STANDBY DATABASE TO MAXIMIZE AVAILABILITY SQL statement has no effect on an Oracle Streams capture process.
MANDATORY or OPTIONAL - If you specify MANDATORY, then archiving of a redo log file to the downstream database must succeed before the corresponding online redo log at the source database can be overwritten. If you specify OPTIONAL, then successful archiving of a redo log file to the downstream database is not required before the corresponding online redo log at the source database can be overwritten. Either MANDATORY or OPTIONAL is acceptable for a downstream database destination. If neither the MANDATORY nor the OPTIONAL attribute is specified, then the default is OPTIONAL.
NOREGISTER - Specify this attribute so that the downstream database location is not recorded in the downstream database control file.
VALID_FOR - Specify either (ONLINE_LOGFILE,PRIMARY_ROLE) or (ONLINE_LOGFILE,ALL_ROLES).
The following example is a LOG_ARCHIVE_DEST_n setting at the source database that specifies a real-time capture downstream database:
LOG_ARCHIVE_DEST_2='SERVICE=DBS2.NET LGWR ASYNC OPTIONAL NOREGISTER VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
You can specify other attributes in the LOG_ARCHIVE_DEST_n initialization parameter if necessary.
Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.
For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Ensure that the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the send value.
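For example, the following statement sets the parameter dynamically; the DG_CONFIG names dbs1 and dbs2 are assumed DB_UNIQUE_NAME values for this chapter's source and downstream databases:

```sql
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='SEND, DG_CONFIG=(dbs1,dbs2)';
```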
See Also:
Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters
At the downstream database, set the following initialization parameters to configure the downstream database to receive redo data from the source database LGWR and write the redo data to the standby redo log at the downstream database:
Set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to a directory on the computer system running the downstream database. To do this, set the following attributes of this parameter:
LOCATION - Specify a valid path name for a disk directory on the system that hosts the downstream database. Each destination that specifies the LOCATION attribute must identify a unique directory path name. This is the local destination for archived redo log files written from the standby redo logs. Log files from a remote source database should be kept separate from local database log files. In addition, if the downstream database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.
MANDATORY - Archiving of a standby redo log file must succeed before the corresponding standby redo log file can be overwritten.
VALID_FOR - Specify either (STANDBY_LOGFILE,PRIMARY_ROLE) or (STANDBY_LOGFILE,ALL_ROLES).
The following example is a LOG_ARCHIVE_DEST_n setting at the real-time capture downstream database:
LOG_ARCHIVE_DEST_2='LOCATION=/home/arc_dest/srl_dbs1 MANDATORY VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)'
You can specify other attributes in the LOG_ARCHIVE_DEST_n initialization parameter if necessary.
Ensure that the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the receive value.
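As at the source database, this can be set with a single statement; the DG_CONFIG names are assumed DB_UNIQUE_NAME values:

```sql
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='RECEIVE, DG_CONFIG=(dbs1,dbs2)';
```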
Optionally set the LOG_ARCHIVE_FORMAT initialization parameter to generate the filenames in a specified format for the archived redo log files. The following example is a valid LOG_ARCHIVE_FORMAT setting:
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
Set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.
For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:
LOG_ARCHIVE_DEST_STATE_2=ENABLE
If you set other archive destinations at the downstream database, then, to keep archived standby redo log files separate from archived online redo log files from the downstream database, explicitly specify ONLINE_LOGFILE or STANDBY_LOGFILE, instead of ALL_LOGFILES, in the VALID_FOR attribute. For example, if the LOG_ARCHIVE_DEST_1 parameter specifies the archive destination for the online redo log files at the downstream database, then avoid the ALL_LOGFILES keyword in the VALID_FOR attribute when you set the LOG_ARCHIVE_DEST_1 parameter.
See Also:
Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters
If you reset any initialization parameters while an instance was running at a database in Step 4 or 5, then you might want to reset them in the relevant initialization parameter file as well, so that the new values are retained when the database is restarted.
If you did not reset the initialization parameters while an instance was running, but instead reset them in the initialization parameter file in Step 4 or 5, then restart the database. The source database must be open when it sends redo data to the downstream database, because the global name of the source database is sent to the downstream database only if the source database is open.
At the downstream database, connect as an administrative user and create standby redo log files.
Note:
The following steps outline the general procedure for adding standby redo log files to the downstream database. The specific steps and SQL statements used to add standby redo log files depend on your environment. For example, in an Oracle Real Application Clusters (Oracle RAC) environment, the steps are different. See Oracle Data Guard Concepts and Administration for detailed instructions about adding standby redo log files to a database.
Determine the log file size used on the source database. The standby log file size must exactly match (or be larger than) the source database log file size. For example, if the source database log file size is 500 MB, then the standby log file size must be 500 MB or larger. You can determine the size of the redo log files at the source database (in bytes) by querying the V$LOG view at the source database.
Determine the number of standby log file groups required on the downstream database. The number of standby log file groups must be at least one more than the number of online log file groups on the source database. For example, if the source database has two online log file groups, then the downstream database must have at least three standby log file groups. You can determine the number of source database online log file groups by querying the V$LOG view at the source database.
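Both determinations can be made with queries like the following against V$LOG at the source database:

```sql
-- Log file size in bytes for each online log file group
SELECT GROUP#, BYTES FROM V$LOG;

-- Number of online log file groups
SELECT COUNT(*) FROM V$LOG;
```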
Use the SQL statement ALTER DATABASE ADD STANDBY LOGFILE to add the standby log file groups to the downstream database.
For example, assume that the source database has two online redo log file groups and is using a log file size of 500 MB. In this case, use the following statements to create the appropriate standby log file groups:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 3
  ('/oracle/dbs/slog3a.rdo', '/oracle/dbs/slog3b.rdo') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('/oracle/dbs/slog4.rdo', '/oracle/dbs/slog4b.rdo') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5
  ('/oracle/dbs/slog5.rdo', '/oracle/dbs/slog5b.rdo') SIZE 500M;
Ensure that the standby log file groups were added successfully by running the following query:
SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;
Your output should be similar to the following:
GROUP#     THREAD#    SEQUENCE#  ARC STATUS
---------- ---------- ---------- --- ----------
         3          0          0 YES UNASSIGNED
         4          0          0 YES UNASSIGNED
         5          0          0 YES UNASSIGNED
This section contains an example that runs the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a real-time downstream capture process at the dbs2.net downstream database that captures changes made to the dbs1.net source database. The capture process in this example uses a database link to dbs1.net for administrative purposes.
Complete the following steps:
Connect to the downstream database dbs2.net as the Oracle Streams administrator.
CONNECT strmadmin/user-password@dbs2.net
Create the database link from dbs2.net to dbs1.net. For example, if the user strmadmin is the Oracle Streams administrator on both databases, then create the following database link:
CREATE DATABASE LINK dbs1.net CONNECT TO strmadmin
IDENTIFIED BY user-password
USING 'dbs1.net';
This example assumes that an Oracle Streams administrator exists at the source database dbs1.net. If no Oracle Streams administrator exists at the source database, then the Oracle Streams administrator at the downstream database should connect to a user who allows remote access by an Oracle Streams administrator. You can enable remote access for a user by specifying the user as the grantee when you run the GRANT_REMOTE_ADMIN_ACCESS procedure in the DBMS_STREAMS_AUTH package at the source database.
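For example, at the source database, granting remote access to a hypothetical user named streams_remote might look like this sketch:

```sql
BEGIN
  DBMS_STREAMS_AUTH.GRANT_REMOTE_ADMIN_ACCESS(grantee => 'streams_remote');
END;
/
```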
Run the CREATE_CAPTURE procedure to create the capture process:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'strmadmin.streams_queue',
    capture_name       => 'real_time_capture',
    rule_set_name      => NULL,
    start_scn          => NULL,
    source_database    => 'dbs1.net',
    use_database_link  => TRUE,
    first_scn          => NULL,
    logfile_assignment => 'implicit');
END;
/
Running this procedure performs the following actions:
Creates a capture process named real_time_capture at the downstream database dbs2.net. A capture process with the same name must not exist.
Associates the capture process with an existing queue on dbs2.net named streams_queue.
Specifies that the source database of the changes that the capture process will capture is dbs1.net.
Specifies that the capture process uses a database link with the same name as the source database global name to perform administrative actions at the source database.
Specifies that the capture process accepts redo data implicitly from dbs1.net. Therefore, the capture process scans the standby redo log at dbs2.net for changes that it must capture. If the capture process falls behind, then it scans the archived redo log files written from the standby redo log.
This step does not associate the capture process real_time_capture with any rule set. A rule set will be created and associated with the capture process in the next step.
If no other capture process at dbs2.net is capturing changes from the dbs1.net source database, then the DBMS_CAPTURE_ADM.BUILD procedure is run automatically at dbs1.net using the database link. Running this procedure extracts the data dictionary at dbs1.net to the redo log, and a LogMiner data dictionary for dbs1.net is created at dbs2.net when the capture process real_time_capture is started for the first time at dbs2.net.
If multiple capture processes at dbs2.net are capturing changes from the dbs1.net source database, then the new capture process real_time_capture uses the same LogMiner data dictionary for dbs1.net as one of the existing archived-log capture processes. Oracle Streams automatically chooses which LogMiner data dictionary to share with the new capture process.
Note:
Only one real-time downstream capture process is allowed at a single downstream database.
During the creation of a downstream capture process, if the first_scn parameter is set to NULL in the CREATE_CAPTURE procedure, then the use_database_link parameter must be set to TRUE. Otherwise, an error is raised.
Set the downstream_real_time_mine capture process parameter to y:
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'real_time_capture',
    parameter    => 'downstream_real_time_mine',
    value        => 'y');
END;
/
Create the positive rule set for the capture process and add a rule to it:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.departments',
    streams_type       => 'capture',
    streams_name       => 'real_time_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => FALSE,
    include_tagged_lcr => FALSE,
    source_database    => 'dbs1.net',
    inclusion_rule     => TRUE);
END;
/
Running this procedure performs the following actions:
Creates a rule set at dbs2.net for capture process real_time_capture. The rule set has a system-generated name. The rule set is the positive rule set for the capture process because the inclusion_rule parameter is set to TRUE.
Creates a rule that captures DML changes to the hr.departments table, and adds the rule to the positive rule set for the capture process. The rule has a system-generated name. The rule is added to the positive rule set for the capture process because the inclusion_rule parameter is set to TRUE.
Prepares the hr.departments table at dbs1.net for instantiation using the database link created in Step 2.
Enables supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the table.
Connect to the source database dbs1.net as an administrative user with the necessary privileges to switch log files.
Archive the current log file at the source database:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Archiving the current log file at the source database starts real time mining of the source database redo log.
If necessary, complete the steps described in "After Configuring a Capture Process".
To create a capture process that performs downstream capture, you must use the CREATE_CAPTURE procedure. The example in this section describes creating an archived-log downstream capture process that uses a database link to the source database for administrative purposes.
This example assumes the following:
The source database is dbs1.net and the downstream database is dbs2.net.
The capture process that will be created at dbs2.net uses the streams_queue.
The capture process will capture DML changes to the hr.departments table.
The capture process assigns log files implicitly. That is, the downstream capture process automatically scans all redo log files added by redo transport services or manually from the source database to the downstream database.
Whether a database link from the downstream database to the source database is used or not, complete the following steps to prepare the source database to copy its redo log files to the downstream database, and to prepare the downstream database to accept these redo log files:
Complete the tasks in "Preparing to Configure a Capture Process".
Configure Oracle Net so that the source database can communicate with the downstream database.
Configure authentication at both databases to support the transfer of redo data.
Redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file. If the source database has a remote login password file, then copy it to the appropriate directory on the downstream capture database system.
See Also:
Oracle Data Guard Concepts and Administration for detailed information about authentication requirements for redo transport
Set the following initialization parameters to configure redo transport services to copy archived redo log files from the source database to the downstream database:
At the source database, set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to a directory on the computer system running the downstream database. To do this, set the following attributes of this parameter:
ARCH
or LGWR
ASYNC
- If you specify ARCH
(the default), then the archiver process (ARC
n
) will archive the redo log files to the downstream database. If you specify LGWR
ASYNC
, then the log writer process (LGWR
) will archive the redo log files to the downstream database. Either ARCH
or LGWR
ASYNC
is acceptable for a downstream database destination.
MANDATORY or OPTIONAL - If you specify MANDATORY, then archiving of a redo log file to the downstream database must succeed before the corresponding online redo log at the source database can be overwritten. If you specify OPTIONAL, then successful archiving of a redo log file to the downstream database is not required before the corresponding online redo log at the source database can be overwritten. Either MANDATORY or OPTIONAL is acceptable for a downstream database destination. If neither the MANDATORY nor the OPTIONAL attribute is specified, then the default is OPTIONAL.
NOREGISTER - Specify this attribute so that the downstream database location is not recorded in the downstream database control file.
TEMPLATE - Specify a directory and format template for archived redo logs at the downstream database. The TEMPLATE attribute overrides the LOG_ARCHIVE_FORMAT initialization parameter settings at the downstream database. The TEMPLATE attribute is valid only with remote destinations. Ensure that the format uses all of the following variables at each source database: %t, %s, and %r.
The following example is a LOG_ARCHIVE_DEST_n setting that specifies a downstream database:
LOG_ARCHIVE_DEST_2='SERVICE=DBS2.NET ARCH OPTIONAL NOREGISTER TEMPLATE=/usr/oracle/log_for_dbs1/dbs1_arch_%t_%s_%r.log'
If another source database transfers log files to this downstream database, then, in the initialization parameter file at this other source database, you can use the TEMPLATE attribute to specify a different directory and format for the log files at the downstream database. The log files from each source database are kept separate at the downstream database.
Tip:
Log files from a remote source database should be kept separate from local database log files. In addition, if the downstream database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.
At the source database, set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the downstream database to ENABLE.
For example, if the LOG_ARCHIVE_DEST_2
initialization parameter is set for the downstream database, then set the LOG_ARCHIVE_DEST_STATE_2
parameter in the following way:
LOG_ARCHIVE_DEST_STATE_2=ENABLE
At the source database, ensure that the setting for the LOG_ARCHIVE_CONFIG
initialization parameter includes the send
value.
At the downstream database, ensure that the setting for the LOG_ARCHIVE_CONFIG
initialization parameter includes the receive
value.
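
As a sketch, the two settings might look like the following. The DB_UNIQUE_NAME values in DG_CONFIG are assumptions for the dbs1.net and dbs2.net databases used in this chapter's examples; substitute the values for your own databases.

```sql
-- At the source database dbs1.net: allow sending redo to the downstream database
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='SEND, DG_CONFIG=(dbs1,dbs2)';

-- At the downstream database dbs2.net: allow receiving redo from the source database
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='RECEIVE, DG_CONFIG=(dbs1,dbs2)';
```

If a database both sends and receives redo, its LOG_ARCHIVE_CONFIG setting must include both values.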
See Also:
Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters

If you reset any initialization parameters while the instance is running at a database in Step 4, then you might want to reset them in the initialization parameter file as well, so that the new values are retained when the database is restarted.
If you did not reset the initialization parameters while the instance was running, but instead reset them in the initialization parameter file in Step 4, then restart the database. The source database must be open when it sends redo log files to the downstream database, because the global name of the source database is sent to the downstream database only if the source database is open.
This section contains an example that runs the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to create an archived-log downstream capture process at the dbs2.net
downstream database that captures changes made to the dbs1.net
source database. The capture process in this example uses a database link to dbs1.net
for administrative purposes.
Complete the following steps:
Connect to the downstream database dbs2.net
as the Oracle Streams administrator.
CONNECT strmadmin/user-password@dbs2.net
Create the database link from dbs2.net
to dbs1.net
. For example, if the user strmadmin
is the Oracle Streams administrator on both databases, then create the following database link:
CREATE DATABASE LINK dbs1.net CONNECT TO strmadmin
IDENTIFIED BY user-password
USING 'dbs1.net';
This example assumes that an Oracle Streams administrator exists at the source database dbs1.net
. If no Oracle Streams administrator exists at the source database, then the Oracle Streams administrator at the downstream database should connect to a user who allows remote access by an Oracle Streams administrator. You can enable remote access for a user by specifying the user as the grantee when you run the GRANT_REMOTE_ADMIN_ACCESS
procedure in the DBMS_STREAMS_AUTH
package at the source database.
While connected to the downstream database as the Oracle Streams administrator, run the CREATE_CAPTURE
procedure to create the capture process:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'strmadmin.streams_queue',
    capture_name       => 'strm04_capture',
    rule_set_name      => NULL,
    start_scn          => NULL,
    source_database    => 'dbs1.net',
    use_database_link  => TRUE,
    first_scn          => NULL,
    logfile_assignment => 'implicit');
END;
/
Running this procedure performs the following actions:
Creates a capture process named strm04_capture
at the downstream database dbs2.net
. A capture process with the same name must not exist.
Associates the capture process with an existing queue on dbs2.net
named streams_queue
.
Specifies that the source database of the changes that the capture process will capture is dbs1.net
.
Specifies that the capture process accepts new redo log files implicitly from dbs1.net
. Therefore, the capture process scans any new log files copied from dbs1.net
to dbs2.net
for changes that it must capture. These log files must be added to the capture process automatically using redo transport services or manually using the following DDL statement:
ALTER DATABASE REGISTER LOGICAL LOGFILE file_name FOR capture_process;
Here, file_name
is the name of the redo log file and capture_process
is the name of the capture process that will use the redo log file at the downstream database. You must add redo log files manually only if the logfile_assignment
parameter is set to explicit
.
This step does not associate the capture process strm04_capture
with any rule set. A rule set will be created and associated with the capture process in the next step.
If no other capture process at dbs2.net
is capturing changes from the dbs1.net
source database, then the DBMS_CAPTURE_ADM.BUILD
procedure is run automatically at dbs1.net
using the database link. Running this procedure extracts the data dictionary at dbs1.net
to the redo log, and a LogMiner data dictionary for dbs1.net
is created at dbs2.net
when the capture process is started for the first time at dbs2.net
.
If multiple capture processes at dbs2.net
are capturing changes from the dbs1.net
source database, then the new capture process uses the same LogMiner data dictionary for dbs1.net
as one of the existing capture processes. Oracle Streams automatically chooses which LogMiner data dictionary to share with the new capture process.
Note:
During the creation of a downstream capture process, if the first_scn
parameter is set to NULL
in the CREATE_CAPTURE
procedure, then the use_database_link
parameter must be set to TRUE
. Otherwise, an error is raised.

See Also:
Oracle Database SQL Language Reference for more information about the ALTER
DATABASE
statement
Oracle Data Guard Concepts and Administration for more information about registering redo log files
While connected to the downstream database as the Oracle Streams administrator, create the positive rule set for the capture process and add a rule to it:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.departments',
    streams_type       => 'capture',
    streams_name       => 'strm04_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => FALSE,
    include_tagged_lcr => FALSE,
    source_database    => 'dbs1.net',
    inclusion_rule     => TRUE);
END;
/
Running this procedure performs the following actions:
Creates a rule set at dbs2.net
for capture process strm04_capture
. The rule set has a system-generated name. The rule set is a positive rule set for the capture process because the inclusion_rule
parameter is set to TRUE
.
Creates a rule that captures DML changes to the hr.departments
table, and adds the rule to the rule set for the capture process. The rule has a system-generated name. The rule is added to the positive rule set for the capture process because the inclusion_rule
parameter is set to TRUE
.
If necessary, complete the steps described in "After Configuring a Capture Process".
To create a capture process that performs downstream capture, you must use the CREATE_CAPTURE
procedure. This section describes creating an archived-log downstream capture process that assigns redo log files explicitly. That is, you must use the DBMS_FILE_TRANSFER
package, FTP, or some other method to transfer redo log files from the source database to the downstream database, and then you must register these redo log files with the downstream capture process manually.
In this example, assume the following:
The source database is dbs1.net
and the downstream database is dbs2.net
.
The capture process that will be created at dbs2.net
uses the streams_queue
.
The capture process will capture DML changes to the hr.departments
table.
The capture process does not use a database link to the source database for administrative actions.
Complete the following steps:
Complete the tasks in "Preparing to Configure a Capture Process".
Connect to the source database dbs1.net
as the Oracle Streams administrator. For example, if the Oracle Streams administrator is strmadmin
, then issue the following statement:
CONNECT strmadmin/user-password@dbs1.net
If you do not use a database link from the downstream database to the source database, then an Oracle Streams administrator must exist at the source database.
If there is no capture process at dbs2.net
that captures changes from dbs1.net
, then perform a build of the dbs1.net
data dictionary in the redo log. This step is optional if a capture process at dbs2.net
is already configured to capture changes from the dbs1.net
source database.
SET SERVEROUTPUT ON

DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(
    first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
END;
/

First SCN Value = 409391
This procedure displays the valid first SCN value for the capture process that will be created at dbs2.net
. Make a note of the SCN value returned because you will use it when you create the capture process at dbs2.net
.
If you run this procedure to build the data dictionary in the redo log, then when the capture process is first started at dbs2.net
, it will create a LogMiner data dictionary using the data dictionary information in the redo log.
Prepare the hr.departments
table for instantiation:
BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name           => 'hr.departments',
    supplemental_logging => 'keys');
END;
/
Primary key supplemental logging is required for the hr.departments
table because this example creates a capture processes that captures changes to this table. Specifying keys
for the supplemental_logging
parameter in the PREPARE_TABLE_INSTANTIATION
procedure enables supplemental logging for any primary key, unique key, bitmap index, and foreign key columns in the table.
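
To confirm that the table was prepared, you can query the DBA_CAPTURE_PREPARED_TABLES view. This sketch selects a minimal set of columns; the view also includes SUPPLEMENTAL_LOG_DATA_* columns that report the supplemental logging state for the table.

```sql
-- Verify that hr.departments has been prepared for instantiation
SELECT TABLE_OWNER, TABLE_NAME, SCN
  FROM DBA_CAPTURE_PREPARED_TABLES
  WHERE TABLE_OWNER = 'HR' AND TABLE_NAME = 'DEPARTMENTS';
```

A row is returned for each table that has been prepared for instantiation; the SCN column shows the SCN at which the preparation took effect.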
Determine the current SCN of the source database:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  iscn NUMBER;  -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_OUTPUT.PUT_LINE('Current SCN: ' || iscn);
END;
/
You can use the returned SCN as the instantiation SCN for destination databases that will apply changes to the hr.departments
table that were captured by the capture process being created. In this example, assume the returned SCN is 1001656
.
Connect to the downstream database dbs2.net
as the Oracle Streams administrator. For example, if the Oracle Streams administrator is strmadmin
, then issue the following statement:
CONNECT strmadmin/user-password@dbs2.net
Run the CREATE_CAPTURE
procedure to create the capture process and specify the value obtained in Step 3 for the first_scn
parameter:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name         => 'strmadmin.streams_queue',
    capture_name       => 'strm05_capture',
    rule_set_name      => NULL,
    start_scn          => NULL,
    source_database    => 'dbs1.net',
    use_database_link  => FALSE,
    first_scn          => 409391, -- Use value from Step 3
    logfile_assignment => 'explicit');
END;
/
Running this procedure performs the following actions:
Creates a capture process named strm05_capture
at the downstream database dbs2.net
. A capture process with the same name must not exist.
Associates the capture process with an existing queue on dbs2.net
named streams_queue
.
Specifies that the source database of the changes that the capture process will capture is dbs1.net
.
Specifies that the first SCN for the capture process is 409391
. This value was obtained in Step 3. The first SCN is the lowest SCN for which a capture process can capture changes. Because a first SCN is specified, the capture process creates a new LogMiner data dictionary when it is first started, regardless of whether there are existing LogMiner data dictionaries for the same source database.
Specifies that new redo log files from dbs1.net
must be assigned to the capture process explicitly. After a redo log file has been transferred to the computer running the downstream database, you assign the log file to the capture process explicitly using the following DDL statement:
ALTER DATABASE REGISTER LOGICAL LOGFILE file_name FOR capture_process;
Here, file_name
is the name of the redo log file and capture_process
is the name of the capture process that will use the redo log file at the downstream database. You must add redo log files manually if the logfile_assignment
parameter is set to explicit
.
This step does not associate the capture process strm05_capture
with any rule set. A rule set will be created and associated with the capture process in the next step.
See Also:
Oracle Database SQL Language Reference for more information about the ALTER
DATABASE
statement
Oracle Data Guard Concepts and Administration for more information about registering redo log files
Create the positive rule set for the capture process and add a rule to it:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => 'hr.departments',
    streams_type       => 'capture',
    streams_name       => 'strm05_capture',
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => FALSE,
    include_tagged_lcr => FALSE,
    source_database    => 'dbs1.net',
    inclusion_rule     => TRUE);
END;
/
Running this procedure performs the following actions:
Creates a rule set at dbs2.net
for capture process strm05_capture
. The rule set has a system-generated name. The rule set is a positive rule set for the capture process because the inclusion_rule
parameter is set to TRUE
.
Creates a rule that captures DML changes to the hr.departments
table, and adds the rule to the rule set for the capture process. The rule has a system-generated name. The rule is added to the positive rule set for the capture process because the inclusion_rule
parameter is set to TRUE
.
After the redo log file at the source database dbs1.net
that contains the first SCN for the downstream capture process is archived, transfer the archived redo log file to the computer running the downstream database. The BUILD
procedure in Step 3 determined the first SCN for the downstream capture process. If the redo log file is not yet archived, you can run the ALTER
SYSTEM
SWITCH
LOGFILE
statement on the database to archive it.
You can run the following query at dbs1.net
to identify the archived redo log file that contains the first SCN for the downstream capture process:
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A50
COLUMN FIRST_CHANGE# HEADING 'First SCN' FORMAT 999999999

SELECT NAME, FIRST_CHANGE#
  FROM V$ARCHIVED_LOG
  WHERE FIRST_CHANGE# IS NOT NULL AND DICTIONARY_BEGIN = 'YES';
Transfer the archived redo log file with a FIRST_CHANGE#
that matches the first SCN returned in Step 3 to the computer running the downstream capture process.
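
As one way to perform the transfer, you can use the DBMS_FILE_TRANSFER package from the source database. The following is a sketch only: the directory objects SOURCE_LOGS_DIR and DEST_LOGS_DIR and the file name are assumptions, and the directory objects and the database link to the downstream database must already exist.

```sql
-- Copy the archived redo log file from dbs1.net to dbs2.net.
-- SOURCE_LOGS_DIR and DEST_LOGS_DIR are hypothetical directory objects
-- that must be created beforehand at the respective databases.
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SOURCE_LOGS_DIR',
    source_file_name             => '1_10_486574859.dbf',
    destination_directory_object => 'DEST_LOGS_DIR',
    destination_file_name        => '1_10_486574859.dbf',
    destination_database         => 'dbs2.net');
END;
/
```

Alternatively, you can transfer the file with FTP or any other file transfer method, as noted earlier in this section.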
At the downstream database dbs2.net
, connect as an administrative user and assign the transferred redo log file to the capture process. For example, if the redo log file is /oracle/logs_from_dbs1/1_10_486574859.dbf
, then issue the following statement:
ALTER DATABASE REGISTER LOGICAL LOGFILE '/oracle/logs_from_dbs1/1_10_486574859.dbf' FOR 'strm05_capture';
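
To confirm that the log file was registered for the capture process, you can query the DBA_REGISTERED_ARCHIVED_LOG view. This is a sketch; the columns you select can vary.

```sql
-- List the archived redo log files registered at the downstream database
SELECT SOURCE_DATABASE, NAME, FIRST_SCN
  FROM DBA_REGISTERED_ARCHIVED_LOG;
```

The registered file should appear in the output with its source database and first SCN.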
If necessary, complete the steps described in "After Configuring a Capture Process".
This example runs the CREATE_CAPTURE
procedure in the DBMS_CAPTURE_ADM
package to create a local capture process with a start SCN set to 223525
. This example assumes that there is at least one local capture process at the database, and that this capture process has taken at least one checkpoint. You can always specify a start SCN for a new capture process that is equal to or greater than the current SCN of the source database. If you want to specify a start SCN that is lower than the current SCN of the database, then the specified start SCN must be higher than the lowest first SCN for an existing local capture process that has been started successfully at least once and has taken at least one checkpoint.
You can determine the first SCN for existing capture processes, and whether these capture processes have taken a checkpoint, by running the following query:
SELECT CAPTURE_NAME, FIRST_SCN, MAX_CHECKPOINT_SCN FROM DBA_CAPTURE;
Your output looks similar to the following:
CAPTURE_NAME                    FIRST_SCN MAX_CHECKPOINT_SCN
------------------------------ ---------- ------------------
CAPTURE_SIMP                       223522             230825
These results show that the capture_simp
capture process has a first SCN of 223522
. Also, this capture process has taken a checkpoint because the MAX_CHECKPOINT_SCN
value is non-NULL
. Therefore, the start SCN for the new capture process can be set to 223522
or higher.
To configure a local capture process with a non-NULL
start SCN, complete the following steps:
Before you proceed, complete the tasks in "Preparing to Configure a Capture Process".
Run the following procedure to create the capture process:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name      => 'strmadmin.streams_queue',
    capture_name    => 'strm05_capture',
    rule_set_name   => 'strmadmin.strm01_rule_set',
    start_scn       => 223525,
    source_database => NULL,
    first_scn       => NULL);
END;
/
Running this procedure performs the following actions:
Creates a capture process named strm05_capture
. A capture process with the same name must not exist.
Associates the capture process with an existing queue named streams_queue
.
Associates the capture process with an existing rule set named strm01_rule_set
. This rule set is the positive rule set for the capture process.
Specifies 223525
as the start SCN for the capture process. The new capture process uses the same LogMiner data dictionary as one of the existing capture processes. Oracle Streams automatically chooses which LogMiner data dictionary to share with the new capture process. Because the first_scn
parameter was set to NULL
, the first SCN for the new capture process is the same as the first SCN of the existing capture process whose LogMiner data dictionary was shared. In this example, the existing capture process is capture_simp
.
Creates a capture process that captures local changes to the source database because the source_database
parameter is set to NULL
. For a local capture process, you can also specify the global name of the local database for this parameter.
Note:
If no local capture process exists when the procedure in this example is run, then the DBMS_CAPTURE_ADM.BUILD
procedure is run automatically during capture process creation to extract the data dictionary into the redo log. The first time the new capture process is started, it uses this redo data to create a LogMiner data dictionary. In this case, a specified start_scn
parameter value must be equal to or higher than the current database SCN.

If necessary, complete the steps described in "After Configuring a Capture Process".
If you plan to configure propagations and apply processes that process LCRs captured by the new capture process, then perform the configuration in the following order:
Create all of the propagations that will propagate LCRs captured by the new capture process. See "Creating Oracle Streams Propagations Between ANYDATA Queues".
Create all of the apply processes that will dequeue LCRs captured by the new capture process. See "Overview of Apply Process Creation". Configure each apply process to apply captured LCRs.
Instantiate the tables for which the new capture process captures changes at all destination databases. See the Oracle Streams Replication Administrator's Guide for detailed information about instantiation.
Start the apply processes that will process LCRs captured by the new capture process. See "Starting an Apply Process".
Start the new capture process. See "Starting a Capture Process".
Note:
Other configuration steps might be required for your Oracle Streams environment. For example, some Oracle Streams environments include transformations, apply handlers, and conflict resolution.

You can use any of the following procedures to create a synchronous capture:
Both of the procedures in the DBMS_STREAMS_ADM
package create a synchronous capture with the specified name if it does not already exist, create a positive rule set for the synchronous capture if it does not exist, and can add table rules or subset rules to the rule set.
The CREATE_SYNC_CAPTURE
procedure creates a synchronous capture, but does not create a rule set or rules for the synchronous capture. However, the CREATE_SYNC_CAPTURE
procedure enables you to specify an existing rule set to associate with the synchronous capture, and it enables you to specify a capture user other than the default capture user.
The following sections describe configuring a synchronous capture:
Configuring a Synchronous Capture Using the DBMS_STREAMS_ADM Package
Configuring a Synchronous Capture Using the DBMS_CAPTURE_ADM Package
Note:
To create a synchronous capture, a user must be granted the DBA
role.
If Oracle Database Vault is installed, then the user who creates the synchronous capture must be granted the BECOME
USER
system privilege. Granting this privilege to the user is not required if Oracle Database Vault is not installed. You can revoke the BECOME
USER
system privilege from the user after the synchronous capture is created, if necessary.
See Also:
Oracle Database 2 Day + Data Replication and Integration Guide for an example that configures a replication environment that uses synchronous capture
The following tasks must be completed before you configure a synchronous capture:
Ensure that the initialization parameters are set properly on any database that will use synchronous capture. See "Setting Initialization Parameters Relevant to Oracle Streams".
Create an Oracle Streams administrator on each database involved in the Oracle Streams configuration. See "Configuring an Oracle Streams Administrator". The examples in this chapter assume that the Oracle Streams administrator is strmadmin
.
Create an ANYDATA
queue to associate with the synchronous capture, if one does not exist. See "Configuring an ANYDATA Queue" for instructions. The queue must be a commit-time queue. The examples in this chapter assume that the queue used by synchronous capture is strmadmin.streams_queue
. Create the queue in the same database that will run the synchronous capture.
Create all of the propagations that will propagate LCRs captured by the new synchronous capture. See "Creating Oracle Streams Propagations Between ANYDATA Queues".
Create all of the apply processes that will dequeue LCRs captured by the new synchronous capture. See "Overview of Apply Process Creation". Configure each apply process to apply persistent LCRs by setting the apply_captured
parameter to FALSE
in the DBMS_APPLY_ADM.CREATE_APPLY
procedure. Do not start the apply process until after the instantiation performed in "After Configuring a Synchronous Capture" is complete.
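
As a sketch of this step, the following call creates an apply process that dequeues persistent LCRs. The apply process name is a hypothetical example; apply_captured => FALSE is the setting described above (and is also the parameter's default).

```sql
-- Create an apply process for LCRs captured by a synchronous capture.
-- The apply name 'sync_apply' is illustrative only.
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.streams_queue',
    apply_name     => 'sync_apply',
    apply_captured => FALSE);  -- dequeue persistent LCRs, not captured LCRs
END;
/
```

Remember not to start this apply process until the instantiation described in "After Configuring a Synchronous Capture" is complete.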
When you run the ADD_TABLE_RULES
or ADD_SUBSET_RULES
procedure to create a synchronous capture, set the streams_type
parameter in these procedures to sync_capture
. A rule created by the ADD_TABLE_RULES
procedure instructs the synchronous capture to capture all DML changes to the table. A rule created by the ADD_SUBSET_RULES
procedure instructs the synchronous capture to capture a subset of the DML changes to the table.
This example assumes the following:
The source database is dbs1.net
.
The synchronous capture that will be created uses the strmadmin.streams_queue
.
The synchronous capture that will be created captures the results of DML changes made to the hr.departments
table.
The capture user for the synchronous capture that will be created is the Oracle Streams administrator strmadmin
.
Complete the following steps to create a synchronous capture using the DBMS_STREAMS_ADM
package:
Complete the tasks in "Preparing to Configure a Synchronous Capture".
In SQL*Plus, connect to the dbs1.net
database as the Oracle Streams administrator.
Run the ADD_TABLE_RULES
or ADD_SUBSET_RULES
procedure to create a synchronous capture. For example:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue');
END;
/
This procedure performs the following actions:
Creates a synchronous capture named sync_capture
at the source database.
Enables the synchronous capture. A synchronous capture cannot be disabled.
Associates the synchronous capture with the existing strmadmin.streams_queue
queue.
Creates a positive rule set for the synchronous capture. The rule set has a system-generated name.
Creates a rule that captures DML changes to the hr.departments
table, and adds the rule to the positive rule set for the synchronous capture. The rule has a system-generated name.
Configures the user who runs the procedure as the capture user for the synchronous capture.
Prepares the specified table for instantiation by running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION
function for the table automatically.
If necessary, complete the steps described in "After Configuring a Synchronous Capture".
Note:
When the ADD_TABLE_RULES
or the ADD_SUBSET_RULES
procedure adds rules to a synchronous capture rule set, the procedure must obtain an exclusive lock on the specified table. If there are outstanding transactions on the specified table, then the procedure waits until it can obtain a lock.

This section contains an example that runs procedures in the DBMS_CAPTURE_ADM
package and DBMS_STREAMS_ADM
package to configure a synchronous capture.
This example assumes the following:
The source database is dbs1.net
.
The synchronous capture that will be created uses the strmadmin.streams_queue
.
The synchronous capture that will be created uses an existing rule set named sync01_rule_set
in the strmadmin
schema. See "Creating a Rule Set".
The synchronous capture that will be created captures the results of a subset of the DML changes made to the hr.departments
table.
The capture user for the synchronous capture that will be created is hr
. The hr
user must have privileges to enqueue into the streams_queue
.
Complete the following steps to create a synchronous capture using the DBMS_CAPTURE_ADM
package:
Complete the tasks in "Preparing to Configure a Synchronous Capture".
In SQL*Plus, connect to the dbs1.net
database as the Oracle Streams administrator.
Run the CREATE_SYNC_CAPTURE
procedure to create a synchronous capture. For example:
BEGIN
  DBMS_CAPTURE_ADM.CREATE_SYNC_CAPTURE(
    queue_name    => 'strmadmin.streams_queue',
    capture_name  => 'sync01_capture',
    rule_set_name => 'strmadmin.sync01_rule_set',
    capture_user  => 'hr');
END;
/
Running this procedure performs the following actions:
Creates a synchronous capture named sync01_capture
. A synchronous capture with the same name must not exist.
Enables the synchronous capture. A synchronous capture cannot be disabled.
Associates the synchronous capture with an existing queue named streams_queue
.
Associates the synchronous capture with an existing rule set named sync01_rule_set
in the strmadmin
schema.
Configures hr
as the capture user for the synchronous capture.
Run the ADD_TABLE_RULES
or ADD_SUBSET_RULES
procedure to add a rule to the synchronous capture rule set. For example, run the ADD_SUBSET_RULES
procedure to instruct the synchronous capture to capture a subset of the DML changes to the hr.departments
table:
BEGIN
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name         => 'hr.departments',
    dml_condition      => 'department_id=1700',
    streams_type       => 'sync_capture',
    streams_name       => 'sync01_capture',
    queue_name         => 'streams_queue',
    include_tagged_lcr => FALSE);
END;
/
Running this procedure performs the following actions:
Adds subset rules to the rule set for the synchronous capture named sync01_capture
at the source database dbs1.net
. The subset rules instruct the synchronous capture to capture changes to rows with department_id
equal to 1700
. The synchronous capture does not capture changes to other rows in the table.
Prepares the hr.departments
table for instantiation by running the DBMS_CAPTURE_ADM.PREPARE_SYNC_INSTANTIATION
function for the table automatically.
Specifies that the synchronous capture captures a change only if the session that makes the change has a NULL
tag, because the include_tagged_lcr
parameter is set to FALSE
. This behavior is accomplished through the system-created rules for the synchronous capture.
If necessary, complete the steps described in "After Configuring a Synchronous Capture".
Note:
When the CREATE_SYNC_CAPTURE
procedure creates a synchronous capture, the procedure must obtain an exclusive lock on each table for which it will capture changes. The rules in the specified rule set for the synchronous capture determine these tables. Similarly, when the ADD_TABLE_RULES
or the ADD_SUBSET_RULES
procedure adds rules to a synchronous capture rule set, the procedure must obtain an exclusive lock on the specified table. In these cases, if there are outstanding transactions on a table for which the synchronous capture will capture changes, then the procedure waits until it can obtain a lock.

If you configured propagations and apply processes that process LCRs captured by the new synchronous capture, then complete the following steps:
Instantiate the tables for which the new synchronous capture captures changes at all destination databases. See the Oracle Streams Replication Administrator's Guide for detailed information about instantiation.
Start the apply processes that will process LCRs captured by the synchronous capture. See "Starting an Apply Process".
Note:
Other configuration steps might be required for your Oracle Streams environment. For example, some Oracle Streams environments include transformations, apply handlers, and conflict resolution.