This chapter briefly describes the basic concepts and terminology related to Oracle Streams. These concepts are described in more detail in other chapters in this book and in the Oracle Streams Replication Administrator's Guide.
Oracle Streams enables information sharing. In Oracle Streams, each unit of shared information is called a message, and you can share these messages in a stream. The stream can propagate information within a database or from one database to another. The stream routes specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing messages, and sharing the messages with other databases and applications. Oracle Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions. You can use all of the capabilities of Oracle Streams at the same time. If your needs change, then you can implement a new capability of Oracle Streams without sacrificing existing capabilities.
Using Oracle Streams, you control what information is put into a stream, how the stream flows or is routed from database to database, what happens to messages in the stream as they flow into each database, and how the stream terminates. By configuring specific capabilities of Oracle Streams, you can address specific requirements. Based on your specifications, Oracle Streams can capture, stage, and manage messages in the database automatically, including, but not limited to, data manipulation language (DML) changes and data definition language (DDL) changes. You can also put user-defined messages into a stream, and Oracle Streams can propagate the information to other databases or applications automatically. When messages reach a destination, Oracle Streams can consume them based on your specifications.
Figure 1-1 shows the Oracle Streams information flow.
Figure 1-1 Oracle Streams Information Flow
The following sections provide an overview of what Oracle Streams can do:
Oracle Streams provides two ways to capture database changes implicitly: capture processes and synchronous captures. A capture process can capture DML changes made to tables, schemas, or an entire database, as well as DDL changes. A synchronous capture can capture DML changes made to tables.
Database changes are recorded in the redo log for the database. A capture process captures changes from the redo log and formats each captured change into a message called a logical change record (LCR). The messages captured by a capture process are called captured LCRs.
A synchronous capture uses an internal mechanism to capture changes and format each captured change into an LCR. The messages captured by a synchronous capture are called persistent LCRs.
The rules used by a capture process or a synchronous capture determine which changes it captures. When changes are captured by a capture process, the database where changes are generated in the redo log is the source database. When changes are captured by a synchronous capture, the database where the synchronous capture is configured is the source database.
A capture process can capture changes locally at the source database, or it can capture changes remotely at a downstream database. A synchronous capture can only capture changes locally at the source database. Both a capture process and a synchronous capture enqueue logical change records (LCRs) into a queue. When a capture process or a synchronous capture captures changes, it is referred to as implicit capture.
Users and applications can also enqueue messages manually. These messages can be LCRs, or they can be messages of a user-defined type called user messages. When users and applications enqueue messages manually, it is referred to as explicit capture.
Messages are stored (or staged) in a queue. These messages can be logical change records (LCRs) or user messages. Capture processes and synchronous captures enqueue messages into an ANYDATA queue, which can stage messages of different types. Users and applications can enqueue messages into an ANYDATA queue or into a typed queue. A typed queue can stage messages of one specific type only.
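For illustration, an ANYDATA queue is commonly created with the DBMS_STREAMS_ADM.SET_UP_QUEUE procedure. The following is a minimal sketch; the Oracle Streams administrator, queue table, and queue name shown are assumptions rather than required names:

BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',  -- assumed queue table
    queue_name  => 'strmadmin.streams_queue',        -- assumed ANYDATA queue
    queue_user  => 'strmadmin');                     -- assumed Oracle Streams administrator
END;
/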
Oracle Streams propagations can propagate messages from one queue to another. These queues can be in the same database or in different databases. Rules determine which messages are propagated by a propagation.
A message is consumed when it is dequeued from a queue. An apply process can dequeue messages implicitly. A user, application, or messaging client can dequeue messages explicitly. The database where messages are consumed is called the destination database. In some configurations, the source database and the destination database can be the same.
Rules determine which messages are dequeued and processed by an apply process. An apply process can apply messages directly to database objects or pass messages to custom PL/SQL subprograms for processing.
Rules determine which messages are dequeued by a messaging client. A messaging client dequeues messages when it is invoked by an application or a user.
Other capabilities of Oracle Streams include the following:
automatic conflict detection and conflict resolution
These capabilities are discussed briefly later in this chapter and in detail later in this document and in the Oracle Streams Replication Administrator's Guide.
The following topics briefly describe some of the reasons for using Oracle Streams:
In some cases, Oracle Streams components provide infrastructure for various features of Oracle.
Oracle Streams can capture DML and DDL changes made to database objects and replicate those changes to one or more other databases. An Oracle Streams capture process or synchronous capture captures changes made to source database objects and formats them into LCRs, which can be propagated to destination databases and then applied by Oracle Streams apply processes.
The destination databases can allow DML and DDL changes to the same database objects, and these changes might or might not be propagated to the other databases in the environment. In other words, you can configure an Oracle Streams environment with one database that propagates changes, or you can configure an environment where changes are propagated between databases bidirectionally. Also, the tables for which data is shared do not need to be identical copies at all databases. Both the structure and the contents of these tables can differ at different databases, and the information in these tables can be shared between these databases.
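One way to configure such a replication environment is the DBMS_STREAMS_ADM.MAINTAIN_TABLES procedure. The following sketch assumes two databases with global names src.example.com and dest.example.com and directory objects SOURCE_DIR and DEST_DIR for the Data Pump instantiation files; all of these names are placeholders, and your parameter choices will differ:

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_TABLES(
    table_names                  => 'hr.departments',
    source_directory_object      => 'SOURCE_DIR',        -- assumed directory object
    destination_directory_object => 'DEST_DIR',          -- assumed directory object
    source_database              => 'src.example.com',   -- assumed global database name
    destination_database         => 'dest.example.com',  -- assumed global database name
    bi_directional               => TRUE);               -- replicate changes in both directions
END;
/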
See Also:
Oracle Streams Replication Administrator's Guide for more information about using Oracle Streams for replication
Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing existing data, and adding new data from the operational databases. Oracle Streams components can capture changes made to a production system and send those changes to a staging database or directly to a data warehouse or operational data store. Oracle Streams capture of redo data with a capture process avoids unnecessary overhead on the production systems. Support for data transformations and user-defined apply procedures enables the necessary flexibility to reformat data or update warehouse-specific data fields as data is loaded. In addition, Change Data Capture uses some of the components of Oracle Streams to identify data that has changed so that this data can be loaded into a data warehouse.
See Also:
Oracle Database Data Warehousing Guide for more information about data warehouses
You can use the features of Oracle Streams to achieve little or no database down time during database upgrade and maintenance operations. Maintenance operations include migrating a database to a different platform, migrating a database to a different character set, modifying database schema objects to support upgrades to user-created applications, and applying an Oracle software patch.
Oracle Streams Advanced Queuing (AQ) enables user applications to enqueue messages into a queue, propagate messages to subscribing queues, notify user applications that messages are ready for consumption, and dequeue messages at the destination. A queue can be configured to stage messages of a particular type only, or a queue can be configured as an ANYDATA queue. Messages of almost any type can be wrapped in an ANYDATA wrapper and staged in ANYDATA queues. Oracle Streams AQ supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, Internet propagation, transformations, and gateways to other messaging subsystems.
You can create a queue at a database, and applications can enqueue messages into the queue explicitly. Subscribing applications or messaging clients can dequeue messages directly from this queue. If an application is remote, then a queue can be created in a remote database that subscribes to messages published in the source queue. The destination application can dequeue messages from the remote queue. Alternatively, the destination application can dequeue messages directly from the source queue using a variety of standard protocols.
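For illustration, the DBMS_STREAMS_MESSAGING package provides a simple interface for explicit enqueue and dequeue of messages in an ANYDATA queue. This minimal sketch assumes the ANYDATA queue and the messaging client named below already exist; both names are illustrative:

DECLARE
  msg ANYDATA;
BEGIN
  -- Wrap a VARCHAR2 payload in an ANYDATA wrapper and enqueue it
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',                       -- assumed ANYDATA queue
    payload    => ANYDATA.CONVERTVARCHAR2('order 1503 shipped'));
  COMMIT;

  -- Dequeue the message on behalf of an assumed messaging client named msg_client
  DBMS_STREAMS_MESSAGING.DEQUEUE(
    queue_name   => 'strmadmin.streams_queue',
    streams_name => 'msg_client',
    payload      => msg,
    wait         => DBMS_STREAMS_MESSAGING.NO_WAIT);
  COMMIT;
END;
/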
See Also:
Oracle Streams Advanced Queuing User's Guide for more information about Oracle Streams AQ
Business events are valuable communications between applications or organizations. An application can enqueue messages that represent events into a queue explicitly, or an Oracle Streams capture process or synchronous capture can capture database events and encapsulate them into messages called LCRs. These messages can be the results of DML or DDL changes. Propagations can propagate messages in a stream through multiple queues. Finally, a user application can dequeue messages explicitly, or an Oracle Streams apply process can dequeue messages implicitly. An apply process can reenqueue these messages explicitly into the same queue or a different queue if necessary.
You can configure queues to retain explicitly-enqueued messages after consumption for a specified period of time. This capability enables you to use Oracle Streams Advanced Queuing (AQ) as a business event management system. Oracle Streams AQ stores all messages in the database in a transactional manner, where they can be automatically audited and tracked. You can use this audit trail to extract intelligence about the business operations.
Oracle Streams capture processes, synchronous captures, propagations, apply processes, and messaging clients perform actions based on rules. You specify which events are captured, propagated, applied, and dequeued using rules, and a built-in rules engine evaluates events based on these rules. The ability to capture events and propagate them to relevant consumers based on rules means that you can use Oracle Streams for event notification. Messages representing events can be staged in a queue and dequeued explicitly by a messaging client or an application, and then actions can be taken based on these events, which can include an e-mail notification, or passing the message to a wireless gateway for transmission to a cell phone or pager.
See Also:
Chapter 32, "Single-Database Capture and Apply Example" for a sample environment that explicitly dequeues messages
One solution for data protection is to create a local or remote copy of a production database. In the event of human error or a catastrophe, the copy can be used to resume processing.
You can use Oracle Data Guard SQL Apply, a data protection feature that uses some of the same infrastructure as Oracle Streams, to create and maintain a logical standby database, which is a logically equivalent standby copy of a production database. As in the case of Oracle Streams replication, a capture process captures changes in the redo log and formats these changes into LCRs. These LCRs are applied at the standby databases. The standby databases are open for read/write and can include specialized indexes or other database objects. Therefore, these standby databases can be queried as updates are applied.
With a logical standby database, it is important to move the updates to the remote site as soon as possible. Doing so ensures that, in the event of a failure, lost transactions are minimal. By directly and synchronously writing the redo logs at the remote database, you can achieve no data loss in the event of a disaster. At the standby system, the changes are captured and directly applied to the standby database with an apply process.
See Also:
Oracle Data Guard Concepts and Administration for more information about logical standby databases
This section provides an overview of the following implicit capture options:
Changes made to database objects in an Oracle database are logged in the redo log to guarantee recoverability in the event of user error or media failure. A capture process is an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. A capture process formats these changes into messages called LCRs and enqueues them into a queue. There are two types of LCRs: row LCRs contain information about a change to a row in a table resulting from a DML operation, and DDL LCRs contain information about a DDL change to a database object. Rules determine which changes are captured.
Figure 1-2 shows a capture process capturing LCRs.
You can configure change capture locally at a source database or remotely at a downstream database. A local capture process runs at the source database and captures changes from the local source database redo log. The following types of configurations are possible for a downstream capture process:
A real-time downstream capture configuration means that the log writer process (LGWR) at the source database sends redo data from the online redo log to the downstream database. At the downstream database, the redo data is stored in the standby redo log, and the capture process captures changes from the standby redo log.
An archived-log downstream capture configuration means that archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these archived redo log files.
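In either downstream configuration, redo data from the source database must reach the downstream database. For a real-time downstream capture configuration, for example, redo transport is typically set up at the source database with an initialization parameter along the following lines; the service name and DB_UNIQUE_NAME are placeholders, and standby redo log files must also be configured at the downstream database:

ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=dstream.example.com ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dstream';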
Note:
A capture process does not capture some types of DML and DDL changes, and it does not capture changes made in the SYS, SYSTEM, or CTXSYS schemas.
See Also:
Chapter 2, "Oracle Streams Information Capture" for more information about capture processes and for detailed information about which DML and DDL statements are captured by a capture process
Synchronous capture is an optional Oracle Streams client that captures data manipulation language (DML) changes made to tables. Synchronous capture uses an internal mechanism to capture DML changes to specified tables. When synchronous capture is configured to capture changes to tables, the database that contains these tables is called the source database.
When a DML change is made to a table, it can result in changes to one or more rows in the table. Synchronous capture captures each row change and converts it into a specific message format called a row logical change record (row LCR). After capturing a row LCR, synchronous capture enqueues a message containing the row LCR into a queue.
Figure 1-3 shows a synchronous capture capturing LCRs.
Oracle Streams uses queues to stage messages for propagation or consumption. Propagations send messages from one queue to another, and these queues can be in the same database or in different databases. The queue from which the messages are propagated is called the source queue, and the queue that receives the messages is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.
Messages that are staged in a queue can be consumed by an apply process, a messaging client, or an application. Rules determine which messages are propagated by a propagation. Figure 1-4 shows propagation from a source queue to a destination queue.
Figure 1-4 Propagation from a Source Queue to a Destination Queue
See Also:
Chapter 3, "Oracle Streams Staging and Propagation" for more information about staging and propagation
Oracle Streams enables you to configure an environment in which changes are shared through directed networks. In a directed network, propagated messages pass through one or more intermediate databases before arriving at a destination database where they are consumed. The messages might or might not be consumed at an intermediate database in addition to the destination database. Using Oracle Streams, you can choose which messages are propagated to each destination database, and you can specify the route messages will traverse on their way to a destination database.
See Also:
"Directed Networks"User applications can enqueue messages into a queue explicitly. The user applications can format these messages as LCRs or user messages, and an apply process, a messaging client, or a user application can consume these messages. Messages that were enqueued explicitly can be propagated to another queue or explicitly dequeued from the same queue. Figure 1-5 shows explicit enqueue of messages into and dequeue of messages from the same queue.
Figure 1-5 Explicit Enqueue and Dequeue of Messages in a Single Queue
When messages are propagated between queues, messages that were enqueued explicitly into a source queue can be dequeued explicitly from a destination queue by a messaging client or user application. These messages can also be processed by an apply process. Figure 1-6 shows explicit enqueue of messages into a source queue, propagation to a destination queue, and then explicit dequeue of messages from the destination queue.
Figure 1-6 Explicit Enqueue, Propagation, and Dequeue of Messages
See Also:
"ANYDATA Queues and User Messages" for more information about explicit enqueue and dequeue of messagesAn apply process is an Oracle background process that dequeues messages from a queue and either applies each message directly to a database object or passes the message as a parameter to a user-defined procedure called an apply handler. Apply handlers include message handlers, DML handlers, DDL handler, precommit handlers, and error handlers.
Typically, an apply process applies messages to the local database where it is running, but, in a heterogeneous database environment, it can be configured to apply messages at a remote non-Oracle database. Rules determine which messages are dequeued by an apply process. Figure 1-7 shows an apply process processing LCRs and user messages.
A messaging client consumes persistent LCRs or persistent user messages when it is invoked by an application or a user. Rules determine which messages are dequeued by a messaging client. Figure 1-8 shows a messaging client dequeuing messages.
An apply process detects conflicts automatically when directly applying LCRs in a replication environment. A conflict is a mismatch between the old values in an LCR and the expected data in a table. Typically, a conflict results when the same row in the source database and destination database is changed at approximately the same time.
When a conflict occurs, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules. Oracle Streams offers a variety of prebuilt conflict handlers. Using these prebuilt handlers, you can define a conflict resolution system for each of your databases that resolves conflicts in accordance with your business rules. If you have a unique situation that prebuilt conflict resolution handlers cannot resolve, then you can build your own conflict resolution handlers.
If a conflict is not resolved, or if a handler procedure raises an error, then all messages in the transaction that raised the error are saved in the error queue for later analysis and possible reexecution.
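For example, a prebuilt update conflict handler can be set with the DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER procedure. The following sketch is modeled on a common pattern: update conflicts on two assumed columns of the hr.employees table are resolved by keeping the row with the greater value in the resolution column. The column choices are illustrative:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'commission_pct';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',   -- keep the change with the greater resolution column value
    resolution_column => 'salary',
    column_list       => cols);
END;
/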
Oracle Streams enables you to control which information to share and where to share it using rules. A rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query.
A rule consists of the following components:
The rule condition combines one or more expressions and conditions and returns a Boolean value, which is a value of TRUE, FALSE, or NULL (unknown), based on an event.
The evaluation context defines external data that can be referenced in rule conditions. The external data can either exist as external variables, as table data, or both.
The action context is optional information associated with a rule that is interpreted by the client of the rules engine when the rule is evaluated.
You can group related rules together into rule sets. In Oracle Streams, rule sets can be positive or negative.
For example, the following rule condition can be used for a rule in Oracle Streams to specify that the schema name that owns a table must be hr and that the table name must be departments for the condition to evaluate to TRUE:
:dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'DEPARTMENTS'
The :dml variable is used in rule conditions for row LCRs. In an Oracle Streams environment, a rule with this condition can be used in the following ways:
If the rule is in a positive rule set for a capture process, then it instructs the capture process to capture row changes that result from DML changes to the hr.departments table. If the rule is in a negative rule set for a capture process, then it instructs the capture process to discard DML changes to the hr.departments table.
If the rule is in a positive rule set for a synchronous capture, then it instructs the synchronous capture to capture row changes that result from DML changes to the hr.departments table. A synchronous capture cannot have a negative rule set.
If the rule is in a positive rule set for a propagation, then it instructs the propagation to propagate LCRs that contain row changes to the hr.departments table. If the rule is in a negative rule set for a propagation, then it instructs the propagation to discard LCRs that contain row changes to the hr.departments table.
If the rule is in a positive rule set for an apply process, then it instructs the apply process to apply LCRs that contain row changes to the hr.departments table. If the rule is in a negative rule set for an apply process, then it instructs the apply process to discard LCRs that contain row changes to the hr.departments table.
If the rule is in a positive rule set for a messaging client, then it instructs the messaging client to dequeue LCRs that contain row changes to the hr.departments table. If the rule is in a negative rule set for a messaging client, then it instructs the messaging client to discard LCRs that contain row changes to the hr.departments table.
Oracle Streams performs tasks based on rules. These tasks include capturing messages with a capture process or synchronous capture, propagating messages with a propagation, applying messages with an apply process, dequeuing messages with a messaging client, and discarding messages.
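For example, a rule with a condition like the one above is typically created by adding rules for an Oracle Streams client with the DBMS_STREAMS_ADM package rather than by writing the condition by hand. The following sketch adds DML rules for the hr.departments table to the positive rule set of an assumed capture process; the capture process and queue names are illustrative:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name     => 'hr.departments',
    streams_type   => 'capture',
    streams_name   => 'strm01_capture',           -- assumed capture process name
    queue_name     => 'strmadmin.streams_queue',  -- assumed ANYDATA queue
    include_dml    => TRUE,
    include_ddl    => FALSE,
    inclusion_rule => TRUE);  -- TRUE places the rule in the positive rule set
END;
/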
A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. There are two types of rule-based transformations: declarative and custom.
Declarative rule-based transformations cover a set of common transformation scenarios for row LCRs, including renaming a schema, renaming a table, adding a column, renaming a column, and deleting a column. You specify (or declare) such a transformation using a procedure in the DBMS_STREAMS_ADM package. Oracle Streams performs declarative transformations internally, without invoking PL/SQL.
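For example, a declarative rule-based transformation that renames a table in row LCRs might be declared as follows; the rule name and the renamed table are assumptions for illustration:

BEGIN
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments12',  -- assumed rule on hr.departments
    from_table_name => 'hr.departments',
    to_table_name   => 'hr.depts',                 -- hypothetical table name at the destination
    step_number     => 0,
    operation       => 'ADD');                     -- add the transformation to the rule
END;
/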
A custom rule-based transformation requires a user-defined PL/SQL function to perform the transformation. Oracle Streams invokes the PL/SQL function to perform the transformation. A custom rule-based transformation can modify either LCRs or user messages. For example, a custom rule-based transformation can change the data type of a particular column in an LCR.
To specify a custom rule-based transformation, use the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure. The transformation function takes as input an ANYDATA object containing a message and returns an ANYDATA object containing the transformed message. For example, a transformation can use a PL/SQL function that takes as input an ANYDATA object containing an LCR with a NUMBER data type for a column and returns an ANYDATA object containing an LCR with a VARCHAR2 data type for the same column.
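As a minimal sketch (showing a simpler transformation than the data type change described above), the following hypothetical function changes the object owner in row LCRs and is then registered with SET_RULE_TRANSFORM_FUNCTION. The function name, rule name, and schema names are all illustrative:

CREATE OR REPLACE FUNCTION strmadmin.hr_to_hr_reports(in_any IN ANYDATA)
RETURN ANYDATA
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  -- Only transform row LCRs; pass any other payload through unchanged
  IF in_any.GETTYPENAME() = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    lcr.SET_OBJECT_OWNER('HR_REPORTS');   -- hypothetical destination schema
    RETURN ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN in_any;
END;
/

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.departments12',       -- assumed rule name
    transform_function => 'strmadmin.hr_to_hr_reports');
END;
/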
Either type of rule-based transformation can occur at the following times:
During enqueue of a message by a capture process, which can be useful for formatting a message in a manner appropriate for all destination databases
During propagation of a message, which can be useful for transforming a message before it is sent to a specific remote site
During dequeue of a message by an apply process or messaging client, which can be useful for formatting a message in a manner appropriate for a specific destination database
When a transformation is performed during apply, an apply process can apply the transformed message directly or send the transformed message to an apply handler for processing. Figure 1-9 shows a rule-based transformation during apply.
Note:
A rule must be in a positive rule set for its rule-based transformation to be invoked. A rule-based transformation specified for a rule in a negative rule set is ignored by capture processes, propagations, apply processes, and messaging clients.
Throughout this document, "rule-based transformation" is used when the text applies to both declarative and custom rule-based transformations. This document distinguishes between the two types of rule-based transformations when necessary.
See Also:
Chapter 7, "Rule-Based Transformations"
Every redo entry in the redo log has a tag associated with it. The data type of the tag is RAW. By default, when a user or application generates redo entries, the value of the tag is NULL for each redo entry, and a NULL tag consumes no space in the redo entry. The size limit for a tag value is 2000 bytes.
In Oracle Streams, rules can have conditions relating to tag values to control the behavior of Oracle Streams clients. For example, a tag can be used to determine whether an LCR contains a change that originated in the local database or at a different database, so that you can avoid change cycling (sending an LCR back to the database where it originated). Also, a tag can be used to specify the set of destination databases for each LCR. Tags can be used for other LCR tracking purposes as well.
You can specify Oracle Streams tags for redo entries generated by a certain session or by an apply process. These tags then become part of the LCRs captured by a capture process or synchronous capture. Typically, tags are used in Oracle Streams replication environments, but you can use them whenever it is necessary to track database changes and LCRs.
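For example, a session can set a non-NULL tag for the redo entries it generates by calling the DBMS_STREAMS.SET_TAG procedure; the tag value below is arbitrary:

BEGIN
  -- Mark redo generated by this session with the tag value '1D'
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

Rules can then test the tag; for example, a condition such as :dml.is_null_tag() = 'Y' limits a rule to changes that have a NULL tag.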
See Also:
Oracle Streams Replication Administrator's Guide for more information about Oracle Streams tags
In addition to information sharing between Oracle databases, Oracle Streams supports information sharing between Oracle databases and non-Oracle databases. The following sections contain an overview of this support.
See Also:
Oracle Streams Replication Administrator's Guide for more information about heterogeneous information sharing with Oracle Streams
If an Oracle database is the source and a non-Oracle database is the destination, then the non-Oracle database destination lacks the following Oracle Streams mechanisms:
An apply process to dequeue and apply messages
To share DML changes from an Oracle source database with a non-Oracle destination database, the Oracle database functions as a proxy and carries out some of the steps that would usually be done at the destination database. That is, the messages intended for the non-Oracle destination database are dequeued in the Oracle database itself, and an apply process at the Oracle database uses Heterogeneous Services to apply the messages to the non-Oracle database across a network connection through a gateway. Figure 1-10 shows an Oracle database sharing data with a non-Oracle database.
Figure 1-10 Oracle to Non-Oracle Heterogeneous Data Sharing
See Also:
Oracle Database Heterogeneous Connectivity Administrator's Guide for more information about Heterogeneous Services
To capture and propagate changes from a non-Oracle database to an Oracle database, a custom application is required. This application gets the changes made to the non-Oracle database by reading from transaction logs, using triggers, or some other method. The application must assemble and order the transactions and must convert each change into an LCR. Next, the application must enqueue the LCRs into a queue in an Oracle database by using the PL/SQL interface, where they can be processed by an apply process. Figure 1-11 shows a non-Oracle database sharing data with an Oracle database.
Figure 1-11 Non-Oracle to Oracle Heterogeneous Data Sharing
Each of the following sections provides an overview of a sample Oracle Streams configuration:
Sample Hub-and-Spoke Replication Configuration
Sample Hub-and-Spoke Replication Configuration With Downstream Capture
Sample Hub-and-Spoke Replication Configuration That Uses Synchronous Captures
Sample N-Way Replication Configuration
Sample Configuration That Performs Capture and Apply in a Single Database
Sample Messaging Configuration
Figure 1-12 shows a sample hub-and-spoke replication configuration. A hub-and-spoke replication configuration typically is used to distribute information to multiple target databases and to consolidate information from multiple databases to a single database.
A hub-and-spoke replication configuration is one in which a central database, or hub, communicates with one or more secondary databases, or spokes. The spokes do not communicate directly with each other. In a hub-and-spoke replication configuration, the spokes might or might not allow changes to the replicated database objects.
In the sample hub-and-spoke replication configuration shown in Figure 1-12, there is one hub database and two spoke databases. The spoke databases allow changes to the replicated database objects.
Figure 1-12 Sample Hub-and-Spoke Replication Configuration
For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.
Figure 1-13 shows a sample hub-and-spoke replication configuration that uses a downstream capture process. Downstream capture means that the capture process runs on a remote database instead of the source database. Using downstream capture removes the capture workload from the production database.
In the sample hub-and-spoke replication configuration shown in Figure 1-13, the downstream capture process runs at the spoke database, and the redo data is sent from the hub database to the spoke database. At the spoke database, a downstream capture process captures the changes in the redo data sent from the hub database and an apply process applies these changes to the local database objects.
Figure 1-13 Sample Hub-and-Spoke Replication Configuration With Downstream Capture
For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.
Figure 1-14 shows a sample hub-and-spoke replication configuration that uses synchronous captures to capture changes instead of capture processes. You can use a synchronous capture replication configuration to replicate changes to tables with infrequent data changes in a highly active database or in situations where capturing changes from the redo logs is not possible.
Figure 1-14 Sample Hub-and-Spoke Replication Configuration With Synchronous Captures
For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.
Figure 1-15 shows a sample n-way replication configuration. An n-way replication configuration typically is used in an environment with several peer databases and each database must replicate data with each of the other databases. An n-way replication configuration can provide load balancing, and it can provide failover protection if a single database becomes unavailable.
An n-way replication configuration is one in which each database communicates directly with each other database in the environment. The changes made to replicated database objects at one database are captured and sent directly to each of the other databases in the environment, where they are applied.
In the sample n-way replication configuration shown in Figure 1-15, each of the three databases captures changes to the replicated database objects and sends these changes to the other two databases in the configuration. Apply processes at each database apply the changes sent from the other two databases.
Figure 1-15 Sample N-Way Replication Configuration
For more information about this configuration, see Oracle Streams Replication Administrator's Guide.
Figure 1-16 shows a sample configuration that captures database changes with a capture process and applies these changes with an apply process in a single database. In this configuration, the apply process reenqueues the changes into the queue for processing by an application. Also, a DML handler inserts rows that were deleted from the hr.employees table into an hr.emp_del table.
Figure 1-16 Sample Single Database Capture and Apply Configuration
For more information about this configuration, see Chapter 32, "Single-Database Capture and Apply Example".
Figure 1-17 shows a sample messaging configuration. A messaging configuration sends messages from one queue to another queue. The two queues can be in the same database or in different databases. The messages can be dequeued and processed by applications in a customized way.
In the sample messaging configuration shown in Figure 1-17, a trigger at one database creates and enqueues messages. A propagation sends the messages to another database, where a PL/SQL procedure dequeues the messages and processes them.
Figure 1-17 Sample Messaging Configuration
For more information about this configuration, see Oracle Database 2 Day + Data Replication and Integration Guide.
Several tools are available for configuring, administering, and monitoring your Oracle Streams environment. Oracle-supplied PL/SQL packages are the primary configuration and management tools, and the Oracle Streams tool in Oracle Enterprise Manager provides some configuration, administration, and monitoring capabilities to help you manage your environment. Additionally, Oracle Streams data dictionary views keep you informed about your Oracle Streams environment.
The following Oracle-supplied PL/SQL packages contain procedures and functions for configuring and managing an Oracle Streams environment.
The DBMS_APPLY_ADM package provides an administrative interface for starting, stopping, and configuring an apply process. This package includes procedures that enable you to configure apply handlers, set enqueue destinations for messages, and specify execution directives for messages. This package also provides administrative procedures that set the instantiation SCN for objects at a destination database. This package also includes subprograms for configuring conflict detection and resolution and for managing apply errors.
The DBMS_CAPTURE_ADM package provides an administrative interface for starting, stopping, and configuring a capture process. It also provides an administrative interface for configuring a synchronous capture. This package also provides administrative procedures that prepare database objects at the source database for instantiation at a destination database.
The DBMS_COMPARISON package provides interfaces to compare and converge database objects at different databases.
The DBMS_PROPAGATION_ADM package provides an administrative interface for configuring propagation from a source queue to a destination queue.
The DBMS_RULE package contains the EVALUATE procedure, which evaluates a rule set. The goal of this procedure is to produce the list of satisfied rules, based on the data. This package also contains subprograms that enable you to use iterators during rule evaluation. Instead of returning all rules that evaluate to TRUE or MAYBE for an evaluation, iterators can return one rule at a time.
The DBMS_RULE_ADM package provides an administrative interface for creating and managing rules, rule sets, and rule evaluation contexts. This package also contains subprograms for managing privileges related to rules.
The DBMS_STREAMS package provides interfaces to convert ANYDATA objects into LCR objects, to return information about Oracle Streams attributes and Oracle Streams clients, and to annotate redo entries generated by a session with a tag. This tag can affect the behavior of a capture process, a synchronous capture, a propagation, an apply process, or a messaging client whose rules include specifications for these tags in redo entries or LCRs.
The DBMS_STREAMS_ADM package provides an administrative interface for adding and removing simple rules for capture processes, propagations, and apply processes at the table, schema, and database level. This package also enables you to add rules that control which messages a propagation propagates and which messages a messaging client dequeues. This package also contains procedures for creating queues and for managing Oracle Streams metadata, such as data dictionary information. This package also contains procedures that enable you to configure and maintain an Oracle Streams replication environment. This package is provided as an easy way to complete common tasks in an Oracle Streams environment. You can use other packages, such as the DBMS_CAPTURE_ADM, DBMS_PROPAGATION_ADM, DBMS_APPLY_ADM, DBMS_RULE_ADM, and DBMS_AQADM packages, to complete these same tasks, as well as tasks that require additional customization.
The DBMS_STREAMS_ADVISOR_ADM package provides an interface to gather information about an Oracle Streams environment and advise database administrators based on the information gathered. This package is part of the Oracle Streams Performance Advisor.
The DBMS_STREAMS_AUTH package provides interfaces for granting privileges to and revoking privileges from Oracle Streams administrators.
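For example, an assumed Oracle Streams administrator named strmadmin is typically granted the required privileges as follows:

BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee => 'strmadmin');  -- assumed Oracle Streams administrator
END;
/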
The DBMS_STREAMS_MESSAGING package provides interfaces to enqueue messages into and dequeue messages from an ANYDATA queue.
The DBMS_STREAMS_TABLESPACE_ADM package provides administrative procedures for creating and managing a tablespace repository. This package also provides administrative procedures for copying tablespaces between databases and moving tablespaces from one database to another. This package uses transportable tablespaces, Data Pump, and the DBMS_FILE_TRANSFER package.
The UTL_SPADV package provides subprograms to collect and analyze statistics for the Oracle Streams components in a distributed database environment. This package uses the Oracle Streams Performance Advisor to gather statistics.
See Also:
Oracle Database PL/SQL Packages and Types Reference for more information about these packages
Every database in an Oracle Streams environment has Oracle Streams data dictionary views. These views maintain administrative information about local rules, objects, capture processes, propagations, apply processes, and messaging clients. You can use these views to monitor your Oracle Streams environment.
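For example, the following query of the DBA_CAPTURE data dictionary view lists each capture process in the local database and its status:

SELECT capture_name, status FROM DBA_CAPTURE;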
See Also:
Oracle Streams Replication Administrator's Guide for queries that are useful in an Oracle Streams replication environment
Oracle Database Reference for more information about these data dictionary views
To help configure, administer, and monitor Oracle Streams environments, Oracle provides an Oracle Streams tool in the Oracle Enterprise Manager Console. You can also use the Oracle Streams tool to generate Oracle Streams configuration scripts, which you can then modify and run to configure your Oracle Streams environment. The Oracle Streams tool online Help contains the primary documentation for this tool.
Figure 1-18 shows the top portion of the Streams page in Enterprise Manager.
Figure 1-18 Streams page in Enterprise Manager
Figure 1-19 shows the Oracle Streams Topology, which is on the bottom portion of the Streams page in the Enterprise Manager.
See Also:
Oracle Database 2 Day + Data Replication and Integration Guide
The online Help for the Oracle Streams tool in the Oracle Enterprise Manager