This chapter illustrates how to use Unicode in an Oracle database environment.
Dealing with many different languages in the same application or database has long been complicated and difficult. To overcome the limitations of existing character encodings, several organizations began working on the creation of a global character set in the late 1980s. The need became even greater with the development of the World Wide Web in the mid-1990s: the Internet has changed how companies do business, and the resulting emphasis on the global market has made a universal character set a major requirement. Such a character set must contain all major living scripts, support legacy data and implementations, and be simple enough that a single implementation of an application is sufficient for worldwide use. It should also support multilingual users and organizations, conform to international standards, and enable worldwide interchange of data.
This global character set exists, is in wide use, and is called Unicode.
Unicode is a universal encoded character set that enables information from any language to be stored using a single character set. Unicode provides a unique code value for every character, regardless of the platform, program, or language.
The Unicode standard has been adopted by many software and hardware vendors. Many operating systems and browsers now support Unicode. Unicode is required by standards such as XML, Java, JavaScript, LDAP, and WML. It is also synchronized with the ISO/IEC 10646 standard.
Oracle Corporation started supporting Unicode as a database character set in Oracle7. In Oracle9i, Unicode support has been expanded. Oracle9i supports Unicode 3.1.
The first version of Unicode was a 16-bit, fixed-width encoding that used two bytes to encode each character. This allowed 65,536 characters to be represented. However, more characters needed to be supported, especially additional CJK ideographs that are important for the Chinese, Japanese, and Korean markets.
Unicode 3.1 defines supplementary characters to meet this need. It uses two 16-bit code units (known as a surrogate pair) to represent a single supplementary character. This enables an additional 1,048,576 characters to be defined. The Unicode 3.1 standard added the first group of 44,946 supplementary characters.
Adding supplementary characters increases the complexity of Unicode, but it is less complex than managing several different encodings in the same configuration.
Unicode 3.1 encodes characters in different ways: UTF-8, UCS-2, and UTF-16. Conversion between different Unicode encodings is a simple bit-wise operation that is defined in the Unicode standard.
UTF-8 is the 8-bit encoding of Unicode. It is a variable-width encoding and a strict superset of ASCII. This means that each and every character in the ASCII character set is available in UTF-8 with the same code point values. One Unicode character can be 1 byte, 2 bytes, 3 bytes, or 4 bytes in UTF-8 encoding. Characters from the European scripts are represented in either 1 or 2 bytes. Characters from most Asian scripts are represented in 3 bytes. Supplementary characters are represented in 4 bytes.
UTF-8 is the Unicode encoding supported on UNIX platforms and used for HTML and most Internet browsers. Other environments such as Windows and Java use UCS-2 encoding.
The chief benefit of UTF-8 is its compatibility with existing ASCII data and with byte-oriented applications and protocols, which makes it well suited for migrating ASCII-based systems to Unicode.
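The variable-width behavior described above can be observed directly in an AL32UTF8 database. The following is a minimal sketch, assuming the client character set is also UTF-8; the sample strings are illustrative.

-- LENGTH counts characters, LENGTHB counts bytes
SELECT LENGTH('résumé') AS char_count, LENGTHB('résumé') AS byte_count FROM dual;
-- Under these assumptions: 6 characters, 8 bytes (each é occupies 2 bytes in UTF-8)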
UCS-2 is a fixed-width, 16-bit encoding. Each character is 2 bytes. UCS-2 is the Unicode encoding used by Java and Microsoft Windows NT 4.0. UCS-2 supports characters defined for Unicode 3.0, so there is no support for supplementary characters.
The chief benefits of UCS-2 over UTF-8 are fixed-width processing, which simplifies string handling, and more compact storage for Asian-script text.
UTF-16 encoding is the 16-bit encoding of Unicode. UTF-16 is an extension of UCS-2 because it supports the supplementary characters that are defined in Unicode 3.1 by using two UCS-2 code points for each supplementary character. UTF-16 is a strict superset of UCS-2.
One character can be either 2 bytes or 4 bytes in UTF-16. Characters from European and most Asian scripts are represented in 2 bytes. Supplementary characters are represented in 4 bytes. UTF-16 is the main Unicode encoding used by Microsoft Windows 2000.
The chief benefits of UTF-16 over UTF-8 are more compact storage for Asian-script text and a closer match to the string types used by Java and Microsoft Windows.
Figure 5-1 shows some characters and their character codes in UTF-16, UTF-8, and UCS-2 encoding. The last character is a treble clef (a music symbol), a supplementary character that has been added to the Unicode 3.1 standard.
Oracle Corporation started supporting Unicode as a database character set in Oracle7. Table 5-1 summarizes the Unicode character sets supported by the Oracle database server.
You can store Unicode characters in an Oracle9i database in two ways.

You can create a Unicode database that enables you to store UTF-8 encoded characters as SQL CHAR datatypes (CHAR, VARCHAR2, CLOB, and LONG).

If you prefer to implement Unicode support incrementally, or if you need to support multilingual data only in certain columns, then you can store Unicode data in either the UTF-16 or UTF-8 encoding form in SQL NCHAR datatypes (NCHAR, NVARCHAR2, and NCLOB). The SQL NCHAR datatypes are called Unicode datatypes because they are used only for storing Unicode data.
The following sections explain how to use the two Unicode solutions and how to choose between them:
The database character set specifies the encoding to be used in the SQL CHAR datatypes as well as in metadata such as table names, column names, and SQL statements. A Unicode database is a database with a UTF-8 character set as the database character set. Three Oracle character sets implement the UTF-8 encoding: the first two are designed for ASCII-based platforms, while the third should be used on EBCDIC platforms.
The AL32UTF8 character set supports the latest version of the Unicode standard. It encodes characters in one, two, or three bytes. Supplementary characters require four bytes. It is for ASCII-based platforms.
The UTF8 character set encodes characters in one, two, or three bytes. It is for ASCII-based platforms.
The UTF8 character set has supported Unicode 3.0 since Oracle8i release 8.1.7 and will continue to support Unicode 3.0 in future releases of the Oracle database server. Although specific supplementary characters were not assigned code points until Unicode 3.1, the code point range for supplementary characters was already allocated in Unicode 3.0. Inserting supplementary characters into a UTF8 database does not corrupt the data; however, each supplementary character is treated as two separate, user-defined characters that occupy 6 bytes. Oracle Corporation recommends that you switch to AL32UTF8 for full support of supplementary characters in the database character set.
The UTFE character set is for EBCDIC platforms. It has the same properties as UTF8 on ASCII platforms.
To create a database with the AL32UTF8 character set, use the CREATE DATABASE statement and include the CHARACTER SET AL32UTF8 clause. For example:
CREATE DATABASE sample
CONTROLFILE REUSE
LOGFILE
  GROUP 1 ('diskx:log1.log', 'disky:log1.log') SIZE 50K,
  GROUP 2 ('diskx:log2.log', 'disky:log2.log') SIZE 50K
MAXLOGFILES 5
MAXLOGHISTORY 100
MAXDATAFILES 10
MAXINSTANCES 2
ARCHIVELOG
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
DATAFILE
  'disk1:df1.dbf' AUTOEXTEND ON,
  'disk2:df2.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE temp_ts
UNDO TABLESPACE undo_ts
SET TIME_ZONE = '+02:00';
An alternative way to store Unicode data is to use the SQL NCHAR datatypes (NCHAR, NVARCHAR2, and NCLOB). You can store Unicode characters in columns of these datatypes regardless of how the database character set has been defined. The NCHAR datatype has been redefined in Oracle9i to be a Unicode datatype exclusively; in other words, it stores data encoded as Unicode.
In releases before Oracle9i, the NCHAR datatype supported fixed-width Asian character sets that were designed to provide higher performance. Examples of fixed-width character sets are JA16SJISFIXED and ZHT32EUCFIXED. No Unicode character set was supported as the national character set before Oracle9i.
You can create a table using the NVARCHAR2 and NCHAR datatypes. The column length specified for NCHAR and NVARCHAR2 columns is always a number of characters rather than a number of bytes:

CREATE TABLE product_information
  ( product_id          NUMBER(6)
  , product_name        NVARCHAR2(100)
  , product_description VARCHAR2(1000));
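A minimal usage sketch follows: the NVARCHAR2 column accepts up to 100 characters regardless of how many bytes each character occupies, and the N'' literal marks the value as national character set data. The product values are illustrative.

INSERT INTO product_information (product_id, product_name, product_description)
  VALUES (1, N'Espressomaschine für Küchen', 'Sample multilingual product name');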
The encoding used in the SQL NCHAR datatypes is the national character set specified for the database. You can specify one of the following Oracle character sets as the national character set:

AL16UTF16: This is the default character set for SQL NCHAR datatypes. It encodes Unicode data in the UTF-16 encoding and supports supplementary characters, which are stored as four bytes.

UTF8: When UTF8 is specified for SQL NCHAR datatypes, the data stored in the SQL NCHAR datatypes is in UTF-8 encoding.
You can specify the national character set for the SQL NCHAR datatypes when you create a database using the CREATE DATABASE statement with the NATIONAL CHARACTER SET clause. The following statement creates a database with WE8ISO8859P1 as the database character set and AL16UTF16 as the national character set:

CREATE DATABASE sample
CONTROLFILE REUSE
LOGFILE
  GROUP 1 ('diskx:log1.log', 'disky:log1.log') SIZE 50K,
  GROUP 2 ('diskx:log2.log', 'disky:log2.log') SIZE 50K
MAXLOGFILES 5
MAXLOGHISTORY 100
MAXDATAFILES 10
MAXINSTANCES 2
ARCHIVELOG
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET AL16UTF16
DATAFILE
  'disk1:df1.dbf' AUTOEXTEND ON,
  'disk2:df2.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE temp_ts
UNDO TABLESPACE undo_ts
SET TIME_ZONE = '+02:00';
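To confirm the character set choices after the database has been created, you can query the data dictionary. This is a minimal sketch using the NLS_DATABASE_PARAMETERS view.

SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');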
To choose the right Unicode solution for your database, consider the programming environments you use, the languages and datatypes your applications must support, and whether you need to store multilingual documents in LOB columns.

This section describes some general guidelines for choosing a Unicode database or a Unicode datatype solution. The final decision largely depends on your exact environment and requirements.
Use a Unicode database in the situations described in Table 5-2.
Use Unicode datatypes in the situations described in Table 5-3.
Oracle9i provides two solutions for storing Unicode characters in the database: a Unicode database solution and a Unicode datatype solution. After you select the Unicode database solution, the Unicode datatype solution, or a combination of both, determine the character set to be used in the Unicode database or the Unicode datatype.
Table 5-4 contains advantages and disadvantages of different character sets for a Unicode database solution. The Oracle character sets that can be Unicode database character sets are AL32UTF8, UTF8, and UTFE.
Table 5-5 contains advantages and disadvantages of different character sets for a Unicode datatype solution. The Oracle character sets that can be national character sets are AL16UTF16 and UTF8.
This section describes typical scenarios for storing Unicode characters in an Oracle9i database.

An American company running a Java application would like to add German and French support in the next release of the application, and Japanese support at a later time. The existing database character set is US7ASCII.

In this case, a typical solution is to choose UTF8 for the database character set, because the existing US7ASCII database can be migrated to UTF8 with the ALTER DATABASE statement and no data conversion is required: US7ASCII is a subset of UTF8.
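The migration itself is a single statement. The sketch below assumes the US7ASCII database from this scenario and omits required preparation steps such as a full backup and opening the database in restricted mode, which are described in the character set migration documentation.

-- Change the database character set from US7ASCII to UTF8 (no data conversion needed)
ALTER DATABASE CHARACTER SET UTF8;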
A European company that runs its applications mainly on Windows platforms wants to add new Windows applications written in Visual C/C++. The new applications will use the existing database to support Japanese and Chinese customer names. The existing database uses a non-Unicode character set.

A typical solution is to leave the database character set unchanged and to use the NCHAR and NVARCHAR2 datatypes to store the Unicode characters. The reasons for this solution are:

Storing the Japanese and Chinese customer names in an NCHAR column meets the customer's requirements without migrating the entire database.

The column lengths of NCHAR datatypes are defined as a number of characters. This is the same as the way they are treated when using wchar_t strings in Windows C/C++ programs, which reduces programming complexity.
A Japanese company wants to develop a new Java application on Oracle9i. The company expects the application to support as many languages as possible in the long run, and it wants to use the BLOB datatype to store documents of multiple languages.

In this case, the typical solution is to create a Unicode database using AL32UTF8 as the database character set and to use the SQL NCHAR datatypes to store multilingual data. The national character set should be set to AL16UTF16. The reasons for this solution are as follows:

To search multilingual documents stored in BLOBs, Oracle Text requires the database character set to be one of the UTF-8 character sets. Because the applications may retrieve relational data in UTF-8 XML format (where supplementary characters are stored as four bytes), AL32UTF8 should be used as the database character set to avoid data conversion when UTF-8 data is retrieved or inserted.

The Java application can use the SQL NCHAR datatypes for its relational data. Both Java and Windows support the UTF-16 character datatype, and the length of a character string is always measured in the number of characters.

AL16UTF16 is preferable to UTF8 as the national character set for the SQL NCHAR datatypes because AL16UTF16 offers better performance and storage efficiency.

In addition to choosing a Unicode solution, the following issues should be taken into consideration when the database schema is designed to support multiple languages:
When you use NCHAR and NVARCHAR2 datatypes for storing multilingual data, the column size specified for a column is defined in number of characters, where the number of characters means the number of Unicode code units. Table 5-6 shows the maximum size of the NCHAR and NVARCHAR2 datatypes for the AL16UTF16 and UTF8 national character sets.
National Character Set | Maximum Column Size of NCHAR Datatype | Maximum Column Size of NVARCHAR2 Datatype
---|---|---
AL16UTF16 | 1000 characters | 2000 characters
UTF8 | 2000 bytes | 4000 bytes
When you use CHAR and VARCHAR2 datatypes for storing multilingual data, the maximum length specified for each column is, by default, in number of bytes. If the database needs to support Thai, Arabic, or multibyte languages such as Chinese and Japanese, then the maximum lengths of the CHAR, VARCHAR, and VARCHAR2 columns may need to be extended. This is because the number of bytes required to encode these languages in UTF8 or AL32UTF8 may be significantly larger than the number of bytes for encoding English and Western European languages. For example, one Thai character in the Thai character set requires 3 bytes in UTF8 or AL32UTF8. In addition, the maximum column lengths for the CHAR, VARCHAR, and VARCHAR2 datatypes are 2000 bytes, 4000 bytes, and 4000 bytes respectively. If applications need to store more than 4000 bytes, they should use the CLOB datatype.
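As a minimal sketch of this sizing rule, assume a UTF8 database with byte-length semantics and an illustrative table that must hold product names of up to 300 Thai characters; at 3 bytes per Thai character, the column is declared with 900 bytes.

CREATE TABLE thai_product_names
  (product_id   NUMBER(6),
   product_name VARCHAR2(900));  -- room for 300 Thai characters at 3 bytes each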
The Unicode character set includes characters from most written languages of the world, but it does not contain information about the language to which a given character belongs. In other words, a character such as ä does not indicate whether it is a French or German character. In order to provide information in the language a user desires, data stored in a Unicode database should be accompanied by information identifying the language to which it belongs.
There are many ways for a database schema to relate data to a language. The following sections provide different approaches:
For data such as product descriptions or product names, you can add a language column (language_id) of CHAR or VARCHAR2 datatype to the product table to identify the language of the corresponding product information. This enables applications to retrieve the information in the desired language. The possible values for this language column are the 3-letter abbreviations of the valid NLS_LANGUAGE values of the database.
See Also: Appendix A, "Locale Data", for a list of valid NLS_LANGUAGE values
You can also create a view to select the data of the current language. For example:

ALTER TABLE scott.product_information ADD (language_id VARCHAR2(50));

CREATE OR REPLACE VIEW product AS
  SELECT product_id, product_name
    FROM product_information
   WHERE language_id = sys_context('USERENV','LANG');
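A hedged usage sketch: assuming the session language is set to German, SYS_CONTEXT('USERENV','LANG') returns the abbreviation 'D', so the view returns only rows whose language_id is 'D'.

ALTER SESSION SET NLS_LANGUAGE = GERMAN;
SELECT product_id, product_name FROM product;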
Fine-grained access control enables you to limit the degree to which a user can view information in a table or view. Typically, this is done by appending a WHERE clause. When you add a WHERE clause as a fine-grained access policy to a table or view, Oracle9i automatically appends the WHERE clause to any SQL statements on the table at run time so that only those rows satisfying the WHERE clause can be accessed.
You can use this feature to avoid specifying the desired language of a user in the WHERE clause in every SELECT statement in your applications. The following WHERE clause limits the view of a table to the rows corresponding to the desired language of a user:

WHERE language_id = sys_context('userenv', 'LANG')

Specify this WHERE clause as a fine-grained access policy for product_information as follows:
CREATE OR REPLACE FUNCTION func1 (sch VARCHAR2, obj VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
  RETURN 'language_id = sys_context(''userenv'', ''LANG'')';
END;
/

BEGIN
  DBMS_RLS.ADD_POLICY('scott', 'product_information', 'lang_policy',
                      'scott', 'func1', 'select');
END;
/
Then any SELECT statement on the product_information table automatically has the WHERE clause appended to it.
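With the policy in place, applications can issue plain queries. A minimal sketch of the effect:

-- The policy appends the language predicate automatically, so this returns
-- only rows whose language_id matches the session language.
SELECT product_id, product_name FROM scott.product_information;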
See Also: Oracle9i Application Developer's Guide - Fundamentals for more information about fine-grained access control
You can store documents in multiple languages in CLOB, NCLOB, or BLOB datatypes and set up Oracle Text to enable content search for the documents.
Data in CLOB columns is stored as UCS-2 internally when the database character set is multibyte, such as UTF8 or AL32UTF8. Document contents are converted to UTF-16 when they are inserted into a CLOB column. This means that the storage space required for an English document doubles when the data is converted. Storage for an Asian-language document in a CLOB column requires less space than the same document in a LONG column using UTF8, typically around 30% less, depending on the contents of the document.
Documents in NCLOB format are also stored as UTF-16, regardless of the database character set or the national character set. The storage space requirement is the same as for CLOBs. Document contents are converted to UTF-16 when they are inserted into an NCLOB column. If you want to store multilingual documents in a non-Unicode database, then choose NCLOB. However, content search on NCLOB is not yet supported.
Documents in BLOB format are stored as they are. No data conversion occurs during insertion and retrieval. However, SQL string manipulation functions (such as LENGTH or SUBSTR) and collation functions (such as NLS_SORT and ORDER BY) cannot be applied to the BLOB datatype.
Table 5-7 lists the advantages and disadvantages of the CLOB, NCLOB, and BLOB datatypes when storing documents.
Oracle Text enables you to build indexes for content search on multilingual documents stored as CLOBs and BLOBs. It uses a language-specific lexer to parse the CLOB or BLOB data and produce a list of searchable keywords.
Create a multilexer to search multilingual documents. The multilexer chooses a language-specific lexer for each row, based on a language column. This section describes the high-level steps to create indexes for documents in multiple languages.
The first step in creating the multilexer is the creation of language-specific lexer preferences for each language supported. The following example creates English, German, and Japanese lexers with PL/SQL procedures:
ctx_ddl.create_preference('english_lexer', 'basic_lexer');
ctx_ddl.set_attribute('english_lexer', 'index_themes', 'yes');
ctx_ddl.create_preference('german_lexer', 'basic_lexer');
ctx_ddl.set_attribute('german_lexer', 'composite', 'german');
ctx_ddl.set_attribute('german_lexer', 'alternate_spelling', 'german');
ctx_ddl.set_attribute('german_lexer', 'mixed_case', 'yes');
ctx_ddl.create_preference('japanese_lexer', 'JAPANESE_VGRAM_LEXER');
After the language-specific lexer preferences are created, they need to be gathered together under a single multilexer preference. First, create the multilexer preference, using the MULTI_LEXER object:
ctx_ddl.create_preference('global_lexer','multi_lexer');
Now add the language-specific lexers to the multilexer preference using the add_sub_lexer call:
ctx_ddl.add_sub_lexer('global_lexer', 'german', 'german_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'japanese', 'japanese_lexer');
ctx_ddl.add_sub_lexer('global_lexer', 'default', 'english_lexer');
This nominates the german_lexer preference to handle German documents, the japanese_lexer preference to handle Japanese documents, and the english_lexer preference to handle everything else, using DEFAULT as the language.
The multilexer decides which lexer to use for each row based on a language column in the table. This is a character column that stores the language of the document in that row. Use the Oracle language name to identify the language of a document in this column. For example, if you use CLOBs to store your documents, then add the language column to the table where the documents are stored:

CREATE TABLE globaldoc
  (doc_id   NUMBER PRIMARY KEY,
   language VARCHAR2(30),
   text     CLOB);
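A minimal sketch of populating the table; the language value uses the Oracle language name, and the document text is illustrative.

INSERT INTO globaldoc (doc_id, language, text)
  VALUES (1, 'GERMAN', TO_CLOB('Dies ist ein Beispieldokument.'));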
To create an index for this table, use the multilexer preference and specify the name of the language column:

CREATE INDEX globalx ON globaldoc(text)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('lexer global_lexer language column language');
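Once the index is built, Oracle Text queries use the CONTAINS operator; the search term here is illustrative.

SELECT doc_id FROM globaldoc WHERE CONTAINS(text, 'Beispiel') > 0;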
In addition to the language column, the character set and format columns must be added to the table where the documents are stored. The character set column stores the character set of the documents using Oracle character set names. The format column specifies whether a document is a text or binary document. For example, the CREATE TABLE statement can specify columns called characterset and format:

CREATE TABLE globaldoc
  (doc_id       NUMBER PRIMARY KEY,
   language     VARCHAR2(30),
   characterset VARCHAR2(30),
   format       VARCHAR2(10),
   text         BLOB);
You can put word-processing or spreadsheet documents into the table and specify binary in the format column. For documents in HTML, XML, or text format, put them into the table and specify text in the format column.

Because there is a column in which to specify the character set, you can store text documents in different character sets.
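A minimal sketch of adding a row for a binary document; the actual document content would typically be loaded afterwards with DBMS_LOB or a loading utility, and the column values are illustrative.

INSERT INTO globaldoc (doc_id, language, characterset, format, text)
  VALUES (2, 'JAPANESE', 'JA16EUC', 'binary', EMPTY_BLOB());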
When you create the index, specify the names of the format and character set columns:

CREATE INDEX globalx ON globaldoc(text)
  INDEXTYPE IS ctxsys.context
  PARAMETERS ('filter inso_filter lexer global_lexer language column language format column format charset column characterset');
You can use the charset_filter if all documents are in text format. The charset_filter converts data from the character set specified in the charset column to the database character set.