You can use several options to control the data scope attributes of variables for the duration of the construct in which you specify them. If you do not specify a data scope attribute option on a directive, the default is SHARED for those variables affected by the directive.
Each of the data scope attribute options accepts a list, which is a comma-separated list of named variables or named common blocks that are accessible in the scoping unit. When you specify named common blocks, they must appear between slashes (/name/).
Not all of the options are allowed on all directives, but the directives to which each option applies are listed in the clause descriptions.
The data scope attribute options are:
COPYIN Option
Use the COPYIN option on the PARALLEL, PARALLEL DO, and PARALLEL SECTIONS directives to copy named common block values from the master thread's copy to the other threads at the beginning of a parallel region. The COPYIN option applies only to named common blocks that have been previously declared thread private using the TASKCOMMON or the INSTANCE PARALLEL directive (see Section 6.2.5.1).
Use a comma-separated list to name the common blocks and variables in common blocks you want to copy.
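As a minimal sketch of the idea (the c$par sentinel spelling, the /COEF/ common block, and the WORK subroutine are illustrative assumptions, not taken from this manual):

```fortran
      COMMON /COEF/ SCALE, OFFSET
c$par taskcommon /COEF/
c     The master thread initializes its private copy of /COEF/.
      SCALE = 2.0
      OFFSET = 1.0
c$par parallel copyin (/COEF/)
c     Each thread enters the region with the master's values of
c     SCALE and OFFSET copied into its own copy of /COEF/.
      CALL WORK()
c$par end parallel
```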
DEFAULT Option
The DEFAULT option is the same as the OpenMP Fortran API DEFAULT clause (see Section 6.1.5.2).
FIRSTPRIVATE Option
The FIRSTPRIVATE option is the same as the OpenMP Fortran API FIRSTPRIVATE clause (see Section 6.1.5.2).
LASTLOCAL or LAST LOCAL Option
Except for differences in directive name spelling, the LASTLOCAL or LAST LOCAL option is the same as the OpenMP Fortran API LASTPRIVATE clause (see Section 6.1.5.2).
PRIVATE or LOCAL Option
Except for the alternate directive spelling of LOCAL, the PRIVATE (or LOCAL) option is the same as the OpenMP Fortran API PRIVATE clause (see Section 6.1.5.2).
REDUCTION Option
Use the REDUCTION option on the PDO directive to declare variables that are to be the object of a reduction operation. Use a comma-separated list to name the variables you want to declare as objects of a reduction.
The REDUCTION option in the Compaq Fortran parallel compiler directive set is different from the REDUCTION clause in the OpenMP Fortran API directive set. In the OpenMP Fortran API directive set, both a variable and an operator type are given. In the Compaq Fortran parallel compiler directive set, the operator is not given in the directive; the compiler must be able to determine the reduction operation from the source code. The REDUCTION option can be applied to a variable in a DO loop only if the variable is used in a statement of one of the following forms:
x = x operator expr
x = expr operator x     (except for subtraction)
x = operator(x, expr)
x = operator(expr, x)
where operator is one of the following supported reduction operations: +, -, *, .AND., .OR., .EQV., .NEQV., MAX, MIN, IAND, or IOR.
The compiler rewrites the reduction operation by computing partial results into local variables and then combining the results into the reduction variable. The reduction variable must be SHARED in the enclosing context.
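For example, a summation whose operation the compiler can infer from the source code might look like this (a sketch; the c$par sentinel spelling is assumed, and TOTAL must be SHARED in the enclosing context):

```fortran
      TOTAL = 0.0
c$par pdo reduction (TOTAL)
      DO I = 1, N
c        Matches the form x = x operator expr, so the compiler
c        can infer a + reduction and combine partial sums.
         TOTAL = TOTAL + A(I)
      END DO
c$par end pdo
```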
SHARED or SHARE Option
Except for the alternate directive spelling of SHARE, the SHARED (or SHARE) option is the same as the OpenMP Fortran API SHARED clause (see Section 6.1.5.2).
6.2.6 Parallel Region Construct
The concepts of using a parallel region construct are the same as those
for OpenMP Fortran API (see Section 6.1.6). However, the environment
variable you use to set the default number of threads is
MP_THREAD_COUNT and the run-time library routine is OtsSetNumThreads.
6.2.7 Worksharing Constructs
At the heart of parallel processing is the concept of the worksharing construct. A worksharing construct divides the execution of the enclosed code region among the members of the team created upon entering the enclosing parallel region construct.
A worksharing construct must be enclosed lexically within a parallel region if the worksharing directive is to execute in parallel. No new threads are launched and there is no implied barrier upon entry to a worksharing construct.
The worksharing constructs are:
The PDO directive specifies that the iterations of the immediately following DO loop must be dispatched across the team of threads so that each iteration is executed in parallel by a single thread. The loop that follows a PDO directive cannot be a DO WHILE or a DO loop that does not have loop control. The iterations of the DO loop are divided among and dispatched to the existing threads in the team.
You cannot use a GOTO statement, or any other statement, to transfer control into or out of the PDO construct.
If you specify the optional END PDO directive, it must appear immediately after the end of the DO loop. If you do not specify the END PDO directive, an END PDO directive is assumed at the end of the DO loop.
If you do not specify the optional NOWAIT clause on the END PDO directive, threads synchronize at the END PDO directive. If you specify NOWAIT, threads do not synchronize at the END PDO directive. Threads that finish early proceed directly to the instructions following the END PDO directive.
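For example (a sketch assuming the c$par sentinel spelling; the arrays are illustrative):

```fortran
c$par parallel private (I)
c$par pdo
      DO I = 1, N
         B(I) = 2.0 * A(I)
      END DO
c$par end pdo nowait
c     Because of NOWAIT, threads that finish their iterations
c     proceed immediately to the code that follows.
c$par end parallel
```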
The PDO directive optionally lets you:
A chunk is a contiguous group of iterations dispatched to a thread. You can explicitly define a chunk size for the current PDO directive by using the CHUNK or BLOCKED option. Chunk size must be a scalar integer expression. The specified chunk size overrides any chunk size specified by an earlier CHUNK directive, and applies only to the current PDO directive.
Refer to Section 6.2.10 for information about how chunk size and schedule type interact.
You can determine the chunk size for the current PDO or PARALLEL DO directive by using the following prioritized list. The available chunk size closest to the top of the list is used:
The schedule type specifies a scheduling algorithm that determines how chunks of loop iterations are dispatched to the threads of a team. You can explicitly define a schedule type for the current PDO or PARALLEL DO directive by using the MP_SCHEDTYPE option. The specified schedule type overrides any default schedule type specified by an earlier MP_SCHEDTYPE directive, and applies to the current PDO or PARALLEL DO directive only.
You can determine the schedule type used for the current PDO or PARALLEL DO directive by using the following prioritized list. The available schedule type closest to the top of the list is used:
For information about schedule types, see Section 6.2.11.
Another option you can use to affect the way threads are dispatched is the ORDERED option. When you specify this option, iterations are dispatched to threads in the same order they would be for sequential execution.
Terminating Loop Execution Early
If you want to terminate loop execution early because a specified condition has been satisfied, use the PDONE directive. This is an executable directive and any undispatched iterations are not executed. However, all previously dispatched iterations are completed. When the schedule type is STATIC or INTERLEAVED, this directive has no effect because all iterations are dispatched prior to loop execution.
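A sketch of early termination under a DYNAMIC schedule (the directive spellings shown are assumptions; TARGET and POS are illustrative names):

```fortran
c$par pdo mp_schedtype (dynamic)
      DO I = 1, N
         IF (A(I) .EQ. TARGET) THEN
            POS = I
c           Stop dispatching further iterations; those already
c           dispatched still complete.
c$par pdone
         END IF
      END DO
c$par end pdo
```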
Overriding Implicit Synchronization
Whether or not you include the END PDO directive at the end of the DO loop, by default an implicit synchronization point exists immediately after the last statement in the loop. Threads reaching this point wait until all threads complete their work and reach this synchronization point.
If there are no data dependences between the variables inside the loop
and those outside the loop, there may be no reason to make threads
wait. In this case, use the NOWAIT clause on the END PDO directive
to override synchronization and allow threads to continue.
6.2.7.2 PSECTIONS, SECTION, and END PSECTIONS Directives
Except for the different PSECTIONS directive name, this directive is
the same as the OpenMP Fortran API SECTIONS directive (see
Section 6.1.7.2).
6.2.7.3 SINGLE PROCESS and END SINGLE PROCESS Directives
Except for the different SINGLE PROCESS directive name, this directive
is the same as the OpenMP Fortran API SINGLE directive (see
Section 6.1.7.3).
6.2.8 Combined Parallel/Worksharing Constructs
The combined parallel/worksharing constructs provide an abbreviated way to specify a parallel region that contains a single worksharing construct. The combined parallel/worksharing constructs are:
This directive is the same as the OpenMP Fortran API PARALLEL DO directive with the following exceptions:
For information about the OpenMP Fortran API PARALLEL DO directive, see
Section 6.1.8.1.
6.2.8.2 PARALLEL SECTIONS and END PARALLEL SECTIONS Directives
This directive is the same as the OpenMP Fortran API PARALLEL SECTIONS directive with the following exception:
For more information about the OpenMP Fortran API PARALLEL SECTIONS
directive, see Section 6.1.8.2.
6.2.9 Synchronization Constructs
Synchronization refers to the interthread communication that ensures the consistency of shared data and coordinates parallel execution among threads.
Shared data is consistent within a team of threads when all threads obtain the identical value when the data is accessed.
To achieve explicit thread synchronization, you can use:
The BARRIER directive is the same as the OpenMP Fortran API BARRIER
directive (see Section 6.1.9.2).
6.2.9.2 CRITICAL SECTION and END CRITICAL SECTION Directives
The CRITICAL SECTION and END CRITICAL SECTION directives are the same as the OpenMP Fortran API CRITICAL and END CRITICAL directives with the following exceptions:
For additional information about the OpenMP Fortran API CRITICAL
directive, see Section 6.1.9.3.
6.2.10 Specifying a Default Chunk Size
To specify a default chunk size, use the CHUNK directive. Chunk size must be a scalar integer expression. The interaction between chunk size and schedule type is as follows:
You can also specify a chunk size by using the CHUNK option of the PDO
or PARALLEL DO directive (see Specifying Chunk Size.)
6.2.11 Specifying a Default Schedule Type
To specify a default schedule type, use the MP_SCHEDTYPE directive. The following list describes the schedule types and how the chunk size affects scheduling:
The DYNAMIC and GUIDED schedule types introduce some amount of overhead required to manage the continuing dispatching of iterations to threads. However, this overhead is sometimes offset by better load balancing when the average execution time of iterations is not uniform throughout the loop.
The STATIC and INTERLEAVED schedule types dispatch all of the iterations to the threads in advance, with each thread receiving approximately equal numbers of iterations. One of these types is usually the most efficient schedule type when the average execution time of iterations is uniform throughout the loop.
You can also specify a schedule type using the MP_SCHEDTYPE option of the PDO or PARALLEL DO directive (see Specifying Schedule Type).
6.3 Decomposing Loops for Parallel Processing
The following sections contain information that applies to both the OpenMP Fortran API and the Compaq Fortran parallel compiler directives. The code examples use the OpenMP API directive format.
The term loop decomposition is used to specify the process of dividing the iterations of an iterated DO loop and running them on two or more threads of a shared-memory multi-processor computer system.
To run in parallel, the source code in iterated DO loops must be decomposed by the user, and adequate system resources must be made available. Decomposition is the process of analyzing code for data dependences, dividing up the workload, and ensuring correct results when iterations run concurrently. The only type of decomposition available with Compaq Fortran is directed decomposition using a set of parallel compiler directives.
The following sections describe how to decompose loops and how to use
the OpenMP Fortran API and the Compaq Fortran parallel compiler
directives to achieve parallel processing.
6.3.1 Directed Decomposition
When a program is compiled using the -omp or the -mp option, the compiler parses the parallel compiler directives. However, you must transform the source code to resolve any loop-carried dependences and improve run-time performance. 1
To use directed decomposition effectively, take the following steps:
In directed decomposition, you must resolve loop-carried dependences and dependences involving temporary variables to ensure safe parallel execution. Only cycles of dependences (recurrences) are nearly impossible to resolve.
Do one of the following:
There are several methods for resolving dependences manually:
Resolving Dependences Involving Temporary Variables
Declare temporary variables PRIVATE to resolve dependences involving them. Temporary variables are used in intermediate calculations. If they are used in more than one iteration of a parallel loop, the program can produce incorrect results.
One thread might define a value that another thread then uses in place of the value it computed for its own iteration. Loop control variables are prime examples of temporary variables; they are declared PRIVATE by default within a parallel region. For example:
      DO I = 1,100
        TVAR = A(I) + 2
        D(I) = TVAR + Y(I-1)
      END DO
As long as certain criteria are met, you can resolve this kind of dependence by declaring the temporary variable (TVAR, in the example) PRIVATE. That way, each thread keeps its own copy of the variable.
For the criteria to be met, the values of the temporary variable must be all of the following:
The default for variables in a parallel loop is SHARED, so you must explicitly declare these variables PRIVATE to resolve this kind of dependence.
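Applied to the loop above, the explicit PRIVATE declaration might look like this (a sketch in the OpenMP directive format used by the examples in this section):

```fortran
!$OMP PARALLEL PRIVATE (I, TVAR)
!$OMP DO
      DO I = 1,100
c        Each thread keeps its own copy of TVAR, so no thread
c        reads a value defined by another thread's iteration.
         TVAR = A(I) + 2
         D(I) = TVAR + Y(I-1)
      END DO
!$OMP END DO
!$OMP END PARALLEL
```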
Resolving Loop-Carried Dependences
You can often resolve loop-carried dependences using one or more of the following loop transformations:
These techniques also resolve dependences that inhibit autodecomposition.
Loop alignment offsets memory references in the loop so that the dependence is no longer loop carried. The following example shows a loop that is aligned to resolve the dependence in array A.
Loop with Dependence:

      DO I = 2,N
        A(I) = B(I)
        C(I) = A(I+1)
      END DO

Aligned Statements:

      C(I-1) = A(I)
      A(I) = B(I)
To compensate for the alignment and achieve the same calculations as the original loop, you probably have to perform one or more of the following:
Example 6-1 shows two possible forms of the final loop.
Example 6-1 Aligned Loop

! First possible form:
!$OMP PARALLEL PRIVATE (I)
!$OMP DO
      DO I = 2,N+1
        IF (I .GT. 2) C(I-1) = A(I)
        IF (I .LE. N) A(I) = B(I)
      END DO
!$OMP END DO
!$OMP END PARALLEL
!
! Second possible form; more efficient because the tests are
! performed outside the loop:
!
!$OMP PARALLEL
!$OMP DO
      DO I = 3,N
        C(I-1) = A(I)
        A(I) = B(I)
      END DO
!$OMP END DO
!$OMP END PARALLEL
      IF (N .GE. 2) THEN
        A(2) = B(2)
        C(N) = A(N+1)
      END IF
When a loop contains a loop-independent dependence as well as a loop-carried dependence, loop alignment alone is usually not adequate. By resolving the loop-carried dependence, you often misalign another dependence. Code replication creates temporary variables that duplicate operations and keep the loop-independent dependences inside each iteration.
In S2 of the following loop, aligning the A(I-1) reference without code replication would misalign the A(I) reference:
Loop with Multiple Dependences:

      DO I = 2,100
S1      A(I) = B(I) + C(I)
S2      D(I) = A(I) + A(I-1)
      END DO

Misaligned Dependence:

      D(I-1) = A(I-1) + A(I)
      A(I) = B(I) + C(I)
Example 6-2 uses code replication to keep the loop-independent dependence inside each iteration. The temporary variable, TA, must be declared PRIVATE.
Example 6-2 Transformed Loop Using Code Replication

!$OMP PARALLEL PRIVATE (I,TA)
      A(2) = B(2) + C(2)
      D(2) = A(2) + A(1)
!$OMP DO
      DO I = 3,100
        A(I) = B(I) + C(I)
        TA = B(I-1) + C(I-1)
        D(I) = A(I) + TA
      END DO
!$OMP END DO
!$OMP END PARALLEL
Loop distribution allows more parallelism when neither loop alignment nor code replication can resolve the dependences. Loop distribution divides the contents of loops into multiple loops so that dependences cross between two separate loops. The loops run serially in relation to each other, even if they both run in parallel.
The following loop contains multiple dependences that cannot be resolved by either loop alignment or code replication:
      DO I = 1,100
S1      A(I) = A(I-1) + B(I)
S2      C(I) = B(I) - A(I)
      END DO
Example 6-3 resolves the dependences by distributing the loop. S2 can run in parallel despite the data recurrence in S1.
Example 6-3 Distributed Loop

      DO I = 1,100
S1      A(I) = A(I-1) + B(I)
      END DO
      DO I = 1,100
S2      C(I) = B(I) - A(I)
      END DO
Restructuring a Loop into an Inner and Outer Nest
Restructuring a loop into an inner and outer loop nest can resolve some recurrences that are used as rapid approximations of a function of the loop control variable. For example, the following loop uses sines and cosines:
      THETA = 2.*PI/N
      DO I=0,N-1
        S = SIN(I*THETA)
        C = COS(I*THETA)
        .
        .        ! use S and C
        .
      END DO
Using a recurrence to approximate the sines and cosines can make the serial loop run faster (with some loss of accuracy), but it prevents the loop from running in parallel:
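A sketch of the inner/outer restructuring idea (not the manual's example; LCHUNK is an assumed chunk-length parameter, and the angle-addition identities are used for the recurrence): each outer iteration re-seeds S and C exactly with SIN and COS, so the chunks are independent and the outer loop can run in parallel, while the cheap recurrence runs serially inside each chunk.

```fortran
      THETA = 2.*PI/N
      SD = SIN(THETA)
      CD = COS(THETA)
!$OMP PARALLEL DO PRIVATE (J, K, S, C, TS)
      DO J = 0, N-1, LCHUNK
c        Seed this chunk exactly; no value carries across chunks.
         S = SIN(J*THETA)
         C = COS(J*THETA)
         DO K = J, MIN(J+LCHUNK-1, N-1)
c           ... use S and C for iteration K ...
c           Advance by the angle-addition identities:
c           sin(a+t) = sin(a)cos(t) + cos(a)sin(t)
c           cos(a+t) = cos(a)cos(t) - sin(a)sin(t)
            TS = S*CD + C*SD
            C  = C*CD - S*SD
            S  = TS
         END DO
      END DO
```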
1 Another method of supporting parallel processing does not involve iterated DO loops. Instead, it allows large amounts of independent code to be run in parallel using the SECTIONS and SECTION directives.