The following sections contain information that applies to both the OpenMP Fortran API and the DIGITAL Fortran parallel compiler directives. The code examples use the OpenMP API directive format.
The term loop decomposition refers to the process of dividing the iterations of an iterated DO loop and running them on two or more threads of a shared-memory multi-processor computer system.
To run in parallel, the source code in iterated DO loops must be decomposed by the user, and adequate system resources must be made available. Decomposition is the process of analyzing code for data dependences, dividing up the workload, and ensuring correct results when iterations run concurrently. The only type of decomposition available with DIGITAL Fortran is directed decomposition using a set of parallel compiler directives.
The following sections describe how to decompose loops and how to use
the OpenMP Fortran API and the DIGITAL Fortran parallel compiler
directives to achieve parallel processing.
6.3.1 Directed Decomposition
When a program is compiled using the -omp or the -mp option, the compiler parses the parallel compiler directives. However, you must transform the source code to resolve any loop-carried dependences and improve run-time performance. 1
To use directed decomposition effectively, analyze each loop for data dependences, resolve those dependences, and then insert the appropriate directives. You must resolve loop-carried dependences and dependences involving temporary variables to ensure safe parallel execution; only cycles of dependences (data recurrences) are nearly impossible to resolve. There are several methods for resolving dependences manually, described in the sections that follow.
Resolving Dependences Involving Temporary Variables
Declare temporary variables PRIVATE to resolve dependences involving them. Temporary variables are used in intermediate calculations. If they are used in more than one iteration of a parallel loop, the program can produce incorrect results.
One thread might define a value that another thread then uses in place of the value intended for its own iteration. Loop control variables are prime examples of temporary variables; they are declared PRIVATE by default within a parallel region. For example:
      DO I = 1,100
        TVAR = A(I) + 2
        D(I) = TVAR + Y(I-1)
      END DO
As long as certain criteria are met, you can resolve this kind of dependence by declaring the temporary variable (TVAR, in the example) PRIVATE. That way, each thread keeps its own copy of the variable.
For the criteria to be met, each value of the temporary variable must be defined within an iteration before it is used in that iteration, and the value must not be used after the loop terminates.
The default for variables in a parallel loop is SHARED, so you must explicitly declare these variables PRIVATE to resolve this kind of dependence.
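For instance, the loop shown above can be decomposed safely once TVAR is declared PRIVATE. The following sketch uses the OpenMP directive format; the directive placement is illustrative rather than taken from the original example:

!$OMP PARALLEL PRIVATE (I,TVAR) SHARED (A,D,Y)
!$OMP DO
      DO I = 1,100
        TVAR = A(I) + 2          ! each thread keeps its own copy of TVAR
        D(I) = TVAR + Y(I-1)
      END DO
!$OMP END DO
!$OMP END PARALLEL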
Resolving Loop-Carried Dependences
You can often resolve loop-carried dependences using one or more of the following loop transformations: loop alignment, code replication, loop distribution, or restructuring a loop into an inner and outer nest. Each is described in the sections that follow.
These techniques also resolve dependences that inhibit autodecomposition.
Loop Alignment
Loop alignment offsets memory references in the loop so that the dependence is no longer loop carried. The following example shows a loop that is aligned to resolve the dependence in array A.
Loop with Dependence:

      DO I = 2,N
        A(I) = B(I)
        C(I) = A(I+1)
      END DO

Aligned Statements:

      C(I-1) = A(I)
      A(I) = B(I)
To compensate for the alignment and achieve the same calculations as the original loop, you probably have to adjust the loop limits, execute some statements conditionally, or move boundary statements outside the loop.
Example 6-1 shows two possible forms of the final loop.
Example 6-1 Aligned Loop

! First possible form:
!$OMP PARALLEL PRIVATE (I)
!$OMP DO
      DO I = 2,N+1
        IF (I .GT. 2) C(I-1) = A(I)
        IF (I .LE. N) A(I) = B(I)
      END DO
!$OMP END DO
!$OMP END PARALLEL
!
! Second possible form; more efficient because the tests are
! performed outside the loop:
!
!$OMP PARALLEL
!$OMP DO
      DO I = 3,N
        C(I-1) = A(I)
        A(I) = B(I)
      END DO
!$OMP END DO
!$OMP END PARALLEL
      IF (N .GE. 2) THEN
        A(2) = B(2)
        C(N) = A(N+1)
      END IF
Code Replication
When a loop contains a loop-independent dependence as well as a loop-carried dependence, loop alignment alone is usually not adequate: by resolving the loop-carried dependence, you often misalign another dependence. Code replication creates temporary variables that duplicate operations performed in other iterations, keeping the loop-independent dependences inside each iteration.
In S2 of the following loop, aligning the A(I-1) reference without code replication would misalign the A(I) reference:
Loop with Multiple Dependences:

      DO I = 2,100
S1      A(I) = B(I) + C(I)
S2      D(I) = A(I) + A(I-1)
      END DO

Misaligned Dependence:

      D(I-1) = A(I-1) + A(I)
      A(I) = B(I) + C(I)
Example 6-2 uses code replication to keep the loop-independent dependence inside each iteration. The temporary variable, TA, must be declared PRIVATE.
Example 6-2 Transformed Loop Using Code Replication

!$OMP PARALLEL PRIVATE (I,TA)
      A(2) = B(2) + C(2)
      D(2) = A(2) + A(1)
!$OMP DO
      DO I = 3,100
        A(I) = B(I) + C(I)
        TA = B(I-1) + C(I-1)
        D(I) = A(I) + TA
      END DO
!$OMP END DO
!$OMP END PARALLEL
Loop Distribution
Loop distribution allows more parallelism when neither loop alignment nor code replication can resolve the dependences. Loop distribution divides the contents of a loop into multiple loops so that a dependence crosses between two separate loops. The loops run serially in relation to each other, even if each of them runs in parallel.
The following loop contains multiple dependences that cannot be resolved by either loop alignment or code replication:
      DO I = 1,100
S1      A(I) = A(I-1) + B(I)
S2      C(I) = B(I) - A(I)
      END DO
Example 6-3 resolves the dependences by distributing the loop. S2 can run in parallel despite the data recurrence in S1.
Example 6-3 Distributed Loop

      DO I = 1,100
S1      A(I) = A(I-1) + B(I)
      END DO
      DO I = 1,100
S2      C(I) = B(I) - A(I)
      END DO
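Because the dependence now crosses between the two loops rather than between iterations of one loop, the second loop can be decomposed while the first runs serially. A sketch with directives added (the directive placement is illustrative, not part of Example 6-3):

      DO I = 1,100              ! the recurrence in S1 forces serial execution
        A(I) = A(I-1) + B(I)
      END DO
!$OMP PARALLEL PRIVATE (I) SHARED (A,B,C)
!$OMP DO
      DO I = 1,100              ! S2 carries no dependence and runs in parallel
        C(I) = B(I) - A(I)
      END DO
!$OMP END DO
!$OMP END PARALLEL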
Restructuring a Loop into an Inner and Outer Nest
Restructuring a loop into an inner and outer loop nest can resolve some recurrences that are used as rapid approximations of a function of the loop control variable. For example, the following loop uses sines and cosines:
      THETA = 2.*PI/N
      DO I=0,N-1
        S = SIN(I*THETA)
        C = COS(I*THETA)
        .
        .        ! use S and C
        .
      END DO
Using a recurrence to approximate the sines and cosines can make the serial loop run faster (with some loss of accuracy), but it prevents the loop from running in parallel:
      THETA = 2.*PI/N
      STH = SIN(THETA)
      CTH = COS(THETA)
      S = 0.0
      C = 1.0
      DO I=0,N-1
        .
        .        ! use S and C
        .
        TS = S                  ! save S so the update of C uses the old value
        S = TS*CTH + C*STH
        C = C*CTH - TS*STH
      END DO
To resolve the dependences, you can substitute the SIN and COS calls back into the loop (however, this loses the performance improvement gained from using the recurrence). Alternatively, you can restructure the loop into an outer parallel loop and an inner serial loop. Each iteration of the outer loop reinitializes the recurrence, and the inner loop uses the values:
!$OMP PARALLEL SHARED (THETA,STH,CTH,LCHUNK) PRIVATE (ISTART,I,S,C,TS)
      THETA = 2.*PI/N
      STH = SIN(THETA)
      CTH = COS(THETA)
      LCHUNK = (N + NWORKERS()-1) / NWORKERS()
!$OMP DO
      DO ISTART = 0,N-1,LCHUNK
        S = SIN(ISTART*THETA)
        C = COS(ISTART*THETA)
        DO I = ISTART, MIN(N-1,ISTART+LCHUNK-1)
          .
          .        ! use S and C
          .
          TS = S                ! save S so the update of C uses the old value
          S = TS*CTH + C*STH
          C = C*CTH - TS*STH
        END DO
      END DO
!$OMP END DO
!$OMP END PARALLEL
Dependences Requiring Locks
When no other method can resolve a dependence, you can put locks around the critical section that contains it. Locks force threads to execute the critical section serially, while allowing the rest of the loop to run in parallel.
However, locks degrade performance because they force the critical section to run serially and increase the overhead. They are best used only when no other technique resolves the dependence, and only in CPU-intensive loops.
To create locks in a loop, enclose the critical section between the CRITICAL and END CRITICAL directives. When a thread executes the CRITICAL directive and the latch variable is open, it takes possession of the latch variable, and other threads must wait to execute the section. The latch variable is closed while a thread has possession of it and open when it is free; it becomes open again when the thread executing the section executes the END CRITICAL directive.
In Example 6-4, the statement updating the sum is locked for safe parallel execution of the loop.
Example 6-4 Decomposed Loop Using Locks

      INTEGER(4) LCK
!$OMP PARALLEL PRIVATE (I,Y) SHARED (LCK,SUM)
      LCK = 0
        .
        .
        .
!$OMP DO
      DO I = 1,1000
        Y = some_calculation
!$OMP CRITICAL (LCK)
        SUM = SUM + Y
!$OMP END CRITICAL (LCK)
      END DO
!$OMP END DO
!$OMP END PARALLEL
This particular example is better solved using a reduction clause as shown in Example 6-5.
Example 6-5 Decomposed Loop Using a Reduction Clause

!$OMP PARALLEL PRIVATE (I,Y) SHARED (SUM)
        .
        .
        .
!$OMP DO REDUCTION (+:SUM)
      DO I = 1,1000
        Y = some_calculation
        SUM = SUM + Y
      END DO
!$OMP END DO
!$OMP END PARALLEL
Because iterations in a parallel DO loop execute in an indeterminate order and in different threads, certain constructs in these loops can cause unpredictable run-time behavior.
Some of these restricted constructs are flagged by the compiler; others are not flagged and must be avoided by the programmer.
You can manually optimize structures containing parallel loops by interchanging loops and by balancing the workload, as described in the following sections.
Interchanging Loops
The following example shows a case in which an inner loop can run in parallel and an outer loop cannot, because of a loop-carried dependence. The inner loop also has a more effective memory-referencing pattern for parallel processing than the outer loop. By interchanging the loops, more work executes in parallel and the cache can perform more efficiently.
Original Structure (only the inner loop can run in parallel):

!$OMP PARALLEL PRIVATE (J,I) SHARED (A)
      DO I = 1,100
!$OMP DO
        DO J = 1,300
          A(I,J) = A(I+1,J) + 1
        END DO
!$OMP END DO
      END DO
!$OMP END PARALLEL

Interchanged Structure:

!$OMP PARALLEL PRIVATE (J,I) SHARED (A)
!$OMP DO
      DO J = 1,300
        DO I = 1,100
          A(I,J) = A(I+1,J) + 1
        END DO
      END DO
!$OMP END DO
!$OMP END PARALLEL
Balancing the Workload
On the DO directive, you can specify the SCHEDULE(GUIDED) clause to use guided self-scheduling in manually decomposed loops, which is effective for most loops. However, when the iterations contain a predictably unbalanced workload, you can obtain better performance by manually balancing the workload. To do this, specify the chunk size in the SCHEDULE clause of the DO directive.
In the following loop, it might be very inefficient to divide the iterations into chunks of 50. A chunk size of 25 would probably be much more efficient on a system with two processors, depending on the amount of work being done by the routine SUB.
      DO I = 1,100
        .
        .
        .
        IF (I .LT. 50) THEN
          CALL SUB(I)
        END IF
        .
        .
        .
      END DO
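On a two-processor system, for example, SCHEDULE (STATIC,25) assigns iterations 1-25 and 51-75 to one thread and iterations 26-50 and 76-100 to the other, so the iterations that call SUB are split almost evenly. A sketch of the directives (the placement is illustrative):

!$OMP PARALLEL PRIVATE (I)
!$OMP DO SCHEDULE (STATIC,25)
      DO I = 1,100
        IF (I .LT. 50) THEN
          CALL SUB(I)            ! the unbalanced workload occurs only here
        END IF
      END DO
!$OMP END DO
!$OMP END PARALLEL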
The OpenMP Fortran API and the DIGITAL Fortran parallel compiler directive sets also provide environment variables that adjust the run-time environment in unusual situations.
Regardless of whether you compiled with the -omp or the -mp option, when the run-time system needs information supplied by an environment variable, it first looks for an OpenMP Fortran API environment variable and then for a DIGITAL Fortran parallel compiler environment variable. If it finds neither, it uses a default.
The run-time system looks for environment variable information when it needs to determine, for example, the schedule type and chunk size for RUNTIME-scheduled loops, the number of threads to use, and whether dynamic thread adjustment or nested parallelism is enabled.
The OpenMP Fortran API environment variables are listed in Table 6-4.
Table 6-4 OpenMP Fortran API Environment Variables

OMP_SCHEDULE
   This variable applies only to DO and PARALLEL DO directives that have the schedule type of RUNTIME. You can set the schedule type and an optional chunk size for these loops at run time. The schedule types are STATIC, DYNAMIC, GUIDED, and RUNTIME.
   For directives that have a schedule type other than RUNTIME, this variable is ignored. The compiler default schedule type is STATIC. If the optional chunk size is not set, a chunk size of one is assumed, except for the STATIC schedule type. For this schedule type, the default chunk size is set to the loop iteration space divided by the number of threads applied to the loop.

OMP_NUM_THREADS
   Use this environment variable to set the number of threads to use during execution. This number applies unless you explicitly change it by calling the OMP_SET_NUM_THREADS run-time library routine.
   When you have enabled dynamic thread adjustment, the value assigned to this environment variable represents the maximum number of threads that can be used. The default value is the number of processors in the current system. For more information about dynamic thread adjustment, see the online release notes.

OMP_DYNAMIC
   Use this environment variable to enable or disable dynamic thread adjustment for the execution of parallel regions. When set to TRUE, the number of threads used can be adjusted by the run-time environment to best utilize system resources. When set to FALSE, dynamic adjustment is disabled. The default is FALSE. For more information about dynamic thread adjustment, see the online release notes.

OMP_NESTED
   Use this environment variable to enable or disable nested parallelism. When set to TRUE, nested parallelism is enabled. When set to FALSE, it is disabled. The default is FALSE. For more information about nested parallelism, see the online release notes.
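A program can also adjust these settings through the OpenMP run-time library routines; for example, a call to OMP_SET_NUM_THREADS takes precedence over the OMP_NUM_THREADS environment variable. A minimal sketch (the routine names are those of the OpenMP Fortran API):

      PROGRAM THREADS
      INTEGER OMP_GET_MAX_THREADS
! Override any OMP_NUM_THREADS setting for subsequent parallel regions.
      CALL OMP_SET_NUM_THREADS(4)
      PRINT *, 'Maximum number of threads: ', OMP_GET_MAX_THREADS()
      END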
The DIGITAL Fortran parallel compiler environment variables are listed in Table 6-5.
Table 6-5 DIGITAL Fortran Parallel Compiler Environment Variables

MP_THREAD_COUNT
   Specifies the number of threads the run-time system is to create. The default is the number of processors available to your process.

MP_CHUNK_SIZE
   Specifies the chunk size the run-time system uses when dispatching loop iterations to threads if the program specified the RUNTIME schedule type or specified another schedule type requiring a chunk size, but omitted the chunk size. The default chunk size is 1.

MP_STACK_SIZE
   Specifies how many bytes of stack space the run-time system allocates for each thread when creating it. If you specify zero, the run-time system uses the default, which is very small. Therefore, if a program declares any large arrays to be PRIVATE, specify a value large enough to allocate them. If you do not use this environment variable at all, the run-time system allocates 5 MB.

MP_SPIN_COUNT
   Specifies how many times the run-time system spins while waiting for a condition to become true. The default is 16,000,000, which is approximately one second of CPU time.

MP_YIELD_COUNT
   Specifies how many times the run-time system alternates between calling sched_yield and testing the condition before going to sleep by waiting on a thread condition variable. The default is 10.
1 Another method of supporting parallel processing does not involve iterated DO loops. Instead, it allows large amounts of independent code to be run in parallel using the SECTIONS and SECTION directives.