W8L2 OpenMP7 Tasks

OpenMP tasks allow specifying independent units of work that can be executed concurrently by threads. Tasks consist of a block of code and associated data. Tasks are generated by threads and executed by threads at some later point, with the runtime controlling the assignment. Tasks can be nested, and completion of predecessor tasks can be enforced through barriers, taskwait, or taskgroups.


Threaded Programming

Lecture 7: Tasks
What are OpenMP tasks?

• Tasks are independent units of work


• Tasks are composed of:
- a block of code to execute
- data to compute with (per-task private copies)

• Tasks are assigned to threads for execution
- the programmer has little control over how this is done

(Figure: serial vs. parallel execution of tasks by threads)
3
OpenMP tasks

• The task construct includes a structured block of code


• Inside a parallel region, a thread encountering a task
construct will package up the code block and its data for
execution
• Some thread in the parallel region will execute the task at
some point in the future
- note: it could be the encountering thread, executing it right away

• Tasks can be nested: i.e. a task may itself generate tasks.

4
task directive
Syntax:
Fortran:
!$OMP TASK [clauses]
structured block
!$OMP END TASK
C/C++:
#pragma omp task [clauses]
structured-block

5
Example
#pragma omp parallel       // create some threads
{
   #pragma omp master      // thread 0 packages tasks
   {
      #pragma omp task     // tasks executed by some thread,
      fred();              // in some order
      #pragma omp task
      daisy();
      #pragma omp task
      billy();
      #pragma omp task
      molly();
   }
}
6
When/where are tasks complete?
• At thread barriers (explicit or implicit)
- applies to all tasks generated in the current parallel region up to the
barrier

• At taskwait directive
- i.e. wait until all tasks defined in the current task have completed
- Fortran: !$OMP TASKWAIT
- C/C++: #pragma omp taskwait
- Note: applies only to tasks generated in the current task, not to their “descendants”
- The code executed by a thread in a parallel region is considered a task here

7
When/where are tasks complete?
• At the end of a taskgroup region
- Fortran:
!$OMP TASKGROUP
structured block
!$OMP END TASKGROUP
- C/C++:
#pragma omp taskgroup
structured-block
- wait until all tasks created within the taskgroup have
completed
- applies to all “descendants”

8
Example
#pragma omp parallel
{
   #pragma omp master
   {
      #pragma omp task
      fred();
      #pragma omp task
      daisy();
      #pragma omp taskwait   // fred() and daisy() must complete
      #pragma omp task       // before billy() starts
      billy();
   }
}

9
Linked list traversal
p = listhead;
while (p) {
   process(p);
   p = next(p);
}
• Classic linked list traversal
• Do some work on each item in the list
• Assume that items can be processed independently
• Cannot use an OpenMP loop directive

10
Parallel linked list traversal
#pragma omp parallel
{
   #pragma omp master        // only one thread packages tasks
   {
      p = listhead;
      while (p) {
         #pragma omp task firstprivate(p)   // makes a copy of p when
         {                                  // the task is packaged
            process(p);
         }
         p = next(p);
      }
   }
}
11
Parallel linked list traversal
Thread 0:
    p = listhead;
    while (p) {
        < package up task >
        p = next(p);
    }
    while (tasks_to_do) {
        < execute task >
    }
    < barrier >

Other threads:
    while (tasks_to_do) {
        < execute task >
    }
    < barrier >

12
Parallel pointer chasing on multiple lists
#pragma omp parallel       // all threads package tasks
{
   #pragma omp for private(p)
   for (int i = 0; i < numlists; i++) {
      p = listheads[i];
      while (p) {
         #pragma omp task firstprivate(p)
         {
            process(p);
         }
         p = next(p);
      }
   }
}
13
Data scoping with tasks
• Variables can be shared, private or firstprivate with respect
to a task
• These concepts are a little bit different compared with
threads:
- If a variable is shared on a task construct, the references to it inside
the construct are to the storage with that name at the point where the
task was encountered
- If a variable is private on a task construct, the references to it inside
the construct are to new uninitialized storage that is created when the
task is executed
- If a variable is firstprivate on a construct, the references to it inside the
construct are to new storage that is created and initialized with the
value of the existing storage of that name when the task is
encountered

14
Data scoping defaults
• The behavior you want for tasks is usually firstprivate, because the task
may not be executed until later (and variables may have gone out of
scope)
- Variables that are private when the task construct is encountered are firstprivate by
default
• Variables that are shared in all constructs starting from the innermost
enclosing parallel construct are shared by default

#pragma omp parallel shared(A) private(B)
{
   ...
   #pragma omp task
   {
      int C;
      compute(A, B, C);   // A is shared
   }                      // B is firstprivate
}                         // C is private

15
Example: Fibonacci numbers

• Fn = Fn-1 + Fn-2
• Inefficient O(2^n) recursive implementation!

int fib(int n)
{
   int x, y;
   if (n < 2) return n;
   x = fib(n-1);
   y = fib(n-2);
   return x + y;
}

int main()
{
   int NN = 5000;
   int res = fib(NN);
}

16
Parallel Fibonacci
int fib(int n)
{
   int x, y;
   if (n < 2) return n;
   #pragma omp task shared(x)
   x = fib(n-1);
   #pragma omp task shared(y)
   y = fib(n-2);
   #pragma omp taskwait
   return x + y;
}

int main()
{
   int NN = 5000;
   #pragma omp parallel
   {
      #pragma omp master
      { int res = fib(NN); }
   }
}

• Binary tree of tasks
• Traversed using a recursive function
• A task cannot complete until all tasks below it in the tree are complete (enforced with taskwait)
• x, y are local, and so private to the current task
  - must be shared on the child tasks so they don’t create their own firstprivate copies at this level!

17
Using tasks
• Getting the data attribute scoping right can be quite tricky
- default scoping rules different from other constructs
- as ever, using default(none) is a good idea

• Don’t use tasks for things already well supported by OpenMP
- e.g. standard do/for loops
- the overhead of using tasks is greater

• Don’t expect miracles from the runtime
- best results usually obtained where the user controls the number and granularity of tasks

18
Parallel pointer chasing again
#pragma omp parallel
{
   #pragma omp single private(p)   // note: master accepts no clauses,
   {                               // so use single here
      p = listhead;
      while (p) {
         #pragma omp task firstprivate(p)
         {
            process(p, nitems);    // process nitems at a time
         }
         for (i = 0; i < nitems && p; i++) {
            p = next(p);           // skip nitems ahead in the list
         }
      }
   }
}

19
Parallel Fibonacci again
• Stop creating tasks at some level in the tree.

int fib(int n)
{
   int x, y;
   if (n < 2) return n;
   #pragma omp task shared(x) if(n > LEVEL)
   x = fib(n-1);
   #pragma omp task shared(y) if(n > LEVEL)
   y = fib(n-2);
   #pragma omp taskwait
   return x + y;
}

int main()
{
   int NN = 5000;
   #pragma omp parallel
   {
      #pragma omp master
      { int res = fib(NN); }
   }
}
20
Exercise

• Mandelbrot example using tasks.

21
Reusing this material

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
https://creativecommons.org/licenses/by-nc-sa/4.0/

This means you are free to copy and redistribute the material and adapt and build on the
material under the following terms: You must give appropriate credit, provide a link to the
license and indicate if changes were made. If you adapt or build on the material you must
distribute your work under the same license as the original.

Acknowledge EPCC as follows: “© EPCC, The University of Edinburgh, www.epcc.ed.ac.uk”

Note that this presentation contains images owned by others. Please seek their permission
before reusing these images.

22
