dkrz-sw / yaxt · Commit 67740f78
Authored 12 years ago by Moritz Hanke
some changes to rrobin documentation
Parent: c9ce06f6
Changes: 1 changed file, doc/src/rrobin.dox (+86, -41)
/** \example rrobin.c
*/
/**
\page rrobin Round Robin example on how to use yaxt
This example (\ref rrobin.c) is the first step to understanding how to work with yaxt.
First of all, include the basic header files:
\code
#include "mpi.h"
...
#include "xt_mpi.h"
\endcode
Depending on what we need next, we will include some more yaxt headers:
\code
#include "xt_xmap_all2all.h"
#include "xt_idxlist.h"
#include "xt_redist_p2p.h"
#include "xt_idxstripes.h"
\endcode
We need to initialize MPI and yaxt as follows:
\code
mpi_err_handler(MPI_Init(NULL, NULL), MPI_COMM_WORLD);
Xt_initialize (MPI_COMM_WORLD);
\endcode
Find out the number of processes and the local rank as in common MPI programs.
\code
int rank, size;
MPI_Comm_rank (MPI_COMM_WORLD, &rank);
MPI_Comm_size (MPI_COMM_WORLD, &size);
\endcode
In this example we are going to create a source array of length 5 on each process. \n
Each process fills its array with unique values. It is recommended to use an array
length < 10 and a number of processes < 10 to keep the output readable. \n
In this case each process gets values of the form "abc", where \n
- a = 1
- b = rank of the local process
- c = index of the element within the local array

For example: 132 is the source value on the process with rank 3 and is at
position 2 in the local array. \n
The goal of this little program is to rotate the source arrays in a round robin fashion.
The first process owns the first five elements, indexed by 0, 1, 2, 3, 4, and is going to
get the next five elements, indexed by 5, 6, 7, 8, 9, from the second process, without
knowing who owns those. In fact the first rank gets its value array from the second rank,
the second one from the third and so on. The last rank gets its array from the first.
We fill the source arrays with the previously defined values and fill the destination
arrays with -1 to make the change visible.
\code
int leng = 5;
int src_array[leng];
int dst_array[leng];

for (int i = 0; i < leng; i++) {
  src_array[i] = 100 + rank*10 + i;
  dst_array[i] = -1;
  // print the source values
  printf("SOURCEvalue: %d, element_index: %d, rank: %d \n", src_array[i], i, rank);
}
\endcode
There are many ways to define which elements are locally available (source) and which
are required (destination). We could list them all explicitly using an index vector
(\ref xt_idxvec.h), or we could describe a block of elements using index stripes
(\ref xt_idxstripes.h). Using stripes we have to give the local start index, how many
elements we want to have, and the stride between the elements. Here we need for the
source an index stripe containing 5 elements with a stride of 1, beginning at 0 for
rank 0, at 1*leng for rank 1 etc.
\code
// source index list by stripes
xt_idxlist src_idxlist;
struct Xt_stripe src_stripes = {rank*leng, leng, 1};
src_idxlist = xt_idxstripes_new(&src_stripes, 1);

// destination index list by stripes
xt_idxlist dst_idxlist;
struct Xt_stripe dst_stripes = {((rank+1)*leng)%(size*leng), leng, 1};
dst_idxlist = xt_idxstripes_new(&dst_stripes, 1);
\endcode
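As an illustration of the destination start index (illustrative numbers, not taken from
the example itself): with size = 4 and leng = 5, rank 3 requests the stripe starting at
((3+1)*5)%(4*5) = 0, i.e. its destination indices wrap around to the ones owned by
rank 0, which gives exactly the round robin rotation described above.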
Now we need the mapping of source and destination data between the processes and a
redistribution object for the actual data exchange. There are multiple strategies for
doing the mapping; in this example all2all is used. An alternative would be %dist_dir
(\ref xt_xmap_dist_dir.h).
\code
// xmap
Xt_xmap xmap;
...
redist = xt_redist_p2p_new(xmap, MPI_INTEGER);
\endcode
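The lines elided above construct the exchange map itself. A minimal sketch of how that
step typically looks, assuming the xt_xmap_all2all_new(src_idxlist, dst_idxlist, comm)
constructor from xt_xmap_all2all.h (see \ref rrobin.c for the exact code used):
\code
// sketch (assumed constructor): match the locally owned source indices
// against the locally requested destination indices across all processes
xmap = xt_xmap_all2all_new(src_idxlist, dst_idxlist, MPI_COMM_WORLD);
\endcode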
To do the main step we need pointers to the source and destination arrays. Here this is
"overdressed", but it shows the general pattern for a higher number of data arrays.
\code
// array pointer, especially necessary for more than one data array
int* src_array_p = &src_array[0];
...
\endcode
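The elided lines carry out the redistribution itself. A minimal sketch, assuming the
single-array exchange routine xt_redist_s_exchange1(redist, src, dst) and an analogous
destination pointer dst_array_p (both assumptions; see \ref rrobin.c for the exact calls):
\code
// sketch (assumed routine): synchronously send the local source values and
// receive the requested destination values according to the redist object
xt_redist_s_exchange1(redist, src_array_p, dst_array_p);
\endcode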
To see the result:
\code
for (int p = 0; p < leng; p++)
printf("DESTvalue: %d, element_index: %d, rank: %d \n", dst_array[p], p, rank);
}
\endcode
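If the exchange worked as intended, each rank should now print its right neighbour's
values; for example (assuming 4 processes), rank 0 should print DESTvalue 110, 111, 112,
113, 114, i.e. the source values originally filled in by rank 1.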
Once the created yaxt objects are no longer needed, they have to be deleted.
\code
xt_redist_delete(redist);
xt_xmap_delete(xmap);
xt_idxlist_delete(dst_idxlist);
xt_idxlist_delete(src_idxlist);
\endcode
Common MPI finalisation
\code
MPI_Finalize();
\endcode
...