*High Speed Server [#z8902a9c]
-The High Speed Server consists of 8 SGI UV2000 nodes.
-Each UV2000 node has 16 sockets (128 cores) and 1 TB of shared memory, and is intended for OpenMP and other shared-memory programs.
-The following shows how to submit jobs to job classes B through F.
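-The ''#$'' directives in the scripts below suggest a Grid Engine scheduler. As a minimal sketch (the exact submit command may differ on this site; the filename run.csh is hypothetical), a saved script would be submitted like this:
 # run.csh is a hypothetical filename; qsub is assumed to be the Grid Engine submit command
 qsub run.csh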

**OpenMP [#bbad1605]
-Sample script (OpenMP program "md" with 32 threads on job class C)
 #!/bin/csh
 #$ -ac P=32
 #$ -ac n=32
 #$ -jc C
 #$ -cwd
 setenv OMP_NUM_THREADS 32
 dplace -x2 ./md
Set 32 for -ac P, -ac n, and OMP_NUM_THREADS.~
In the same way, set 8 on class B and 64 on classes E and F.~
Prepend ''dplace -x2'' to the program name.~
~
|~Class B|~Classes C, D|~Classes E, F|
|#$ -ac P=8&br;#$ -ac n=8&br;#$ -jc B&br;#$ -cwd|#$ -ac P=32&br;#$ -ac n=32&br;#$ -jc C&br;#$ -cwd|#$ -ac P=64&br;#$ -ac n=64&br;#$ -jc E&br;#$ -cwd|
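For reference, a minimal sketch of a complete class-B script, assembled from the table above and the rules listed earlier (8 threads; "md" is the sample program name from above):
 #!/bin/csh
 #$ -ac P=8
 #$ -ac n=8
 #$ -jc B
 #$ -cwd
 setenv OMP_NUM_THREADS 8
 dplace -x2 ./md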

**MPI [#w8877f51]
-Sample script (MPI program "xhpl" with 32 MPI processes on job class C)
 #!/bin/csh
 #$ -ac P=32
 #$ -ac n=32
 #$ -jc C
 #$ -cwd
 mpiexec_mpt -np 32 dplace -s1 ./xhpl
Use ''mpiexec_mpt'' to launch MPI programs on the eic system.~
Set 32 for -ac P, -ac n, and the -np option of mpiexec_mpt.~
In the same way, set 8 on class B and 64 on classes E and F.~
Prepend ''dplace -s1'' to the program name.~
~
|~Class B|~Classes C, D|~Classes E, F|
|#$ -ac P=8&br;#$ -ac n=8&br;#$ -jc B&br;#$ -cwd|#$ -ac P=32&br;#$ -ac n=32&br;#$ -jc C&br;#$ -cwd|#$ -ac P=64&br;#$ -ac n=64&br;#$ -jc E&br;#$ -cwd|
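For reference, a minimal sketch of a complete class-E script, assembled from the table above and the rules listed earlier (64 processes; "xhpl" is the sample program name from above):
 #!/bin/csh
 #$ -ac P=64
 #$ -ac n=64
 #$ -jc E
 #$ -cwd
 mpiexec_mpt -np 64 dplace -s1 ./xhpl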

**MPI/OpenMP Hybrid [#caac5752]
-Sample script (hybrid program "a.out", 8 MPI processes x 8 OpenMP threads each, on job class E)
 #!/bin/csh
 #$ -ac P=64
 #$ -ac n=64
 #$ -jc E
 #$ -cwd
 setenv OMP_NUM_THREADS 8
 mpiexec_mpt -np 8 omplace ./a.out
Set 64 (the total number of cores) for -ac P and -ac n; set 8 for the -np option of mpiexec_mpt and for OMP_NUM_THREADS.~

-Sample script (hybrid program "a.out", 8 MPI processes x 4 OpenMP threads each, on job class C)
 #!/bin/csh
 #$ -ac P=32
 #$ -ac n=32
 #$ -jc C
 #$ -cwd
 setenv OMP_NUM_THREADS 4
 mpiexec_mpt -np 8 omplace ./a.out
Set 32 (the total number of cores) for -ac P and -ac n; set 8 for the -np option of mpiexec_mpt and 4 for OMP_NUM_THREADS.~
&color(#FF0000){Prepend ''omplace'' to the program name.};~
~
|~Class B|~Classes C, D|~Classes E, F|
|#$ -ac P=8&br;#$ -ac n=8&br;#$ -jc B&br;#$ -cwd&br;setenv OMP_NUM_THREADS 2&br;mpiexec_mpt -np 4 &color(#FF0000){omplace}; ./a.out&br;or&br;setenv OMP_NUM_THREADS 4&br;mpiexec_mpt -np 2 &color(#FF0000){omplace}; ./a.out|#$ -ac P=32&br;#$ -ac n=32&br;#$ -jc C&br;#$ -cwd&br;setenv OMP_NUM_THREADS 8&br;mpiexec_mpt -np 4 &color(#FF0000){omplace}; ./a.out&br;or&br;setenv OMP_NUM_THREADS 4&br;mpiexec_mpt -np 8 &color(#FF0000){omplace}; ./a.out|#$ -ac P=64&br;#$ -ac n=64&br;#$ -jc E&br;#$ -cwd&br;setenv OMP_NUM_THREADS 8&br;mpiexec_mpt -np 8 &color(#FF0000){omplace}; ./a.out&br;or&br;setenv OMP_NUM_THREADS 4&br;mpiexec_mpt -np 16 &color(#FF0000){omplace}; ./a.out&br;|
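For reference, a minimal sketch of a complete class-B hybrid script, taken from the first variant in the table above (4 MPI x 2 OpenMP):
 #!/bin/csh
 #$ -ac P=8
 #$ -ac n=8
 #$ -jc B
 #$ -cwd
 setenv OMP_NUM_THREADS 2
 mpiexec_mpt -np 4 omplace ./a.out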

----
**OpenMP core hopping [#j30569ef]
-This example uses only 32 of the 64 allocated cores.
-Set -ac P to the number of cores actually used and -ac n to the number of cores occupied.
-Set -nt to the same value as -ac P.
-"-c 1-:st=2" places the job's threads on every other core (1, 3, 5, 7, ...).~
-You do not need to set OMP_NUM_THREADS, because omplace sets the thread count via -nt.~
 #!/bin/csh
 #$ -ac P=32
 #$ -ac n=64
 #$ -jc E
 #$ -cwd
 omplace -nt 32 -c 1-:st=2 ./md
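The same rules apply to the other classes. As a sketch (extrapolated from the rules above, not an official sample), using 16 of the 32 cores of class C:
 #!/bin/csh
 # extrapolated example: P = cores actually used, n = cores occupied, -nt = P
 #$ -ac P=16
 #$ -ac n=32
 #$ -jc C
 #$ -cwd
 omplace -nt 16 -c 1-:st=2 ./md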

**MPI core hopping [#h719c45a]
-This example uses only 32 of the 64 allocated cores.
-Set -ac P to the number of cores actually used and -ac n to the number of cores occupied.
-Set -np to the same value as -ac P.
-"setenv MPI_DSM_CPULIST 0-63/2" places the processes on every other core (0, 2, 4, 6, ...).~
-''Do not prepend "dplace" when hopping cores.''~
 #!/bin/csh
 #$ -ac P=32
 #$ -ac n=64
 #$ -jc F
 #$ -cwd
 setenv MPI_DSM_CPULIST 0-63/2
 mpiexec_mpt -np 32 ./xhpl
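Other strides follow the same pattern. As a sketch (assuming the /stride notation generalizes the "0-63/2" form above), using 16 of the 64 cores, i.e. every fourth core:
 #!/bin/csh
 # extrapolated example: the /4 stride is an assumption based on the /2 form above
 #$ -ac P=16
 #$ -ac n=64
 #$ -jc F
 #$ -cwd
 setenv MPI_DSM_CPULIST 0-63/4
 mpiexec_mpt -np 16 ./xhpl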

**Hybrid core hopping [#e6ea6c67]
-This sample runs 8 MPI x 4 OpenMP on 64 cores; each MPI process runs only 4 threads on one socket (8 cores).
-Set -ac P to the number of cores actually used and -ac n to the number of cores occupied.
-Set -np to the number of MPI processes.
-"omplace -nt 4 -c 1-:st=2" gives every MPI process 4 OpenMP threads, placed on every other core (1, 3, 5, ...).~
 #!/bin/csh
 #$ -ac P=32
 #$ -ac n=64
 #$ -jc E
 #$ -cwd
 mpiexec_mpt -np 8 omplace -nt 4 -c 1-:st=2 ./a.out
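Other decompositions follow the same rules. As a sketch (extrapolated from the rules above), 4 MPI x 8 OpenMP on the same 32 of 64 cores:
 #!/bin/csh
 # extrapolated example: -np x -nt still equals -ac P (4 x 8 = 32)
 #$ -ac P=32
 #$ -ac n=64
 #$ -jc E
 #$ -cwd
 mpiexec_mpt -np 4 omplace -nt 8 -c 1-:st=2 ./a.out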

