*Parallel Server [#f9d242ea]
-The Parallel Server is an SGI ICE-X system with 144 nodes.
-Each SGI ICE-X node has 2 sockets and 128GB of memory, and every group of 18 nodes is connected to one InfiniBand FDR switch. These 18-node groups are called eich1~eich8.
-You can use the Parallel Server for large MPI programs and hybrid (MPI/OpenMP) programs.
-Basically, a job occupies 18 nodes in one of the following configurations:
--18 nodes x 24(MPI processes)
--18 nodes x 2(MPI processes)x12(OpenMP threads)
--18 nodes x 12(MPI processes)
--18 nodes x 2(MPI processes)x6(OpenMP threads)
-The following sections show how to submit jobs to the G and H classes.

**MPI [#q3f6a432]
-Sample script (432-process MPI program "xhpl" on G class)~
 #!/bin/csh
 #$ -ac P=24
 #$ -ac n=432
 #$ -ac T=1
 #$ -jc G
 #$ -cwd
 mpiexec_mpt -np 432 dplace -s1 ./xhpl

Please use ''mpiexec_mpt'' to run MPI programs on the eic system.~
Set ''-ac n='' to 432 and pass the same number to ''mpiexec_mpt -np''.~
Set ''-ac P='' to 24 (this value is fixed).~
Set ''-ac T='' to 1 (it is always 1 for a pure MPI program).~
Please put ''dplace -s1'' just before the program name.~
~
※Keep ''#$ -ac n=432'' even when you do not run 432 MPI processes; for example, you can actually run 256 MPI processes by specifying ''mpiexec_mpt -np 256''.~
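For example, a minimal sketch based on the note above: the job still requests ''#$ -ac n=432'' (all 18 nodes), but only 256 MPI processes are actually launched.~
 #!/bin/csh
 #$ -ac P=24
 #$ -ac n=432
 #$ -ac T=1
 #$ -jc G
 #$ -cwd
 mpiexec_mpt -np 256 dplace -s1 ./xhpl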

**MPI/OpenMP Hybrid [#xa1656ce]
-Sample script (hybrid program "a.out" using 432 cores: 36 MPI processes x 12 OpenMP threads, on H class)~
 #!/bin/csh
 #$ -ac P=2
 #$ -ac n=432
 #$ -ac T=12
 #$ -jc H
 #$ -cwd
 source ${TMP}/hybrid-mpi-$JOB_ID/env.csh
 mpiexec_mpt -np $TOTAL_PROCS omplace -nt $OMP_NUM_THREADS ./a.out
Please use ''mpiexec_mpt'' to run MPI programs on the eic system.~
Set ''-ac n='' to 432.~
Set ''-ac P='' to 2 (the number of MPI processes per node).~
Set ''-ac T='' to 12 (the number of OpenMP threads per MPI process).~
Please put ''omplace -nt $OMP_NUM_THREADS'' just before the program name, as in the sample above.~

The following lines automatically define the numbers of MPI processes and OpenMP threads:~
 source ${TMP}/hybrid-mpi-$JOB_ID/env.csh
 mpiexec_mpt -np $TOTAL_PROCS omplace -nt $OMP_NUM_THREADS ./a.out
You can also give explicit numbers to ''mpiexec_mpt'' and ''omplace''; in this sample they are set automatically to 36 and 12.~
''T must be 6 or 12, and PxT=24 is mandatory.''
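For example, instead of the variables set by env.csh, the last line could use explicit numbers (a minimal sketch; 36 and 12 are the values of this sample):~
 mpiexec_mpt -np 36 omplace -nt 12 ./a.out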
----
**MPI core hopping [#aa0f89a3]
-Sample script (216-process MPI program "xhpl": 12 MPI processes per node x 18 nodes, on G class)~
 #!/bin/csh
 #$ -ac P=12
 #$ -ac n=216
 #$ -ac T=1
 #$ -jc G
 #$ -cwd
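 # (note) "0-23/2:allhosts" should place one MPI rank on every other core (0,2,...,22) on every host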
 setenv MPI_DSM_CPULIST "0-23/2:allhosts"
 mpiexec_mpt -np 216 ./xhpl
**Hybrid core hopping [#a6fa00db]
-Sample script (hybrid program "a.out" using 216 cores: 36 MPI processes x 6 OpenMP threads, on H class)~
 #!/bin/csh
 #$ -ac P=2
 #$ -ac n=216
 #$ -ac T=6
 #$ -jc H
 #$ -cwd
 source ${TMP}/hybrid-mpi-$JOB_ID/env.csh
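 # (note) the two mpiexec_mpt lines below appear to be alternatives; using both would run the program twice
 # "-c 1-:st=2" should bind threads to odd-numbered cores and "-c 0-:st=2" to even-numbered cores (stride 2)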
 mpiexec_mpt -np $TOTAL_PROCS omplace -nt $OMP_NUM_THREADS -c 1-:st=2 ./a.out
 mpiexec_mpt -np $TOTAL_PROCS omplace -nt $OMP_NUM_THREADS -c 0-:st=2 ./a.out
