NERSC Berkeley 2016 HOWTO
* '''The examples''':
:* can be copied from:
  /project/projectdirs/training/www/VASPWorkshop2016/exercises
:* or may be downloaded from the [[NERSC_Berkeley_2016|workshop wiki page]].
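:For instance, to copy the complete set of exercises into a directory of your own (the target location below is just an example), you might do:

  cp -r /project/projectdirs/training/www/VASPWorkshop2016/exercises $HOME/VASPWorkshop2016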
* '''Running the examples''':
:Almost all examples come with a bash-script that runs them. These scripts are typically called ''job.sh'', ''doall.sh'', ''loop.sh'', ''run.sh'', or something similarly named.
:To submit these jobscripts to the Haswell nodes on Cori you may do the following:
:* Copy the files <tt>hsw.sl</tt> and <tt>sub.sh</tt> to the directory where you want to run the example:


  cd path-to-your-directory
  cp /project/projectdirs/training/www/VASPWorkshop2016/exercises/hsw.sl .
  cp /project/projectdirs/training/www/VASPWorkshop2016/exercises/sub.sh .


:* To submit, for instance, the jobscript ''run.sh'', you specify:


  ./sub.sh run.sh


* '''<tt>hsw.sl</tt>'''
:The file <tt>hsw.sl</tt> contains the slurm-preamble, loads the relevant environment modules, and sets the command with which VASP will be called by the jobscript:


  #!/bin/bash -l
  #SBATCH -N 1
  #SBATCH -p regular
  #SBATCH -t 30:00
  #SBATCH -C haswell
 
  export OMP_NUM_THREADS=4
  export OMP_PROC_BIND=spread
  export OMP_PLACES=threads
 
  module use ~usgsw/modulefiles
  module load vasp/20161102-hsw
 
  export vasp_std="srun -n 4 -c 16 --cpu_bind=cores /global/homes/u/usgsw/vasp/20161102/hsw/intel/bin/vasp_std"
 
:In this case <tt>hsw.sl</tt> requests 1 Haswell node (32 cores), and the <tt>vasp_std</tt> command will run VASP using 4 MPI-ranks, each of which can start 4 OpenMP threads.
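:A jobscript like ''run.sh'' then invokes VASP through the <tt>vasp_std</tt> variable set above. As a rough sketch (hypothetical; the actual scripts shipped with the exercises may do more than this), such a jobscript could look like:

  #!/bin/bash
  # hypothetical minimal jobscript: run VASP once in the current directory,
  # using the command exported by hsw.sl as vasp_std
  $vasp_std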
 
* '''<tt>sub.sh</tt>'''
:The bash-script <tt>sub.sh</tt> does little more than concatenate <tt>hsw.sl</tt> with a call to the jobscript specified as its argument, and submit the result (<tt>sub.sl</tt>):
 
  #!/bin/bash -l
 
  cat ./hsw.sl > sub.sl
  echo ./$1  >> sub.sl
 
  sbatch sub.sl
 
:To be specific, the command
  ./sub.sh run.sh
:will submit the following <tt>sub.sl</tt> file:
  #!/bin/bash -l
  #SBATCH -N 1
  #SBATCH -p regular
  #SBATCH -t 30:00
  #SBATCH -C haswell
 
  export OMP_NUM_THREADS=4
  export OMP_PROC_BIND=spread
  export OMP_PLACES=threads
 
  module use ~usgsw/modulefiles
  module load vasp/20161102-hsw
 
  export vasp_std="srun -n 4 -c 16 --cpu_bind=cores /global/homes/u/usgsw/vasp/20161102/hsw/intel/bin/vasp_std"
  ./run.sh
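:Once submitted, the job can be followed with the usual Slurm commands (generic Slurm usage, not part of the workshop scripts), e.g.:

  squeue -u $USER      # list your queued and running jobs
  scancel jobid        # cancel a job by its id, if needed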
