Toolchains

From VASP Wiki

Revision as of 08:45, 7 April 2022

Below we list the toolchains (compilers plus assorted libraries) that we have used to build and test VASP in our nightly tests. Starting from VASP.6.3.0, the toolchains are listed separately for each VASP version.

  • These lists of toolchains are not comprehensive. They just show what we have used on a regular basis. Other/newer versions of the compilers and libraries listed below will in all probability work just as well (or better). In fact, we encourage using up-to-date versions, since compilers and libraries are continuously improved and bugs are found and fixed. (One way of assembling such a toolchain is sketched after this list.)
  • We recommend using up-to-date compilers and libraries to build older versions of VASP as well. In most cases this will not be a problem. However, in some cases the code was changed, for instance, to accommodate changes in compiler behaviour. An example: compilation with GCC > 7.X.X is only possible as of VASP.6.2.0, because newer GCC compilers are stricter and do not accept certain code constructs used in older VASP versions.[1]
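
The component names in the tables below are written in Spack-style package@version form. As a minimal, hedged sketch only (VASP does not prescribe any particular installation method, and the Spack workflow shown here is an assumption; the versions are taken from the gcc-11.2.0 row of the VASP.6.3.0 table), such a toolchain could be assembled like this:

  # Sketch only: provide the gcc/OpenMPI/OpenBLAS toolchain via Spack.
  spack install gcc@11.2.0
  spack load gcc@11.2.0 && spack compiler find     # register gcc-11.2.0 with Spack
  spack install openmpi@4.1.2 %gcc@11.2.0
  spack install openblas@0.3.18 %gcc@11.2.0
  spack install netlib-scalapack@2.1.0 %gcc@11.2.0 ^openmpi@4.1.2 ^openblas@0.3.18
  spack install fftw@3.3.10 %gcc@11.2.0 ^openmpi@4.1.2
  spack install hdf5@1.13.0 +fortran %gcc@11.2.0 ^openmpi@4.1.2
  spack install wannier90@3.1.0 %gcc@11.2.0
  spack install libxc@5.2.2 %gcc@11.2.0
  # Put compilers and libraries on the relevant search paths before building VASP:
  spack load gcc@11.2.0 openmpi@4.1.2 openblas netlib-scalapack fftw hdf5 wannier90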


VASP.6.3.0

(One row per toolchain; where a single product such as MKL or Parallel Studio provides several components, it appears in each of the corresponding columns.)

Compilers | MPI | FFT / BLAS / LAPACK | ScaLAPACK | CUDA | HDF5 | Other | Remarks | Known issues
intel-oneapi-compilers-2022.0.1 | intel-oneapi-mpi-2021.5.0 | intel-oneapi-mkl-2022.0.1 | intel-oneapi-mkl-2022.0.1 | - | hdf5-1.13.0 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | -
intel-parallel-studio-xe-2021.4.0 | intel-parallel-studio-xe-2021.4.0 | intel-parallel-studio-xe-2021.4.0 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | -
gcc-11.2.0 | openmpi-4.1.2 | intel-oneapi-mkl-2022.0.1 | netlib-scalapack-2.1.0 | - | hdf5-1.13.0 | wannier90-3.1.0, libxc-5.2.2 | CentOS 8.3, Intel Broadwell | -
gcc-11.2.0 | openmpi-4.1.2 | fftw-3.3.10, openblas-0.3.18 | netlib-scalapack-2.1.0 | - | hdf5-1.13.0 | wannier90-3.1.0, libxc-5.2.2 | CentOS 8.3, Intel Broadwell | -
gcc-11.2.0 | openmpi-4.1.2 | amdfftw-3.1, amdblis-3.1, amdlibflame-3.1 | amdscalapack-3.1 | - | hdf5-1.13.0 | wannier90-3.1.0, libxc-5.2.2 | CentOS 8.3, AMD Zen3 | -
gcc-9.3.0 | openmpi-4.0.5 | fftw-3.3.8, openblas-0.3.10 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
gcc-7.5.0 | openmpi-4.0.5 | intel-mkl-2020.2.254 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
nvhpc-22.2 (OpenACC) | openmpi-4.1.2 | intel-oneapi-mkl-2022.0.1 | netlib-scalapack-2.1.0 | nvhpc-22.2 (cuda-11.0) | hdf5-1.13.0 | wannier90-3.1.0 | CentOS 8.3, NVIDIA GPUs (P100 & V100) | -
nvhpc-21.2 (OpenACC) | openmpi-4.0.5 (CUDA-aware) | intel-mkl-2020.2.254 | netlib-scalapack-2.1.0 | nvhpc-21.2 (cuda-11.0) | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, NVIDIA GPUs (P100 & V100) | Memory leak[2]
nvhpc-21.2 | openmpi-4.0.5 | intel-mkl-2020.2.254 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
nvhpc-21.2 | openmpi-4.0.5 | fftw-3.3.8, openblas-0.3.10 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
aocc-3.2.0 | openmpi-4.1.2 | amdfftw-3.1, amdblis-3.1, amdlibflame-3.1 | amdscalapack-3.1 | - | hdf5-1.13.0 | wannier90-3.1.0, libxc-5.2.2 | AMD CPUs (Zen3) | -
nec-3.4.0 | nmpi-2.18.0 | nlc-2.3.0 | netlib-scalapack-2.2.0 | - | - | wannier90-3.1.0 | CentOS 8.3, NEC SX-Aurora TSUBASA vector engine | VASP >= 6.3.0[3]
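
Once the components of one of these toolchains are available, they are tied into the VASP build through the makefile.include file. Below is a minimal, hedged sketch for the gcc-11.2.0 / openmpi-4.1.2 / fftw / openblas row of the table above; it assumes the makefile.include.gnu template shipped with VASP.6.3.0, all installation paths and the job count are placeholders, and the exact variable names should be checked against the templates in the arch/ directory:

  # Sketch only: configure and build VASP.6.3.0 with the GNU toolchain above.
  cd vasp.6.3.0
  cp arch/makefile.include.gnu makefile.include
  # Edit makefile.include so the library roots point at the toolchain components:
  #   OPENBLAS_ROOT  -> openblas-0.3.18         (BLAS + LAPACK)
  #   SCALAPACK_ROOT -> netlib-scalapack-2.1.0  (ScaLAPACK)
  #   FFTW_ROOT      -> fftw-3.3.10             (FFT)
  #   HDF5_ROOT      -> hdf5-1.13.0             (optional; uncomment the block that adds -DVASP_HDF5)
  #   WANNIER90_ROOT -> wannier90-3.1.0         (optional; uncomment the block that adds -DVASP2WANNIER90)
  make DEPS=1 -j4 std gam ncl    # builds vasp_std, vasp_gam, and vasp_ncl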

Older versions of VASP.6

Compilers | MPI | FFT / BLAS / LAPACK | ScaLAPACK | CUDA | HDF5 | Other | Remarks | Known issues
intel-parallel-studio-xe-2021.1.1 | intel-parallel-studio-xe-2021.1.1 | intel-parallel-studio-xe-2021.1.1 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | -
gcc-9.3.0 | openmpi-4.0.5 | fftw-3.3.8, openblas-0.3.10 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]; VASP >= 6.2.0[1]
gcc-7.5.0 | openmpi-4.0.5 | intel-mkl-2020.2.254 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
nvhpc-21.2 (OpenACC) | openmpi-4.0.5 (CUDA-aware) | intel-mkl-2020.2.254 | netlib-scalapack-2.1.0 | nvhpc-21.2 (cuda-11.0) | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, NVIDIA GPUs (P100 & V100) | Memory leak[2]
nvhpc-21.2 | openmpi-4.0.5 | intel-mkl-2020.2.254 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
nvhpc-21.2 | openmpi-4.0.5 | fftw-3.3.8, openblas-0.3.10 | netlib-scalapack-2.1.0 | - | hdf5-1.10.7 | wannier90-3.1.0 | CentOS 8.3, Intel Broadwell | Memory leak[2]
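
Whichever toolchain is used, the resulting binaries should be checked against the validation tests (see Related Sections). A minimal sketch, assuming a recent VASP.6.x source tree in which the test suite is driven from the top-level makefile:

  # Sketch only: run the bundled test suite with the freshly built binaries.
  cd vasp.6.3.0
  make test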

References

  1. Support for GCC > 7.X.X was added with VASP.6.2.0. Do not use GCC-8.X.X compilers: the way we use the CONTIGUOUS construct in VASP is broken with these compilers.
  2. A bug in OpenMPI versions 4.0.4-4.1.1 causes a memory leak in some ScaLAPACK calls. This mainly affects long molecular-dynamics runs. The issue is fixed as of openmpi-4.1.2.
  3. The NEC SX-Aurora TSUBASA vector engine is supported as of VASP.6.3.0.


Related Sections

Installing VASP.6.X.X, Precompiler flags, OpenACC GPU port of VASP, Validation tests

