Toolchains

From VASP Wiki
Latest revision as of 08:08, 11 April 2024
Below we list the [[toolchains]] (compilers + assorted libraries) that we have used to build and test VASP in our nightly tests during development.
Starting from VASP.6.3.0, the [[toolchains]] are listed separately for each version of VASP.
* These lists of [[toolchains]] are not comprehensive. They show what we have employed on a regular basis. Other/newer versions of the compilers and libraries than those listed below will, in all probability, work just as well (or better).
{{NB|tip|We encourage using up-to-date versions of compilers and libraries since they are continuously improved and bugs are identified and fixed.|:}}
* Also for older versions of VASP, we recommend up-to-date versions of compilers and libraries. In most cases, this is not a problem. In some cases, however, the VASP code had to be adjusted, e.g., to accommodate changes in the behavior of a compiler. Example: compilation with GCC > 7.X.X is only possible as of VASP.6.2.0, because the GCC compilers became stricter and no longer accept certain code constructs used in older VASP versions.<ref name="gcc-beyond-7-support"/>
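Before comparing your system against the tables below, it helps to know which toolchain components are actually visible in your environment. The following shell sketch is illustrative and not part of the VASP distribution: it probes three common front-ends (gfortran, mpirun, nvcc); substitute your vendor's equivalents (e.g., the Intel or AMD compiler drivers) as appropriate.

```shell
# Report the versions of common toolchain front-ends found on the PATH.
# The three command names are illustrative choices, not a VASP requirement.
check_toolchain() {
  for tool in gfortran mpirun nvcc; do
    if command -v "$tool" >/dev/null 2>&1; then
      # These tools all print their version on the first output line.
      printf '%s: %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
    else
      printf '%s: not found\n' "$tool"
    fi
  done
}

check_toolchain
```

Each line either names the detected version or reports "not found"; checking this first avoids chasing build errors that are really just a missing or outdated component.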
== VASP.6.4.3 ==
{| class="wikitable" style="text-align: center;"
|-
! Compilers
! MPI
! FFT
! BLAS
! LAPACK
! ScaLAPACK
! CUDA
! HDF5
! Other
! Remarks
! Known issues
|-
| intel-oneapi-compilers-2024.0.2 || intel-oneapi-mpi-2021.10.0 || colspan="4" | intel-oneapi-mkl-2023.2.0 || - || hdf5-1.14.0 || wannier90-3.1.0<br />libxc-5.2.3
| Rocky Linux 8.8
| -
|-
| gcc-12.3.0 || openmpi-4.1.6 || colspan="3" | intel-oneapi-mkl-2023.2.0 || netlib-scalapack-2.2.0 || - || hdf5-1.14.0 || wannier90-3.1.0<br />libxc-5.2.3
| Rocky Linux 8.8
| -
|-
| nvhpc-24.1<br />(OpenACC)
| openmpi-4.1.6<br />(CUDA-aware)
| colspan="3" | intel-oneapi-mkl-2023.2.0
| netlib-scalapack-2.2.0
| cuda-11.8
| hdf5-1.14.0
| wannier90-3.1.0
| Rocky Linux 8.8<br />NVIDIA GPUs<br />(A30)
| -
|-
| style="background-color:#f5d9da;" | aocc-4.0.0
| openmpi-4.1.4
| amdfftw-4.0
| amdblis-4.0
| amdlibflame-4.0
| amdscalapack-4.0
| -
| hdf5-1.13.0
| wannier90-3.1.0<br />libxc-5.2.2
| On AMD CPUs<br />(Zen3)
| <span style="background:#f5d9da">[[Known_issues#KnownIssue11|Reduce optimization level]]</span>
|-
| nec-5.0.2
| nmpi-2.25.0
| colspan="3" | nlc-3.0.0
| netlib-scalapack-2.2.0
| -
| -
| wannier90-3.1.0
| Rocky Linux 8.8<br />NEC SX-Aurora TSUBASA<br />vector engine
| VASP >= 6.3.0<ref name="nec-aurora-support"/>
|}
== VASP.6.3.0 ==
{| class="wikitable" style="text-align: center;"
|-
! Compilers
! MPI
! FFT
! BLAS
! LAPACK
! ScaLAPACK
! CUDA
! HDF5
! Other
! Remarks
! Known issues
|-
| intel-oneapi-compilers-2022.0.1 || intel-oneapi-mpi-2021.5.0 || colspan="4" | intel-oneapi-mkl-2022.0.1 || - || hdf5-1.13.0 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| -
|-
| colspan="5" | intel-parallel-studio-xe-2021.4.0 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| -
|-
| gcc-11.2.0 || openmpi-4.1.2 || colspan="3" | intel-oneapi-mkl-2022.0.1 || netlib-scalapack-2.1.0 || - || hdf5-1.13.0 || wannier90-3.1.0<br />libxc-5.2.2
| Centos 8.3<br />Intel Broadwell
| -
|-
| gcc-11.2.0 || openmpi-4.1.2 || fftw-3.3.10 || colspan="2" | openblas-0.3.18 || netlib-scalapack-2.1.0 || - || hdf5-1.13.0 || wannier90-3.1.0<br />libxc-5.2.2
| Centos 8.3<br />Intel Broadwell
| -
|-
| gcc-11.2.0 || openmpi-4.1.2 || amdfftw-3.1 || amdblis-3.1 || amdlibflame-3.1 || amdscalapack-3.1 || - || hdf5-1.13.0 || wannier90-3.1.0<br />libxc-5.2.2
| Centos 8.3<br />AMD Zen3
| -
|-
| gcc-9.3.0 || openmpi-4.0.5 || fftw-3.3.8 || colspan="2" | openblas-0.3.10 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| gcc-7.5.0 || openmpi-4.0.5 || colspan="3" | intel-mkl-2020.2.254 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| nvhpc-22.2<br />(OpenACC) || openmpi-4.1.2 || colspan="3" | intel-oneapi-mkl-2022.0.1 || netlib-scalapack-2.1.0 || nvhpc-22.2<br />(cuda-11.0) || hdf5-1.13.0 || wannier90-3.1.0
| Centos 8.3<br />NVIDIA GPUs<br />(P100 & V100)
| OpenACC<br />+<br />OpenMP<ref name="omp-acc-bug-1"/>
|-
| nvhpc-21.2<br />(OpenACC) || openmpi-4.0.5<br />(CUDA-aware) || colspan="3" | intel-mkl-2020.2.254 || netlib-scalapack-2.1.0 || nvhpc-21.2<br />(cuda-11.0) || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />NVIDIA GPUs<br />(P100 & V100)
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| nvhpc-21.2 || openmpi-4.0.5 || colspan="3" | intel-mkl-2020.2.254 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| nvhpc-21.2 || openmpi-4.0.5 || fftw-3.3.8 || colspan="2" | openblas-0.3.10 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| aocc-3.2.0 || openmpi-4.1.2 || amdfftw-3.1 || amdblis-3.1 || amdlibflame-3.1 || amdscalapack-3.1 || - || hdf5-1.13.0 || wannier90-3.1.0<br />libxc-5.2.2
| On AMD CPUs<br />(Zen3)
| -
|-
| nec-3.4.0 || nmpi-2.18.0 || colspan="3" | nlc-2.3.0 || netlib-scalapack-2.2.0 || - || - || wannier90-3.1.0
| Centos 8.3<br />NEC SX-Aurora TSUBASA<br />vector engine
| VASP >= 6.3.0<ref name="nec-aurora-support"/>
|}
== Older versions of VASP.6 ==
{| class="wikitable" style="text-align: center;"
|-
! Compilers
! MPI
! FFT
! BLAS
! LAPACK
! ScaLAPACK
! CUDA
! HDF5
! Other
! Remarks
! Known issues
|-
| colspan="5" | intel-parallel-studio-xe-2021.1.1 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| -
|-
| gcc-9.3.0 || openmpi-4.0.5 || fftw-3.3.8 || colspan="2" | openblas-0.3.10 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span><br />VASP >= 6.2.0<ref name="gcc-beyond-7-support"/>
|-
| gcc-7.5.0 || openmpi-4.0.5 || colspan="3" | intel-mkl-2020.2.254 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| nvhpc-21.2<br />(OpenACC) || openmpi-4.0.5<br />(CUDA-aware) || colspan="3" | intel-mkl-2020.2.254 || netlib-scalapack-2.1.0 || nvhpc-21.2<br />(cuda-11.0) || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />NVIDIA GPUs<br />(P100 & V100)
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| nvhpc-21.2 || openmpi-4.0.5 || colspan="3" | intel-mkl-2020.2.254 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|-
| nvhpc-21.2 || openmpi-4.0.5 || fftw-3.3.8 || colspan="2" | openblas-0.3.10 || netlib-scalapack-2.1.0 || - || hdf5-1.10.7 || wannier90-3.1.0
| Centos 8.3<br />Intel Broadwell
| <span style="background:#f5d9da">Memory-leak<ref name="ompi-bug-1"/></span>
|}


== Footnotes and references ==
<references>
<ref name="ompi-bug-1">A bug in OpenMPI versions 4.0.4-4.1.1 causes a memory leak in some ScaLAPACK calls. This mainly affects long [[:Category:Molecular dynamics|molecular-dynamics]] runs. This issue is fixed as of openmpi-4.1.2.</ref>
<ref name="nec-aurora-support">The NEC SX-Aurora TSUBASA vector engine is supported as of VASP.6.3.0.</ref>
<ref name="omp-acc-bug-1">The NVIDIA HPC-SDK versions 22.1 and 22.2 have a serious bug that prohibits the execution of the OpenACC GPU port of VASP in conjunction with OpenMP threading. When using these compiler versions, compile the OpenACC GPU port of VASP without OpenMP support. This bug is fixed as of NVIDIA HPC-SDK version 22.3.</ref>
<ref name="gcc-beyond-7-support">Support for GCC > 7.X.X was added with VASP.6.2.0. Do not use GCC-8.X.X compilers: the way we use the <tt>CONTIGUOUS</tt> construct in VASP is broken when using these compilers.</ref>
</references>


== Related articles ==
[[Installing VASP.6.X.X]],
[[makefile.include]],
[[Compiler options]],
[[Precompiler options]],
[[Linking to libraries]],
[[OpenACC GPU port of VASP]],
[[Validation tests]],
[[Known issues]],
[[Personal computer installation]]


----

[[Category:VASP]][[Category:Installation]][[Category:Performance]][[Category:GPU]]
