
Installing VASP.6.X.X

Important: The vasp.6.1.0, vasp.6.1.1, and vasp.6.1.2 releases have a potentially serious issue related to the test suite. Please read about it here.


For the compilation of VASP the following software is mandatory:

  • Fortran (at least F2008 compliant), C and C++ compilers.
  • An implementation of the Message Passing Interface (MPI).
  • Numerical libraries: FFTW, BLAS, LAPACK, and scaLAPACK.
  • For the OpenACC GPU-port: the NVIDIA HPC-SDK or a recent version (>=19.10) of PGI's Compilers & Tools.
  • When compiling with PGI Compilers & Tools: the QD (software emulated quadruple precision arithmetic) and NCCL (>=2.7.8) libraries.
  • For the CUDA-C GPU-port (deprecated): NVIDIA's CUDA toolkit.

N.B.: Unfortunately, finding a combination of compilers and libraries that actually works turns out to be less straightforward than one might hope! Please have a look at Toolchains for combinations of Linux distros, compilers, and libraries that we have used to successfully build VASP.

Build system

The build system of VASP (as of versions >= 5.4.1) has the following structure:

                  vasp.X.X.X (root directory)
        |        |        |         |          |         |
       arch     bin     build      src     testsuite   tools
                                 |     |      |      |
                                lib  parser  CUDA  fftlib

  • vasp.X.X.X (root): holds the high-level makefile and several subdirectories.
  • arch: holds a collection of makefile.include.* files.
  • bin: here make will store the binaries.
  • build: the different versions of VASP, i.e., the standard, gamma-only, non-collinear, and CUDA-C GPU versions, will be built in separate subdirectories of this directory.
  • src: holds the source files of VASP and a low-level makefile.
  • src/lib: holds the source of the VASP library (used to be vasp.X.lib) and a low-level makefile.
  • src/parser: holds the source of the LOCPROJ parser (as of versions >= 5.4.4) and a low-level makefile.
  • src/CUDA: holds the source of the CUDA-C code that will be executed on the GPU by the CUDA-C GPU-port of VASP.
  • src/fftlib: holds the source of the fftlib library that may be used to cache fftw plans.
  • testsuite: holds a suite of correctness tests to check your build.
  • tools: holds several python scripts related to the (optional) use of HDF5 input/output files.

How to make VASP

Copy one of the makefile.include.* files in root/arch to root/makefile.include, taking the one that most closely reflects your system. For instance, on a Linux box with the Intel oneAPI toolkits:

cp arch/makefile.include.linux_intel  ./makefile.include

In many cases these makefile.include files will have to be adapted to the particulars of your system (see below).

When you've finished setting up makefile.include, build VASP:

make all

This will build the standard, gamma-only, and non-collinear versions of VASP one after the other. Alternatively, one may build these versions individually:

make std
make gam
make ncl

To compile multiple source files concurrently (parallel make):

make DEPS=1 -jN <target>

where N is the number of jobs you want to run.

OpenACC port to GPU

Recently VASP has been ported to GPU by means of OpenACC. To build the OpenACC GPU-port of VASP you need the NVIDIA HPC-SDK or a recent version (>=19.10) of PGI's Compilers & Tools. When using the latter you will also need to install the NCCL (>=2.7.8) and QD libraries. (Conveniently, these libraries are already bundled into the NVIDIA HPC-SDK.)

When the aforementioned requirements are fulfilled, building the OpenACC port is just a matter of taking one of the makefile.include.linux_*_acc files and adapting it to the particulars of your system.

For instance, when using the NVIDIA HPC-SDK:

cp arch/makefile.include.linux_nv_acc ./makefile.include

and adapt it to the particulars of your system.

Subsequently building the standard, gamma-only, and non-collinear versions of the OpenACC port of VASP is done as usual, by means of:

make all

or separately (make std, make gam, or make ncl).

N.B.: When you choose to use the NVIDIA HPC-SDK (which we recommend), be sure to use version 20.9! We were informed that version 20.11 has performance issues with some of the programming constructs used by VASP, and version 21.1 introduced a bug. All of these issues should be solved with the release of the NVIDIA HPC-SDK 21.2.

CUDA-C port to GPU (deprecated)

To compile the CUDA-C GPU-port of VASP:

cp arch/makefile.include.linux_intel ./makefile.include

and adapt it to the particulars of your system, followed by:

make gpu
make gpu_ncl

to build the CUDA-C GPU-ports of the standard and non-collinear versions, respectively.

N.B.I: Unfortunately we do not offer a CUDA-C GPU-port of the gamma-only version.

N.B.II: The CUDA-C GPU-port of VASP is deprecated and no longer actively developed, maintained, or supported. In the near future, the CUDA-C GPU-port of VASP will be dropped completely. We encourage you to switch to the OpenACC GPU-port of VASP as soon as possible.

Validation tests

After building the vasp_std, vasp_gam, and vasp_ncl executables (e.g. by means of make all), you can test your build by means of:

make test

Running the validation testsuite is described in some detail in: Validation tests.

Archetypical makefile.include files

Typical ways to build VASP (and the corresponding makefile.include files) may be categorized as follows:

MPI: Parallelized using MPI

This is by far the most common case.

OMP: Parallelized using MPI + OpenMP

ACC: Ported to GPU using OpenACC

CUDA: Ported to GPU using CUDA

N.B.: Deprecated as of VASP.6.2.0: No longer under active development. Replaced by the OpenACC GPU-port.

SER: Serial

Just listed for the sake of completeness. Probably hardly used in practice.

N.B.: The makefile.include.* files labelled intel, gnu, nv, and pgi are meant for use with the Intel Composer suite and oneAPI Base + HPC toolkits, GNU's compilers, the NVIDIA HPC-SDK, and PGI's Compilers & Tools, respectively.

Adapting makefile.include

Any makefile.include should contain the following information:

  • Precompiler flags: used to (de)activate certain code features (e.g. the use of MPI) at compile time.
    The makefile.include files provided above already include all recommended precompiler flags.
    For more information, have a look at the list and description of the commonly used VASP precompiler flags.
  • Compiler variables: here you specify the compiler and compiler flags you want to use.
  • Libraries: this is the most involved part. The following sections are mandatory:
      • FFTs: here you define which FFT library should be used (in most cases fftw).
      • VASP's internal libraries (lib and parser): these need to be built from source (included in the VASP distro). make needs you to specify some variables to be able to do so (this is a bit of a relic that we will get rid of in the future).
      • Linear algebra: in most cases BLAS, LAPACK, and scaLAPACK.
      • When compiling with PGI Compilers & Tools: the QD library, and (for the OpenACC GPU-port) a sufficiently recent version (>=2.7.8) of the NCCL library. Conveniently, these libraries are bundled in the NVIDIA HPC-SDK (the successor of PGI's Compilers & Tools).
    There are also libraries you may optionally link against to extend the functionality of VASP (e.g. HDF5, Wannier90, libbeef, and DFTD4; see below).
  • CUDA-C variables: if you want to build the CUDA-C GPU-port of VASP (i.e. make gpu gpu_ncl), there is a whole bunch of additional variables that need to be defined.

Writing a makefile.include file from scratch is not so easy, so we suggest you take one of the files mentioned above as a template and adapt it to suit your needs.
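Schematically, a complete makefile.include stitches the above sections together. The following skeleton is assembled from the snippets discussed in the sections below; all paths and flag values are illustrative, not a drop-in file:

```makefile
# Skeleton of a makefile.include (all values illustrative)

# Precompiler flags and invocation
CPP_OPTIONS = -DMPI -DscaLAPACK
CPP         = /usr/bin/cpp -P -C -traditional $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)

# Compiler and compiler flags
FC     = mpif90
FCL    = $(FC)
FREE   = -ffree-form -ffree-line-length-none
OFLAG  = -O2

# Linear algebra (here: self-built reference libraries)
LIBDIR    = /path/to/my/libs
BLAS      = -L$(LIBDIR) -lrefblas
LAPACK    = -L$(LIBDIR) -ltmglib -llapack
SCALAPACK = -L$(LIBDIR) -lscalapack
LLIBS     = $(SCALAPACK) $(LAPACK) $(BLAS)

# FFTs (here: an external fftw library)
FFTW    ?= /path/to/your/fftw
LLIBS   += -L$(FFTW)/lib -lfftw3
INCS    += -I$(FFTW)/include
OBJECTS  = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
```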

Precompiler variables

CPP_OPTIONS: specify the precompiler flags:
CPP_OPTIONS=[-Dflag1 [-Dflag2] ... ]
Take a lead from the makefile.include.* files listed in the previous section, and have a look at the description of the commonly used VASP precompiler flags.
  • N.B.I: -DNGZhalf, -DwNGZhalf, -DNGXhalf, -DwNGXhalf are deprecated options.
    Building the standard, gamma-only, or non-collinear version of the code is specified through an additional argument to the make command (see the make section).
  • N.B.II: CPP_OPTIONS is only used in this file, where it should be added to CPP (see next item).
CPP: the command to invoke the precompiler you want to use, for instance:
  • Using Intel's Fortran precompiler:
CPP=fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
  • Using cpp:
CPP=/usr/bin/cpp -P -C -traditional $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)
  • N.B.: This variable has to include $(CPP_OPTIONS). If not, CPP_OPTIONS will be ignored!
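To see what the precompiler actually does, here is a toy illustration (the file demo.F and its contents are hypothetical; the flags mirror the cpp invocation above):

```shell
# Create a small Fortran source with a precompiler-gated branch:
cat > demo.F <<'EOF'
#ifdef MPI
      PRINT *, 'MPI build'
#else
      PRINT *, 'serial build'
#endif
EOF
# Run the precompiler with -DMPI, as CPP would during a VASP build:
cpp -P -traditional -DMPI demo.F > demo.f90
cat demo.f90   # only the MPI branch survives
```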

Compiler variables

The Fortran compiler will be invoked as:

$(FC) $(FREE) $(FFLAGS) $(OFLAG) $(INCS)

FC: the command to invoke your Fortran compiler (e.g. gfortran, ifort, mpif90, mpiifort, ...).
FCL: the command that invokes the linker. In most cases:
FCL=$(FC) [+ some options]
OFLAG: the general level of optimization (default: OFLAG=-O2).
FFLAGS: additional compiler flags. To enable debugging, for instance, one could add -g here.
OFLAG_IN: (default: -O2). In the vast majority of makefile.include files this variable is set to OFLAG_IN=$(OFLAG).
DEBUG: the optimization level with which the main program (main.F) will be compiled; usually -O0.
OBJECTS: use this variable to specify additional objects to be included in the build (see "The list of objects" below).
FREE: specify the options that your Fortran compiler needs for it to accept free-form source layout, without line-length limitation. For instance:
  • Using Intel's Fortran compiler:
FREE=-free -names lowercase
  • Using gfortran:
FREE=-ffree-form -ffree-line-length-none
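Taken together, a typical compiler section of makefile.include might read as follows (illustrative values for a GNU + MPI toolchain; check the shipped makefile.include.* files for authoritative settings):

```makefile
FC     = mpif90                               # Fortran compiler (here: an MPI wrapper)
FCL    = $(FC)                                # linker
FREE   = -ffree-form -ffree-line-length-none  # free-form source, no line-length limit
FFLAGS = -w                                   # additional compiler flags (illustrative)
OFLAG  = -O2                                  # general optimization level
```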

Linking against libraries

The linker will be invoked as:

$(FCL) -o vasp  ..all-objects.. $(LLIBS) $(LINK)
FCL: the command that invokes the linker. In most cases:
FCL=$(FC) [+ some options]
  • Using the Intel Composer suite (Fortran compiler + MKL libraries), typically:
FCL=$(FC) -mkl=sequential
LLIBS: specify libraries and/or objects to be linked against, in the usual ways:
LLIBS=[-Ldirectory -llibrary] [path/library.a] [path/object.o]
Usually one has to specify several numerical libraries (BLAS, LAPACK or scaLAPACK, etc).
For instance using the Intel Composer suite (and compiling with CPP_OPTIONS= .. -DscaLAPACK ..):
MKL_PATH   = $(MKLROOT)/lib/intel64
BLACS      = -lmkl_blacs_intelmpi_lp64
SCALAPACK  = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
For other configurations please take a lead from the makefile.include.* files under root/arch, or the files listed above.

The list of objects

The standard list of objects needed to compile VASP is given by the variable SOURCE in the root/src/.objects file that is part of the distribution.

Objects to be added to this list can be specified in makefile.include by means of:

OBJECTS= .. your list of objects ..

N.B.: Several objects will have to be added in this manner (see the section on Fast-Fourier-Transforms).

Special rules

The current src/makefile contains a set of recipes to allow for the compilation of objects at different levels of optimization (other than the general level specified by OFLAG). In these recipes the compiler will be invoked as:

$(FC) $(FREE) $(FFLAGS_x) $(OFLAG_x) $(INCS_x) 

where x stands for: 1, 2, 3, or IN.

Default: FFLAGS_x=$(FFLAGS), for x=1, 2, 3, and IN.
Default: OFLAG_x=-Ox (for x=1, 2, 3), and OFLAG_IN=-O2
Default: INCS_x=$(INCS), for x=1, 2, 3, and IN.

The objects to be compiled in accordance with these recipes have to be specified by means of the variables OBJECTS_O1, OBJECTS_O2, OBJECTS_O3, and OBJECTS_IN (for x=1, 2, 3, and IN, respectively).

Several objects are compiled at -O1 and -O2 by default. These default lists of objects are specified in the root/src/.objects file through the same variables, and reflect the special rules as they were present in most of the makefiles of the old build system.

To completely overrule a default setting (for instance for the -O1 special rules) you can use the following construct:

OBJECTS_O1= .. your list of objects ..
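For instance, to compile a troublesome object at -O1 (myfix.o is a hypothetical name), one can either extend the default list with += or replace it with =:

```makefile
# Extend the default -O1 list with an extra object (myfix.o is hypothetical):
OBJECTS_O1 += myfix.o

# ...or discard the defaults and compile only this object at -O1:
OBJECTS_O1  = myfix.o
```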

Libraries for linear algebra

To build VASP you need to link against the BLAS, LAPACK, and scaLAPACK linear algebra libraries.

Some compiler suites (like the Intel Composer suite and PGI Compilers&Tools) pre-package these libraries so you do not have to download and build them yourself.

  • Using the Intel Composer suite, linking against the BLAS, LAPACK, and scaLAPACK libraries that are part of MKL is done as follows:
FCL        = mpiifort -mkl=sequential

MKL_PATH   = $(MKLROOT)/lib/intel64
BLAS       =
LAPACK     =
BLACS      = -lmkl_blacs_intelmpi_lp64
SCALAPACK  = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

In case you use Intel's compilers with OpenMPI instead of Intel-MPI replace the BLACS line above by:
BLACS       = -lmkl_blacs_openmpi_lp64
  • Using PGI's Compilers & Tools it is even simpler:
BLAS       = -lblas
LAPACK     = -llapack
SCALAPACK  = -Mscalapack

  • When you have built BLAS, LAPACK, and scaLAPACK yourself, you should link against the corresponding shared-objects (lib*.so) as follows:
LIBDIR     = /path/to/my/libs/
BLAS       = -L$(LIBDIR) -lrefblas
LAPACK     = -L$(LIBDIR) -ltmglib -llapack
SCALAPACK  = -L$(LIBDIR) -lscalapack

To link against a static library object (*.a), for instance for scaLAPACK, is straightforward. Simply replace the SCALAPACK line above by:
SCALAPACK  = $(LIBDIR)/libscalapack.a

Note on LAPACK 3.6.0 and newer

As of LAPACK-3.6.0 the subroutine DGEGV is deprecated and replaced by DGGEV.

Linking against LAPACK-3.6.0 or higher will result in the following error message:

broyden.o: In function `__broyden_MOD_broyd':
broyden.f90:(.text+0x4bb0): undefined reference to `dgegv_'
dynbr.o: In function `brzero_':
dynbr.f90:(.text+0xe78): undefined reference to `dgegv_'
dynbr.f90:(.text+0x112c): undefined reference to `dgegv_'

To solve this problem, add the appropriate precompiler flag to your makefile.include file (see the description of the commonly used VASP precompiler flags).

Fast-Fourier-Transforms

OBJECTS: add the objects to be compiled (or linked against) that provide the FFTs (this may include static libraries, *.a).
Most probably you are building with -DMPI and want to use the FFTs from some fftw library (recommended). In that case:
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
Or possibly you want to use Jürgen Furthmüller's FFT implementation (not recommended), in which case:
OBJECTS = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o
In case one compiles using an fftw-library, i.e.,
OBJECTS= .. fftw3d.o fftmpiw.o ..
then INCS can be set to include the directory that holds fftw3.f (needed because fftw3d.F and fftmpiw.F include fftw3.f).
N.B.: If in the aforementioned case INCS is not set, then fftw3.f has to be present in root/src.

Linking against an fftw library

Common choices are:

  • When building with the Intel Composer suite it is best to link against the MKL fftw wrappers of Intel's FFTs:
FCL = $(FC) -mkl=sequential

OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
INCS   += -I$(MKLROOT)/include/fftw
  • To explicitly link against an fftw library (in this case fftw-3.3.4):
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
FFTW       ?= /path/to/your/fftw-3.3.4
LLIBS      += -L$(FFTW)/lib -lfftw3
INCS       += -I$(FFTW)/include

Special rules for the optimization level of FFT related objects

Based on past experience the optimization level for the compilation of the FFT related objects is set explicitly. This is done as follows:

OBJECTS_O1 += fft3dfurth.o fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

The lib library

CPP_LIB: the command to invoke the precompiler. In most cases it will suffice to set CPP_LIB=$(CPP).
FC_LIB: the command to invoke your Fortran compiler. In most cases FC_LIB=$(FC).
N.B.: the library can be compiled without MPI support, i.e., when FC=mpif90, FC_LIB may specify a Fortran compiler without MPI support, e.g. FC_LIB=ifort.
FFLAGS_LIB: Fortran compiler flags, including a specification of the level of optimization.
FREE_LIB: specify the options that your Fortran compiler needs for it to accept free-form source layout, without line-length limitation. In most cases it will suffice to set FREE_LIB=$(FREE).
CC_LIB: the command to invoke your C compiler (e.g. gcc, icc, ...).
N.B.: the library can be compiled without MPI support.
CFLAGS_LIB: C compiler flags, including a specification of the level of optimization.
OBJECTS_LIB: list of "non-standard" objects to be added to the library. In most cases:
OBJECTS_LIB= linpack_double.o
When compiling VASP with -Duse_shmem, one has to add getshmem.o as well, i.e., add the following line:
OBJECTS_LIB+= getshmem.o
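Collected in one place, the lib-related variables of a makefile.include typically look like this (a sketch assuming Intel compilers; all values are illustrative, check a shipped makefile.include.* file for authoritative settings):

```makefile
CPP_LIB     = $(CPP)              # precompiler, reused from the main build
FC_LIB      = $(FC)               # Fortran compiler (MPI support not required)
FFLAGS_LIB  = -O1                 # Fortran flags incl. optimization level
FREE_LIB    = $(FREE)             # free-form source options
CC_LIB      = icc                 # C compiler
CFLAGS_LIB  = -O                  # C flags incl. optimization level
OBJECTS_LIB = linpack_double.o    # "non-standard" objects added to the library
```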

The parser library

CXX_PARS: the command to invoke your C++ compiler (e.g. g++, icpc, ...).

The parser needs to be linked against a C++ Standard-library. If this is not already part of the definition of FCL it can be added to LLIBS. When using the Intel Composer suite this would amount to:

LLIBS += -lstdc++

The QD library

Unlike Intel and GNU, PGI’s compilers do not natively support quadruple precision arithmetic. To build VASP with PGI Compilers&Tools you need to download and compile the QD library (that supplies software emulated quadruple precision data types and arithmetic) and add the following lines to your makefile.include:

CPP_OPTIONS += -Dqd_emulate
QD          ?= /path/to/your/QD-library
LLIBS       += -L$(QD)/lib -lqdmod -lqd
INCS        += -I$(QD)/include/qd
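Building QD itself follows the usual configure/make pattern; a sketch (compiler names and install prefix are illustrative, consult QD's own build documentation):

```shell
# In the unpacked QD source tree (illustrative; adapt compilers and prefix):
./configure --prefix=/path/to/your/QD-library CXX=pgc++ FC=pgfortran --enable-fortran
make
make install
```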

HDF5 file support (optional)

With VASP 6.2.0 we started to support reading/writing HDF5 files. To activate this feature, set the -DVASP_HDF5 precompiler flag. You also need to specify:

HDF5DIR = /your-hdf5-directory
LLIBS   += -L$(HDF5DIR)/lib -lhdf5_fortran
INCS    += -I$(HDF5DIR)/include
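Note that VASP links against the Fortran interface of HDF5 (-lhdf5_fortran), which is not always enabled in prebuilt packages. When building HDF5 from source, enable it explicitly (a sketch; prefix is illustrative):

```shell
# In the unpacked HDF5 source tree (illustrative prefix):
./configure --prefix=/your-hdf5-directory --enable-fortran
make
make install
```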

The HDF5 library is available for download from the official HDF5 website.

For the interface to Wannier90 (optional)

To include the interface to Wannier90 (-DVASP2WANNIER90 or -DVASP2WANNIER90v2), one needs to specify:

LLIBS += /your-wannier90-directory/libwannier.a

And one needs to download Wannier90 and compile libwannier.a.
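In the Wannier90 distribution, the library is built via its lib make target; schematically (assuming a suitably adapted make.inc in the Wannier90 tree):

```shell
# In the unpacked Wannier90 source tree: pick/adapt a make.inc, then
make lib          # produces libwannier.a
```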

For libbeef (optional)

In case one wants to compile VASP with the BEEF van-der-Waals functionals (-Dlibbeef), one needs to add:

LLIBS += -Lyour-libbeef-directory -lbeef

And one needs to download and compile libbeef, of course.

For DFTD4 (optional)

To include the DFTD4 van-der-Waals functional (-DDFTD4) one needs to add

LLIBS       += -Lyour-libdftd4-build -ldftd4
INCS        += -Iyour-libdftd4-build/libdftd4.a.p

after installing the DFTD4 library from source.
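The .a.p suffix in the INCS path above is a meson artefact: DFTD4 is built with meson/ninja. Schematically (build-directory name illustrative, matching the paths above):

```shell
# In the unpacked dftd4 source tree (requires meson and ninja):
meson setup your-libdftd4-build --default-library=static
ninja -C your-libdftd4-build
```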

For the OpenACC GPU-port

To compile the OpenACC GPU-port with the NVIDIA HPC-SDK or PGI's Compilers & Tools (>=19.10), be sure to base your makefile.include file on arch/makefile.include.linux_nv_acc or arch/makefile.include.linux_pgi_acc, respectively.

These makefile.include files can pretty much be used out of the box with the aforementioned compiler suites, since these conveniently bundle all the libraries you need.

Here we will only highlight a few of the aspects that are specific to OpenACC:

  • A couple of additional precompiler flags (these are already set in the makefile.include.linux_*_acc files).
  • Some compiler/linker flags. For the NV HPC-SDK:
FC  = mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0
FCL = mpif90 -acc -gpu=cc60,cc70,cc80,cuda11.0 -c++libs
and for PGI's Compilers & Tools:
FC  = mpif90 -acc -ta=tesla:cc60,cc70,cuda10.1
FCL = mpif90 -acc -ta=tesla:cc60,cc70,cuda10.1 -pgc++libs
where one specifies cc60, cc70, and/or cc80 to compile for the Pascal, Volta, and/or Ampere GPU architectures, respectively.
N.B.: To compile for the Ampere architecture (cc80) you will need to use the NV HPC-SDK since this requires CUDA >= 11.0, which is not included in the PGI Compiler & Tools distros.
  • When using the NVIDIA HPC-SDK you might have to set:
NVROOT: the common base path to the compilers and libraries of the version of the NVIDIA HPC-SDK you are using. For the NVIDIA HPC-SDK version 20.9, for instance, this would be something like:
NVROOT = /opt/nvidia/hpc_sdk/Linux_x86_64/20.9
The makefile.include.linux_nv* files attempt to set this automatically, but in case this fails, you will have to specify NVROOT manually.
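If the automatic detection fails, one way to find the right path is to locate nvfortran and strip the trailing compilers/bin part (assuming the usual HPC-SDK directory layout; the sample path below is for illustration, on a real system use NVBIN=$(which nvfortran)):

```shell
# Given the path to nvfortran, NVROOT is everything up to the version directory.
NVBIN=/opt/nvidia/hpc_sdk/Linux_x86_64/20.9/compilers/bin/nvfortran
NVROOT=$(echo $NVBIN | sed 's|/compilers/bin/nvfortran||')
echo $NVROOT   # /opt/nvidia/hpc_sdk/Linux_x86_64/20.9
```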
  • When using PGI's Compilers & Tools (>=19.10) you need to add your installation of the NCCL library to LLIBS. For instance, as follows:
 NCCL   = /usr/lib/x86_64-linux-gnu
 LLIBS += -L$(NCCL) -lnccl
N.B.: Be sure to use NCCL >= 2.7.8. If, for some reason, this is not possible then you will have to remove the -DUSENCCLP2P precompiler flag.
Last but not least, using PGI's Compilers & Tools requires you to install the QD library (software emulation of quadruple precision arithmetic) and add it to LLIBS and INCS.

For the CUDA-C GPU-port (deprecated)

CUDA_ROOT: location of the CUDA toolkit install. For example:
CUDA_ROOT := /opt/cuda
CUDA_LIB: CUDA toolkit libraries to link to. Typically:
CUDA_LIB := -L$(CUDA_ROOT)/lib64 -lnvToolsExt -lcudart -lcuda -lcufft -lcublas
NVCC: location of the CUDA compiler, and its flags. Typically:
NVCC := $(CUDA_ROOT)/bin/nvcc -g
OBJECTS_GPU: add the objects to be compiled (or linked against) that provide the FFTs (this may include static libraries, *.a). For FFTW:
OBJECTS_GPU = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d_gpu.o fftmpiw_gpu.o
GENCODE_ARCH: CUDA compiler options to generate code for your particular GPU architecture.
For Kepler:
GENCODE_ARCH := -gencode=arch=compute_35,code=\"sm_35,compute_35\"
For Maxwell:
GENCODE_ARCH := -gencode=arch=compute_53,code=\"sm_53,compute_53\"
Multiple `-gencode` statements can be combined to create cross-platform executables.
For details see the NVIDIA nvcc documentation.
MPI_INC: path to the MPI include files, so that the CUDA compiler can find them. For example:
MPI_INC := /opt/openmpi/include
These can often be found with mpicc --show.
CPP_GPU: preprocessor options for GPU compilation. These include flags to:
  • build cross-platform sources for the GPU (always include this one),
  • overlap communication and computation in RPROJ_MU,
  • -UscaLAPACK: do not use scaLAPACK,
  • -Ufock_dblbuf: do not use the new Hartree-Fock routines,
  • use pinned memory for transfer buffers,
  • intercept any FFT calls of size greater than N3 and evaluate them on the GPU,
  • use MAGMA for LAPACK-like calls on the GPU.
MAGMA_ROOT: if using the experimental MAGMA support, the path to MAGMA (>=1.6). Typically:
MAGMA_ROOT := /opt/magma/lib


As of VASP.6.1.0 each distribution contains a suite of validation tests.

Related Sections

Toolchains, Precompiler flags, CUDA-C GPU port of VASP, Validation tests