Choice of GPU for VASP 6

Questions related to VASP with GPU support (vasp.5.4.1, version released in Feb 2016)


roman_malyshev
Newbie
Posts: 3
Joined: Tue Sep 08, 2020 11:07 am

Choice of GPU for VASP 6

#1 Post by roman_malyshev » Wed Apr 07, 2021 11:13 pm

At my department, we are currently trying to build a small GPU cluster that will serve the needs of the different groups that do computations. The choice so far has landed on the NVIDIA Titan RTX with 24 GB of GDDR6 RAM. I am trying to check whether it will work with VASP 6 (we recently bought the license but have not installed it anywhere yet -- I'm still using 5.4.4 for "normal"/non-GPU computations). Is this card compatible with the requirements of the VASP OpenACC port? I see some conflicting data around the web and don't understand all the different coding libraries and standards that are in use in the world of GPUs. Here's a quick summary of the card: https://www.techpowerup.com/gpu-specs/titan-rtx.c3311.

For example, I don't understand how VASP can require CUDA 10-compatible libraries when even the most current NVIDIA architecture (Ampere, at least the one in consumer cards like the RTX 3000 series) uses CUDA 8, unless they support everything from CUDA 8 upwards. The Titan RTX is of the Turing architecture, which seems to be the consumer architecture of the same generation as Volta, so from 2018-2019, and uses CUDA 7.5. Also, I cannot find any information about OpenACC: which versions are supported by which cards, or whether they all support all versions. In an NVIDIA article (https://developer.nvidia.com/blog/nvidi ... rformance/) they say "Scientists can run VASP 6 on any NVIDIA GPU". Is it that simple?

The recommendation at https://www.vasp.at/wiki/index.php/Open ... rt_of_VASP lists only "professional/data center" cards, but at our department the group that does AI chose the hardware, since they are the ones who actually know all this stuff, and I guess the choice was to buy several "normal" graphics cards per workstation rather than one really expensive one like the V100. Will two 24 GB Titan cards per workstation have enough RAM for models that use about 64 GB in non-GPU VASP? These workstations will also have at least 128 GB of normal RAM -- does GPU-ported VASP use video RAM in conjunction with normal RAM to off-load the demands on the graphics memory? Does the CPU have any impact? Again, I have no clue how GPU computations work or what kind of demands they put on the system as a whole.

Hope someone can clarify this.

mmarsman
Global Moderator
Posts: 12
Joined: Wed Nov 06, 2019 8:44 am

Re: Choice of GPU for VASP 6

#2 Post by mmarsman » Tue Apr 13, 2021 2:42 pm

Hi,

I agree things can get quite confusing!

For instance, the information on the website you consulted about the Titan RTX is not completely correct:
It states that this card needs "CUDA" version 7.5. This is a mistake: the card has "compute capability" 7.5 (see https://developer.nvidia.com/cuda-gpus#compute), which actually means it needs CUDA >= 10. Cards with chips based on the Ampere microarchitecture have compute capability 8.0, meaning CUDA 11.
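
If you have a CUDA toolkit installed you can check this yourself. Here is a minimal sketch (my own example, not part of VASP) that asks the CUDA runtime for the compute capability of each card; the file name and build line are just one way to do it:

Code: Select all

/* query_cc.c: print each GPU's compute capability (not the CUDA
 * toolkit version!) via the CUDA runtime API.
 * Build, e.g.:  nvcc query_cc.c -o query_cc */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    for (int i = 0; i < ndev; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* A Titan RTX reports 7.5 here, an Ampere A100 reports 8.0. */
        printf("device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}

A Titan RTX will report 7.5, which any CUDA toolkit from version 10 onwards supports.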

OpenACC is an extension (compiler directives) to, for instance, Fortran. It has to be supported by the compiler, not by the card.
In VASP we use features that are part of the OpenACC standard 2.6 (and beyond), so essentially any compiler that is OpenACC >= 2.6 compliant should be able to compile the OpenACC version of VASP. In practice this boils down to the current NVIDIA HPC SDK (and some of the previous PGI compilers).
Upon compilation you then specify for which compute capability (e.g., cc70, cc80) you want to compile the code; see the small example below.
So in that sense the GPU itself has no awareness of OpenACC, which is why you will not find it mentioned in the card specifications.
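
To make this concrete, here is a minimal OpenACC sketch in C (VASP itself is Fortran, but the directive mechanism is the same); the compile line targets the Titan RTX's compute capability and assumes the NVIDIA HPC SDK's nvc compiler:

Code: Select all

/* saxpy_acc.c: the pragma asks the compiler to offload the loop
 * to the GPU.  Build for a Titan RTX (compute capability 7.5):
 *   nvc -acc -gpu=cc75 saxpy_acc.c -o saxpy_acc */
#include <stdio.h>

#define N (1 << 20)
static float x[N], y[N];

int main(void) {
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Offloaded loop: copy x in, copy y in and back out. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]); /* expect 4.000000 */
    return 0;
}

The same idea applies when building VASP: the OpenACC and compute-capability information lives in the compiler flags, not on the card.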

As to which cards are suitable (or best suited) to run VASP: the statement that VASP 6 will run on any NVIDIA GPU is correct, but the performance depends strongly on the "fp64" capability of the GPU ("double precision", i.e., 64-bit floating-point performance). Here you will see large differences between cards, roughly speaking between "consumer" and "data center" GPUs.
Whereas a top-notch HPC GPU shows an fp32:fp64 performance ratio of 2:1, for a typical consumer card like the Titan RTX it is more like 32:1. BTW: you will find this information on the website you quoted in your post; it lists the Titan RTX at roughly 16.3 TFLOPS fp32 but only about 0.5 TFLOPS fp64.
Most AI applications do not need to do a lot of double-precision arithmetic, which probably explains their decision to go for the Titan RTX in this case.

As far as memory needs are concerned: it is hard to give a general statement on this topic. One thing that I can definitely say is that the wave functions need to be stored on the GPU in their entirety, so they need to fit into the GPU memory. The size of a WAVECAR file gives you a hint of how much memory this requires at the very least (multiply the size by 2, since the wave functions are stored on file in single precision, if I remember correctly).
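
As a back-of-the-envelope sketch (my own rough estimate, not an official formula): in memory each plane-wave coefficient is a double-precision complex number, i.e. 16 bytes, so something like the following gives a lower bound. The job parameters here are hypothetical placeholders:

Code: Select all

/* wf_mem.c: rough lower bound on GPU memory for the wave functions.
 * Assumes 16 bytes (complex double) per plane-wave coefficient;
 * WAVECAR on disk uses single precision, hence the factor of 2. */
#include <stdio.h>

int main(void) {
    /* Hypothetical job -- substitute your own values. */
    long nkpts = 10, nspin = 1, nbands = 512, npw = 200000;
    double bytes = (double)nkpts * nspin * nbands * npw * 16.0;
    printf("wave functions in memory: ~%.1f GB\n", bytes / 1e9);
    printf("corresponding WAVECAR:    ~%.1f GB\n", bytes / 2e9);
    return 0;
}

For your concrete question, such an estimate (or simply the size of an existing WAVECAR) is the place to start when judging how much of a 64 GB job actually has to fit on the cards.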
Unfortunately this is not the final answer: methods like GW have large storage demands beyond the wave functions, but these methods have not been ported to the GPU yet (this is currently in progress).

I hope this answers (some of) your questions!
