
modelBSE

Posted: Tue Aug 31, 2021 2:49 pm
by laurent_pedesseau1
Dear VASP Admin, VASP colleagues,

We are working with the very nice modelBSE method. I have found a technical bottleneck in the parallelization: this method only works with NPAR = total number of cores.

This is not practical when increasing the number of k-points, which hugely increases the memory per core. In principle, one could play with KPAR, NCORE, and NPAR to decrease the memory per node.

So my question: is there a way of decreasing the memory per node, other than using KPAR, NCORE, etc.?

Sincerely,

Laurent Pedesseau

Re: modelBSE

Posted: Mon Sep 06, 2021 8:54 am
by ferenc_karsai
The large arrays for BSE (also modelBSE) are distributed by scaLAPACK. So increasing the number of cores in the calculation should bring down the amount of memory needed per core.
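To make this concrete, below is a back-of-the-envelope sketch (plain Python, not VASP code; the transition-rank formula nkpts * nocc * nvirt and all the numbers are illustrative assumptions) of how the per-core share of a dense, block-cyclically distributed BSE matrix shrinks as cores are added:

    # Rough, illustrative estimate of the per-core memory for a dense
    # BSE Hamiltonian distributed block-cyclically over all cores.
    # The matrix dimension is the number of transitions per spin:
    # nkpts * nocc * nvirt (all values below are made up).

    def bse_memory_per_core_gb(nkpts, nocc, nvirt, ncores):
        rank = nkpts * nocc * nvirt            # matrix dimension
        total_bytes = rank**2 * 16             # complex*16 elements
        return total_bytes / ncores / 1024**3  # GiB per core

    # Doubling the k-points quadruples the matrix (the rank doubles),
    # while doubling the cores halves each core's share.
    for ncores in (64, 128, 256):
        print(ncores, "cores:",
              bse_memory_per_core_gb(512, 8, 16, ncores), "GiB/core")

With these made-up numbers the full matrix is 64 GiB, so 64 cores leave about 1 GiB per core, and each doubling of the core count halves that share.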

Re: modelBSE

Posted: Mon Sep 06, 2021 9:15 am
by laurent_pedesseau1
In the limit of having NPAR = total number of cores and NBANDS/NPAR... However, when we use modelBSE we in principle need a huge number of k-points, which increases the memory much more than NBANDS does.

Many thanks for your reply.

Re: modelBSE

Posted: Mon Sep 06, 2021 9:20 am
by ferenc_karsai
The block-cyclic distribution via scaLAPACK is transparent with respect to the NPAR/NCORE values. The important line is "BSE (scaLAPACK) attempting allocation of XXX Gbyte". This value scales down almost linearly as the number of cores increases.
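As a quick check, one could extract that line from the standard output of runs with different core counts and confirm the near-linear drop. A minimal sketch (the file name vasp.out is a placeholder for wherever stdout was redirected, and the numeric format of the message is an assumption):

    import re

    # Placeholder path: point this at the file holding VASP's stdout.
    PATTERN = re.compile(
        r"BSE \(scaLAPACK\) attempting allocation of\s*([\d.]+)\s*Gbyte")

    with open("vasp.out") as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                # Compare this value across runs with different core
                # counts; per the post above it should drop almost
                # linearly as cores are added.
                print("BSE allocation:", m.group(1), "Gbyte")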