
Error while installing vasp6.2 on cluster

Posted: Fri Nov 08, 2024 6:01 am
by kousika_a

Hello all,

I am trying to install vasp.6.2.1 on a supercomputer. After loading the required modules (listed below), I get the following error: "nvfortran-Error-Please run makelocalrc to complete your installation"

The loaded modules are:
1) nvhpc_23.5/nvhpc/23.5
2) ohpc
3) intel/oneapi/compiler-rt/2021.2.0

I am also attaching my makefile.include. Please suggest what needs to be done.

Thanks
Kousika

# Precompiler options
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
              -DMPI -DMPI_BLOCK=8000 -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Duse_bse_te \
              -Dtbdyn \
              -Dfock_dblbuf

CPP = fpp -f_com=no -free -w0 $*$(FUFFIX) >$*$(SUFFIX) $(CPP_OPTIONS)

FC = mpiifort
FCL = mpiifort -mkl=sequential

FREE = -free -names lowercase

FFLAGS = -assume byterecl -w -xHOST
OFLAG = -xCORE-AVX2
OFLAG_IN = $(OFLAG)
DEBUG = -O0

MKLROOT = /opt/ohpc/pub/compiler/intel/2018_update4/compilers_and_libraries_2018.5.274/linux/mkl
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS =
LAPACK =
BLACS = -lmkl_blacs_intelmpi_lp64
SCALAPACK = $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

OBJECTS = fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d.o /opt/ohpc/pub/compiler/intel/2018_update4/compilers_and_libraries_2018.5.274/linux/mkl/interfaces/fftw3xf/libfftw3xf_intel.a

INCS =-I$(MKLROOT)/include/fftw

LLIBS = $(SCALAPACK) $(LAPACK) $(BLAS)

OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = /opt/ohpc/pub/compiler/intel/2018_update4/compilers_and_libraries/linux/bin/intel64/icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)

OBJECTS_LIB= linpack_double.o getshmem.o

# For the parser library
CXX_PARS = icpc
LLIBS += -lstdc++

# Normally no need to change this
SRCDIR = ../../src
BINDIR = ../../bin

#================================================
# GPU Stuff

CPP_GPU = -DCUDA_GPU -DRPROMU_CPROJ_OVERLAP -DUSE_PINNED_MEMORY -DCUFFT_MIN=28 -UscaLAPACK -Ufock_dblbuf

OBJECTS_GPU= fftmpiw.o fftmpi_map.o fft3dlib.o fftw3d_gpu.o fftmpiw_gpu.o

CC = mpiicc
CXX = mpiicpc
CFLAGS = -fPIC -DADD_ -Wall -qopenmp -DMAGMA_WITH_MKL -DMAGMA_SETAFFINITY -DGPUSHMEM=300 -DHAVE_CUBLAS

# Minimal requirement is CUDA >= 10.X. For "sm_80" you need CUDA >= 11.X.
CUDA_ROOT ?= /opt/ohpc/pub/cuda/cuda-11.5.0
NVCC := $(CUDA_ROOT)/bin/nvcc -ccbin=icc -allow-unsupported-compiler
CUDA_LIB := -L$(CUDA_ROOT)/lib64 -lnvToolsExt -lcudart -lcuda -lcufft -lcublas

GENCODE_ARCH := -gencode=arch=compute_60,code=\"sm_60,compute_60\" \
-gencode=arch=compute_70,code=\"sm_70,compute_70\" \
-gencode=arch=compute_80,code=\"sm_80,compute_80\"

## For all legacy Intel MPI versions (before 2021)
I_MPI_ROOT = /opt/ohpc/pub/compiler/intel/2018_update4/impi/2018.4.274
MPI_INC = $(I_MPI_ROOT)/intel64/include/

# Or when you are using the Intel oneAPI compiler suite
#MPI_INC = $(I_MPI_ROOT)/include/


Re: Error while installing vasp6.2 on cluster

Posted: Fri Nov 08, 2024 9:41 am
by henrique_miranda

That sounds like a problem with your installation of the NVIDIA compilers rather than with the VASP installation itself.
Are you able to compile a very simple Fortran program?


program hello
  ! This is a comment line; it is ignored by the compiler
  print *, 'Hello, World!'
end program hello
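To run this check (assuming the snippet is saved as hello.f90 and nvfortran is on your PATH after loading the nvhpc module), the compile-and-run step would look like:

```shell
# Compile the test program with the NVIDIA Fortran compiler
nvfortran hello.f90 -o hello

# Run it; it should print "Hello, World!"
./hello
```

If nvfortran already fails here with the makelocalrc message, the problem is confirmed to be in the compiler installation, not in VASP.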

Re: Error while installing vasp6.2 on cluster

Posted: Fri Nov 08, 2024 11:30 am
by kousika_a

Thank you for the reply.

I get the same error for this simple program as well.

Would installing the NVIDIA compilers in my own account and then using them to build VASP work?


Re: Error while installing vasp6.2 on cluster

Posted: Mon Nov 11, 2024 11:29 am
by henrique_miranda

If you get the same error with this example, then you have a problem with your installation of the NVIDIA compilers.
Perhaps it is just a matter of running makelocalrc to complete the installation, but that is highly dependent on your setup, so we cannot provide support for it.
If you are using an HPC facility, there is usually someone responsible for maintaining the toolchains whom you can contact for help.
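For reference, a typical makelocalrc invocation is sketched below. The install prefix here is only an assumption based on the nvhpc_23.5 module name; substitute the actual path on your cluster, and check `makelocalrc -help` for the options your SDK version supports.

```shell
# ASSUMED install prefix -- replace with the real path behind your nvhpc_23.5 module
# (the "module show nvhpc_23.5/nvhpc/23.5" command usually reveals it)
NVBIN=/opt/nvidia/hpc_sdk/Linux_x86_64/23.5/compilers/bin

# Regenerate the localrc configuration for this compiler installation.
# -x writes the file in place, which requires write access to $NVBIN;
# without that access, your system administrators need to run this step.
$NVBIN/makelocalrc -x $NVBIN
```

Note that on a shared cluster the compiler tree is normally read-only for users, which is why contacting the site administrators is usually the practical route.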