Makefile.include
Writing a makefile.include file from scratch is not easy, so we suggest taking one of the archetypical files that most closely resembles your system as a starting point. It is necessary to customize it anyway, e.g., to set the appropriate paths. Optionally, you can enable additional features by setting precompiler flags or linking VASP to other libraries. For instance, we strongly recommend enabling HDF5 support.
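As a quick orientation, the following sketch lists the templates shipped with the source and copies one into place. It assumes the templates live in the arch/ subdirectory of the unpacked VASP source tree and uses the GNU MPI + OpenMP template purely as an example; check your distribution for the actual layout.
 # List the makefile.include templates shipped with the source
 # (assumed location: the arch/ subdirectory of the source tree).
 cd /path/to/vasp.6.x.y
 ls arch/makefile.include.*
 # Copy the template that matches your toolchain, e.g. GNU MPI + OpenMP:
 cp arch/makefile.include.gnu_omp makefile.include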
Archetypical files
The templates contain information such as precompiler options, compiler options, and how to link libraries. Choose a template from the list below based on the compiler, the parallelization, etc., and mind the description; a short sketch for checking which toolchains are available on your machine follows the lists.
Intel Composer suite and oneAPI Base + HPC toolkits for CPUs
- makefile.include.intel: Parallelized using MPI.
- makefile.include.intel_omp: Parallelized using MPI + OpenMP.
- makefile.include.intel_ompi_mkl_omp: Parallelized using OpenMPI + OpenMP, using MKL.
- makefile.include.intel_serial: Not parallelized, i.e., not suitable for production.
GNU compilers for CPUs
- makefile.include.gnu: Parallelized using MPI, relying on fully open-source software.
- makefile.include.gnu_omp: Parallelized using MPI + OpenMP.
- makefile.include.gnu_ompi_mkl_omp: Parallelized using OpenMPI + OpenMP, using MKL.
- makefile.include.gnu_ompi_aocl: Parallelized using OpenMPI + AMD Optimizing CPU Libraries (AOCL).
- makefile.include.gnu_ompi_aocl_omp: Parallelized using OpenMPI + AOCL using OpenMP.
NVIDIA HPC-SDK
- makefile.include.nvhpc: CPU version parallelized using MPI.
- makefile.include.nvhpc_omp: CPU version parallelized using MPI + OpenMP.
- makefile.include.nvhpc_ompi_mkl_omp: CPU version parallelized using OpenMPI + OpenMP, using MKL.
- makefile.include.nvhpc_acc: Ported to GPUs using OpenACC and parallelized using MPI.
- makefile.include.nvhpc_omp_acc: Ported to GPUs using OpenACC and parallelized using MPI + OpenMP.
- makefile.include.nvhpc_ompi_mkl_omp_acc: Ported to GPUs using OpenACC and parallelized using OpenMPI + OpenMP, using MKL.
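As a quick way to decide between the Intel, GNU, and NVIDIA templates, the hedged sketch below queries the compilers and MPI wrappers typically installed with each toolchain; on clusters the exact command names may differ (environment modules, vendor wrappers), so treat it as a starting point only.
 # Which Fortran compilers and MPI wrappers are available?
 which ifort mpiifort     # Intel Composer suite / oneAPI
 which gfortran mpif90    # GNU compilers + OpenMPI (or another MPI)
 which nvfortran          # NVIDIA HPC-SDK
 mpif90 --version         # reports which compiler the MPI wrapper drives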
Others
An advanced system administrator might benefit from a more detailed discussion about the precompiler options, compiler options, and how to link libraries.
Customize
Open the selected archetypical template and add the required information as explained in the comments toward the end of the file. Then, add any optional features as listed below. For more details, see the list of precompiler options.
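A minimal sketch of the overall workflow is given below, assuming the chosen template has already been copied to makefile.include in the root of the VASP source tree. The build targets std, gam, and ncl are the usual VASP executables; check the comments in your template and the main makefile for the targets supported by your release.
 # Open the copied template in your editor of choice and adjust compiler
 # names, library paths, and optional features (see the sections below).
 $EDITOR makefile.include
 # Build the standard, gamma-only, and non-collinear executables.
 make std gam ncl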
HDF5 support (strongly recommended)
This is essential for reading and writing HDF5 files, such as vaspout.h5. The HDF5 library is available for download on the HDF5 official website. To activate this feature, set the following:
 CPP_OPTIONS += -DVASP_HDF5
 HDF5_ROOT   ?= /path/to/your/hdf5/installation
 LLIBS       += -L$(HDF5_ROOT)/lib -lhdf5_fortran
 INCS        += -I$(HDF5_ROOT)/include
Available for VASP >= 6.2.0.
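If HDF5 is already installed on your system (for instance via an environment module), a hedged way to locate a suitable HDF5_ROOT is to query the Fortran compiler wrapper shipped with HDF5; this assumes the h5fc wrapper is in your PATH and that a module named hdf5 exists, which may not be the case on your machine.
 # The -I/-L paths in the output reveal the installation prefix (HDF5_ROOT).
 h5fc -show
 # Alternatively, with environment modules (module name is an assumption):
 module show hdf5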
Wannier90 (optional)
Download Wannier90 and compile libwannier.a.
Important: In the case of Wannier90 3.x, you should compile a serial version by removing COMMS=mpi from the make.inc file of Wannier90.
Then, execute make lib to build the Wannier90 library. To activate this feature, set the following:
 CPP_OPTIONS    += -DVASP2WANNIER90
 WANNIER90_ROOT ?= /path/to/your/wannier90/installation
 LLIBS          += -L$(WANNIER90_ROOT)/lib -lwannier
Mind: VASP versions <= 6.1.x are compatible with Wannier90 <= 1.2. To interface VASP 6.1.x with Wannier90 2.x, set -DVASP2WANNIER90v2 instead. As of VASP 6.2.x, only Wannier90 2.x and 3.x are supported.
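A hedged sketch of building a serial libwannier.a for Wannier90 3.x follows; it assumes a GNU toolchain and that the distribution ships example configuration files under config/ (the version number 3.1.0 and the file name make.inc.gfort are illustrative, check your download).
 # Unpack Wannier90 and pick a configuration file for your compiler.
 tar xzf wannier90-3.1.0.tar.gz && cd wannier90-3.1.0
 cp config/make.inc.gfort make.inc
 # For a serial build, make sure COMMS=mpi is NOT set in make.inc
 # (delete or comment out that line), then build the library:
 make lib
 # Point the -L path in LLIBS to the directory containing libwannier.a.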
Libbeef (optional)
The library of BEEF van der Waals functionals is available for download on GitHub. Then, set the following:
 CPP_OPTIONS  += -Dlibbeef
 LIBBEEF_ROOT ?= /path/to/your/libbeef/installation
 LLIBS        += -L$(LIBBEEF_ROOT)/lib -lbeef
Libxc (optional)
Install Libxc >= 5.1.7 from the source.
Important: Regarding meta-GGA functionals and the kinetic-energy density (see LTBOUNDLIBXC), the following patch must be applied to Libxc version 5.1.7 before compiling it.
In the file libxc-5.1.7/src/work_mgga.c, the following 8 lines should be deleted:
 if(p->info->family != XC_KINETIC)
   my_sigma[0] = m_min(my_sigma[0], 8.0*my_rho[0]*my_tau[0]);
 if(p->info->family != XC_KINETIC)
   my_sigma[2] = m_min(my_sigma[2], 8.0*my_rho[1]*my_tau[1]);
 if(p->info->family != XC_KINETIC)
   my_sigma[0] = m_min(my_sigma[0], 8.0*my_rho[0]*my_tau[0]);
 if(p->info->family != XC_KINETIC)
   my_sigma[2] = m_min(my_sigma[2], 8.0*my_rho[1]*my_tau[1]);
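To locate these lines before deleting them by hand, a simple grep over the file named above can help:
 # Print the line numbers of the clamping statements to be removed.
 grep -n 'm_min(my_sigma' libxc-5.1.7/src/work_mgga.c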
Then, add the following:
 CPP_OPTIONS += -DUSELIBXC
 LIBXC_ROOT  ?= /path/to/your/libxc/installation
 LLIBS       += -L$(LIBXC_ROOT)/lib -lxcf03 -lxc
 INCS        += -I$(LIBXC_ROOT)/include
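For building Libxc itself, the release tarballs ship an autotools build; the sketch below is a hedged example of a typical installation and assumes that the Fortran 2003 interface (libxcf03) is built when a Fortran compiler is provided. Check after installation that libxcf03 and the Fortran module files are present, and consult the Libxc manual if your version uses CMake instead.
 # Unpack, apply the work_mgga.c patch described above, then configure and build.
 tar xzf libxc-5.1.7.tar.gz && cd libxc-5.1.7
 ./configure --prefix=/path/to/your/libxc/installation CC=gcc FC=gfortran
 make && make install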
DFTD4 (optional)
To include the DFTD4 van der Waals (dispersion) correction, install the DFTD4 library from the source on GitHub. Then, add the following:
 CPP_OPTIONS += -DDFTD4
 DFTD4_ROOT  ?= /path/to/your/dftd4/installation
 LLIBS       += -L$(DFTD4_ROOT)/build -ldftd4
 INCS        += -I$(DFTD4_ROOT)/libdftd4.a.p
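The build/ and libdftd4.a.p paths above correspond to a meson build of the upstream dftd4 repository. A hedged sketch of such a static build is shown below; the git URL and options reflect the upstream meson setup, so adjust them to the release and build system you actually use.
 # Fetch and build dftd4 as a static library with meson + ninja.
 git clone https://github.com/dftd4/dftd4.git && cd dftd4
 meson setup build --buildtype=release --default-library=static
 ninja -C build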