Known issues
Revision as of 17:59, 27 May 2024
Below we provide an incomplete list of known issues. Please check the description of each issue to see whether it has been fixed.
Color legend: Open | Resolved | Planned | Obsolete
Version fixed | Version first noticed | Date | Description |
---|---|---|---|
Open | <6 | 2024-05-27 |
Calculations with LMODELHF=.TRUE. crash if started without WAVECAR file in the directory: The crash occurs because of a division by the screening parameter that is zero during the first few iterations that are done with the functional from the POTCAR file. If a WAVECAR file is present, then these first few iterations are skipped. |
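Since the crash only occurs when no WAVECAR is present, a practical workaround (a sketch, not an officially documented fix) is to first run a plain DFT step that writes a WAVECAR, then restart with the model hybrid in the same directory, so the first few iterations with the POTCAR functional are skipped:

```
# step 1: plain DFT run with the functional from the POTCAR file, writes WAVECAR
LMODELHF = .FALSE.
LWAVE    = .TRUE.

# step 2: restart in the same directory; the first iterations are skipped
LMODELHF = .TRUE.
HFSCREEN = ...   # your screening parameter
```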
Will be fixed in upcoming version | <6 | 2024-05-13 |
Reading the file DYNMATFULL may lead to a crash in MPI-parallel calculations: If SCALEE ≠ 1, the file DYNMATFULL is read if present. This may lead to a crash in MPI-parallel calculations, in particular with the gfortran compiler. Thanks to Vyacheslav Bryantsev for the bug report. |
Will be fixed in upcoming version | 6.4.3 | 2024-04-10 |
Compilation error for GCC with ELPA support:
Compilation with ELPA support (Makefile.include#ELPA_(optional)) fails for the GNU Fortran compiler because the Fortran standard for c_loc was not strictly followed. Other compilers (e.g., NVIDIA's Fortran compiler) might not enforce the standard in this case and will produce a working binary.
Solution: Add the … Thanks to user rogeli_grima for the bug report! |
Open | 6.4.2 | 2024-04-10 |
AOCC >= 4.0 does not produce runnable code when compiling without OpenMP support: The AOCC compiler version >= 4.0 apparently uses a more aggressive optimization on a particular symmetry routine (SGRGEN) when compiling without OpenMP support. Thus, code produced using arch/makefile.include.aocc_ompi_aocl exits with: …
Solution: adapt your makefile.include by adding … Thanks to users jelle_lagerweij, huangjs, and jun_yin2 for the bug report and investigations. |
Open | 6.4.3 | 2024-04-03 |
-DnoAugXCmeta is broken: We no longer recommend compilation of VASP with this precompiler option since it negatively affects the results of SCAN and SCAN-like meta-GGA calculations. To make matters worse, this feature is broken in VASP.6.4.3. So definitely do not compile VASP.6.4.3 with -DnoAugXCmeta. |
Open | 6.4.2 | 2024-03-21 |
Wannier90 exclude_bands not supported for SCDM method: When using LSCDM together with LWANNIER90 or LWANNIER90_RUN, the use of exclude_bands in the Wannier90 input file is currently not supported. |
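For illustration, the unsupported combination looks like this (the tag values are hypothetical; `exclude_bands` is a standard Wannier90 keyword):

```
# INCAR
LWANNIER90 = .TRUE.
LSCDM      = .TRUE.

# wannier90.win — exclude_bands is not supported together with LSCDM
exclude_bands = 1-4
```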
6.4.3 | 6.4.2 | 2024-02-06 |
The combination of VCAIMAGES and … |
6.4.3 | 6.2.1 | 2023-10-19 |
Phonon calculations (…): … Thanks to barshab for the bug report. |
6.4.3 | 6.4.2 | 2023-09-20 |
Specific cases of SAXIS gave an unexpected quantization axis: For sx=0 and sy<0, VASP falsely assumes alpha=pi/2; it should correctly yield alpha=-pi/2. This error has probably existed for a long time, but this setting is rarely chosen, and since the treatment is consistent within a calculation, the results should not be affected much. |
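The correct angle follows from a quadrant-aware arctangent. A minimal Python sketch (not VASP code) of the intended behavior, assuming alpha is the azimuthal angle of the SAXIS vector (sx, sy, sz):

```python
import math

# SAXIS = (sx, sy, sz); alpha is the azimuthal angle of the quantization axis.
# For sx = 0 and sy < 0 the correct value is -pi/2, not +pi/2.
sx, sy = 0.0, -1.0
alpha = math.atan2(sy, sx)  # quadrant-aware arctangent
print(alpha)  # -> -1.5707963267948966
```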
6.4.3 | 6.4.2 | 2023-08-21 |
Restarting a calculation from vaspwave.h5 when the number of k points changed crashes with a bug message: This can happen, e.g., because ISYM is changed. VASP should behave the same as restarting from WAVECAR. |
6.4.3 | 6.4.0 | 2023-04-06 |
LOCPOT file for vasp_ncl is not written correctly: LVTOT=T for vasp_ncl should write the potential in the "density, magnetization" representation, i.e., the scalar potential (v0) and the magnetic field (Bx, By, Bz), to the LOCPOT file. However, VASP writes the potential in the (upup, updown, downup, downdown) representation as real numbers, which is incomplete. |
6.4.2 | 6.4.0 | 2023-05-31 |
Fast-mode predictions will crash together with finite differences (IBRION=5,6): At the end of the calculation the fast mode is supposed to deallocate important arrays using NSW. In the finite-differences method NSW is not used, and the fast mode can wrongly deallocate at an earlier stage. This results in an error when the code accesses the deallocated arrays. Until a patch is released we suggest two possible quick fixes: (1) Avoid explicit deallocations at the end of the program and let the compiler deallocate when the code runs out of scope; for that, remove lines 568, 569, 570, and 572 in the ml_ff_ff2.F file. (2) Avoid the fast-prediction mode: retrain the MLFF without support for the fast mode, i.e., use … Thanks to Soungminbae for the bug report. |
6.4.2 | 6.4.0 | 2023-05-17 |
Incorrect MLFF fast-mode predictions for some triclinic geometries:
Due to an error in the cell-list algorithm, the MLFF predictions (energy, forces, and stress tensor) in the fast-execution mode (…) can be incorrect for some triclinic geometries. (1) Avoid using the cell-list algorithm for neighbor-list builds (recommended): add … (2) Avoid the fast-prediction mode: retrain the MLFF without support for the fast mode, i.e., use … Thanks to Johan for a very detailed bug report. |
6.4.2 | 6.4.1 | 2023-05-15 |
Bugs in the interface to wannier90: …
Thanks to guyohad for the bug report. |
6.4.1 | 6.4.0 | 2023-03-07 |
Output of the memory estimate in machine-learning force fields is wrong for SVD refitting: The SVD algorithm (ML_IALGO_LINREG=3, 4) uses the design matrix and two helper arrays of the same size as the design matrix. The memory estimate does not account correctly for these two helper arrays: the entry "FMAT for basis" at the beginning of the ML_LOGFILE should be three times larger. The algorithm will be fixed so that it only requires two design-matrix-sized arrays instead of three, and the estimates will then report the correct values. |
6.4.1 | 6.4.0 | 2023-03-07 |
Bug in the sparsification routine for machine-learning force fields: This bug most severely affects calculations where the number of local reference configurations gets close to ML_MB. Setting ML_MB to a high value avoids this bug in most cases (some cases remain, especially when a small number of local reference configurations is picked and the structure contains many atoms per type, or when ML_MCONF_NEW is set to a high value). This bug can especially affect refitting runs, resulting in no ML_FFN file. |
6.4.1 | 6.4.0 | 2023-03-07 |
ML_ISTART=2 on a subset of element types is broken for the fast force field: When the force field is trained for multiple element types but production runs (ML_ISTART=2) are carried out for a subset of those types, the code most likely crashes. This bug will be fixed urgently. |
6.4.1 | 6.2.0 | 2023-02-20 |
INCAR reader issues: … |
6.4.1 | 6.4.0 | 2023-02-17 |
Corrupt ML_FFN files on some file systems: Insufficient protection against concurrent write statements may lead to corrupt ML_FFN files on some file systems. The broken files will often remain unnoticed until they are used in a prediction-only run with ML_ISTART=2. Then, VASP is likely to exit with some misleading error message about incorrect types present in the ML_FF file. As a workaround it may help to refit starting from the last ML_AB file with ML_MODE=refit which may generate a working ML_FFN file (this is anyway highly recommended to gain access to the fast execution mode in ML_ISTART=2). Alternatively, there is a patch for VASP.6.4.0 available (see attachment to this forum post). Thanks a lot to xiliang_lian and szurlle for reporting this and testing the patch. |
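The refit workaround mentioned above can be sketched as follows (keep the last ML_AB file in the run directory; a minimal INCAR sketch, not the complete input):

```
# INCAR for a refit-only run: reads ML_AB, writes a fresh ML_FFN
ML_LMLFF = .TRUE.
ML_MODE  = refit
```

Besides repairing the corrupt file, this also enables the fast execution mode in subsequent ML_ISTART=2 runs, as noted above.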
6.4.0 | 6.3.2 | 2023-01-18 |
makefile.include template does not work for AOCC 4.0.0:
The flang preprocessor explicitly requires specifying that the code is in free format. |
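A sketch of the kind of change needed in makefile.include; the exact flag and its placement are assumptions here (`-Mfreeform` is the classic-flang spelling of the free-format option):

```
# makefile.include fragment (sketch; exact flag/placement may differ)
# tell flang that the Fortran sources are in free format
FFLAGS += -Mfreeform
```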
6.4.0 | 6.1.0 | 2022-11-23 |
Memory leak in MD in OpenMP version compiled with AOCC and NV:
This problem originates from the … |
6.3.2 | 5.4.4 | 2021-11-12 |
Ionic contributions to the macroscopic polarization with atoms at the periodic boundary: Removed a section of code from POINT_CHARGE_DIPOL that adds a copy of the atom when it is at the periodic boundary. This can lead to a different value of "Ionic dipole moment: p[ion]" being reported in the OUTCAR with respect to previous versions of VASP. This result, although numerically different, is still correct, since the polarization is defined up to integer multiples of the polarization quantum. Thanks to Chengcheng Xiao for the bug report. |
6.3.2 | 6.3.1 | 2022-05-11 |
ML_ISTART=1 fails for some scenarios: Due to a bug in the rearrangement of the structures found on the ML_AB file, restarting the training of a force field by means of ML_ISTART=1 fails in some cases. N.B.: this problem only occurs in a scenario where one repeatedly restarts the training, and returns to training for a structure that was trained on before (that means exactly same element types and number of atoms per element), but not immediately before. Example: one starts training a force field for structure A, follows this by a continuation run to train for structure B, and then restarts a second time returning to training for structure A again. |
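The failing scenario described above, written out as a sequence of runs (the structure names A and B are illustrative):

```
# run 1: fresh training on structure A      -> writes ML_AB, ML_FFN
#        INCAR: ML_LMLFF = .TRUE. ; ML_ISTART = 0
# run 2: continue training on structure B   -> ML_ISTART = 1 (reads ML_AB)
# run 3: continue training on structure A   -> ML_ISTART = 1
#        returning to A (same element types and atom counts as run 1,
#        but not the immediately preceding run) triggers the bug in 6.3.1
```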
6.3.1 | 6.2.0 | 2022-05-05 |
Treatment of the Coulomb divergence in hybrid-functional band-structure calculations is only correct for PBE0: The Coulomb divergence correction for states at and near the Γ-point in hybrid-functional band-structure calculations (see HFRCUT) was only correctly implemented for PBE0 and HFRCUT=-1. Note: HSE band-structure calculations are not expected to be (strongly) affected because this hybrid functional only includes “short-range” Fock exchange. |
6.3.1 | 6.2.0 | 2022-03-14 |
Bug in interface with Wannier90 for non-collinear spin calculations:
The spin axis for non-collinear spin calculations is not correctly read from the wannier90 input file. This is because this line in the … |
6.3.1 | 6.3.0 | 2022-02-04 |
Incompatibility with Fujitsu compiler:
Fujitsu's Fortran compiler does not support overloaded internal subroutines. A simple workaround is to compile without machine-learning force-field capabilities: comment out the macro definition of … |
6.3.0 | 6.2.0 | 2021-05-28 |
Bug in the interface with Wannier90 when writing UNK files with exclude_bands present: The UNK files generated by VASP include all bands, whereas the bands specified by `exclude_bands` should be excluded. The fix is to pass the `exclude_bands` array to `get_wave_functions` in mlwf.F. Thanks to Chengcheng Xiao for reporting this bug. |
6.2.0 | 6.1.0 | 2022-08-29 |
Inconsistent energy for fixed electron occupancies:
Rickard Armiento pointed out that the HF total energy for fixed electron occupancies was inconsistent when compared to 5.4.4 or older versions.
This bug was introduced in 6.1.0 in order to support IALGO=3 in combination with ISMEAR=-2 (for … |
>=6 | <6 | 2023-10-31 |
For LORBIT >= 11 and ISYM = 2, the partial charge densities are not correctly symmetrized: This can result in different charges for symmetrically equivalent partial charge densities. For older versions of VASP, we recommend a two-step procedure: …
To avoid unnecessarily large WAVECAR files, we recommend setting LWAVE=.FALSE. in step 2. |
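A plausible sketch of such a two-step procedure (an assumption for illustration, not the wiki's exact recipe): converge the wave functions with symmetry first, then recompute the projections without symmetrization while reading the WAVECAR:

```
# step 1: converge the wave functions and write WAVECAR
ISYM  = 2
LWAVE = .TRUE.

# step 2: re-run in the same directory, reading WAVECAR
ISYM   = -1      # switch off symmetrization (assumption)
LORBIT = 11
LWAVE  = .FALSE. # avoid an unnecessarily large WAVECAR, as recommended above
```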