"Higher level section"_Build.html - "LAMMPS WWW Site"_lws - "LAMMPS
Documentation"_ld - "LAMMPS Commands"_lc :c
:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Commands_all.html)
:line
Packages with extra build options :h3
When building with some packages, additional steps may be required
beyond:
-D PKG_NAME=yes # CMake
make yes-name # make :pre
as described on the "Build_package"_Build_package.html doc page.
For a CMake build there may be additional optional or required
variables to set. For a build with make, a library provided under the
lammps/lib directory may need to be built first, or an external
library may need to exist on your system or be downloaded and built;
in the latter case you may need to tell LAMMPS where the library is
found on your system.
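For example, a CMake configuration that enables one of these packages
and sets one of its extra variables might look like the following
sketch. It assumes you run CMake from a build directory placed next
to the lammps/cmake folder; the package and variable shown are just
the GPU settings documented below.

cmake ../cmake -D PKG_GPU=yes -D GPU_API=cuda    # example only; pick the package and settings you need :pre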
This is the list of packages that may require additional steps.
"GPU"_#gpu,
"KIM"_#kim,
"KOKKOS"_#kokkos,
"LATTE"_#latte,
"MEAM"_#meam,
"MSCG"_#mscg,
"OPT"_#opt,
"POEMS"_#poems,
"PYTHON"_#python,
"REAX"_#reax,
"VORONOI"_#voronoi,
"USER-ATC"_#user-atc,
"USER-AWPMD"_#user-awpmd,
"USER-COLVARS"_#user-colvars,
"USER-H5MD"_#user-h5md,
"USER-INTEL"_#user-intel,
"USER-MOLFILE"_#user-molfile,
"USER-NETCDF"_#user-netcdf,
"USER-OMP"_#user-omp,
"USER-QMMM"_#user-qmmm,
"USER-QUIP"_#user-quip,
"USER-SMD"_#user-smd,
"USER-VTK"_#user-vtk :tb(c=6,ea=c)
:line
COMPRESS package :h4,link(compress)

To build with this package you must have the zlib compression library
available on your system.
[CMake build]:
If CMake cannot find the library, you can set these variables:
-D ZLIB_INCLUDE_DIR=path # path to zlib.h header file
-D ZLIB_LIBRARIES=path # path to libz.a (.so) file :pre
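As a sketch, assuming the zlib header and library were installed
under /usr/local (adjust these paths for your system), the CMake
configuration might look like:

cmake ../cmake -D PKG_COMPRESS=yes -D ZLIB_INCLUDE_DIR=/usr/local/include -D ZLIB_LIBRARIES=/usr/local/lib/libz.so    # example paths :pre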
[Traditional make]:
If make cannot find the library, you can edit the
lib/compress/Makefile.lammps file to specify the paths and library
name.
:line
GPU package :h4,link(gpu)

To build with this package, you must choose options for precision and
which GPU hardware to build for.
[CMake build]:
-D GPU_API=value # value = opencl (default) or cuda
-D GPU_PREC=value # precision setting
# value = double or mixed (default) or single
-D OCL_TUNE=value # hardware choice for GPU_API=opencl
# generic (default) or intel (Intel CPU) or fermi, kepler, cypress (NVIDIA)
-D GPU_ARCH=value # hardware choice for GPU_API=cuda
# value = sm_XX, see below
# default is Cuda-compiler dependent, but typically sm_20
-D CUDPP_OPT=value # optimization setting for GPU_API=cuda
# enables CUDA Performance Primitives Optimizations
# yes (default) or no :pre
GPU_ARCH settings for different GPU hardware are as follows:
sm_20 for Fermi (C2050/C2070, deprecated as of CUDA 8.0) or GeForce GTX 580 or similar
sm_30 for Kepler (K10)
sm_35 for Kepler (K40) or GeForce GTX Titan or similar
sm_37 for Kepler (dual K80)
sm_50 for Maxwell
sm_60 for Pascal (P100)
sm_70 for Volta :ul
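Putting these settings together, a CMake configuration for, say, a
Pascal P100 card with mixed precision might look like this sketch
(the ../cmake build-directory layout and the chosen values are
assumptions to adapt to your hardware):

cmake ../cmake -D PKG_GPU=yes -D GPU_API=cuda -D GPU_ARCH=sm_60 -D GPU_PREC=mixed    # example values :pre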
[Traditional make]:

Before building LAMMPS, you must build the GPU library in lib/gpu.
You can do this manually if you prefer; follow the instructions in
lib/gpu/README. Note that the GPU library uses MPI calls, so you must
use the same MPI library (or the STUBS library) settings as the main
LAMMPS code. This also applies to the -DLAMMPS_BIGBIG,
-DLAMMPS_SMALLBIG, or -DLAMMPS_SMALLSMALL settings in whichever
Makefile you use.
You can also build the library in one step from the lammps/src dir,
using a command like these, which simply invoke the lib/gpu/Install.py
script with the specified args:
make lib-gpu # print help message
make lib-gpu args="-b" # build GPU library with default Makefile.linux
make lib-gpu args="-m xk7 -p single -o xk7.single" # create new Makefile.xk7.single, altered for single-precision
make lib-gpu args="-m mpi -a sm_60 -p mixed -b" # build GPU library with mixed precision and P100 using other settings in Makefile.mpi :pre
Note that this procedure starts with a Makefile.machine in lib/gpu, as
specified by the "-m" switch. For your convenience, machine makefiles
for "mpi" and "serial" are provided, which have the same settings as
the corresponding machine makefiles in the main LAMMPS source
folder. In addition you can alter 4 important settings in the
Makefile.machine you start from via the corresponding -h, -a, -p, -e
switches (as in the examples above), and also save a copy of the new
Makefile if desired:
CUDA_HOME = where NVIDIA CUDA software is installed on your system
CUDA_ARCH = sm_XX, what GPU hardware you have, same as CMake GPU_ARCH above
CUDA_PRECISION = precision (double, mixed, single)
EXTRAMAKE = which Makefile.lammps.* file to copy to Makefile.lammps :ul
If the library build is successful, 3 files should be created:
lib/gpu/libgpu.a, lib/gpu/nvc_get_devices, and
lib/gpu/Makefile.lammps. The latter has settings that enable LAMMPS
to link with CUDA libraries. If the settings in Makefile.lammps for
your machine are not correct, the LAMMPS build will fail, and
lib/gpu/Makefile.lammps may need to be edited.
NOTE: If you re-build the GPU library in lib/gpu, you should always
un-install the GPU package in lammps/src, then re-install it and
re-build LAMMPS. This is because the compilation of files in the GPU
package uses the library settings from the lib/gpu/Makefile.machine
used to build the GPU library.
:line

KIM package :h4,link(kim)

To build with this package, the KIM library must be downloaded and
built on your system. It must include the KIM models that you want to
use with LAMMPS.
Note that in LAMMPS lingo, a KIM model driver is a pair style
(e.g. EAM or Tersoff). A KIM model is a pair style for a particular
element or alloy and set of parameters, e.g. EAM for Cu with a
specific EAM potential file. Also note that building the KIM API
library with all its models may take around 30 min. Of course, you
only need to do that once.
See the list of KIM model drivers here:
https://openkim.org/kim-items/model-drivers/alphabetical
See the list of all KIM models here:
https://openkim.org/kim-items/models/by-model-drivers
See the list of example KIM models included by default on the "What
is in the KIM API source package?" page at https://openkim.org/kim-api.
[CMake build]:
-D DOWNLOAD_KIM=value # download OpenKIM API v1 for build, value = no (default) or yes
-D KIM_LIBRARY=path # path to KIM shared library (only needed if a custom location)
-D KIM_INCLUDE_DIR=path # path to KIM include directory (only needed if a custom location) :pre
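For example, a configuration that lets CMake download and build the
KIM API for you might look like this sketch (assuming a build
directory next to the lammps/cmake folder):

cmake ../cmake -D PKG_KIM=yes -D DOWNLOAD_KIM=yes :pre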
[Traditional make]:

You can download and build the KIM library manually if you prefer;
follow the instructions in lib/kim/README. You can also do it in one
step from the lammps/src dir, using a command like these, which simply
invoke the lib/kim/Install.py script with the specified args.
make lib-kim # print help message
make lib-kim args="-b " # (re-)install KIM API lib with only example models
make lib-kim args="-b -a Glue_Ercolessi_Adams_Al__MO_324507536345_001" # ditto plus one model
make lib-kim args="-b -a everything" # install KIM API lib with all models
make lib-kim args="-n -a EAM_Dynamo_Ackland_W__MO_141627196590_002" # add one model or model driver
make lib-kim args="-p /usr/local/kim-api" # use an existing KIM API installation at the provided location
make lib-kim args="-p /usr/local/kim-api -a EAM_Dynamo_Ackland_W__MO_141627196590_002" # ditto but add one model or driver :pre
:line
KOKKOS package :h4,link(kokkos)

To build with this package, you must choose which hardware you want to
build for, either CPUs (multi-threading via OpenMP) or KNLs (OpenMP)
or GPUs (NVIDIA Cuda).
For a CMake or make build, these are the possible choices for the
KOKKOS_ARCH settings described below. Note that for CMake, these are
really Kokkos variables, not LAMMPS variables. Hence you must use
case-sensitive values, e.g. BDW, not bdw.
ARMv80 = ARMv8.0 Compatible CPU
ARMv81 = ARMv8.1 Compatible CPU
ARMv8-ThunderX = ARMv8 Cavium ThunderX CPU
BGQ = IBM Blue Gene/Q CPUs
Power8 = IBM POWER8 CPUs
Power9 = IBM POWER9 CPUs
SNB = Intel Sandy/Ivy Bridge CPUs
HSW = Intel Haswell CPUs
BDW = Intel Broadwell Xeon E-class CPUs
SKX = Intel Sky Lake Xeon E-class HPC CPUs (AVX512)
KNC = Intel Knights Corner Xeon Phi
KNL = Intel Knights Landing Xeon Phi
Kepler30 = NVIDIA Kepler generation CC 3.0
Kepler32 = NVIDIA Kepler generation CC 3.2
Kepler35 = NVIDIA Kepler generation CC 3.5
Kepler37 = NVIDIA Kepler generation CC 3.7
Maxwell50 = NVIDIA Maxwell generation CC 5.0
Maxwell52 = NVIDIA Maxwell generation CC 5.2
Maxwell53 = NVIDIA Maxwell generation CC 5.3
Pascal60 = NVIDIA Pascal generation CC 6.0
Pascal61 = NVIDIA Pascal generation CC 6.1 :ul
[CMake build]:

For multicore CPUs using OpenMP, set these 2 variables:
-D KOKKOS_ARCH=archCPU # archCPU = CPU from list above
-D KOKKOS_ENABLE_OPENMP=yes :pre
For Intel KNLs using OpenMP, set these 2 variables:
-D KOKKOS_ARCH=KNL
-D KOKKOS_ENABLE_OPENMP=yes :pre
For NVIDIA GPUs using CUDA, set these 4 variables:
-D KOKKOS_ARCH="archCPU;archGPU" # archCPU = CPU from list above that is hosting the GPU
# archGPU = GPU from list above
-D KOKKOS_ENABLE_CUDA=yes
-D KOKKOS_ENABLE_OPENMP=yes
-D CMAKE_CXX_COMPILER=wrapper # wrapper = full path to Cuda nvcc wrapper :pre
The wrapper value is the Cuda nvcc compiler wrapper provided in the
Kokkos library: lib/kokkos/bin/nvcc_wrapper. The setting should
include the full path name to the wrapper, e.g.
-D CMAKE_CXX_COMPILER=/home/username/lammps/lib/kokkos/bin/nvcc_wrapper :pre
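As a complete sketch, a GPU-enabled KOKKOS configuration could then
look as follows (the host CPU, GPU architecture, and home directory
path are assumptions to adapt to your system):

cmake ../cmake -D PKG_KOKKOS=yes -D KOKKOS_ARCH="HSW;Pascal60" -D KOKKOS_ENABLE_CUDA=yes -D KOKKOS_ENABLE_OPENMP=yes -D CMAKE_CXX_COMPILER=/home/username/lammps/lib/kokkos/bin/nvcc_wrapper    # example values :pre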
[Traditional make]:

Choose which hardware to support in Makefile.machine via
KOKKOS_DEVICES and KOKKOS_ARCH settings. See the
src/MAKE/OPTIONS/Makefile.kokkos* files for examples.
For multicore CPUs using OpenMP:
KOKKOS_DEVICES = OpenMP
KOKKOS_ARCH = archCPU # archCPU = CPU from list above :pre
For Intel KNLs using OpenMP:
KOKKOS_DEVICES = OpenMP
KOKKOS_ARCH = KNL :pre
For NVIDIA GPUs using CUDA:
KOKKOS_DEVICES = Cuda
KOKKOS_ARCH = archCPU,archGPU # archCPU = CPU from list above that is hosting the GPU
# archGPU = GPU from list above :pre
For GPUs, you also need these 2 lines in your Makefile.machine before
the CC line is defined, in this case for use with OpenMPI mpicxx. The
2 lines define an nvcc wrapper compiler, which will use nvcc for
compiling CUDA files and use a C++ compiler for non-Kokkos, non-CUDA
files.
KOKKOS_ABSOLUTE_PATH = $(shell cd $(KOKKOS_PATH); pwd)
export OMPI_CXX = $(KOKKOS_ABSOLUTE_PATH)/config/nvcc_wrapper
CC = mpicxx :pre
:line
LATTE package :h4,link(latte)

To build with this package, you must download and build the LATTE
library.
[CMake build]:
-D DOWNLOAD_LATTE=value # download LATTE for build, value = no (default) or yes
-D LATTE_LIBRARY=path # path to LATTE shared library (only needed if a custom location) :pre
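For example, a configuration that downloads and builds LATTE as part
of the CMake build might look like this sketch (assuming a build
directory next to the lammps/cmake folder):

cmake ../cmake -D PKG_LATTE=yes -D DOWNLOAD_LATTE=yes :pre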
[Traditional make]:

You can download and build the LATTE library manually if you prefer;
follow the instructions in lib/latte/README. You can also do it in
one step from the lammps/src dir, using a command like these, which
simply invokes the lib/latte/Install.py script with the specified
args:
make lib-latte # print help message
make lib-latte args="-b" # download and build in lib/latte/LATTE-master
make lib-latte args="-p $HOME/latte" # use existing LATTE installation in $HOME/latte
make lib-latte args="-b -m gfortran" # download and build in lib/latte and
# copy Makefile.lammps.gfortran to Makefile.lammps :pre
Note that 3 symbolic (soft) links, "includelink" and "liblink" and
"filelink.o", are created in lib/latte to point into the LATTE home
dir. When LAMMPS itself is built it will use these links. You should
also check that the Makefile.lammps file you create is appropriate for
the compiler you use on your system to build LATTE.
:line
MEAM package :h4,link(meam)

NOTE: the use of the MEAM package is discouraged, as it has been
superseded by the USER-MEAMC package, which is a direct translation of
the Fortran code in the MEAM library to C++. The code in USER-MEAMC
should be functionally equivalent to the MEAM package, fully supports
use of "pair_style hybrid"_pair_hybrid.html (the MEAM packaged doesn
not), and has optimizations that make it significantly faster than the
MEAM package.
No additional settings are needed besides "-D PKG_MEAM=yes".
Before building LAMMPS, you must build the MEAM library in lib/meam.
You can build the MEAM library manually if you prefer; follow the
instructions in lib/meam/README. You can also do it in one step from
the lammps/src dir, using a command like these, which simply invoke
the lib/meam/Install.py script with the specified args:
make lib-meam # print help message
make lib-meam args="-m mpi" # build with default Fortran compiler compatible with your MPI library
make lib-meam args="-m serial" # build with compiler compatible with "make serial" (GNU Fortran)
make lib-meam args="-m ifort" # build with Intel Fortran compiler using Makefile.ifort :pre
The build should produce two files: lib/meam/libmeam.a and
lib/meam/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to link C++ (LAMMPS) with
Fortran (MEAM library). Typically the two compilers used for LAMMPS
and the MEAM library need to be consistent (e.g. both Intel or both
GNU compilers). If necessary, you can edit/create a new
lib/meam/Makefile.machine file for your system, which should define an
EXTRAMAKE variable to specify a corresponding Makefile.lammps.machine
file.
:line
MSCG package :h4,link(mscg)

To build with this package, you must download and build the MS-CG
library. Building the MS-CG library and using it from LAMMPS requires
a C++11 compatible compiler and that the GSL (GNU Scientific Library)
headers and libraries are installed on your machine. See the
lib/mscg/README and MSCG/Install files for more details.
[CMake build]:

-D DOWNLOAD_MSCG=value # download MSCG for build, value = no (default) or yes
-D MSCG_LIBRARY=path # path to MSCG shared library (only needed if a custom location)
-D MSCG_INCLUDE_DIR=path # path to MSCG include directory (only needed if a custom location) :pre
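As a sketch, letting CMake download and build MS-CG for you could
look like:

cmake ../cmake -D PKG_MSCG=yes -D DOWNLOAD_MSCG=yes :pre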
[Traditional make]:

You can download and build the MS-CG library manually if you prefer;
follow the instructions in lib/mscg/README. You can also do it in one
step from the lammps/src dir, using a command like these, which simply
invoke the lib/mscg/Install.py script with the specified args:
make lib-mscg # print help message
make lib-mscg args="-b -m serial" # download and build in lib/mscg/MSCG-release-master
# with the settings compatible with "make serial"
make lib-mscg args="-b -m mpi" # download and build in lib/mscg/MSCG-release-master
# with the settings compatible with "make mpi"
make lib-mscg args="-p /usr/local/mscg-release" # use the existing MS-CG installation in /usr/local/mscg-release :pre
Note that 2 symbolic (soft) links, "includelink" and "liblink", will
be created in lib/mscg to point to the MS-CG src/installation dir.
When LAMMPS is built in src it will use these links. You should not
need to edit the lib/mscg/Makefile.lammps file.
No additional settings are needed besides "-D PKG_OPT=yes".
The compile flag "-restrict" must be used to build LAMMPS with the OPT
package when using Intel compilers. It should be added to the CCFLAGS
line of your Makefile.machine. See src/MAKE/OPTIONS/Makefile.opt for
an example.
No additional settings are needed besides "-D PKG_OPT=yes".
Before building LAMMPS, you must build the POEMS library in lib/poems.
You can do this manually if you prefer; follow the instructions in
lib/poems/README. You can also do it in one step from the lammps/src
dir, using a command like these, which simply invoke the
lib/poems/Install.py script with the specified args:
make lib-poems # print help message
make lib-poems args="-m serial" # build with GNU g++ compiler (settings as with "make serial")
make lib-poems args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-poems args="-m icc" # build with Intel icc compiler :pre
The build should produce two files: lib/poems/libpoems.a and
lib/poems/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to build LAMMPS with the
POEMS library (though typically the settings are just blank). If
necessary, you can edit/create a new lib/poems/Makefile.machine file
for your system, which should define an EXTRAMAKE variable to specify
a corresponding Makefile.lammps.machine file.
:line
PYTHON package :h4,link(python)

Building with the PYTHON package requires you have a Python shared
library available on your system, which needs to be a Python 2
version, 2.6 or later. Python 3 is not yet supported. See
lib/python/README for more details.
[CMake build]:
-D PYTHON_EXECUTABLE=path # path to Python executable to use :pre
Without this setting, CMake will use the default Python on your
system. To use a different Python version, you can either create a
virtualenv, activate it, and then run cmake, or you can set the
PYTHON_EXECUTABLE variable to specify which Python interpreter should
be used. Note that you will also need to have the development
headers installed for this version, e.g. python2-devel.
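For example, to point CMake at a specific interpreter, a
configuration might look like this sketch (the interpreter path is an
assumption; use the one whose development headers you installed):

cmake ../cmake -D PKG_PYTHON=yes -D PYTHON_EXECUTABLE=/usr/bin/python2.7    # example path :pre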
[Traditional make]:
The build uses the lib/python/Makefile.lammps file in the compile/link
process to find Python. You should only need to create a new
Makefile.lammps.* file (and copy it to Makefile.lammps) if the LAMMPS
build fails.
:line

REAX package :h4,link(reax)

NOTE: the use of the REAX package and its "pair_style
reax"_pair_reax.html command is discouraged, as it is no longer
maintained. Please use the USER-REAXC package and its "pair_style
reax/c"_pair_reaxc.html command instead, and possibly its KOKKOS
enabled variant (pair_style reax/c/kk), which has more robust memory
management. See the "pair_style reax/c"_pair_reaxc.html doc page for
details.
No additional settings are needed besides "-D PKG_REAX=yes".
Before building LAMMPS, you must build the REAX library in lib/reax.
You can do this manually if you prefer; follow the instructions in
lib/reax/README. You can also do it in one step from the lammps/src
dir, using a command like these, which simply invoke the
lib/reax/Install.py script with the specified args:
make lib-reax # print help message
make lib-reax args="-m serial" # build with GNU Fortran compiler (settings as with "make serial")
make lib-reax args="-m mpi" # build with default MPI Fortran compiler (settings as with "make mpi")
make lib-reax args="-m ifort" # build with Intel ifort compiler :pre
The build should produce two files: lib/reax/libreax.a and
lib/reax/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to link C++ (LAMMPS) with
Fortran (REAX library). Typically the two compilers used for LAMMPS
and the REAX library need to be consistent (e.g. both Intel or both
GNU compilers). If necessary, you can edit/create a new
lib/reax/Makefile.machine file for your system, which should define an
EXTRAMAKE variable to specify a corresponding Makefile.lammps.machine
file.
:line
To build with this package, you must download and build the "Voro++
library"_voro_home.
:link(voro_home,http://math.lbl.gov/voro++)
[CMake build]:

-D DOWNLOAD_VORO=value # download Voro++ for build, value = no (default) or yes
-D VORO_LIBRARY=path # (only needed if at custom location) path to VORO shared library
-D VORO_INCLUDE_DIR=path # (only needed if at custom location) path to VORO include directory :pre
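As a sketch, letting CMake download and build Voro++ for you could
look like:

cmake ../cmake -D PKG_VORONOI=yes -D DOWNLOAD_VORO=yes :pre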
[Traditional make]:

You can download and build the Voro++ library manually if you prefer;
follow the instructions in lib/voronoi/README. You can also do it in
one step from the lammps/src dir, using a command like these, which
simply invoke the lib/voronoi/Install.py script with the specified
args:
make lib-voronoi # print help message
make lib-voronoi args="-b" # download and build the default version in lib/voronoi/voro++-<version>
make lib-voronoi args="-p $HOME/voro++" # use existing Voro++ installation in $HOME/voro++
make lib-voronoi args="-b -v voro++0.4.6" # download and build the 0.4.6 version in lib/voronoi/voro++-0.4.6 :pre
Note that 2 symbolic (soft) links, "includelink" and "liblink", are
created in lib/voronoi to point to the Voro++ src dir. When LAMMPS
builds in src it will use these links. You should not need to edit
the lib/voronoi/Makefile.lammps file.
:line
USER-ATC package :h4,link(user-atc)

The USER-ATC package requires the MANYBODY package also be installed.

[CMake build]:

No additional settings are needed besides "-D PKG_USER-ATC=yes" and
"-D PKG_MANYBODY=yes".

[Traditional make]:
Before building LAMMPS, you must build the ATC library in lib/atc.
You can do this manually if you prefer; follow the instructions in
lib/atc/README. You can also do it in one step from the lammps/src
dir, using a command like these, which simply invoke the
lib/atc/Install.py script with the specified args:
make lib-atc # print help message
make lib-atc args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-atc args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-atc args="-m icc" # build with Intel icc compiler :pre
The build should produce two files: lib/atc/libatc.a and
lib/atc/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to build LAMMPS with the ATC
library. If necessary, you can edit/create a new
lib/atc/Makefile.machine file for your system, which should define an
EXTRAMAKE variable to specify a corresponding Makefile.lammps.machine
file.
Note that the Makefile.lammps file has settings for the BLAS and
LAPACK linear algebra libraries. As explained in lib/atc/README these
can either exist on your system, or you can use the files provided in
lib/linalg. In the latter case you also need to build the library in
lib/linalg with a command like these:
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU Fortran compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI Fortran compiler (settings as with "make mpi")
make lib-linalg args="-m gfortran" # build with GNU Fortran compiler :pre
:line
No additional settings are needed besides "-D PKG_USER-AQPMD=yes".
Before building LAMMPS, you must build the AWPMD library in lib/awpmd.
You can do this manually if you prefer; follow the instructions in
lib/awpmd/README. You can also do it in one step from the lammps/src
dir, using a command like these, which simply invoke the
lib/awpmd/Install.py script with the specified args:
make lib-awpmd # print help message
make lib-awpmd args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-awpmd args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-awpmd args="-m icc" # build with Intel icc compiler :pre
The build should produce two files: lib/awpmd/libawpmd.a and
lib/awpmd/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to build LAMMPS with the
AWPMD library. If necessary, you can edit/create a new
lib/awpmd/Makefile.machine file for your system, which should define
an EXTRAMAKE variable to specify a corresponding
Makefile.lammps.machine file.
Note that the Makefile.lammps file has settings for the BLAS and
LAPACK linear algebra libraries. As explained in lib/awpmd/README
these can either exist on your system, or you can use the files
provided in lib/linalg. In the latter case you also need to build the
library in lib/linalg with a command like these:
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU Fortran compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI Fortran compiler (settings as with "make mpi")
make lib-linalg args="-m gfortran" # build with GNU Fortran compiler :pre
:line
USER-COLVARS package :h4,link(user-colvars)
No additional settings are needed besides "-D PKG_USER-COLVARS=yes".
Before building LAMMPS, you must build the COLVARS library in
lib/colvars. You can do this manually if you prefer; follow the
instructions in lib/colvars/README. You can also do it in one step
from the lammps/src dir, using a command like these, which simply
invoke the lib/colvars/Install.py script with the specified args:
make lib-colvars # print help message
make lib-colvars args="-m serial" # build with GNU g++ compiler (settings as with "make serial")
make lib-colvars args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-colvars args="-m g++-debug" # build with GNU g++ compiler and colvars debugging enabled :pre
The build should produce two files: lib/colvars/libcolvars.a and
lib/colvars/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to build LAMMPS with the
COLVARS library (though typically the settings are just blank). If
necessary, you can edit/create a new lib/colvars/Makefile.machine file
for your system, which should define an EXTRAMAKE variable to specify
a corresponding Makefile.lammps.machine file.
:line
USER-H5MD package :h4,link(user-h5md)

To build with this package you must have the HDF5 software package
installed on your system, which should include the h5cc compiler and
the HDF5 library.
No additional settings are needed besides "-D PKG_USER-H5MD=yes".
This should autodetect the H5MD library on your system. Several
advanced CMake H5MD options exist if you need to specify where it is
installed. Use the ccmake (terminal window) or cmake-gui (graphical)
tools to see these options and set them interactively from their user
interfaces.
[Traditional make]:

Before building LAMMPS, you must build the CH5MD library in lib/h5md.
You can do this manually if you prefer; follow the instructions in
lib/h5md/README. You can also do it in one step from the lammps/src
dir, using a command like these, which simply invoke the
lib/h5md/Install.py script with the specified args:
make lib-h5md # print help message
make lib-h5md args="-m h5cc" # build with h5cc compiler :pre
The build should produce two files: lib/h5md/libch5md.a and
lib/h5md/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to build LAMMPS with the
system HDF5 library. If necessary, you can edit/create a new
lib/h5md/Makefile.machine file for your system, which should define an
EXTRAMAKE variable to specify a corresponding Makefile.lammps.machine
file.
:line
USER-INTEL package :h4,link(user-intel)

To build with this package, you must choose which hardware you want to
build for, either Intel CPUs or Intel KNLs. You should also typically
"install the USER-OMP package"_#user-omp, as it can be used in tandem
with the USER-INTEL package to good effect, as explained on the "Speed
intel"_Speed_intel.html doc page.
[CMake build]:

-D INTEL_ARCH=value # value = cpu (default) or knl
-D BUILD_OMP=yes # also required to build with the USER-INTEL package :pre
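For example, a CPU build might be configured like this sketch (run
from a build directory next to the lammps/cmake folder, with an Intel
compiler toolchain in your environment):

cmake ../cmake -D PKG_USER-INTEL=yes -D INTEL_ARCH=cpu -D BUILD_OMP=yes    # example values :pre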
Requires an Intel compiler as well as the Intel TBB and MKL libraries.
[Traditional make]:

Choose which hardware to compile for in Makefile.machine via the
following settings. See src/MAKE/OPTIONS/Makefile.intel_cpu* and
Makefile.knl files for examples.
For CPUs:

OPTFLAGS = -xHost -O2 -fp-model fast=2 -no-prec-div -qoverride-limits -qopt-zmm-usage=high
CCFLAGS = -g -qopenmp -DLAMMPS_MEMALIGN=64 -no-offload -fno-alias -ansi-alias -restrict $(OPTFLAGS)
LINKFLAGS = -g -qopenmp $(OPTFLAGS) :pre

For KNLs:
OPTFLAGS = -xMIC-AVX512 -O2 -fp-model fast=2 -no-prec-div -qoverride-limits
CCFLAGS = -g -qopenmp -DLAMMPS_MEMALIGN=64 -no-offload -fno-alias -ansi-alias -restrict $(OPTFLAGS)
LINKFLAGS = -g -qopenmp $(OPTFLAGS)
LIB = -ltbbmalloc :pre
:line
USER-MOLFILE package :h4,link(user-molfile)
No additional settings are needed besides "-D PKG_USER-MOLFILE=yes".
The lib/molfile/Makefile.lammps file has a setting for a dynamic
loading library libdl.a that is typically present on all systems. It
is required for LAMMPS to link with this package. If the setting is
not valid for your system, you will need to edit the Makefile.lammps
file. See lib/molfile/README and lib/molfile/Makefile.lammps for
details.
:line

USER-NETCDF package :h4,link(user-netcdf)

To build with this package you must have the NetCDF library installed
on your system.
No additional settings are needed besides "-D PKG_USER-NETCDF=yes".
This should autodetect the NETCDF library if it is installed on your
system at standard locations. Several advanced CMake NETCDF options
exist if you need to specify where it was installed. Use the ccmake
(terminal window) or cmake-gui (graphical) tools to see these options
and set them interactively from their user interfaces.
[Traditional make]:

The lib/netcdf/Makefile.lammps file has settings for NetCDF include
and library files which LAMMPS needs to build with this package. If
the settings are not valid for your system, you will need to edit the
Makefile.lammps file. See lib/netcdf/README for details.
No additional settings are required besides "-D PKG_USER-OMP=yes". If
CMake detects OpenMP support, the USER-OMP code will be compiled with
multi-threading support enabled, otherwise as optimized serial code.
[Traditional make]:

To enable multi-threading support in the USER-OMP package (and other
styles supporting OpenMP) the following compile and link flags must
be added to your Makefile.machine file.
See src/MAKE/OPTIONS/Makefile.omp for an example.
CCFLAGS: -fopenmp # for GNU Compilers
CCFLAGS: -qopenmp -restrict # for Intel compilers on Linux
LINKFLAGS: -fopenmp # for GNU Compilers
LINKFLAGS: -qopenmp # for Intel compilers on Linux :pre
For other platforms and compilers, please consult the documentation
about OpenMP support for your compiler.
:line

USER-QMMM package :h4,link(user-qmmm)

NOTE: The LAMMPS executable these steps produce is not yet functional
for a QM/MM simulation. You must also build Quantum ESPRESSO and
create a new executable (pwqmmm.x) which links LAMMPS and Quantum
ESPRESSO together. These are steps 3 and 4 described in the
lib/qmmm/README file. Unfortunately, the Quantum ESPRESSO developers
have been breaking the interface that the QM/MM code in LAMMPS is using,
so that currently (Summer 2018) using this feature requires either
correcting the library interface feature in recent Quantum ESPRESSO
releases, or using an outdated version of QE. The last version of
Quantum ESPRESSO known to work with this QM/MM interface was version
5.4.1 from 2016.
[CMake build]:

The CMake build system currently does not support building the full
QM/MM-capable hybrid executable of LAMMPS and QE called pwqmmm.x.
You must use the traditional make build for this package.
[Traditional make]:

Before building LAMMPS, you must build the QMMM library in lib/qmmm.
You can do this manually if you prefer; follow the first two steps
explained in lib/qmmm/README. You can also do it in one step from the
lammps/src dir, using a command like these, which simply invoke the
lib/qmmm/Install.py script with the specified args:
make lib-qmmm # print help message
make lib-qmmm args="-m serial" # build with GNU Fortran compiler (settings as in "make serial")
make lib-qmmm args="-m mpi" # build with default MPI compiler (settings as in "make mpi")
make lib-qmmm args="-m gfortran" # build with GNU Fortran compiler :pre
The build should produce two files: lib/qmmm/libqmmm.a and
lib/qmmm/Makefile.lammps. The latter is copied from an existing
Makefile.lammps.* and has settings needed to build LAMMPS with the
QMMM library (though typically the settings are just blank). If
necessary, you can edit/create a new lib/qmmm/Makefile.machine file
for your system, which should define an EXTRAMAKE variable to specify
a corresponding Makefile.lammps.machine file.
You can then install the QMMM package and build LAMMPS in the usual
manner. After completing the LAMMPS build and compiling Quantum
ESPRESSO with external library support, go back to the lib/qmmm folder
and follow the instructions in the README file to build the combined
LAMMPS/QE QM/MM executable (pwqmmm.x) in the lib/qmmm folder.
:line

USER-QUIP package :h4,link(user-quip)

To build with this package, you must download and build the QUIP
library. It can be obtained from GitHub. For support of GAP
potentials, additional files with specific licensing conditions need
to be downloaded and configured. See step 1 and step 1.1 in the
lib/quip/README file for details on how to do this.
[CMake build]:

-D QUIP_LIBRARIES=path # path to libquip.a (only needed if a custom location) :pre
CMake will not download and build the QUIP library. But once you have
done that, a CMake build of LAMMPS with "-D PKG_USER-QUIP=yes" should
work. Set QUIP_LIBRARIES if CMake cannot find the QUIP library.
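For example, once QUIP has been built, a configuration pointing CMake
at the resulting library might look like this sketch (the library
path is only an illustration; use the location where your QUIP build
placed libquip.a):

cmake ../cmake -D PKG_USER-QUIP=yes -D QUIP_LIBRARIES=$HOME/quip/build/libquip.a    # example path :pre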
[Traditional make]:

The download/build procedure for the QUIP library, described in the
lib/quip/README file, requires setting two environment variables,
QUIP_ROOT and QUIP_ARCH. These are accessed by the
lib/quip/Makefile.lammps file which is used when you compile and link
LAMMPS with this package. You should only need to edit
Makefile.lammps if the LAMMPS build can not use its settings to
successfully build on your system.
:line

USER-SMD package :h4,link(user-smd)

To build with this package, you must download the Eigen3 library.
Eigen3 is a template library, so you do not need to build it.
[CMake build]:

-D DOWNLOAD_EIGEN3=value # download Eigen3, value = no (default) or yes
-D EIGEN3_INCLUDE_DIR=path # path to Eigen library (only needed if a custom location) :pre
Set EIGEN3_INCLUDE_DIR if CMake cannot find the Eigen3 library.
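As a sketch, either of these configurations could be used, depending
on whether you want CMake to download Eigen3 or use a copy already
installed under /usr/include/eigen3 (an assumed location):

cmake ../cmake -D PKG_USER-SMD=yes -D DOWNLOAD_EIGEN3=yes
cmake ../cmake -D PKG_USER-SMD=yes -D EIGEN3_INCLUDE_DIR=/usr/include/eigen3    # example path :pre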
[Traditional make]:

You can download the Eigen3 library manually if you prefer; follow the
instructions in lib/smd/README. You can also do it in one step from
the lammps/src dir, using a command like these, which simply invoke
the lib/smd/Install.py script with the specified args:
make lib-smd # print help message
make lib-smd args="-b" # download to lib/smd/eigen3
make lib-smd args="-p /usr/include/eigen3" # use existing Eigen installation in /usr/include/eigen3 :pre
Note that a symbolic (soft) link named "includelink" is created in
lib/smd to point to the Eigen dir. When LAMMPS builds it will use
this link. You should not need to edit the lib/smd/Makefile.lammps
file.
:line

USER-VTK package :h4,link(user-vtk)

To build with this package you must have the VTK library installed on
your system.
No additional settings are needed besides "-D PKG_USER-VTK=yes".
This should autodetect the VTK library if it is installed on your
system at standard locations. Several advanced VTK options exist if
you need to specify where it was installed. Use the ccmake (terminal
window) or cmake-gui (graphical) tools to see these options and set
them interactively from their user interfaces.
[Traditional make]:
The lib/vtk/Makefile.lammps file has settings for accessing VTK files
and its library, which LAMMPS needs to build with this package. If
the settings are not valid for your system, check if one of the other
lib/vtk/Makefile.lammps.* files is compatible and copy it to
Makefile.lammps. If none of the provided files work, you will need to
edit the Makefile.lammps file. See lib/vtk/README for details.