Commit 4fea80fe authored 10 years ago by sjplimp
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@12486 f3b2605a-c512-4ea7-a41b-209d697bcdaa
parent 705237ae
Showing 1 changed file with 20 additions and 23 deletions

bench/FERMI/README (+20, −23)
...
@@ -25,7 +25,7 @@ To run on just CPUs (without using the GPU or USER-CUDA styles),
 do something like the following:

 mpirun -np 1 lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj
-mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.lj
+mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.eam

 The "xyz" settings determine the problem size.  The "t" setting
 determines the number of timesteps.
...
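As the README text above says, these CPU benchmark runs scale to multiple nodes simply by raising the "-np" count. A minimal sketch of a two-node launch, assuming 12 MPI ranks per node; the rank count and problem size are illustrative assumptions, not values from the commit:

mpirun -np 24 lmp_linux_double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj   # 24 ranks spread over two 12-core nodes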
@@ -37,40 +37,37 @@ nodes, scale up the "-np" setting.

 To run with the GPU package, do something like the following:

-mpirun -np 12 lmp_linux_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
-mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
+mpirun -np 12 lmp_linux_single -sf gpu -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
+mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.eam

 The "xyz" settings determine the problem size.  The "t" setting
 determines the number of timesteps.  The "np" setting determines how
-many MPI tasks (per node) the problem will run on,  The numeric
-argument to the "-pk" setting is the number of GPUs (per node).  Note
-that you can use more MPI tasks than GPUs (per node) with the GPU
-package.
+many MPI tasks (per node) the problem will run on.  The numeric
+argument to the "-pk" setting is the number of GPUs (per node); 1 GPU
+is the default.  Note that you can use more MPI tasks than GPUs (per
+node) with the GPU package.

 These mpirun commands run on a single node.  To run on multiple nodes,
 scale up the "-np" setting, and control the number of MPI tasks per
 node via a "-ppn" setting.

 ------------------------------------------------------------------------

 To run with the USER-CUDA package, do something like the following:

-If the script has "cuda" in its name, it is meant to be run using
-the USER-CUDA package.  For example:
-mpirun -np 1 ../lmp_linux_single -c on -sf cuda -v g 1 -v x 16 -v y 16 -v z 16 -v t 100 < in.lj.cuda
-mpirun -np 2 ../lmp_linux_double -c on -sf cuda -v g 2 -v x 32 -v y 64 -v z 64 -v t 100 < in.eam.cuda
+mpirun -np 1 lmp_linux_single -c on -sf cuda -v x 16 -v y 16 -v z 16 -v t 100 < in.lj
+mpirun -np 2 lmp_linux_double -c on -sf cuda -pk cuda 2 -v x 32 -v y 64 -v z 64 -v t 100 < in.eam

 The "xyz" settings determine the problem size.  The "t" setting
 determines the number of timesteps.  The "np" setting determines how
-many MPI tasks per compute node the problem will run on, and the "g"
-setting determines how many GPUs per compute node the problem will run
-on, i.e. 1 or 2 in this case.  For the USER-CUDA package, the number
-of MPI tasks and GPUs (both per compute node) must be equal.
+many MPI tasks (per node) the problem will run on.  The numeric
+argument to the "-pk" setting is the number of GPUs (per node); 1 GPU
+is the default.  Note that the number of MPI tasks must equal the
+number of GPUs (both per node) with the USER-CUDA package.

-These mpirun commands run on a single node.  To run on multiple
-nodes, scale up the "-np" setting.
+These mpirun commands run on a single node.  To run on multiple nodes,
+scale up the "-np" setting, and control the number of MPI tasks per
+node via a "-ppn" setting.

 ------------------------------------------------------------------------
...
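The updated README text describes multi-node GPU runs as scaling "-np" while fixing the per-node task count with "-ppn". A hedged sketch of two-node launches, assuming 12 MPI ranks and 2 GPUs per node and an mpirun that accepts "-ppn" (the exact per-node flag varies by MPI implementation); for the USER-CUDA case the per-node rank count is kept equal to the GPU count, as the text requires:

mpirun -np 24 -ppn 12 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.eam   # GPU package: 24 ranks over 2 nodes, more ranks than GPUs is allowed
mpirun -np 4 -ppn 2 lmp_linux_double -c on -sf cuda -pk cuda 2 -v x 32 -v y 64 -v z 64 -v t 100 < in.eam   # USER-CUDA: 2 ranks per node to match 2 GPUs per node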