[cig-commits] [commit] devel, master: clarified the way to define memory sizes for UNDO_ATTENUATION on Titan at Oak Ridge (8087d34)

cig_noreply at geodynamics.org
Thu Nov 6 08:31:39 PST 2014


Repository : https://github.com/geodynamics/specfem3d_globe

On branches: devel,master
Link       : https://github.com/geodynamics/specfem3d_globe/compare/bc58e579b3b0838a0968725a076f5904845437ca...be63f20cbb6f462104e949894dbe205d2398cd7f

>---------------------------------------------------------------

commit 8087d349c9174b40d3e3ce60f28ee183f5398b38
Author: Dimitri Komatitsch <komatitsch at lma.cnrs-mrs.fr>
Date:   Tue Sep 23 02:28:47 2014 +0200

    clarified the way to define memory sizes for UNDO_ATTENUATION on Titan at Oak Ridge


>---------------------------------------------------------------

8087d349c9174b40d3e3ce60f28ee183f5398b38
 DATA/Par_file | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/DATA/Par_file b/DATA/Par_file
index 1e41a3a..70cfb8a 100644
--- a/DATA/Par_file
+++ b/DATA/Par_file
@@ -61,16 +61,21 @@ PARTIAL_PHYS_DISPERSION_ONLY    = .true.
 UNDO_ATTENUATION                = .false.
 # How much memory (in GB) is installed on your machine per CPU core (only used for UNDO_ATTENUATION, can be ignored otherwise)
 #         (or per GPU card or per INTEL MIC Phi board)
-#   (beware, this value MUST be given per core, i.e. per MPI thread, i.e. per MPI rank, NOT per node)
-#   (this value is for instance:
+#   Beware, this value MUST be given per core, i.e. per MPI thread, i.e. per MPI rank, NOT per node.
+#   This value is for instance:
 #   -  4 GB on Tiger at Princeton
 #   -  4 GB on TGCC Curie in Paris
-#   -  4 GB or 2 GB on the non-GPU part of Titan at ORNL i.e. when using CPUs only there
-#             depending on whether you use 8 or 16 MPI tasks per compute node
-#   - 32 GB on the GPU part of Titan at ORNL
+#   -  4 GB on Titan at ORNL when using CPUs only (no GPUs); start your run with "aprun -n$NPROC -N8 -S4 -j1"
 #   -  2 GB on the machine used by Christina Morency
 #   -  2 GB on the TACC machine used by Min Chen
-#   -  1.5 GB on the GPU cluster in Marseille)
+#   -  1.5 GB on the GPU cluster in Marseille
+# When running on GPU machines, it is simpler to set PERCENT_OF_MEM_TO_USE_PER_CORE = 100.d0
+# and then set MEMORY_INSTALLED_PER_CORE_IN_GB to the amount of memory that you estimate is free (rather than installed)
+# on the host of the GPU card while running your GPU job.
+# For GPU runs on Titan at ORNL, use PERCENT_OF_MEM_TO_USE_PER_CORE = 100.d0 and MEMORY_INSTALLED_PER_CORE_IN_GB = 25.d0
+# and run your job with "aprun -n$NPROC -N1 -S1 -j1"
+# (each host has 32 GB on Titan, each GPU has 6 GB, thus even if all the GPU arrays are duplicated on the host
+#  this leaves 32 - 6 = 26 GB free on the host; leaving 1 GB for the Linux system, we can safely use 100% of 25 GB)
 MEMORY_INSTALLED_PER_CORE_IN_GB = 4.d0
 # What percentage of this total do you allow us to use for arrays to undo attenuation, keeping in mind that you
 # need to leave some memory available for the GNU/Linux system to run

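For reference, the two Par_file settings discussed in the new comments work together roughly as follows. This is only an illustrative sketch assembled from the values quoted in the diff above; the solver executable name and $NPROC are placeholders, and UNDO_ATTENUATION must be set to .true. for the memory settings to have any effect:

  # CPU-only run on Titan: 8 MPI ranks per 32 GB node, i.e. 4 GB per rank
  MEMORY_INSTALLED_PER_CORE_IN_GB = 4.d0
  # launch with: aprun -n$NPROC -N8 -S4 -j1 ./bin/xspecfem3D

  # GPU run on Titan: 1 MPI rank per node, about 25 GB of the 32 GB host memory usable
  MEMORY_INSTALLED_PER_CORE_IN_GB = 25.d0
  PERCENT_OF_MEM_TO_USE_PER_CORE  = 100.d0
  # launch with: aprun -n$NPROC -N1 -S1 -j1 ./bin/xspecfem3D

The point of the 25.d0 / 100.d0 pair on the GPU partition is that, as the comment above explains, the value should reflect the memory estimated to be free on the host while the GPU job runs (32 GB installed minus 6 GB for duplicated GPU arrays minus 1 GB for the Linux system) rather than the installed total.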

