[cig-commits] r22675 - seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att

dkomati1 at geodynamics.org dkomati1 at geodynamics.org
Thu Jul 25 09:26:58 PDT 2013


Author: dkomati1
Date: 2013-07-25 09:26:57 -0700 (Thu, 25 Jul 2013)
New Revision: 22675

Modified:
   seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90
Log:
better comment about the memory size available on Titan at ORNL


Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90
===================================================================
--- seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90	2013-07-25 15:53:09 UTC (rev 22674)
+++ seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90	2013-07-25 16:26:57 UTC (rev 22675)
@@ -234,8 +234,9 @@
   print *,'How much memory (in GB) is installed on your machine per CPU core?'
   print *,'        (or per GPU card or per INTEL MIC Phi board)?'
   print *,'  (beware, this value MUST be given per core, i.e. per MPI thread, i.e. per MPI rank, NOT per node)'
-  print *,'  (this value is for instance 4 GB on Tiger at Princeton, 2 GB on the non-GPU part of Titan at ORNL i.e. when using'
-  print *,'   CPUs only there, 2 GB also on the machine used by Christina Morency, and 1.5 GB on the GPU cluster in Marseille)'
+  print *,'  (this value is for instance 4 GB on Tiger at Princeton, 4 GB or 2 GB on the non-GPU part of Titan at ORNL i.e. when'
+  print *,'   using CPUs only there depending on whether you use 8 or 16 MPI tasks per compute node,'
+  print *,'   2 GB also on the machine used by Christina Morency, and 1.5 GB on the GPU cluster in Marseille)'
   read(*,*) gigabytes_avail_per_core
 
   if(gigabytes_avail_per_core < 0.1d0) stop 'less than 100 MB per core does not seem realistic; exiting...'
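The revised prompt above points out that the per-rank memory on the CPU partition of Titan depends on how many MPI tasks are packed onto each compute node. A minimal sketch of that arithmetic, mirroring the Fortran sanity check, is below; the 32 GB node size used in the example is an assumption for illustration, not stated in the commit:

```python
# Hypothetical helper mirroring the check in compute_optimized_dumping_undo_att.f90:
# per-rank memory = (memory per node) / (MPI ranks per node),
# rejecting anything under 0.1 GB as unrealistic, as the Fortran code does.
def gigabytes_per_rank(node_memory_gb, ranks_per_node):
    per_rank = node_memory_gb / ranks_per_node
    if per_rank < 0.1:
        raise ValueError("less than 100 MB per core does not seem realistic")
    return per_rank

# Assuming a 32 GB compute node: 8 ranks/node gives 4 GB per rank,
# 16 ranks/node gives 2 GB per rank -- the two figures quoted in the new comment.
print(gigabytes_per_rank(32.0, 8))   # 4.0
print(gigabytes_per_rank(32.0, 16))  # 2.0
```

This is why the value must be entered per MPI rank rather than per node: the same hardware yields different answers depending on task placement.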
