[cig-commits] r22510 - in seismo/3D/SPECFEM3D_GLOBE/trunk: setup src/compute_optimized_dumping_undo_att

dkomati1 at geodynamics.org
Fri Jul 5 17:46:59 PDT 2013


Author: dkomati1
Date: 2013-07-05 17:46:59 -0700 (Fri, 05 Jul 2013)
New Revision: 22510

Modified:
   seismo/3D/SPECFEM3D_GLOBE/trunk/setup/constants.h.in
   seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90
Log:
added typical values of installed memory per core at Princeton and at ORNL


Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/setup/constants.h.in
===================================================================
--- seismo/3D/SPECFEM3D_GLOBE/trunk/setup/constants.h.in	2013-07-06 00:37:00 UTC (rev 22509)
+++ seismo/3D/SPECFEM3D_GLOBE/trunk/setup/constants.h.in	2013-07-06 00:46:59 UTC (rev 22510)
@@ -296,12 +296,6 @@
   real, parameter :: KL_REG_MIN_LAT = -90.0
   real, parameter :: KL_REG_MAX_LAT = +90.0
 
-! forces attenuation arrays to be fully 3D and defined on all GLL points
-! (useful for more accurate attenuation profiles and adjoint inversions)
-! (if set to .false., 3D attenuation is only used for models with 3D attenuation distributions)
-  logical, parameter :: USE_3D_ATTENUATION_ARRAYS = .false.
-
-
 !!-----------------------------------------------------------
 !!
 !! time stamp information
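
For context on the lines removed above: USE_3D_ATTENUATION_ARRAYS selects between attenuation arrays defined on all GLL points of every element and reduced storage with a single value per spectral element. A minimal standalone sketch of that pattern (not taken from the SPECFEM3D_GLOBE sources; the array name and sizes below are illustrative only):

  ! standalone sketch, not from the SPECFEM3D_GLOBE sources: how a
  ! compile-time flag like USE_3D_ATTENUATION_ARRAYS can gate attenuation
  ! storage; the array name and sizes here are illustrative assumptions
  program att_array_shape
    implicit none
    integer, parameter :: NGLLX = 5, NGLLY = 5, NGLLZ = 5, NSPEC = 1000
    logical, parameter :: USE_3D_ATTENUATION_ARRAYS = .false.
    real, allocatable :: one_minus_sum_beta(:,:,:,:)

    if (USE_3D_ATTENUATION_ARRAYS) then
      ! fully 3D: one attenuation value on every GLL point of every element
      allocate(one_minus_sum_beta(NGLLX,NGLLY,NGLLZ,NSPEC))
    else
      ! reduced storage: a single attenuation value per spectral element
      allocate(one_minus_sum_beta(1,1,1,NSPEC))
    end if

    print *,'attenuation array entries: ',size(one_minus_sum_beta)
  end program att_array_shape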

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90
===================================================================
--- seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90	2013-07-06 00:37:00 UTC (rev 22509)
+++ seismo/3D/SPECFEM3D_GLOBE/trunk/src/compute_optimized_dumping_undo_att/compute_optimized_dumping_undo_att.f90	2013-07-06 00:46:59 UTC (rev 22510)
@@ -222,6 +222,8 @@
   print *,'How much memory (in GB) is installed on your machine per CPU core?'
   print *,'        (or per GPU card or per INTEL MIC Phi board)?'
   print *,'  (beware, this value MUST be given per core, i.e. per MPI thread, i.e. per MPI rank, NOT per node)'
+  print *,'  (for instance, this value is 4 GB on Tiger at Princeton, 2 GB on the non-GPU part of Titan at ORNL,'
+  print *,'   i.e. when using only its CPUs, and 1.5 GB on the GPU cluster in Marseille)'
   read(*,*) gigabytes_avail_per_core
 
   if(gigabytes_avail_per_core < 0.1d0) stop 'less than 100 MB per core does not seem realistic; exiting...'
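
For reference, the per-rank figure that read(*,*) gigabytes_avail_per_core expects can be derived from per-node numbers. A minimal standalone sketch under assumed node figures (32 GB of memory and 16 MPI ranks per node, which would reproduce the 2 GB quoted above for the CPU partition of Titan; these figures are assumptions for illustration, not taken from the commit):

  ! standalone sketch, not part of the commit: derive the per-rank value
  ! that compute_optimized_dumping_undo_att asks for from per-node numbers
  program per_core_memory
    implicit none
    double precision :: gigabytes_per_node, gigabytes_avail_per_core
    integer :: mpi_ranks_per_node

    gigabytes_per_node = 32.d0  ! assumed installed memory per node
    mpi_ranks_per_node = 16     ! assumed MPI ranks running per node

    ! the prompt asks for memory per MPI rank, NOT per node
    gigabytes_avail_per_core = gigabytes_per_node / dble(mpi_ranks_per_node)

    print *,'value to enter at the prompt (in GB): ',gigabytes_avail_per_core
  end program per_core_memory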


