[cig-commits] r16373 - in seismo/3D/SPECFEM3D_GLOBE/trunk: . USER_MANUAL USER_MANUAL/figures

dkomati1 at geodynamics.org
Wed Mar 3 12:03:28 PST 2010


Author: dkomati1
Date: 2010-03-03 12:03:27 -0800 (Wed, 03 Mar 2010)
New Revision: 16373

Modified:
   seismo/3D/SPECFEM3D_GLOBE/trunk/INSTALL
   seismo/3D/SPECFEM3D_GLOBE/trunk/README_SPECFEM3D_GLOBE
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/PKPdf_all_15s500s.pdf
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/bolivia_trans.pdf
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/bolivia_vertical.pdf
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/vanuatu_trans.pdf
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/vanuatu_vertical.pdf
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf
   seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/manual_SPECFEM3D_GLOBE.tex
Log:
added a paragraph about cross compilers to the manual


Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/INSTALL
===================================================================
--- seismo/3D/SPECFEM3D_GLOBE/trunk/INSTALL	2010-03-03 18:09:09 UTC (rev 16372)
+++ seismo/3D/SPECFEM3D_GLOBE/trunk/INSTALL	2010-03-03 20:03:27 UTC (rev 16373)
@@ -16,41 +16,41 @@
 Basic installation
 ------------------
 
-1. untar the SPECFEM3D_GLOBE package: 
+1. untar the SPECFEM3D_GLOBE package:
 
-   > tar -zxvf SPECFEM3D_GLOBE*.tar.gz 
+   > tar -zxvf SPECFEM3D_GLOBE*.tar.gz
 
-   in the following, we will assume you set the root directory of the 
+   in the following, we will assume that the root directory of the
   package is named:
-   
+
    SPECFEM3D_GLOBE/
-   
+
 2. configure the software for your system:
 
     > cd SPECFEM3D_GLOBE/
     > ./configure
 
     by default, it uses gfortran as a Fortran compiler, mpif90 for MPI compilation
-    and gcc as a C compiler. 
-    
+    and gcc as a C compiler.
+
     in order to use a different compiler, use for example:
-    
+
         > ./configure FC=ifort
-  
+
     in this case, it would make use of
-    
-    - ifort, Intel Fortran Compiler 
+
+    - ifort, Intel Fortran Compiler
       (see http://software.intel.com/en-us/intel-compilers/ )
-      with which we see very good performance results.   
-        
-    - a default MPI installation which provides mpif90 as fortran wrapper command.     
+      with which we see very good performance results.
+
+    - a default MPI installation which provides mpif90 as a Fortran wrapper command.
       An excellent package is provided by Open MPI (see http://www.open-mpi.org/).
-      
-      for the example above, you would make sure that the mpif90 command 
+
+      for the example above, you would make sure that the mpif90 command
       would also use ifort as a Fortran compiler.
-    
+
     - a C compiler provided from the GNU compiler collection (see http://gcc.gnu.org/).
-    
+
     in case `configure` was successful, it will create the files:
 
     ./Makefile
@@ -59,18 +59,18 @@
     ./DATA/Par_file
     ./DATA/CMTSOLUTION
     ./DATA/STATIONS
-    
-    please make sure, your installation is working and that 
+
+    please make sure your installation is working and that
    the created files 'constants.h' and 'Makefile' satisfy
-    your needs. 
+    your needs.
 
     more information is given in the manual provided in USER_MANUAL.
 
 3. compile the package:
 
    > make
-   
-   
+
+
 well done!
 
 
@@ -81,19 +81,19 @@
 
 'configure' accepts the following options:
 
-FC                          Fortran90 compiler command name, 
+FC                          Fortran90 compiler command name,
                             default is 'gfortran'
- 
-MPIFC                       MPI Fortran90 command name, 
+
+MPIFC                       MPI Fortran90 command name,
                             default is 'mpif90'
 
 CC                          C compiler command name,
                             default is 'gcc'
 
-FLAGS_CHECK                 Compiler flags for non-critical subroutines, 
+FLAGS_CHECK                 Compiler flags for non-critical subroutines,
                             default is '-O3'
-                  
-FLAGS_NO_CHECK              Compiler flags for creating fast, production-run code for 
+
+FLAGS_NO_CHECK              Compiler flags for creating fast, production-run code for
                             critical subroutines,
                             default is '-O3'
 
@@ -106,20 +106,20 @@
 
 
 --enable-double-precision   The package can run either in single or in double precision.
-                            default is single precision 
-                            
---help                      Directs configure to print a usage screen 
-                            
+                            default is single precision
 
-run with 
+--help                      Directs configure to print a usage screen
+
+
+run with
   ./configure <option>=<my_option> --<flag>
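
 for instance (just an illustration; the compiler names and flags below are
 placeholders, adjust them to your system):

   ./configure FC=ifort MPIFC=mpif90 FLAGS_CHECK="-O3" FLAGS_NO_CHECK="-O3" --enable-double-precision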
-  
 
+
 Compiler-specific flags are set in the file 'flags.guess' in order to provide
-suitable default options for different compilers. 
+suitable default options for different compilers.
 
-you might however want to modify the created 'Makefile' and specify your best 
-system compiler flags in order to ensure optimal performance of the code. 
+you might however want to modify the created 'Makefile' and specify the best
+compiler flags for your system in order to ensure optimal performance of the code.
 
 
 
@@ -133,13 +133,13 @@
 1. configuration fails:
 
   Examine the log file 'config.log'. It contains detailed information.
-   in many cases, the path's to these specific compiler commands F90, 
+   in many cases, the paths to these specific compiler commands F90,
    CC and MPIF90 won't be correct if `configure` fails.
-   
-   please make sure that you have a working installation of a Fortran compiler, 
+
+   please make sure that you have a working installation of a Fortran compiler,
    a C compiler and an MPI implementation. you should be able to compile this
   small test program:
-   
+
       program main
         include 'mpif.h'
         integer, parameter :: CUSTOM_MPI_TYPE = MPI_REAL
@@ -149,43 +149,43 @@
         call MPI_FINALIZE(ier)
       end
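
    for instance, a quick check could look like this (the source file name and
    the number of processes below are just placeholders):

       > mpif90 -o test_mpi test_mpi.f90
       > mpirun -np 2 ./test_mpi

    if this compiles and runs without errors, your MPI installation should be usable.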
 
-    
+
 2. compilation fails stating:
-    ... 
+    ...
     In file ./compute_element_properties.f90:44
 
       use meshfem3D_models_par
                          1
     Fatal Error: File 'meshfem3d_models_par.mod' opened at (1) is not a GFORTRAN module file
     ...
-    
-  make sure, you're pointing to the right 'mpif90' wrapper command. 
-  
+
+  make sure you're pointing to the right 'mpif90' wrapper command.
+
   normally, this message will appear when you're mixing two different fortran
-  compilers. that is, using e.g. gfortran to compile non-MPI files 
-  and mpif90, wrapper provided for e.g. ifort, to compile MPI-files. 
-  the module will be created by the wrapper, thus ifort, while the gfortran compiler 
+  compilers, that is, using e.g. gfortran to compile the non-MPI files
+  and mpif90, the wrapper provided for e.g. ifort, to compile the MPI files.
+  the module will then be created by the wrapper, thus by ifort, while the gfortran compiler
   is trying to read that module for the compilation of 'compute_element_properties.f90'.
-  
+
   fix: e.g. specify > ./configure FC=gfortran MPIF90=/usr/local/openmpi-gfortran/bin/mpif90
-  
-  
+
+
 3. compilation fails with:
     ...
     obj/specfem3D.o: In function `MAIN__':
     specfem3D.f90:(.text+0xb66): relocation truncated to fit: R_X86_64_32S against `.bss'
     ...
-    
-  you're probably using some resolution settings in 'DATA/Par_file' which are 
+
+  you're probably using some resolution settings in 'DATA/Par_file' which are
   too big for a single processor on your system. the solver tries to statically allocate arrays,
  which can no longer be handled by a 32-bit address in this case.
-  
+
  fix: check the static memory needed by the solver, which is printed out
        when you run: > make xcreate_header_file
                      > ./xcreate_header_file
-       
+
       the size of the static arrays per slice has to fit into the memory of a single processor.
-  
+
       you can either try to use e.g. -mcmodel=medium as an additional compiler flag,
        or better, change your resolution settings for NPROC_XI,NPROC_ETA and NEX_XI,NEX_ETA.
-       
+
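       e.g., the flag could also be passed at configure time (an illustration
       only; support for -mcmodel=medium depends on your compiler):

          > ./configure FC=ifort FLAGS_CHECK="-O3 -mcmodel=medium" \
                        FLAGS_NO_CHECK="-O3 -mcmodel=medium"
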

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/README_SPECFEM3D_GLOBE
===================================================================
--- seismo/3D/SPECFEM3D_GLOBE/trunk/README_SPECFEM3D_GLOBE	2010-03-03 18:09:09 UTC (rev 16372)
+++ seismo/3D/SPECFEM3D_GLOBE/trunk/README_SPECFEM3D_GLOBE	2010-03-03 20:03:27 UTC (rev 16373)
@@ -24,12 +24,12 @@
 ------------------
 
 A short step-by-step procedure to run a default simulation is described
-here. 
+here.
 
 1. To set up a default simulation, you need to have the files:
 
-   DATA/Par_file 
-   DATA/CMTSOLUTION   
+   DATA/Par_file
+   DATA/CMTSOLUTION
    DATA/STATIONS
 
    we will use the default simulation files '*.default' provided in the package:
@@ -37,14 +37,14 @@
    > cp DATA/Par_file.default DATA/Par_file
    > cp DATA/CMTSOLUTION.default DATA/CMTSOLUTION
    > cp DATA/STATIONS.default DATA/STATIONS
-   
-   The default simulation is an earthquake that occurred in 
+
+   The default simulation is an earthquake that occurred in
   Bolivia in 1994 with a moment magnitude of ~6.8.
-   
+
    The simulation will run over the entire globe and require a total of 150 CPUs.
   The seismograms will be accurate down to periods of ~18 seconds.
-   
 
+
 2. compile the mesher:
 
    > make xmeshfem3D
@@ -53,60 +53,60 @@
 3. run the mesher on your cluster:
 
    > qsub go_mesher_pbs.bash
-   
+
    - Different cluster scripts are provided as examples in the directory
      UTILS/Cluster. Locate your script and copy it to the package
-      root directory: 
+      root directory:
       e.g. > cp UTILS/Cluster/pbs/go_mesher_pbs.bash ./
-      
+
       The example above uses a version of the Portable Batch System (PBS).
       Example scripts are also provided for Load Sharing Facility (LSF)
      and Sun Grid Engine (SGE). make sure you modify this script to
      fit your specific cluster setup (a minimal PBS sketch is given at
      the end of this step).
-  
-   - please, before you submit your job, make sure to have created the directory specified as 
+
+   - please, before submitting your job, make sure you have created the directory specified as
       'LOCAL_PATH' in your DATA/Par_file. by default the DATA/Par_file specifies:
           ..
           # path to store the local database files on each node
           LOCAL_PATH                      = ./DATABASES_MPI
           ..
-          
+
       make sure this points to the right location for your cluster and that
       you created the directory before submitting your job:
-      e.g. > mkdir ./DATABASES_MPI  
-      
+      e.g. > mkdir ./DATABASES_MPI
+
      (at this point, you should also know whether your directory for LOCAL_PATH
       is accessible by all compute nodes. set the flag
       LOCAL_PATH_IS_ALSO_GLOBAL in constants.h accordingly.)
-       
-   after a successful run, the mesher will have created a perfectly, load-balanced 
+
+   after a successful run, the mesher will have created a perfectly load-balanced
    hexahedral spectral-element mesh stored in this directory.
 
    note: for the default example, the mesher will take about 30 minutes
          to create the mesh.
         the disk space needed for this global mesh is about 51 GB.
-         
-     
+
+
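   a minimal PBS sketch for this step (the node layout, walltime and launch
   command below are assumptions; the scripts provided in UTILS/Cluster are
   the reference):

      #!/bin/bash
      #PBS -N go_mesher
      #PBS -l nodes=19:ppn=8
      #PBS -l walltime=02:00:00
      cd $PBS_O_WORKDIR
      mpiexec -np 150 ./xmeshfem3D
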
 4. compile the solver:
 
   > make xspecfem3D
-  
+
   re-compilation is needed since the solver tries to use static memory allocation
   and compiler optimizations that will work out much better once the details of
-  your simulation mesh are known. 
+  your simulation mesh are known.
 
 
 5. run the solver on your cluster:
 
   > qsub go_solver_pbs.bash
-  
-  - Again, different cluster scripts are provided in the directory 
-    UTILS/Cluster. modify the example scripts to fit to your specific 
+
+  - Again, different cluster scripts are provided in the directory
+    UTILS/Cluster. modify the example scripts to fit your specific
     cluster setup.
-        
+
  after a successful simulation, all seismograms are stored in the directory:
   OUTPUT_FILES/
-    
+
  note: the default simulation will take about 1 h 40 min.
        the disk space for all seismograms in OUTPUT_FILES/ is about 77 MB.
 
@@ -115,4 +115,4 @@
 well done!
 
 
-   
+

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/PKPdf_all_15s500s.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/bolivia_trans.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/bolivia_vertical.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/vanuatu_trans.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/figures/vanuatu_vertical.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/manual_SPECFEM3D_GLOBE.tex
===================================================================
--- seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/manual_SPECFEM3D_GLOBE.tex	2010-03-03 18:09:09 UTC (rev 16372)
+++ seismo/3D/SPECFEM3D_GLOBE/trunk/USER_MANUAL/manual_SPECFEM3D_GLOBE.tex	2010-03-03 20:03:27 UTC (rev 16373)
@@ -165,6 +165,8 @@
 
 \chapter{\label{cha:Getting-Started}Getting Started}
 
+\section{Configuring and compiling the source code}
+
 The SPECFEM3D\_GLOBE software package comes in a gzipped tar ball.
 In the directory in which you want to install the package, type
 
@@ -273,14 +275,82 @@
 script discussed below automatically takes care of creating the \texttt{OUTPUT\_FILES}
 directory.
 
-Note that if you run very large meshes on a relatively small number
+\section{Running meshes that require more than 2 gigabytes of memory per processor core}
+
+If you run very large meshes on a relatively small number
 of processors, the memory size needed on each processor might become
-greater than 2 gigabytes, which is the upper limit for 32-bit addressing;
-in this case, on some compilers you may need to add \texttt{``-mcmodel=medium}''
+greater than 2 gigabytes, which is the upper limit for 32-bit addressing.
+In this case, on some compilers you may need to add \texttt{``-mcmodel=medium''}
 to the compiler options, otherwise the compiler will display an error
-message.
+message (for instance \texttt{``relocation truncated to fit: R\_X86\_64\_PC32 against .bss''} or
+something similar).
 
+\section{Using a cross compiler}
 
+The \texttt{``configure''} script assumes that you will compile the code on the same kind of hardware
+as the machine on which you will run it. On some systems (for instance IBM Blue Gene) this might not be the case
+and you may compile the code using a cross compiler on a frontend computer that does not have the same 
+architecture. In such a case, typing \texttt{``make all''} on the frontend will fail, but you can use one of these two solutions: \\
+
+\noindent
+1/ create a script that runs \texttt{``make all''} on a node instead of on the frontend, if the compiler is also installed on the nodes \\
+
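+\noindent
+For solution 1/, such a script could for instance look like the sketch below
+(the batch system, its keywords and the job parameters are assumptions that
+depend on your machine):
+
+\begin{verbatim}
+#!/bin/bash
+#PBS -l nodes=1:ppn=1
+#PBS -l walltime=01:00:00
+cd $PBS_O_WORKDIR
+make clean
+make all
+\end{verbatim}
+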
+\noindent
+2/ after running the \texttt{``configure''} script, create two copies of the Makefile:
+
+\begin{verbatim}
+mv Makefile Makefile1
+cp Makefile1 Makefile2
+\end{verbatim}
+
+\noindent
+In \texttt{Makefile1} put this instead of the current values:
+
+\begin{verbatim}
+FLAGS_CHECK = -O0
+FLAGS_NO_CHECK = -O0
+\end{verbatim}
+
+\noindent
+and replace
+
+\begin{verbatim}
+create_header_file: $O/program_create_header_file.o $(LIBSPECFEM)
+ ${FCCOMPILE_CHECK} -o xcreate_header_file $O/program_create_header_file.o $(LIBSPECFEM)
+\end{verbatim}
+
+\noindent
+with
+
+\begin{verbatim}
+create_header_file: $O/program_create_header_file.o $O/exit_mpi.o $(LIBSPECFEM)
+  ${MPIFCCOMPILE_CHECK} -o xcreate_header_file $O/program_create_header_file.o \
+     $O/exit_mpi.o $(LIBSPECFEM)
+\end{verbatim}
+
+\noindent
+In \texttt{Makefile2} comment out these two lines:
+
+\begin{verbatim}
+#OUTPUT_FILES/values_from_mesher.h: xcreate_header_file
+# ./xcreate_header_file
+\end{verbatim}
+
+\noindent
+Then:
+
+\begin{verbatim}
+ make -f Makefile1 clean
+ make -f Makefile1 create_header_file
+ ./xcreate_header_file
+ make -f Makefile2 clean
+ make -f Makefile2 meshfem3D
+ make -f Makefile2 specfem3D
+\end{verbatim}
+
+\noindent
+should work. 
+
 \chapter{\label{cha:Running-the-Mesher}Running the Mesher \texttt{xmeshfem3D}}
 
 You are now ready to compile the mesher. In the directory with the
@@ -606,6 +676,8 @@
 information about the source time function in the file \texttt{OUTPUT\_FILES/plot\_source\_time\_function.txt}.
 This feature is not used at the time of meshing.
 \end{description}
+%
+\clearpage
 \noindent \begin{center}
 \label{table:nex} \begin{longtable}{|c|c|c|c|c|c|c|c|c|c|c|c|}
 \hline
@@ -872,11 +944,8 @@
 129 & 99846 & 2064 & 4128 & 6192 & 8256 & 10320 & 12384 & 14448 & 16512 & 18576 & 20640\tabularnewline
 \hline
 \end{longtable}
-\par\end{center}
-
-\begin{center}
 %
-\begin{table}[H]
+\begin{table}[h]
 \caption{Sample choices for $\nexxi$ given $\nprocxi$ based upon the relationship
 $\nexxi=8\times c\times\nprocxi$, where the integer $c\ge1$. The
 number of MPI slices, i.e., the total number of required processors,
@@ -884,11 +953,9 @@
 The approximate shortest period at which the global simulation is
 accurate for a given value of $\nexxi$ can be estimated by running
 the small serial program \texttt{xcreate\_header\_file}.}
-
 \end{table}
+\end{center}
 
-\par\end{center}
-
 Finally, you need to provide a file that tells MPI what compute nodes
 to use for the simulations. The file must have a number of entries
 (one entry per line) at least equal to the number of processors needed
@@ -2933,7 +3000,7 @@
 
 \chapter{\label{cha:SAC-headers}SAC Headers}
 
-Most information about the simulations (i.e., event/station information, sampling rate, etc.) are written in the headers of the seismograms in SAC format. The list of headers and their explanations may be found in Figure \ref{fig:SAC-headers}. Please check the SAC webpages \url{www.iris.edu/software/sac/} for further information. Please note that the reference time KZTIME is the centroid time ($t_\text{CMT}=t_\text{PDE}+\texttt{time shift}$) which corresponds to zero time in the synthetics. For kinematic rupture simulations, KZTIME equals to the CMT time of the source having the minimum time-shift in the \texttt{CMTSOLUTION} file, and coordinates, depth and half-duration of the event are not provided in the headers. 
+Information about the simulation (e.g., event and station parameters, sampling rate) is written in the headers of the seismograms in SAC format. The list of header values and their explanations is given in Figure \ref{fig:SAC-headers}. Please check the SAC webpages \url{www.iris.edu/software/sac/} for further information. Please note that the reference time KZTIME is the centroid time ($t_\text{CMT}=t_\text{PDE}+\texttt{time shift}$), which corresponds to zero time in the synthetics. For kinematic rupture simulations, KZTIME is equal to the CMT time of the source having the minimum time shift in the \texttt{CMTSOLUTION} file, and the coordinates, depth and half-duration of the event are not provided in the headers.
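+
+For instance, a quick way to inspect these header values is the \texttt{saclst}
+utility distributed with SAC (the header field names below follow the SAC
+conventions; the output file names are only an illustration):
+%
+\begin{verbatim}
+saclst kzdate kztime evdp f OUTPUT_FILES/*.sac
+\end{verbatim}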
 
 \begin{figure}[ht]
 \noindent \begin{centering}


