[cig-commits] [commit] master, python-removal, rajesh-petsc-schur: Removed Pyre and Python references from Chapter 3 of the manual. Updated the example config files in Chapter 3 to the C version. (eb40d8e)

cig_noreply at geodynamics.org cig_noreply at geodynamics.org
Wed Nov 5 19:01:48 PST 2014


Repository : https://github.com/geodynamics/citcoms

On branches: master,python-removal,rajesh-petsc-schur
Link       : https://github.com/geodynamics/citcoms/compare/464e1b32299b15819f93efd98d969cddb84dfe51...f97ae655a50bdbd6dac1923a3471ee4dae178fbd

>---------------------------------------------------------------

commit eb40d8ecf982a66d051285c912a9ca74bc5f2f98
Author: Rajesh Kommu <rajesh.kommu at gmail.com>
Date:   Fri Jun 13 16:26:26 2014 -0700

    Removed Pyre and Python references from Chapter 3 of the manual. Updated
    the example config files in Chapter 3 to the C version.


>---------------------------------------------------------------

eb40d8ecf982a66d051285c912a9ca74bc5f2f98
 doc/citcoms-manual.pdf | Bin 12587122 -> 12581683 bytes
 doc/citcoms-manual.tex | 590 +++++++++++--------------------------------------
 2 files changed, 135 insertions(+), 455 deletions(-)

diff --git a/doc/citcoms-manual.pdf b/doc/citcoms-manual.pdf
index 3619f99..b53351e 100644
Binary files a/doc/citcoms-manual.pdf and b/doc/citcoms-manual.pdf differ
diff --git a/doc/citcoms-manual.tex b/doc/citcoms-manual.tex
index 808ed3c..9639904 100644
--- a/doc/citcoms-manual.tex
+++ b/doc/citcoms-manual.tex
@@ -1344,10 +1344,9 @@ to run \texttt{make} again until you re-run \texttt{configure}.
 
 \chapter{Running CitcomS}
 
+\section{Using CitcomS}
 
-\section{Using CitcomS without Pyre}
-
-Regardless of whether you build CitcomS with or without Pyre, two
+After a successful build, two
 binary executables, \texttt{CitcomSRegional} and \texttt{CitcomSFull},
 are always placed under the \texttt{bin} directory. These programs
 are compiled from pure C code and do not use Python or the Pyre framework.
@@ -1361,216 +1360,146 @@ for a full spherical model, are provided in the \texttt{examples/Regional}
 and \texttt{examples/Full} directories, respectively. The meaning
 of the input parameters is described in Appendix \vref{cha:Appendix-A:-Input}.
 
-The pure C version, compared to the Pyre version, shares the same
-input parameters and functionality, but is less flexible in changing
-parameters and in launching parallel jobs. Users are encouraged to
-use the Pyre version if possible. In the following sections and Chapter
-\ref{cha:Cookbooks}, we will concentrate on the Pyre version only.
-
-
-\section{Using CitcomS with Pyre}
-
-If you build CitcomS with the Pyre framework, an additional executable,
-\texttt{citcoms}, is placed under the \texttt{bin} directory. The
-\texttt{citcoms} executable is a Python script used for running both
-the regional and full spherical models using Pyre. Executed without
-any command line options, \texttt{citcoms} will run a regional model
-with default parameters. It can also run a full spherical model if
-the correct parameters are set (see Section \vref{sec:Cookbook-1:-Global}
-for an example).
-
-On input, CitcomS needs numerous parameters to be specified (see Appendix
-\ref{cha:Appendix-A:-Input} for a full list). All parameters have
-sensible default values. Since you will likely want to specify the
-parameters of your CitcomS runs, you will need to alter both computational
-details (such as the number of time steps) and controlling parameters
-specific to your problem (such as the Rayleigh number). These input
-parameters, or properties in the Pyre terminology, are grouped under
-several Pyre components.
-
-Most of the properties you will set using CitcomS have names which
-are identical to the parameters for CitcomS in pure C version, which
-are described in Appendix \ref{cha:Appendix-A:-Input}.
-
-
 \section{Changing Parameters }
 
-There are several methods to set the input parameters for CitcomS:
-via the command line, or by using a configuration file in \texttt{.cfg}
-format.
+Input parameters for CitcomS are set in a configuration file: a plain
+text file whose format is based on the Windows INI format. A
+configuration file has the following form:
 
-
-\subsection{Using the Command Line}
-
-Pyre uses the following syntax to change properties from the command
-line. To change the value of a property of a component, use:
-\begin{lyxcode}
--{}-{[}component{]}.{[}property{]}={[}value{]}
-\end{lyxcode}
-Each component is attached to a facility, so the option above can
-also be written as: 
 \begin{lyxcode}
--{}-{[}facility{]}.{[}property{]}={[}value{]}
-\end{lyxcode}
-Each facility has a default component attached to it. A different
-component can be attached to a facility by:
-\begin{lyxcode}
--{}-{[}facility{]}={[}new\_component{]}~
+\#CitcomS.component1.component2\\
+\#~this~is~a~comment\\
+property1=value1\\
+property2=value2~~;~this~is~another~comment\\
+property3=val1,val2,val3~~;~this~property~is~specified~by~a~list~of~values
 \end{lyxcode}
 
-\subsection{Using a \texttt{.cfg} File}
+Upon termination of each run, all of the parameters are logged in a
+\texttt{pidXXXXXX} file, where \texttt{XXXXXX} is the process id of the
+CitcomS application. Section names such as
+\texttt{\#CitcomS.component1.component2} are optional, but are
+recommended for organizing the contents of the configuration file.
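As an illustrative sketch only (not part of CitcomS), the conventions above — `#` comment lines, `;` inline comments, and comma-separated list values — can be parsed in a few lines of Python; the helper name is hypothetical:

```python
# Hypothetical sketch of parsing the CitcomS-style config format described
# above: '#' starts a comment/section line, ';' starts an inline comment,
# and a value may be a comma-separated list.
def parse_config(text):
    params = {}
    for line in text.splitlines():
        line = line.split(";", 1)[0].strip()   # drop inline comments
        if not line or line.startswith("#"):   # skip blanks and '#' lines
            continue
        key, _, value = line.partition("=")
        value = value.strip()
        # comma-separated values become a list
        params[key.strip()] = value.split(",") if "," in value else value
    return params
```

For example, `parse_config("nodex=17\nperturbl=1,2")` yields `{"nodex": "17", "perturbl": ["1", "2"]}`.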
+
+\section{Uniprocessor Example}
 
-Entering all those parameters via the command line involves the risk
-of typographical errors, which can lead to undesired results. You
-may find it easier to write a brief \texttt{.cfg} input file that
-contains the parameters. This file has a format similar to a Windows
-INI file. The file is composed of one or more sections which are formatted
-as follows:
+CitcomS runs similarly in full spherical or regional modes. For the
+purpose of this example, you will perform a test run of the regional
+version on a workstation. The CitcomS source package contains an 
+\texttt{examples} directory (the \texttt{make install} command installs the 
+examples under \texttt{PREFIX/share/CitcomS/examples}, where \texttt{PREFIX} 
+is the \texttt{CitcomS} installation directory). In this directory, you will 
+find the \texttt{example0} configuration file. Switch to the \texttt{examples}
+directory and execute the following on the command line:
+\begin{lyxcode}
+\$~CitcomSRegional~example0
+\end{lyxcode}
+or, equivalently,
 \begin{lyxcode}
-{[}CitcomS.component1.component2{]}~\\
-\#~this~is~a~comment~\\
-property1~=~value1~\\
-property2~=~value2~~;~this~is~another~comment
+\$~mpirun~-np~1~CitcomSRegional~example0
 \end{lyxcode}
-We strongly recommend that you use \texttt{.cfg} files for your work.
-The files are syntax-colored by the \texttt{vim} editor. (Upon termination
-of each run, all of the parameters are logged in a \texttt{.cfg} file.)
 
+\subsubsection{Example: Uniprocessor, \texttt{example0}}
+\begin{verbatim}
+# CitcomS
+cpu_limits_in_seconds=360000000
+minstep=5
 
-\subsection{Using a \texttt{.pml} File}
+# CitcomS.controller
+storage_spacing=1
 
-A \texttt{.pml} file is an XML file that specifies parameter values
-in a highly structured format. XML files are intended to be read and
-written by machines, not edited manually by humans. The \texttt{.pml}
-file format is intended for applications in which CitcomS input files
-are generated by another program, e.g., a GUI, web application, or
-a high-level structured editor. This file fomat will not be discussed
-or used further in the manual. It is composed of nested sections which
-are formatted as follows:
-\begin{lyxcode}
-<component~name='component1'>~\\
-~~~~<component~name='component2'>~\\
-~~~~~~~~<property~name='property1'>value1</property>~\\
-~~~~~~~~<property~name='property2'>value2</property>~\\
-~~~~</component>~\\
-</component>
-\end{lyxcode}
+# CitcomS.solver
+datafile=example0
+rayleigh=100000
 
-\subsection{Specification and Placement of Configuration Files}
+# CitcomS.solver.mesher
+fi_max=1
+fi_min=0
+nodex=17
+nodey=17
+nodez=9
+theta_max=2.0708
+theta_min=1.0708
 
-One or more configuration files and command line options may be specified
-on the command line:
-\begin{lyxcode}
-\$~citcoms~a.cfg~b.cfg~-{}-foo.bar=baz
-\end{lyxcode}
-In addition, the Pyre framework searches for configuration files named
-\texttt{CitcomS.cfg} in several predefined locations. You may put
-settings in any or all of these locations, depending on the scope
-you want the settings to have:
-\begin{enumerate}
-\item \texttt{PREFIX/etc/CitcomS.cfg}, for system-wide settings;
-\item \texttt{\$HOME/.pyre/CitcomS/CitcomS.cfg}, for user settings and preferences;
-\item the current directory (\texttt{./CitcomS.cfg}), for local overrides. 
-\end{enumerate}
-Parameters given directly on the command line will override any input
-contained in a configuration file. If more than one configuration
-files are given on the command line, the later one overrides the former
-ones. Configuration files given on the command line override all other
-\texttt{CitcomS.cfg} files. The \texttt{CitcomS.cfg} files placed
-in (3) will override those in (2), (2) overrides (1), and (1) overrides
-only the built-in defaults.
+# CitcomS.solver.ic
+num_perturbations=1
+perturbl=1
+perturblayer=5
+perturbm=1
+perturbmag=0.05
 
+# CitcomS.solver.visc
+num_mat=4
+\end{verbatim}
 
-\section{Uniprocessor Example}
+\section{\label{sec:Multiprocessor-Example}Multiprocessor Example}
 
-CitcomS runs similarly in full spherical or regional modes. For the
-purpose of this example, you will perform a test run of the regional
-version on a workstation. Execute the following on the command line:
+In the \texttt{examples} directory, type the following at the command line:
 \begin{lyxcode}
-\$~citcoms~-{}-steps=10~-{}-controller.monitoringFrequency=5~\textbackslash{}~\\
--{}-solver.datafile=example0~-{}-solver.mesher.nodex=17~-{}-solver.mesher.nodey=17
+\$~mpirun~-np~4~CitcomSRegional~example1
 \end{lyxcode}
-This runs a default convection problem in a regional domain for 10
-time steps and with a mesh of 17 $\times$ 17 $\times$ 9 nodal points.
-Since we did not provide the parameter \texttt{solver.mesher.nodez},
-the default value 9 is used. The model results are written to files
-\texttt{example0.{*}} with an interval of 5 time steps. 
 
-Instead of writing the input parameters on the command line, you can
-put them in a \texttt{.cfg} file. The CitcomS source package contains
-an \texttt{examples} directory (the \texttt{make install} command
-installs the examples under \texttt{PREFIX/share/CitcomS/examples},
-where \texttt{PREFIX} is the \texttt{CitcomS} installation directory).
-In this directory, you will find a configuration file equivalent to
-the previous example: \texttt{example0.cfg}. You can run the model
-using:
-\begin{lyxcode}
-\$~citcoms~example0.cfg
-\end{lyxcode}
+\subsubsection{Example: Multiprocessor, \texttt{example1}}
+\begin{verbatim}
+# CitcomS
+cpu_limits_in_seconds=360000000
+minstep=70
 
-\subsubsection{Example: Uniprocessor, \texttt{example0.cfg}}
-\begin{lyxcode}
-{[}CitcomS{]}~\\
-steps~=~5~\\
-~\\
-{[}CitcomS.controller{]}~\\
-monitoringFrequency~=~1~\\
-~\\
-{[}CitcomS.solver{]}~\\
-datafile~=~example0~\\
-~\\
-{[}CitcomS.solver.mesher{]}~\\
-nodex~=~~17~\\
-nodey~=~~17
-\end{lyxcode}
+# CitcomS.controller
+storage_spacing=10
 
-\section{\label{sec:Multiprocessor-Example}Multiprocessor Example}
+# CitcomS.solver
+datafile=example1
+rayleigh=100000
 
-In order to run this example, you should be on a Beowulf cluster with
-four or more processors, or on a supercomputer; and you should be
-in the directory in which the input file is located, in this case,
-the \texttt{examples} directory. CitcomS has been extensively used
-on both environments, using up to several hundred processors. How
-to run a multiprocessor CitcomS model depends on your hardware and
-software settings, e.g., whether a batch system is used, what the
-names of the computers in a cluster are, and how the file system is
-organized. This section will lead you through the different settings
-of a parallel environment. 
+# CitcomS.solver.mesher
+fi_max=1
+fi_min=0
+nodex=17
+nodey=17
+nodez=9
+nprocx=2
+nprocy=2
+theta_max=2.0708
+theta_min=1.0708
 
+# CitcomS.solver.ic
+num_perturbations=1
+perturbl=1
+perturblayer=5
+perturbm=1
+perturbmag=0.05
 
-\subsubsection{Example: Multiprocessor, \texttt{example1.cfg }}
-\begin{lyxcode}
-{[}CitcomS{]}~\\
-steps~=~70~\\
-~\\
-{[}CitcomS.controller{]}~\\
-monitoringFrequency~=~10~\\
-~\\
-{[}CitcomS.solver{]}~\\
-datafile~=~example1~\\
-~\\
-{[}CitcomS.solver.mesher{]}~\\
-nprocx~=~~2~\\
-nprocy~=~~2~\\
-nodex~~=~17~\\
-nodey~~=~17~\\
-nodez~~=~~9~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-\end{lyxcode}
-This example uses 2 processors in colatitude ($x$-coordinate), 2
-in longitude ($y$-direction), and 1 in the radial ($z$-direction),
-i.e., it uses 4 processors in total. In addition, there will be 17
-nodes in $x$ (theta), 17 nodes in $y$ (fi), and 9 nodes in $z$
-(radius\_inner). The model will run for 70 time steps and the code
-will output the results every 10 time steps.
+# CitcomS.solver.visc
+num_mat=4
+\end{verbatim}
+
+This example has:
+\begin{itemize}
+  \item 2 processors in the colatitude ($x$) direction, specified by
+    the \texttt{nprocx=2} line in the configuration file
+  \item 2 processors in the longitude ($y$) direction, specified by
+    the \texttt{nprocy=2} line
+  \item 1 processor in the radial ($z$) direction. The default value of
+    \texttt{nprocx}, \texttt{nprocy}, and \texttt{nprocz} is 1, so the
+    \texttt{nprocz=1} line is omitted from the configuration file.
+  \item The total number of processors is
+    $\mathtt{nprocx}\times\mathtt{nprocy}\times\mathtt{nprocz}$, so this
+    model uses 4 processors in total.
+  \item 17 nodes in the $x$ (theta) direction
+  \item 17 nodes in the $y$ (fi) direction
+  \item 9 nodes in the $z$ (radius\_inner) direction
+\end{itemize}
+The model will run for 70 time steps and the code will output the results every
+10 time steps.
 
 It is important to realize that within the example script (and in
 finite element method, FEM) the term ``node'' refers to the mesh points
-defining the corners of the elements. In \texttt{example1.cfg}, this
+defining the corners of the elements. In \texttt{example1}, this
 is indicated with:
 \begin{lyxcode}
-nodex~~=~17~\\
-nodey~~=~17~\\
-nodez~~=~~9~
+nodex=17\\
+nodey=17\\
+nodez=9
 \end{lyxcode}
 These quantities refer to the total number of FEM nodes in a given
 direction for the complete problem, and for the example it works out
@@ -1578,303 +1507,54 @@ that within a given processor there will be 9 $\times$ 9 $\times$
 9 nodes. Note that in the $x$-direction (or $y$) that for the entire
 problem there are 17 nodes and there is one node shared between two
 processors. This shared node is duplicated in two adjacent processors.
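The node-count arithmetic above can be sketched in a few lines of Python (an illustration, not part of CitcomS):

```python
# Illustration of the node-count arithmetic described above (not part of
# CitcomS): adjacent processors duplicate one shared boundary node, so
# along each direction nodes_total = nproc * (nodes_per_proc - 1) + 1.
def nodes_per_processor(nodes_total, nproc):
    return (nodes_total - 1) // nproc + 1

# Values from example1: nodex = nodey = 17 with nprocx = nprocy = 2,
# and nodez = 9 with nprocz = 1 (the default).
assert nodes_per_processor(17, 2) == 9  # 9 nodes per processor in x and y
assert nodes_per_processor(9, 1) == 9   # all 9 radial nodes on one processor
```

This confirms the 9 x 9 x 9 nodes per processor stated above, with a total of nprocx x nprocy x nprocz = 4 processors.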
+
 Unfortunately, ``nodes'' sometimes also refer to the individual computers
-which make up a Beowulf cluster or supercomputer. In the example scripts,
-this is indicated with:
+which make up a cluster or supercomputer, or to the number of CPUs in a
+given computer. One CitcomS process runs on each such node. In the example
+configuration file, this is indicated with:
 \begin{lyxcode}
-nprocx~=~~2~\\
-nprocy~=~~2
+nprocx=2\\
+nprocy=2
 \end{lyxcode}
 \begin{figure}[H]
 \begin{centering}
 \includegraphics[width=0.5\paperwidth]{graphics/cookbook2.pdf}
 \par\end{centering}
-
 \caption{Computational Domain. Map view on the configuration of the top layer
 of the computational nodes and the processors.}
 \end{figure}
 
 
 
-\subsection{Output Directories and Output Formats}
-
-CitcomS potentially generates a large number of ASCII files. This
-means that you will have to organize your directories carefully when
-running CitcomS so that you can manage these files as well as use
-a post-processing program contained in this distribution. 
-
-How to best manage this large output depends on whether you will use
-a local file system or a parallel file system. For example, if you
-have a local hard disk on every machine (node) on a Beowulf cluster,
-with each hard disk mounted locally to the machine, this scenario
-is referred to as a local file system in this section. Or you might
-use some kind of parallel file system on your computer (e.g., NFS,
-GPFS, PVFS, to name a few), which is mounted on all of the nodes.
-Usually your home directory is mounted on the parallel file system.
-The local file system is usually more cost- and time-efficient than
-the parallel file system. 
-
-If you want CitcomS to write its output to the local hard disks, you
-need to have a common directory structure on all of the local hard
-disks. For example, if the directory \texttt{/scratch} exists on all
-local hard disks, you can run the example script with:
-\begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=/scratch
-\end{lyxcode}
-The additional command line option will override the \texttt{datadir}
-property, which specifies the output directory. The output files are
-then placed in \texttt{/scratch} on each individual machine with a
-filename prefix \texttt{example1.}
-
-However, if the output directory name on each local hard disk depends
-on the machine hostname, you can run the example script with:
-\begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=/scratch\_\%HOSTNAME
-\end{lyxcode}
-The special string \texttt{\%HOSTNAME} will be substituted by the
-hostname of each machine. 
-
-As the final example for a local file system, you can specify an arbitrary
-output directory for each machine. To do so, you must write a program
-to be executed on each machine which will print the output directory.
-The program must be named \texttt{citcoms\_datadir} and must reside
-on your path. An example of \texttt{citcoms\_datadir} can be found
-in the \texttt{visual/} directory. Then you can run the example script
-with:
-\begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=\%DATADIR
-\end{lyxcode}
-The special string \texttt{\%DATADIR} will be substituted by the output
-of \texttt{citcoms\_datadir} for each machine.
-
-If you want CitcomS to write its output to a parallel file system,
-you have several choices. You can run the example script as follows
-(substitute \texttt{\emph{username}} with your own username):
-\begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=/home/\emph{username}
-\end{lyxcode}
-The output files are then placed in your home directory with a filename
-prefix \texttt{example1}. A potential problem with this approach is
-that the directory \texttt{/home/}\texttt{\emph{username}} will be
-flooded with hundreds of files, perhaps even tens of thousands of
-files if you are running a model using several tens of processors
-for thousands of time steps. Alternatively you can have each machine
-write its output to its own directory, according to its MPI rank.
-You can run the example script with:
+\subsection{Output Formats}
+The possible output formats in CitcomS are \texttt{ascii}, \texttt{ascii-gz},
+\texttt{hdf5}, and \texttt{vtk}, with \texttt{ascii} being the default; the
+format is specified by the \texttt{output\_format} configuration parameter.
+The ASCII output can take a lot of disk space, so CitcomS can write
+\texttt{gzip}-compressed output directly. To enable this, add the line:
 \begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=/home/\emph{username}/\%RANK
-\end{lyxcode}
-The special string \texttt{\%RANK} will be substituted by the MPI
-rank of each processor. You will see four new directories \texttt{/home/}\texttt{\emph{username}}\texttt{/0},
-\texttt{/home/}\texttt{\emph{username}}\texttt{/1}, \texttt{/home/}\texttt{\emph{username}}\texttt{/2},
-and \texttt{/home/}\texttt{\emph{username}}\texttt{/3}. The processor
-of MPI rank 0 will write its output in \texttt{/home/}\texttt{\emph{username}}\texttt{/0}
-with a filename prefix \texttt{example1} (defined by the property
-\texttt{datafile} inside \texttt{example1.cfg}) and so on. 
-
-The ASCII output can potentially take a lot of disk space. CitcomS
-can write \texttt{gzip} compressed output directly. You can run the
-example script with:
-\begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=/home/\emph{username}~\textbackslash{}~\\
--{}-solver.output.output\_format=ascii-gz
+  output\_format=ascii-gz
 \end{lyxcode}
 Be warned that the post-process scripts do not understand this output
 format yet.
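Until the post-processing scripts support the compressed format, such files can still be inspected with any gzip-aware tool. As a generic sketch (the helper is illustrative, not part of CitcomS, and actual output filenames depend on the datafile prefix, processor rank, and time step), Python's gzip module can read a compressed ASCII file directly:

```python
import gzip

# Generic sketch: read a gzip-compressed ASCII output file line by line.
# The path passed in is whatever file CitcomS wrote; this helper makes no
# assumption about its naming scheme.
def read_gz_lines(path):
    with gzip.open(path, "rt") as f:   # "rt" decodes the bytes to text
        return f.read().splitlines()
```
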
 
-The last choice is the most powerful one. Instead of writing many
-ASCII files, CitcomS can write its results into a single HDF5 (Hierarchical
-Data Format) file per time step. These HDF5 files take less disk space
-than all the ASCII files combined and don't require additional post-processing
-to be visualized in OpenDX. In order to use this feature, you must
-compile CitcomS with the parallel HDF5 library if you haven't done
-so already (see Section \vref{sec:HDF5-Configuration}). You can run
-the example script with:
+Instead of writing many ASCII files, CitcomS can write its results into a
+single HDF5 (Hierarchical Data Format) file per time step. These HDF5 files
+take less disk space than all the ASCII files combined and do not require
+additional post-processing to be visualized in OpenDX. To use this feature,
+you must compile CitcomS with the parallel HDF5 library if you haven't
+done so already (see Section \vref{sec:HDF5-Configuration}). To enable it,
+add the following line to the configuration file:
 \begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-solver.datadir=/home/\emph{username}~\textbackslash{}~\\
--{}-solver.output.output\_format=hdf5
+  output\_format=hdf5
 \end{lyxcode}
-The output files will be stored in \texttt{/home/}\texttt{\emph{username}}\texttt{/}
+The output files will be stored in the
+current directory (\texttt{./})
 with a filename prefix \texttt{example1} and a filename suffix \texttt{h5}.
 See Chapter \vref{cha:Working-with-HDF5} for more information on
 how to work with the HDF5 output.
 
 
-\subsection{\label{sub:Launchers-and-Schedulers}Launchers and Schedulers}
-
-If you have used MPI before, you know that \texttt{mpirun} requires
-several command-line options to launch a parallel job. Or if you have
-used one of the batch systems, you will know that the batch system
-requires you to write a script to launch a job. Fortunately, launching
-a parallel CitcomS job is simplified by Pyre's \texttt{launcher} and
-\texttt{scheduler} facilities. Many properties associated with \texttt{launcher}
-and \texttt{scheduler} are pertinent to the cluster you are on, and
-are best customized in a configuration file. Your personal CitcomS
-configuration file (\texttt{\textasciitilde{}/.pyre/CitcomS/CitcomS.cfg})
-is suitable for this purpose. On a cluster, the ideal setup is to
-install a system-wide configuration file under \texttt{/etc/pythia-0.8},
-for the benefit of all users.
-
-Pyre's \texttt{scheduler} facility is used to specify the type of
-batch system you are using (if any):
-\begin{lyxcode}
-{[}CitcomS{]}
-
-scheduler~=~lsf
-\end{lyxcode}
-The valid values for \texttt{scheduler} are \texttt{lsf}, \texttt{pbs},
-\texttt{globus}, and \texttt{none}.
-
-Pyre's \texttt{launcher} facility is used to specify which MPI implementation
-you are using:
-\begin{lyxcode}
-{[}CitcomS{]}
-
-launcher~=~mpich
-\end{lyxcode}
-The valid values for \texttt{launcher} include \texttt{mpich} and
-\texttt{lam-mpi}.
-
-You may find the \texttt{dry} option useful while debugging the \texttt{launcher}
-and \texttt{scheduler} configuration. To debug the scheduler configuration,
-use the \texttt{-{}-scheduler.dry} option:
-\begin{lyxcode}
-citcoms~-{}-scheduler.dry
-\end{lyxcode}
-This option will cause CitcomS to perform a ``dry run,'' dumping the
-batch script to the console, instead of actually submitting it for
-execution (the output is only meaningful if you're using a batch system).
-Likewise, to debug the launcher configuration, use the \texttt{-{}-launcher.dry}
-option:
-\begin{lyxcode}
-citcoms~-{}-launcher.dry
-\end{lyxcode}
-This option will cause CitcomS to print the \texttt{mpirun} command,
-instead of actually executing it. (If you're using a batch system,
-a job will be submitted for execution; when it runs, CitcomS will
-simply print the \texttt{mpirun} command, and the job will immediately
-terminate.)
-
-
-\subsubsection{Running without a Batch System}
-
-On a cluster without a batch system, you need to specify on which
-machines the job will run. Supposing the machines on your cluster
-are named n001, n002, \ldots{}, etc., but you want to run the job
-on machines n001, n003, n004, and n005 (maybe n002 is down for the
-moment). To run the example, create a file named \texttt{mymachines.cfg}
-which specifies the machines to use:
-\begin{lyxcode}
-{[}CitcomS.launcher{]}
-
-nodegen~=~n\%03d
-
-nodelist~=~{[}1,3-5{]}
-\end{lyxcode}
-The \texttt{nodegen} property is a printf-style format string, used
-in conjunction with \texttt{nodelist} to generate the list of machine
-names. The \texttt{nodelist} property is a comma-separated list of
-machine names in square brackets.
-
-Now, invoke the following:
-\begin{lyxcode}
-\$~citcoms~example1.cfg~mymachines.cfg
-\end{lyxcode}
-This strategy gives you the flexibility to create an assortment of
-\texttt{.cfg} files (with one \texttt{.cfg} file for each machine
-list) which can be easily paired with different parameter files.
-
-If your machine list does not change often, you may find it more convenient
-to specify default values for \texttt{nodegen} and \texttt{nodelist}
-in \texttt{\textasciitilde{}/.pyre/CitcomS/CitcomS.cfg} (which is
-read automatically). Then, you can run any simulation with no additional
-arguments:
-\begin{lyxcode}
-\$~citcoms~example1.cfg\end{lyxcode}
-\begin{quote}
-\textcolor{red}{Warning:} This assumes your machine list has enough
-nodes for the simulation in question.
-\end{quote}
-You will notice that a machine file \texttt{mpirun.nodes} is generated.
-It will contain a list of the nodes where CitcomS has run. Save the
-machine file as it will be useful in the postprocessing step.
-
-
-\subsubsection{Using a Batch System}
-
-The settings which are important when using a batch system are summarized
-in the sample configuration file which follows.
-\begin{lyxcode}
-{[}CitcomS{]}
-
-scheduler~=~lsf~~~~;~the~type~of~the~installed~batch~system~\\
-~\\
-{[}CitcomS.lsf{]}
-
-bsub-options~=~{[}-a~mpich\_gm{]}~~~~;~special~options~for~'bsub'~\\
-~\\
-{[}CitcomS.launcher{]}
-
-command~=~mpirun.lsf~~~~;~'mpirun'~command~to~use~on~our~cluster~\\
-~\\
-{[}CitcomS.job{]}
-
-queue~=~normal~~~~~~~~~~;~default~queue~for~jobs~\\
-walltime~=~5{*}minute~~~~~;~run~time~limit~of~the~job
-\end{lyxcode}
-These settings are usually placed in \texttt{\textasciitilde{}/.pyre/CitcomS/CitcomS.cfg}
-or in a system-wide configuration file. They can be overridden on
-the command line, where one typically specifies the job name and the
-allotted time for the job:
-\begin{lyxcode}
-\$~citcoms~example1.cfg~-{}-job.queue=debug~\textbackslash{}
-
-~~~~-{}-job.name=example1~-{}-job.walltime=5{*}minute
-\end{lyxcode}
-The number of nodes to allocate for the job is determined automatically,
-based upon the simulation parameters.
-
-
-\subsection{Monitoring Your Jobs}
-
-Once launched, CitcomS will print the progress of the model to the
-standard error stream (stderr). Usually, the stderr is directed to
-your terminal so that you can monitor the progress. On some system,
-the stderr is redirected to a file. In any case, the progress is always
-saved in a log file (e.g., \texttt{example1.log}). The log file contains
-the convergence progress of the computation and, if an error occurs,
-debugging output. The time file (e.g., \texttt{example1.time}) contains
-the elapsed model time (in non-dimensional units) and CPU time (in
-seconds) of every time step. The format of the time file can be found
-in Appendix \vref{cha:Appendix-C:-CitComS,}. The log and time files
-are output by the rank-0 processor only. 
-
-Following your successful run, you will want to retrieve the output
-files from all the nodes and process them so they can be visualized
-with the visualization program OpenDX (see Chapter\vref{cha:Postprocessing-and-Graphics}).
-
-
-\section{Using CitcomS on the TeraGrid}
-
-The TeraGrid is a set of parallel supercomputer facilities at eight
-partner sites in the U.S. which creates an integrated, persistent
-computational resource. Since TeraGrid software is based on commodity
-clusters, Linux/Unix, and Globus, it should be easier to scale from
-a laboratory development environment to a high-end environment in
-a straightforward manner which promotes application performance. Although
-the TeraGrid is a high-end resource, it was developed to be accessible
-to the general community of scientists and engineers as a production
-facility. TeraGrid accounts for small allocations are available directly
-from CIG for investigators in the U.S. 
-
-CitcomS has already been installed and tested on several NSF TeraGrid
-platforms, including NCSA, SDSC, and TACC. To use CitcomS on these
-machines, please log in to your TeraGrid account and read the instructions
-at \texttt{\$TG\_COMMUNITY/CIG/CitcomS/TG\_README}. See CIG Community
-Area Software on the TeraGrid \url{geodynamics.org/cig/software/csa/}
-for additional information on access and to apply for allocation time.
-
-
 \chapter{\label{cha:Working-with-HDF5}Working with CitcomS HDF5 Files}
 
 


