[cig-commits] commit: Minor edits.

Mercurial hg at geodynamics.org
Sun Feb 10 20:24:38 PST 2013


changeset:   172:1b19a2d00c03
tag:         tip
user:        Charles Williams <C.Williams at gns.cri.nz>
date:        Mon Feb 11 17:24:26 2013 +1300
files:       faultRup.tex
description:
Minor edits.


diff -r 1c12180b514e -r 1b19a2d00c03 faultRup.tex
--- a/faultRup.tex	Sat Feb 09 01:27:46 2013 -0500
+++ b/faultRup.tex	Mon Feb 11 17:24:26 2013 +1300
@@ -140,8 +140,8 @@ rupture propagation often approximate th
 rupture propagation often approximate the loading of the crust at the
 beginning of a rupture
 \citep{Mikumo:etal:1998,Harris:Day:1999,Aagaard:etal:BSSA:2001,Peyrat:etal:2001,Oglesby:Day:2001,Dunham:Archuleta:2004}. Numerical
-seismicity models that attempt to model multiple earthquake cycles,
-generally simplify not only the fault loading and rupture propagation
+seismicity models that attempt to model multiple earthquake cycles
+generally simplify not only the fault loading and rupture propagation,
 but also the physical properties to make the calculations tractable
 \citep{Ward:1992,Robinson:Benites:1995,Hillers:etal:2006,Rundle:etal:2006,Pollitz:Schwartz:2008,Dieterich:Richards-Dinger:2010}.
 
@@ -471,7 +471,7 @@ variations. Considering the deformation 
     \right) \, dS = \bm{0}.
   \end{split}
 \end{gather}\end{linenomath*}
-In order to march forward in time, we simply increment time, solve the
+To march forward in time, we simply increment time, solve the
 equations, and add the increment in the solution to the solution from
 the previous time step.  We solve these equations using the Portable,
 Extensible Toolkit for Scientific Computation (PETSc), which provides
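
A minimal sketch of this time-marching loop in Python, with hypothetical
assemble() and solve() callbacks standing in for the PETSc machinery
(not PyLith's actual API):

    import numpy as np

    def march(u, t, dt, nsteps, assemble, solve):
        """Increment time, solve, and accumulate the solution increment.

        assemble(t, u) -> (A, b) and solve(A, b) -> du are hypothetical
        callbacks standing in for the PETSc assembly and solves.
        """
        for _ in range(nsteps):
            t += dt                # increment time
            A, b = assemble(t, u)  # rebuild the linear system at the new time
            du = solve(A, b)       # solve for the increment in the solution
            u = u + du             # add the increment to the previous solution
        return u, t

    # Usage with a trivial one-DOF "system":
    u, t = march(np.zeros(1), 0.0, 0.1, 10,
                 assemble=lambda t, u: (np.eye(1), np.full(1, 0.1)),
                 solve=np.linalg.solve)
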
@@ -632,7 +632,7 @@ introduce deformation at short length sc
 introduce deformation at short length scales (high frequencies) that
 numerical models cannot resolve accurately. This is especially true
 in spontaneous rupture simulations, because the rise time is sensitive
-to the evolution of the fault rupture. In order to reduce the
+to the evolution of the fault rupture. To reduce the
 introduction of deformation at such short length scales we add
 artificial damping via Kelvin-Voigt viscosity
 \citep{Day:etal:2005,Kaneko:etal:2008} to the computation of the strain,
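
For reference, the Day et al. (2005) scheme cited here computes strains from
a viscously adjusted displacement field; a sketch of that form, assuming the
usual nondimensional viscosity parameter \eta (typically of order 0.1-1.0):

    % Kelvin-Voigt damping sketch: strains are computed from the adjusted
    % displacement \tilde{u} rather than from u itself.
    \tilde{u}_i = u_i + \eta \, \Delta t \, \frac{\partial u_i}{\partial t},
    \qquad
    \epsilon_{ij} = \frac{1}{2} \left(
      \frac{\partial \tilde{u}_i}{\partial x_j}
      + \frac{\partial \tilde{u}_j}{\partial x_i} \right)
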
@@ -772,7 +772,7 @@ poor. Similarly, in rare cases in which 
 poor. Similarly, in rare cases in which the fault slip extends across
 the entire domain, deformation extends far from the fault and
 the estimate derived using only the fault DOF will be
-poor. In order to make this iterative procedure more robust so that it
+poor. To make this iterative procedure more robust so that it
 works well across a wide variety of fault constitutive models, we add
 a small enhancement to the iterative procedure.
 
@@ -905,7 +905,7 @@ classified (contains a face on the fault
 classified (contains a face on the fault with this vertex). Depending
 on the order of the iteration, this can produce a ``wrap around''
 effect at the ends of the fault, but it does not affect the numerical
-solution as long as the fault slip is forced be zero at the edges of
+solution as long as the fault slip is forced to be zero at the edges of
 the fault. In prescribed slip simulations this is done via the
 user-specified slip distribution, whereas in spontaneous rupture
 simulations it is done by preventing slip with artificially large
@@ -926,7 +926,7 @@ coefficients of friction, cohesive stres
 \subsection{Quasi-static Simulations}
 \label{sec:solver:quasi-static}
 
-In order to solve the large, sparse systems of linear equations
+To solve the large, sparse systems of linear equations
 arising in our quasi-static simulations, we employ preconditioned
 Krylov subspace methods~\citep{Saad03}. We create a sequence of
 vectors by repeatedly applying the system matrix to the
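
As a concrete illustration of a preconditioned Krylov solve, a minimal
petsc4py sketch using a toy 1-D Laplacian (not the paper's actual system
or PyLith's solver configuration):

    from petsc4py import PETSc

    n = 100
    A = PETSc.Mat().createAIJ([n, n], nnz=3)  # sparse system matrix
    for i in range(n):                        # assemble a toy 1-D Laplacian
        A.setValue(i, i, 2.0)
        if i > 0:
            A.setValue(i, i - 1, -1.0)
        if i < n - 1:
            A.setValue(i, i + 1, -1.0)
    A.assemble()

    x, b = A.createVecs()        # solution and right-hand-side vectors
    b.set(1.0)

    ksp = PETSc.KSP().create()   # Krylov subspace solver
    ksp.setOperators(A)
    ksp.setType('gmres')         # Krylov method
    ksp.getPC().setType('ilu')   # preconditioner
    ksp.setFromOptions()         # honor -ksp_type/-pc_type options
    ksp.solve(b, x)              # repeated applications of A build the
                                 # Krylov subspace
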
@@ -1244,7 +1244,7 @@ We generate both hexahedral meshes and t
 (available from \url{http://cubit.sandia.gov}) and construct meshes so that
 the problem size (number of DOF) for the two different cell types
 (hexahedra and tetrahedra) are nearly the same (within 2\%). The suite
-of simulations examine increasing larger problem sizes as we increase
+of simulations examines increasingly larger problem sizes as we increase
 the number of processes (with one process per core), with $7.8\times
 10^4$ DOF for 1 process up to $7.1\times 10^6$ DOF for 96
 processes. The corresponding discretization sizes are 2033 m to 437 m
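
As a rough consistency check on these weak-scaling sizes: 7.1e6 DOF / 96
processes is about 7.4e4 DOF per process, close to the 7.8e4 DOF of the
single-process case, so the per-process workload stays nearly constant as
the process count grows.
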
@@ -1339,7 +1339,7 @@ preconditioner for the Lagrange multipli
 preconditioner for the Lagrange multipliers submatrix. We ran the
 simulations on Lonestar at the Texas Advanced Computing
 Center. Lonestar is comprised of 1888 compute nodes connected by QDR
-Infiniband in a fat-tree topology, where each compute node consisted
+Infiniband in a fat-tree topology, where each compute node consists
 of two six-core Intel Xeon E5650 processors with 24 GB of
 RAM. Simulations run on twelve or fewer cores were run on a single
 compute node with processes distributed across processors and then
@@ -1371,7 +1371,7 @@ Table~\ref{tab:solvertest:solver:events}
 Table~\ref{tab:solvertest:solver:events}, we see that \texttt{MatMult}
 has good scalability, but that it is a small fraction of the overall
 solver time. The AMG preconditioner setup (\texttt{PCSetUp}) and
-application \texttt{PCApply}) dominate the overall solver time. The
+application (\texttt{PCApply}) dominate the overall solver time. The
 AMG preconditioner setup time increases with the number of
 processes. Note that many weak scaling studies do not include this
 event, because it is amortized over the iteration. Nevertheless, in
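
For context, MatMult, PCSetUp, and PCApply are PETSc's built-in profiling
events, reported in its log summary: MatMult is the sparse matrix-vector
product performed in every Krylov iteration, while PCSetUp and PCApply are
the construction and application of the preconditioner, respectively.
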
@@ -1548,11 +1548,11 @@ available in the {\tt dynamic/scecdynrup
 {\tt dynamic/scecdynrup/tpv210} directories of the benchmark repository.
 
 Figure~\ref{fig:tpv13:geometry}
-show the geometry of the benchmark and the size of the domain
+shows the geometry of the benchmark and the size of the domain
 we used in our verification test. The benchmark includes both 2-D
 (TPV13-2D is a vertical slice through the fault center-line with plane
 strain conditions) and 3-D versions (TPV13). This benchmark specifies
-a spatial resolution of 100 m on the fault surface. In order to
+a spatial resolution of 100 m on the fault surface. To
 examine the effects of cell type and discretization size we consider
 both triangular and quadrilateral discretizations with resolutions on
 the fault of 50 m, 100 m, and 200 m for TPV13-2D and 100 m and 200 m
@@ -1648,7 +1648,7 @@ benchmark TPV13, we conclude that PyLith
 benchmark TPV13, we conclude that PyLith performs similarly
 to other finite-element and finite-difference dynamic spontaneous
 rupture modeling codes. In particular it is well-suited to problems
-with complex geometry as we are able to vary the discretization size
+with complex geometry, as we are able to vary the discretization size
 while simulating a dipping normal fault. The code accurately captures
 supershear rupture and properly implements a Drucker-Prager
 elastoplastic bulk rheology and slip-weakening friction.


