[cig-commits] r11987 - seismo/3D/automeasure/latex

alessia at geodynamics.org
Tue May 20 07:40:54 PDT 2008


Author: alessia
Date: 2008-05-20 07:40:53 -0700 (Tue, 20 May 2008)
New Revision: 11987

Modified:
   seismo/3D/automeasure/latex/appendix.tex
   seismo/3D/automeasure/latex/discussion.tex
   seismo/3D/automeasure/latex/flexwin_paper.pdf
   seismo/3D/automeasure/latex/introduction.tex
   seismo/3D/automeasure/latex/method.tex
   seismo/3D/automeasure/latex/results.tex
Log:
Made modifications suggested by Luis.  Most modifications are in the introduction.  I have also removed the reference in the Summary to the modification of parameters between iterations of an adjoint tomography inversion.  Although this is a valid point, it should be made  in the adjoint tomography inversion paper, not in the Flexwin paper.

Modified: seismo/3D/automeasure/latex/appendix.tex
===================================================================
--- seismo/3D/automeasure/latex/appendix.tex	2008-05-20 12:37:39 UTC (rev 11986)
+++ seismo/3D/automeasure/latex/appendix.tex	2008-05-20 14:40:53 UTC (rev 11987)
@@ -1,5 +1,6 @@
 \appendix
 \section{User functions\label{ap:user_fn}}
+This appendix presents the functional forms of the time-dependent parameters in Table~\ref{tb:params}. These functions are defined by the user, and can depend on quantities related to the focal parameters and to the relative positions of the earthquake source and the receiver.
 
 \subsection{Global scenario\label{ap:user_global}}
 

Modified: seismo/3D/automeasure/latex/discussion.tex
===================================================================
--- seismo/3D/automeasure/latex/discussion.tex	2008-05-20 12:37:39 UTC (rev 11986)
+++ seismo/3D/automeasure/latex/discussion.tex	2008-05-20 14:40:53 UTC (rev 11987)
@@ -1,7 +1,7 @@
 \section{Relevance to adjoint tomography}
 \label{sec:discuss}
 
-The window selection algorithm we describe in this paper was designed to solve the problem of automatically picking windows for adjoint problems, specifically for 3D-3D tomography as described by \cite{TrompEtal2005} and \cite{TapeEtal2007}.
+The window selection algorithm we describe in this paper was designed to solve the problem of automatically picking windows for tomographic problems, specifically for 3D-3D adjoint tomography as described by \cite{TrompEtal2005} and \cite{TapeEtal2007}.
 Once the time windows are picked, the user is faced with choosing a type of measurement within each time window, for example waveform differences, cross-correlation time-lags, or multi-taper phase and amplitude anomalies.
 The distinguishing feature of adjoint methods is that they turn measurements of the differences between observed and synthetic waveforms into adjoint sources that are subsequently used to determine the sensitivity kernels of the measurements themselves to the Earth model parameters.  The manner in which the adjoint source is created is specific to each type of measurement, but once formulated it can be applied to any part of the seismogram.  Adjoint methods have been used to calculate kernels of various body- and surface-wave phases with respect to isotropic elastic parameters and interface depths \citep{LiuTromp2006}, and with respect to anisotropic elastic parameters \citep{SieminskiEtal2007a,SieminskiEtal2007b}.  Adjoint methods allow us to calculate kernels for each and every wiggle on a given seismic record, thereby giving access to virtually all the information contained within.  
 
@@ -11,7 +11,7 @@
 
 The use of adjoint methods for tomography requires a method of selecting and windowing seismograms that avoids seismic noise while at the same time extracting as much information as possible from the signals.  The method must be automated in order to adapt to the changing synthetic seismograms at each iteration of the tomographic inversion.  The method must also be adaptable to the features that exist in the seismograms themselves, because 3D wavefield simulations are able to synthesize phases that do not exist in 1D simulations or traditional travel-time curves.  These considerations led us to favor a signal processing approach to the problem of data selection, an approach which in turn led to the development of the FLEXWIN algorithm we have presented here.  
 
-Finally, we note that the design of this algorithm is predicated on the desire {\em not} to use the entire time series of each event when making a measurement between data and synthetics. If one were to simply take the waveform difference between two time series, then there would be no need for selecting time windows of interest. However, this ideal approach \citep[e.g.,][]{GauthierEtal1986} may only work in real applications if the
+Finally, we note that the design of this algorithm is based on the desire {\em not} to use the entire time series of each event when making a measurement between data and synthetics. If one were to simply take the waveform difference between two time series, then there would be no need for selecting time windows of interest. However, this ideal approach \citep[e.g.,][]{GauthierEtal1986} may only work in real applications if the
 statistical properties of the noise are well known, which is rare.
 %noise in the observed seismograms is described well, which is rare.
 Without an adequate description of the noise, it is prudent to resort to the selection of time windows even for waveform difference measurements. 
@@ -21,7 +21,8 @@
 \section{Summary
 \label{sec:summary}}
 
-The FLEXWIN algorithm is independent of input model, geographic scale and frequency range. Its use need not be limited to tomography studies, nor to studies using 3D synthetics. It is a configurable process that can be applied to different seismic scenarios by changing the parameters in Table~\ref{tb:params}.  We have configured the algorithm separately for each of the tomographic scenarios presented in Section~\ref{sec:results}.  The configuration process is data-driven: starting from the description of how each parameter influences the window selection (Section~\ref{sec:algorithm}), the user tunes the parameters using a representative subset of the full dataset until the algorithm produces an adequate set of windows, then applies the tuned algorithm to the full dataset. The choice of what makes an adequate set of windows remains subjective, as it depends strongly on the quality of the input model, the quality of the data, and the region of the Earth the tomographic inversion aims to constrain.  We consider the algorithm to be correctly tuned when false positives (windows around undesirable features of the seismogram) are minimized, and true positives (window around desirable features) are maximized.  For a given dataset, the set of tuned parameters (Table~\ref{tb:params}) and their user-defined time dependencies completely determine the window selection results. Finally, we envision that successive iterations of a particular tomographic model may require minor adjustments to the tuning parameters, as the fits improve between the synthetic and observed seismograms, permitting higher frequency information to be used.
+The FLEXWIN algorithm is independent of input model, geographic scale and frequency range. Its use need not be limited to tomography studies, nor to studies using 3D synthetics. It is a configurable process that can be applied to different seismic scenarios by changing the parameters in Table~\ref{tb:params}.  We have configured the algorithm separately for each of the tomographic scenarios presented in this paper (Section~\ref{sec:results}).  The configuration process is data-driven: starting from the description of how each parameter influences the window selection (Section~\ref{sec:algorithm}), the user tunes the parameters using a representative subset of the full dataset until the algorithm produces an adequate set of windows, then applies the tuned algorithm to the full dataset. The choice of what makes an adequate set of windows remains subjective, as it depends strongly on the quality of the input model, the quality of the data, and the region of the Earth the tomographic inversion aims to constrain.  We consider the algorithm to be correctly tuned when false positives (windows around undesirable features of the seismogram) are minimized, and true positives (windows around desirable features) are maximized.  For a given dataset, the set of tuned parameters (Table~\ref{tb:params}) and their user-defined time dependencies completely determine the window selection results. 
+%Finally, we envision that successive iterations of a particular tomographic model may require minor adjustments to the tuning parameters, as the fits improve between the synthetic and observed seismograms, permitting higher frequency information to be used.
 
 The desire to study regions of detailed structure and to examine the effects of finite source processes requires seismologists to deal with increasingly complex seismic records.  Furthermore, with increasing coverage and sampling rate, the available data becomes voluminous and challenging to manage. In using the FLEXWIN package, the onus would still be on the seismologist to tune the algorithm parameters so as to pick time windows appropriate for each specific study target. For a given data-set and a given set of tuning parameters, the time-window picking is entirely reproducible.  The automated and signal processing nature of the procedure should eliminate some of the human bias involved in picking measurement windows, while expediting the process of analyzing tens to hundreds of thousands of records.
 

Modified: seismo/3D/automeasure/latex/flexwin_paper.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/automeasure/latex/introduction.tex
===================================================================
--- seismo/3D/automeasure/latex/introduction.tex	2008-05-20 12:37:39 UTC (rev 11986)
+++ seismo/3D/automeasure/latex/introduction.tex	2008-05-20 14:40:53 UTC (rev 11987)
@@ -12,97 +12,86 @@
 produce new 3D Earth models \citep[e.g.][]{MontelliEtal2004a,ZhouEtal2006}.  The numeric kernels have
 opened up the possibility of `3D-3D' tomography, i.e.~seismic tomography based upon a 3D reference model, 3D numerical simulations of the seismic wavefield and finite-frequency sensitivity kernels \citep{TrompEtal2005,ChenEtal2007b}.
 
-The growing number of competing tomographic techniques all have at their core
-the following `standard operating procedure', which they share with all inverse
-problems in physics: make a guess about a set of model parameters; predict an
-observable from this guess (a travel-time, a dispersion curve, a full
-waveform); measure the difference (misfit) between the prediction and the
-observation; improve on the original guess.  This vague description of the
-tomographic problem hides a number of important assumptions: firstly, that we are able to predict observables
-correctly (we can solve the forward problem); secondly, that the misfit is due
-to inadequacies in the values of our initial model parameters, and is not caused
-by a misunderstanding of the physics, by inaccuracies in our solution to the forward problem, or by
-the presence of noise in the observations; lastly, that we know the
-relation between the measured misfit and the model parameters
-(in terms of partial derivatives or a sensitivity kernel). 
+%The growing number of competing tomographic techniques all have at their core
+%the following `standard operating procedure', which they share with all inverse
+%problems in physics: make a guess about a set of model parameters; predict an
+%observable from this guess (a travel-time, a dispersion curve, a full
+%waveform); measure the difference (misfit) between the prediction and the
+%observation; improve on the original guess.  This vague description of the
+%tomographic problem hides a number of important assumptions: firstly, that we are able to predict observables
+%correctly (we can solve the forward problem); secondly, that the misfit is due
+%to inadequacies in the values of our initial model parameters, and is not caused
+%by a misunderstanding of the physics, by inaccuracies in our solution to the forward problem, or by
+%the presence of noise in the observations; lastly, that we know the
+%relation between the measured misfit and the model parameters
+%(in terms of partial derivatives or a sensitivity kernel). 
 
 It is common practice in tomography to work only with certain subsets of the
 available seismic data. The choices made in selecting these subsets are
-inextricably linked to the choice of tomographic method, and are often dictated
-by the assumptions made within the method itself.  For example, ray-based
+inextricably linked to the assumptions made in the tomographic method.  For example, ray-based
 travel-time tomography deals
 only with high frequency body wave arrivals, while great-circle
 surface wave tomography must satisfy the path-integral approximation,
-and only considers surface waves that present no evidence of multipathing.
-The emerging 3D-3D tomographic methods seem to be the best candidates for studying
-regions with complex 3D structure. These methods take advantage of
+and only considers surface waves that present no evidence of multipathing.  
+In both these examples, a large proportion of the information contained within the seismograms goes to waste.
+The emerging 3D-3D tomographic methods take advantage of
 full wavefield simulations and numeric finite-frequency kernels,
 thereby reducing
 the data restrictions required when using approximate forward modelling and simplified descriptions
-of sensitivity.  3D-3D tomographic methods require their own specific data selection strategies.
+of sensitivity.  These methods seem to be the best candidates for studying
+regions with complex 3D structure as they permit the use of a larger proportion of the information contained within each seismogram, including complex arrivals not predicted by 1D approximations of Earth structure.  In order to exploit the full power of 3D-3D tomographic methods, we require a new data selection strategy that does not exclude such complex arrivals.
 
-In this paper we present an automated data selection method for the adjoint tomography approach of \cite{TrompEtal2005,LiuTromp2006} and \cite{TapeEtal2007}, which builds upon \cite{Tarantola1984}.  In adjoint tomography, the sensitivity kernels that tie variations
+In this paper we present an automated data selection method designed for the adjoint approach to 3D-3D tomography \citep{TrompEtal2005,LiuTromp2006,TapeEtal2007}, which builds upon \cite{Tarantola1984}.  In adjoint tomography, the sensitivity kernels that tie variations
 in Earth model parameters to variations in the misfit are obtained by
 interaction between the wavefield used to generate the synthetic seismograms (the
 direct wavefield) and an adjoint wavefield that obeys the same wave equation
 as the direct wavefield, but with a source term which is derived from the
 misfit measurements.  The computational cost of such kernel computations for use in seismic tomography depends only on the number of events, and not on the number of receivers nor on the number of measurements made.  It is therefore to our advantage to make the greatest number of measurements on each seismogram.
 
-Our data selection strategy aims to define measurement time windows that
-cover as much of a given seismogram as possible, whilst avoiding portions of
-the waveform that are dominated by noise.  We define noise as any signal that
-cannot be modeled using physically reasonable values of the simulation
-parameters.
-Thus,
-what we consider
-to be noise varies with our simulation abilities.  For example,
-short period signals are noise for long
-period simulations, while multiple scattering and coda signals are noise for most
-waveform simulation methods, including the spectral element method \citep{KomatitschEtal2002} we use for adjoint tomography.
-
 The adjoint kernel calculation procedure allows us to measure and use for
 tomographic inversion almost any part of the seismic signal.  We do not
 need to identify specific seismic phases, as the kernel will take care of
-defining the relevant sensitivities.  However, there is nothing in the adjoint method itself that prevents us
-from constructing an adjoint kernel from noise, and thereby polluting our
-inversion process.   
-It is up to the data selection method to ensure such noise is
-avoided in the choice of the portions of the seismogram to be measured. 
+defining the relevant sensitivities.  However, there is nothing in the adjoint method itself that prevents us from constructing an adjoint kernel from noise-dominated data, and thereby polluting our inversion process.  
+Our data selection strategy therefore aims to define measurement time windows that
+cover as much of a given seismogram as possible, whilst avoiding portions of
+the waveform that are dominated by noise.
 
 From a signal processing point of view, the simplest way to avoid serious
 contamination by noise is to select and measure strong signals, which in
-seismology correspond to seismic arrivals.  We select time windows on the synthetic seismogram within which the waveform
+seismology correspond to seismic arrivals.  We therefore select time windows on the synthetic seismogram within which the waveform
 contains a distinct energy arrival, then require an adequate correspondence
-between observed and synthetic waveforms within these windows.  
-In order to isolate changes in amplitude or frequency content susceptible of being
-associated with distinct energy arrivals, we need to analyse the character of the waveform itself.  This analysis is similar to that used
+between observed and synthetic waveforms within these windows.  This selection paradigm is general, and can be applied to synthetic seismograms regardless of how they have been obtained.  It is clear, however, that a synthetic seismogram obtained by 3D propagation through a good 3D Earth model will provide a better fit to the observed seismogram over a greater proportion of its length than will be the case for a more approximate synthetic seismogram.
+
+In order to isolate changes in amplitude or frequency content that may be
+associated with distinct energy arrivals, we need to analyse the character of the synthetic waveform itself.  This analysis is similar to that used on observed waveforms
 in automated phase detection algorithms for the routine location of
-earthquakes.  We have taken a tool used in this detection process ---
+earthquakes.  In designing our time-window selection algorithm, we have taken a tool used in this detection process ---
 the long-term / short-term ratio --- and applied it to the definition of
 time windows around distinct seismic phases.  
 
 The choices made in time-window selection for tomography are
-interconnected with all other aspects of the tomographic inversion process,
+interconnected with all aspects of the tomographic inversion process,
 from the waveform simulation method (direct problem), through the choice of
-measurement, to the inversion itself which necessarily depends on the method
-of obtaining sensitivity kernels.  For example, a glance at any plot of travel-time curves will reveal the presence of many
-time crossings and triplications.  These indicate that seismic phases with
-often very different ray paths may arrive at similar times, resulting in
-composite arrivals on a seismogram.  Whether or not such arrivals can be used
-in tomography depends on the choice of the forward simulation method (can
-this composite arrival be simulated?), on the type of measurement to be made
-(can the measurement method characterize the differences between observed
-and simulated signals accurately?), and on the capacity of the inverse method
-used to correctly account for the sensitivity of the composite phase.   Most traditional tomographic methods tend to avoid using composite phases. Accurate sensitivity kernels for these phases can be calculated using adjoint methods
-\citep{LiuTromp2006}.
- Their acceptance into adjoint tomography inversions depends on the
-choice of measurement method: waveform difference measurements can capture the
-full complexity of the difference between observed and simulated composite
-phases, but lead to highly non-linear tomographic inversions and are more
-sensitive to noise; measurements such as cross-correlation
-travel-times that lead to less non-linear tomographic inversions can deal with composite phases only when the simulated and
-observed signals are similar in shape.
+measurement method, to the method
+used to obtain sensitivity kernels.  One of the major difficulties in defining a general data selection strategy is the great range of possible choices open to the tomographer.  We have designed a configurable data selection process that can be adapted to different tomographic scenarios by tuning a handful of parameters (see Table~\ref{tb:params}).  Although we have designed our algorithm for use in adjoint tomography, its inherent flexibility should make it useful in many data-selection applications.
 
-These considerations on the acceptability of composite phases in tomographic inversions illustrate one of the major difficulties in defining a data selection strategy: the great range of choices open to the tomographer.  We have therefore designed a configurable data selection process that can be adapted to different tomographic scenarios by tuning a handful of parameters (see Table~\ref{tb:params}).  Although we have designed the algorithm for use in adjoint tomography, its inherent flexibility should make it useful in many data-selection applications.
+%Let us take as an example the case of composite seismic arrivals.  A glance at any plot of travel-time curves will reveal the presence of many
+%time crossings and triplications.  These indicate that seismic phases with
+%often very different ray paths may arrive at similar times, resulting in
+%composite arrivals on a seismogram.  Whether or not such arrivals can be used
+%in tomography depends on the choice of the forward simulation method (can
+%this composite arrival be simulated?), on the type of measurement to be made
+%(can the measurement method characterize the differences between observed
+%and simulated signals accurately?), and on the capacity of the inverse method
+%used to correctly account for the sensitivity of the composite phase.   Most traditional tomographic methods tend to avoid using composite phases. Accurate sensitivity kernels for these phases can be calculated using adjoint methods
+%\citep{LiuTromp2006}.
+% Their acceptance into adjoint tomography inversions depends on the
+%choice of measurement method: waveform difference measurements can capture the
+%full complexity of the difference between observed and simulated composite
+%phases, but lead to highly non-linear tomographic inversions and are more
+%sensitive to noise; measurements such as cross-correlation
+%travel-times that lead to more linear tomographic inversions can deal with composite phases only when the simulated and observed signals are similar in shape.
 
+%These considerations on the acceptability of composite phases in tomographic inversions illustrate one of the major difficulties in defining a data selection strategy: the great range of choices open to the tomographer.  We have therefore designed a configurable data selection process that can be adapted to different tomographic scenarios by tuning a handful of parameters (see Table~\ref{tb:params}).  Although we have designed the algorithm for use in adjoint tomography, its inherent flexibility should make it useful in many data-selection applications.
+
 We have successfully applied our windowing algorithm, the details of which are described in Section~\ref{sec:algorithm}, to diverse seismological scenarios: local and near regional tomography in Southern California, regional subduction-zone tomography in Japan, and global tomography.  We present examples from each of these scenarios in Section~\ref{sec:results}, and we discuss the use of the algorithm in the context of adjoint tomography in Section~\ref{sec:discuss}.   We hope that the time-window selection algorithm we present here will become a standard tool in seismic tomography studies.

Modified: seismo/3D/automeasure/latex/method.tex
===================================================================
--- seismo/3D/automeasure/latex/method.tex	2008-05-20 12:37:39 UTC (rev 11986)
+++ seismo/3D/automeasure/latex/method.tex	2008-05-20 14:40:53 UTC (rev 11987)
@@ -28,7 +28,7 @@
 
 %----------------------
 
-\subsubsection{Pre-processing}
+%\subsubsection{Pre-processing}
 
 We apply minimal and identical pre-processing to both observed and synthetic
 seismograms: band-pass filtering with a non-causal Butterworth
@@ -40,7 +40,7 @@
 
 %----------------------
 
-\subsubsection{Seismogram rejection on the basis of noise in observed seismogram}
+%\subsubsection{Seismogram rejection on the basis of noise in observed seismogram}
 
 Our next step is to reject seismograms that are dominated by noise.  This rejection is based on two signal-to-noise criteria that compare the power and amplitude of the signal to those of the background noise (given by the observed waveform before the first $P$-wave arrival).  The power signal-to-noise ratio is defined as
 ${\rm SNR}_P = P_{\rm signal}/P_{\rm noise},$ 
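The two signal-to-noise tests described in the hunk above can be sketched in a few lines. This is an illustrative helper only, not the FLEXWIN implementation: the function name, the simple split of the trace at the first $P$ arrival, and the per-unit-time power normalization are all assumptions.

```python
import numpy as np

def snr_criteria(d, dt, t_first_p):
    """Sketch of the two signal-to-noise tests: a power ratio SNR_P and
    an amplitude ratio. The noise span is the observed trace before the
    first P arrival; the signal span is everything after it.
    (Illustrative only -- exact definitions follow the paper's equations.)"""
    i_p = int(round(t_first_p / dt))
    noise, signal = d[:i_p], d[i_p:]
    # Power = integrated energy per unit length of each time span
    p_noise = np.sum(noise**2) / (len(noise) * dt)
    p_signal = np.sum(signal**2) / (len(signal) * dt)
    snr_p = p_signal / p_noise
    # Amplitude ratio compares maximum absolute values of the two spans
    snr_a = np.max(np.abs(signal)) / np.max(np.abs(noise))
    return snr_p, snr_a
```

A seismogram failing either threshold would be rejected before any window selection takes place.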
@@ -61,7 +61,7 @@
 
 %----------------------
 
-\subsubsection{Construction of STA:LTA timeseries}
+%\subsubsection{Construction of STA:LTA timeseries}
 
 Detection of seismic phase arrivals is routinely performed by automated
 earthquake location algorithms.  We have taken a tool used in this
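A minimal recursive short-term-average to long-term-average ratio can be sketched as below. This is a generic STA:LTA on the absolute waveform, under assumed averaging constants; FLEXWIN's own construction (applied to the envelope of the synthetic, with its tuned time constants) is described in the paper, not reproduced here.

```python
import numpy as np

def sta_lta(x, dt, t_sta=1.0, t_lta=10.0):
    """Minimal recursive STA:LTA sketch on the absolute waveform.
    t_sta and t_lta are illustrative averaging time scales, not the
    paper's tuned values."""
    c_sta = dt / t_sta          # update weight for the short-term average
    c_lta = dt / t_lta          # update weight for the long-term average
    sta = lta = np.mean(np.abs(x[:10])) + 1e-12  # seed both averages
    ratio = np.empty(len(x), dtype=float)
    for i, v in enumerate(np.abs(x)):
        sta += c_sta * (v - sta)   # short-term running average
        lta += c_lta * (v - lta)   # long-term running average
        ratio[i] = sta / lta
    return ratio
```

Because the short-term average reacts faster than the long-term one, the ratio peaks at the onset of an energy arrival, which is what makes it useful for defining candidate window boundaries.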
@@ -135,7 +135,7 @@
 Table~\ref{tb:params}: once the system has been tuned,
 these parameters remain unchanged and are used for all seismic events in the same scenario. The functional forms of the time-dependent parameters are defined by the user, can depend on
 information about the earthquake source and the receiver, and also
-remain unchanged once the system has been tuned. 
+remain unchanged once the system has been tuned (see Appendix~\ref{ap:user_fn}). 
 For the example in Figure~\ref{fg:stalta}, we have required the water level
 $w_E(t)$ to double after the end of the surface wave arrivals (as defined by
 the epicentral distance and a group velocity of $3.2$~\kmps) so as to avoid
@@ -251,14 +251,12 @@
 A$.  The signal to noise ratio for single windows is defined as an amplitude
 ratio, ${\rm SNR}_W=A_{\rm window}/A_{\rm noise}$, where $A_{\rm window}$ and
 $A_{\rm noise}$ are the maximum absolute values of the observed seismogram $|d(t)|$ in the window
-and in the noise time-span respectively (see equation~\ref{eq:noise}).  The remaining three
-quantities are based on the cross-correlation function between observed and
-synthetic seismograms
+and in the noise time-span respectively (see equation~\ref{eq:noise}).  The cross-correlation value $CC$ is defined as the maximum value of the
+cross-correlation function ${\rm CC}={\rm max}[\Gamma(t^\prime)]$, where
 \begin{equation}
-\Gamma(t^\prime) = \int s(t-t^\prime)d(t)dt. 
+\Gamma(t^\prime) = \int s(t-t^\prime)d(t)dt, 
 \end{equation}
-The cross-correlation value $CC$ is defined as the maximum value of the
-cross-correlation function ${\rm CC}={\rm max}[\Gamma(t^\prime)]$, and
+and
 quantifies the similarity in shape between the $s(t)$ and $d(t)$
 waveforms.  The time lag $\Delta \tau$ is defined as the value of $t^\prime$
 at which $\Gamma$ is maximal, and quantifies the delay in time between a
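The cross-correlation quantities in this hunk ($CC$, $\Delta\tau$) can be computed directly from the discrete form of $\Gamma(t^\prime)$. The sketch below assumes equal-length traces, a normalized $CC$, and an energy-based amplitude anomaly $\Delta\ln A$; the last of these is one common definition and is an assumption here, since the paper gives its own $\Delta\ln A$ in a later equation.

```python
import numpy as np

def cc_quantities(s, d, dt):
    """Sketch of window-fit quantities for equal-length synthetic s(t)
    and observed d(t): normalized cross-correlation maximum CC, time
    lag dtau (the t' maximizing Gamma), and an assumed energy-based
    amplitude anomaly dlnA."""
    # Discrete Gamma(t') = sum_t s(t - t') d(t); 'full' covers all lags
    gamma = np.correlate(d, s, mode="full")
    norm = np.sqrt(np.sum(s**2) * np.sum(d**2))
    k = int(np.argmax(gamma))
    cc = gamma[k] / norm                  # similarity in shape, <= 1
    dtau = (k - (len(s) - 1)) * dt        # positive: observed is delayed
    dlnA = 0.5 * np.log(np.sum(d**2) / np.sum(s**2))  # assumed definition
    return cc, dtau, dlnA
```

For identical pulses offset by an integer number of samples, $CC$ approaches 1 and $\Delta\tau$ recovers the offset exactly.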
@@ -287,9 +285,7 @@
 where $t_M$ is the time of the window's seed maximum.  In words, we only accept
 windows in which the observed signal is above the noise level, the observed and
 synthetic signals are reasonably similar in shape, their arrival times
-differences are small (i.e. there are no major errors in seismogram
-synchronization), and their amplitudes are broadly compatible (i.e. there are no
-major errors in instrument response).  When the synthetic and observed
+differences are small, and their amplitudes are broadly compatible.  When the synthetic and observed
 seismograms are similar, the fit-based criteria of
 equations~(\ref{eq:cc})-(\ref{eq:dlnA}) reject only a few of the candidate data
 windows (see lower portion of Figure~\ref{fg:win_rej_data}).  They are
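The acceptance logic described in this hunk reduces to four threshold tests per candidate window. The sketch below uses placeholder threshold values; in FLEXWIN these are the (possibly time-dependent) tuning parameters of Table~\ref{tb:params}, and the function name is hypothetical.

```python
def accept_window(snr_w, cc, dtau, dlnA,
                  r0=2.5, cc0=0.85, tau0=10.0, dlnA0=1.0):
    """Sketch of the per-window fit-based acceptance test.
    Threshold values r0, cc0, tau0, dlnA0 are illustrative placeholders,
    not the paper's tuned parameters."""
    return (snr_w >= r0 and          # observed signal above noise level
            cc >= cc0 and            # shapes reasonably similar
            abs(dtau) <= tau0 and    # arrival-time difference small
            abs(dlnA) <= dlnA0)      # amplitudes broadly compatible
```

When synthetics fit the data well, these tests reject few candidates; they become decisive as the fit degrades, as the surrounding text notes.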
@@ -319,7 +315,7 @@
 observed and synthetic seismograms are most similar (high values of $CC$).
 Furthermore, should we have the choice between two short windows and a longer,
 equally well-fitting one covering the same time-span, we may wish to favour
-the longer window. 
+the longer window, as it imposes a stronger constraint on the tomographic inversion. 
 
 The condition that optimal windows should have passed all previous tests
 removes the straightforward solution of merging overlapping windows.  Indeed, given any two

Modified: seismo/3D/automeasure/latex/results.tex
===================================================================
--- seismo/3D/automeasure/latex/results.tex	2008-05-20 12:37:39 UTC (rev 11986)
+++ seismo/3D/automeasure/latex/results.tex	2008-05-20 14:40:53 UTC (rev 11987)
@@ -1,8 +1,8 @@
 \section{Windowing Examples \label{sec:results}}
 
 We present a set of examples showing the results of the FLEXWIN algorithm
-applied to real data.  These examples illustrate the robustness of the
-algorithm, as well as its flexibility.  We have applied the algorithm to three
+applied to real data.  These examples illustrate the robustness and flexibility of the
+algorithm.  We have applied the algorithm to three
 tomographic scenarios, with very different geographical extents and distinct period ranges:
 a global tomography (\trange{50}{150}),
 a regional tomography of the Japanese subduction zone, down to 700~km (\trange{6}{120}), and
@@ -29,7 +29,7 @@
 We tuned the windowing algorithm separately for each of the three scenarios we present here, and we present examples based on the events listed in Table~\ref{tb:events}.  Tuning parameter values can be found in Table~\ref{tb:example_params}, while the functional forms of the time-dependent parameters can be found in Appendix~\ref{ap:user_fn}.  Once tuned for a scenario, the algorithm is applied to all the events in that scenario without further modification. 
 
 
-\subsection{Global scale}
+\subsection{Global tomography}
 \label{sec:globe}
 
 Our first scenario is a global scale, long-period tomographic study.
@@ -90,8 +90,8 @@
 nevertheless present in both observed and synthetic seismograms, and
 undoubtedly contains information.  The particularity of our windowing algorithm
 is to treat such features as information, without trying to identify their
-sources.  An adjoint-type scheme would have no problem identifying the
-sensitivity kernels of such features, and would thereby allow measurements made
+sources.  A scheme that permits the computation of
+sensitivity kernels for such features (e.g. the adjoint scheme) would allow measurements made
 on them to be interpreted and inverted correctly.  Other methods of determining
 measurement sensitivities may have more difficulty dealing with them.  These
 considerations illustrate the strong ties that exist between the selection,
@@ -142,7 +142,7 @@
 velocity at the source location.  
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\subsection{Japan scale}
+\subsection{Regional tomography of the Japanese subduction zone}
 \label{sec:japan}
 Our second scenario is a regional-scale tomographic study of the Japanese subduction zone, using a set of local events within the depth range 0--600~km.
 The lateral dimensions of the domain are 
@@ -259,7 +259,7 @@
 velocity at the source location.  
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\subsection{Southern California scale}
+\subsection{Local tomography in Southern California}
 \label{sec:socal}
 
 Our last scenario is a local tomographic study of southern California.  We apply the windowing algorithm to a set of 150 events within southern California, for which we have computed synthetic seismograms using the spectral-element method and a regional 3D crustal and upper mantle model \citep{KomatitschEtal2004}.  This model contains three discontinuities: the surface topography (included in the mesh), the basement layer that separates the sedimentary basins from the bedrock, and the Moho, separating the lower crust from the upper mantle. The basement surface is essential for simulating the resonance of seismic waves within sedimentary basins, such as the Ventura basin, the L.A. basin, and the Salton trough \citep{KomatitschEtal2004,LovelyEtal2006}. The smooth 3D background velocity model used in \citet{KomatitschEtal2004} was that of \citet{Hauksson2000}; we use an updated version provided by \citet{LinEtal2007}. The physical domain of the model is approximately 600~km by 500~km at the surface, and extends to a depth of 60~km. Our simulations of seismic waves are numerically accurate down to a period of 2~s.
@@ -271,6 +271,6 @@
 The windowing algorithm tends to identify five windows on each set of three-component, longer-period seismograms (Figures~\ref{fg:socal_CLC} and~\ref{fg:socal_rs_T06}): on the vertical and radial components the first window corresponds to the body-wave arrival and the second to the Rayleigh wave, while windows on the transverse component capture the Love wave.
 The shorter-period synthetic seismograms do not agree well with the observed seismograms, especially in the later part of the signal, leading to fewer picked windows. In Figure~\ref{fg:socal_CLC}e, only two windows are selected by the algorithm: a P arrival recorded on the radial component, and the combined S and Love-wave arrival on the transverse component. The P-wave arrival on the vertical component is rejected because the cross-correlation value within the time window did not exceed the specified minimum value of 0.85 (Table~\ref{tb:example_params}). 
 
-Figure~\ref{fg:socal_FMP} shows results for the same event as Figure~\ref{fg:socal_CLC}, but for a different station, FMP, situated 52~km from the event and within the Los Angeles basin. Comparison of the two figures highlights the characteristic resonance caused by the thick sediments within the basin.  This resonance is beautifully captured by the transverse component synthetics (Figure~\ref{fg:socal_FMP}d, record T), thanks to the inclusion of the basement layer in the crustal model \citep{KomatitschEtal2004}. In order to pick such long time windows with substantial frequency-dependent measurement differences, we are forced to lower the minimum cross-correlation value for the entire dataset (0.74 in Table~\ref{tb:example_params}) and increase $c_{4b}$ to capture the slow decay in the STA:LTA curves (Figure~\ref{fg:socal_FMP}d, record T). It is striking that although these arrivals look nothing like the energy packets typical for the global case, the windowing algorithm is still able to determine the proper start and end times for the windows.  In Figure~\ref{fg:socal_FMP}e the windowing algorithm selects three short-period body-wave time windows with superb agreement between data and synthetics.
+Figure~\ref{fg:socal_FMP} shows results for the same event as Figure~\ref{fg:socal_CLC}, but for a different station, FMP, situated 52~km from the event and within the Los Angeles basin. Comparison of the two figures highlights the characteristic resonance caused by the thick sediments within the basin.  This resonance is beautifully captured by the transverse component synthetics (Figure~\ref{fg:socal_FMP}d, record T), thanks to the inclusion of the basement layer in the crustal model \citep{KomatitschEtal2004}. In order to pick such long time windows with substantial frequency-dependent measurement differences, we are forced to lower the minimum cross-correlation value $CC_0$ for the entire dataset (0.74 in Table~\ref{tb:example_params}) and increase $c_{4b}$ to capture the slow decay in the STA:LTA curves (Figure~\ref{fg:socal_FMP}d, record T). It is striking that although these arrivals look nothing like the energy packets typical for the global case, the windowing algorithm is still able to determine the proper start and end times for the windows.  In Figure~\ref{fg:socal_FMP}e the windowing algorithm selects three short-period body-wave time windows with superb agreement between data and synthetics.
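[Editorial aside, not part of the diff: the acceptance criterion discussed above, rejecting a candidate window when the cross-correlation between data and synthetics falls below a minimum value $CC_0$ (0.85 for the shorter-period case, lowered to 0.74 for the basin records), can be sketched as follows. This is an illustrative reimplementation, not the FLEXWIN code; the function and argument names are hypothetical.]

```python
import numpy as np

def accept_window(data, synth, cc_min=0.85):
    """Illustrative window-acceptance test (not the FLEXWIN source).

    Accepts a candidate time window only if the maximum normalized
    cross-correlation between the observed and synthetic windowed
    traces reaches cc_min (CC_0 in the paper's notation; 0.85 is the
    value quoted for the shorter-period Southern California example).
    """
    data = np.asarray(data, dtype=float)
    synth = np.asarray(synth, dtype=float)
    # Normalization by the two traces' energies; by Cauchy-Schwarz
    # the resulting coefficient never exceeds 1.
    norm = np.sqrt(np.dot(data, data) * np.dot(synth, synth))
    if norm == 0.0:
        return False
    # Maximum cross-correlation over all time lags.
    cc = np.correlate(data, synth, mode="full").max() / norm
    return cc >= cc_min
```

Because the coefficient is normalized, the test is insensitive to an overall amplitude difference between data and synthetics; it penalizes only waveform dissimilarity, which is the behavior the acceptance criterion is meant to capture.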
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


