[cig-commits] r11759 - in seismo/3D/automeasure: latex user_files/socal_3D

carltape at geodynamics.org
Mon Apr 7 11:47:43 PDT 2008


Author: carltape
Date: 2008-04-07 11:47:42 -0700 (Mon, 07 Apr 2008)
New Revision: 11759

Modified:
   seismo/3D/automeasure/latex/AM-allcitations.bib
   seismo/3D/automeasure/latex/abstract.tex
   seismo/3D/automeasure/latex/appendix.tex
   seismo/3D/automeasure/latex/discussion.tex
   seismo/3D/automeasure/latex/figures_paper.tex
   seismo/3D/automeasure/latex/flexwin_paper.pdf
   seismo/3D/automeasure/latex/introduction.tex
   seismo/3D/automeasure/latex/method.tex
   seismo/3D/automeasure/latex/results.tex
   seismo/3D/automeasure/user_files/socal_3D/PAR_FILE_T02
   seismo/3D/automeasure/user_files/socal_3D/user_functions.f90
Log:
Alessia's updates for FLEXWIN paper. Re-organization of socal user_functions.f90.


Modified: seismo/3D/automeasure/latex/AM-allcitations.bib
===================================================================
--- seismo/3D/automeasure/latex/AM-allcitations.bib	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/AM-allcitations.bib	2008-04-07 18:47:42 UTC (rev 11759)
@@ -1,5 +1,5 @@
 
-%% Created for alessia at 2008-03-26 13:47:46 +0100 
+%% Created for alessia at 2008-04-07 12:51:26 +0200 
 
 
 %% Saved with string encoding Occidental (ASCII) 
@@ -94,6 +94,16 @@
 @string{tectphys = {Tectonophysics}}
 
 
+@article{LiTanimoto1993,
+	Author = {Li, X.D. and Tanimoto, T.},
+	Date-Added = {2008-04-07 12:36:02 +0200},
+	Date-Modified = {2008-04-07 12:38:19 +0200},
+	Journal = {Geophys. J. Int.},
+	Pages = {92--102},
+	Title = {Waveforms of long-period body waves in a slightly aspherical Earth model},
+	Volume = {112},
+	Year = {1993}}
+
 @article{MegninRomanowicz1999,
 	Author = {M{\'e}gnin, C. and Romanowicz, B.},
 	Date-Added = {2008-03-25 17:00:27 +0100},
@@ -3611,16 +3621,6 @@
 	Volume = 427,
 	Year = 2004}
 
-@article{LiTanimoto93,
-     AUTHOR = {Li, X.-D. and Tanimoto, T.},
-     JOURNAL = gji,
-     TITLE = {{Waveforms of long-period body waves in a slightly aspherical Earth model}},
-     PAGES = {92--102},
-     VOLUME = {112},
-     YEAR = {1993}
-}
-
-
 @phdthesis{Liotier1989,
 	Address = {Grenoble, France},
 	Author = {Liotier, Y.},
@@ -5944,7 +5944,7 @@
 	Volume = 158,
 	Year = 2004}
 
-@article{TalagrandCourtier1987a,
+@article{TalagrandCourtier1987,
 	Author = {Talagrand, O. and Courtier, P.},
 	Journal = qtrmets,
 	Pages = {1311--1328},
@@ -5952,15 +5952,6 @@
 	Volume = 113,
 	Year = 1987}
 
-@article{TalagrandCourtier1987b,
-     AUTHOR = {Courtier, P. and Talagrand, O.},
-     YEAR = {1987},
-     TITLE = {{Variational assimilation of meteorological observations with the adjoint vorticity equation. II: Numerical results}},
-     JOURNAL = qtrmets,
-     VOLUME = {113},
-     PAGES = {1329--1347}
-}
-
 @article{TalebianJackson2002,
 	Author = {Talebian, M. and Jackson, J.},
 	Journal = gji,
@@ -6022,25 +6013,6 @@
 	Volume = 49,
 	Year = 1984}
 
-@article{Tarantola84a,
-     AUTHOR = {Tarantola, A.},
-     YEAR = {1984},
-     TITLE = {{Linearized inversion of seismic reflection data}},
-     JOURNAL = {Geophys.~Prosp.},
-     VOLUME = {32},
-     PAGES = {998--1015}
-}
-
-@article{Tarantola84b,
-     AUTHOR = {Tarantola, A.},
-     YEAR = {1984},
-     TITLE = {{Inversion of seismic reflection data in the acoustic approximation}},
-     JOURNAL = geophys,
-     VOLUME = {49},
-     NUMBER = {8},
-     PAGES = {1259--1266}
-}
-
 @article{TatarEtal2002,
 	Author = {Tatar, M. and Hatzfeld, D. and Martinod, J. and Walpersdorf, A. and Ghafori-Ashtiany, M. and Chery, J.},
 	Date-Modified = {2008-03-24 17:31:17 +0100},

Modified: seismo/3D/automeasure/latex/abstract.tex
===================================================================
--- seismo/3D/automeasure/latex/abstract.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/abstract.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -1,3 +1,3 @@
 \begin{abstract}
-We present an algorithm for the automated selection of measurement windows on pairs of observed and synthetic seismograms.  The algorithm was designed specifically to automate window selection and measurement for adjoint tomography studies, but is sufficiently flexible to be adapted to most tomographic applications and seismological scenarios.  Adjoint tomography requires a data selection method that maximizes the number of measurements made on each seismic record while avoiding seismic noise.  The method must  adapt to the features that exist in the seismograms themselves, because 3D wavefield simulations are able to synthesize phases that do not exist in 1D simulations or traditional travel-time curves.  The method must also be automated in order to adapt to the changing synthetic seismograms after each iteration of the tomographic inversion.  These considerations led us to favor a signal processing approach to the problem of data selection, and to the development of the FLEXWIN algorithm presented here. 
+We present an algorithm for the automated selection of time-windows on pairs of observed and synthetic seismograms.  The algorithm was designed specifically to automate window selection and measurement for adjoint tomography studies, but is sufficiently flexible to be adapted to most tomographic applications and seismological scenarios.  Adjoint tomography requires a data selection method that maximizes the number of measurements made on each seismic record while avoiding seismic noise.  The method must  adapt to the features that exist in the seismograms themselves, because 3D wavefield simulations are able to synthesize phases that do not exist in 1D simulations or traditional travel-time curves.  The method must also be automated in order to adapt to changes in the synthetic seismograms after each iteration of the tomographic inversion.  These considerations led us to favor a signal processing approach to the time-window selection problem, and to the development of the FLEXWIN algorithm we present here. 
 \end{abstract}

Modified: seismo/3D/automeasure/latex/appendix.tex
===================================================================
--- seismo/3D/automeasure/latex/appendix.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/appendix.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -1,6 +1,11 @@
 \appendix
 \section{User functions\label{ap:user_fn}}
+
 \subsection{Global scenario\label{ap:user_global}}
+
+ALESSIA: Please explain $t_Q$ and $t_R$.
+%Below $t_Q$ and $t_R$ denote the arrival times of the Love wave and Rayleigh wave, respectively, computed using a 1D spherically symmetric earth model.
+
 \begin{align}
 w_E(t) & =
   \begin{cases}
@@ -38,3 +43,19 @@
     \Delta \ln A_0/3 & \text{$t > t_R$} 
   \end{cases}
 \end{align}
+
+%--------------------------
+
+\subsection{Japan scenario\label{ap:user_japan}}
+
+MIN: fill this in please.
+
+%--------------------------
+
+\subsection{Southern California scenario\label{ap:user_socal}}
+
+Below $t_P$, $t_S$, $t_{R0}$ denote the start of the time windows for the crustal P wave, the crustal S wave, and the crustal surface wave, respectively, computed using a 1D layered model. The end time of the surface-wave window is given by $t_{R1}$.
+
+CARL: fill this in please.
+
+%--------------------------
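
As a purely hypothetical illustration of how a user function for this scenario might be structured, the sketch below raises the STA:LTA water level outside the interval bounded by $t_P$ and $t_{R1}$.  The base level and the factor of two are placeholders (the global scenario raises $w_E$ after the surface waves in a similar way), and the actual southern California function remains to be written, per the note above.

def w_E_socal(t, t_P, t_R1, base=0.08):
    """Hypothetical time-dependent STA:LTA water level w_E(t) for one record."""
    if t_P <= t <= t_R1:
        return base        # base level between the crustal P arrival and the end of the surface waves
    return 2.0 * base      # doubled water level outside that interval discourages spurious windows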

Modified: seismo/3D/automeasure/latex/discussion.tex
===================================================================
--- seismo/3D/automeasure/latex/discussion.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/discussion.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -1,45 +1,22 @@
 \section{Relevance to adjoint tomography}
 \label{sec:discuss}
 
-The window selection algorithm we describe in this paper was designed to solve the problem of automatically picking windows for adjoint problems, specifically for 3D-3D tomography as described by \cite{TrompEtal2005} and \cite{TapeEtal2007}.  The specificity of adjoint methods is to turn measurements of the differences between observed and synthetic waveforms into adjoint sources that are subsequently used to determine the sensitivity kernels of the measurements themselves to the Earth model parameters.  The manner in which the adjoint source is created is specific to each type of measurement (e.g. waveform difference, cross-correlation time-lags, multi-taper phase and amplitude anomalies), but once formulated can be applied indifferently to any part of the seismogram.  Adjoint methods have been used to calculate kernels of various body and surface-wave phases with respect to isotropic elastic parameters and interface depths \citep{LiuTromp2006}, but also with respect to anisotropic elastic parameters \cite{SieminskiEtal2007a,SieminskiEtal2007b}.  Adjoint methods allow us to calculate kernels for each and every wiggle on a given seismic record, thereby giving us access to virtually all the information contained within.  
+The window selection algorithm we describe in this paper was designed to solve the problem of automatically picking windows for adjoint problems, specifically for 3D-3D tomography as described by \cite{TrompEtal2005} and \cite{TapeEtal2007}.  The distinguishing feature of adjoint methods is that they turn measurements of the differences between observed and synthetic waveforms into adjoint sources that are subsequently used to determine the sensitivity kernels of the measurements themselves to the Earth model parameters.  The manner in which the adjoint source is created is specific to each type of measurement (e.g. waveform difference, cross-correlation time-lags, multi-taper phase and amplitude anomalies), but once formulated it can be applied to any part of the seismogram.  Adjoint methods have been used to calculate kernels of various body and surface-wave phases with respect to isotropic elastic parameters and interface depths \citep{LiuTromp2006}, and with respect to anisotropic elastic parameters \citep{SieminskiEtal2007a,SieminskiEtal2007b}.  Adjoint methods allow us to calculate kernels for each and every wiggle on a given seismic record, thereby giving access to virtually all the information the record contains.  
 
-It is becoming clear, as more and more finite-frequency tomography models are published, that better kernels on their own are not the answer to the problems of improving the resolution of tomographic studies.  \cite{TrampertSpetzler2006} and \cite{BoschiEtal2007} investigate the factors limiting the quality of finite-frequency tomography images, and conclude that incomplete and inhomogeneous data coverage limit in practice the improvement in resolution that accurate finite-frequency kernels can provide.  The current frustration with the data-induced limitations to the improvements in wave-propagation theory is well summarized by \cite{Romanowicz2008}.  The ability of adjoint methods to deal with all parts of the seismogram indifferently means we can incorporate a much greater amount of information from each seismogram into a tomographic problem, leading to a much improved data coverage.
+It is becoming clear, as more and more finite-frequency tomography models are published, that better kernels on their own are not the answer to the problem of improving the resolution of tomographic studies.  \cite{TrampertSpetzler2006} and \cite{BoschiEtal2007} investigate the factors limiting the quality of finite-frequency tomography images, and conclude that incomplete and inhomogeneous data coverage limits, in practice, the improvement in resolution that accurate finite-frequency kernels can provide.  The current frustration with the data-induced limitations to the improvements in wave-propagation theory is well summarized by \cite{Romanowicz2008}.  The ability of adjoint methods to treat all parts of the seismogram on an equal footing means we can incorporate a much greater amount of information from each seismogram into a tomographic problem, leading to much improved data coverage.
 
-The computational cost of constructing an adjoint kernel is independent of the number of portions of each seismogram we choose to measure, and also of the number of records of a given event we choose to work with \citep{TapeEtal2007}.  It is therefore to our advantage to make measurements on as many records as possible, while covering as much as possible of each record.  There are, however, certain limits we must be aware of.  As mentioned in the introduction, there is nothing in the adjoint method itself that prevents us from constructing a kernel from noise-dominated portions of the data.  As the purpose of 3D-3D tomography is to improve the fine details of Earth models, it would be counterproductive to pollute the inversion process with such kernels.  
+The computational cost of constructing an adjoint kernel is independent of the number of time-windows on each seismogram we choose to measure, and also of the number of records of a given event we choose to work with \citep{TapeEtal2007}.  It is therefore to our advantage to make measurements on as many records as possible, while covering as much as possible of each record.  There are, however, certain limits we must be aware of.  As mentioned in the introduction, there is nothing in the adjoint method itself that prevents us from constructing a kernel from noise-dominated portions of the data.  As the purpose of 3D-3D tomography is to improve the fine details of Earth models, it would be counterproductive to pollute the inversion process with such kernels.  
 
-DO A BETTER JOB IN THIS PARAGRAPH DESCRIBING THE DATA SELECTION ISSUES SPECIFIC TO ADJOINT TOMOGRAPHY.
-The use of adjoint methods for tomography requires a method of selecting and windowing seismograms that avoids seismic noise while at the same time extracting as much information as possible from the signals.  The method must be automated in order to adapt to the changing synthetic seismograms at each iteration of the tomographic inversion.  The method must also be adaptable to the features that exist in the seismograms themselves, because 3D wavefield simulations are able to synthesize phases that do not exist in 1D simulations or traditional travel-time curves.  These considerations led us to favor a signal processing approach to the problem of data selection, approach which in turn led to the development of the FLEXWIN algorithm we present here.  
+The use of adjoint methods for tomography requires a method of selecting and windowing seismograms that avoids seismic noise while at the same time extracting as much information as possible from the signals.  The method must be automated in order to adapt to the changing synthetic seismograms at each iteration of the tomographic inversion.  The method must also be adaptable to the features that exist in the seismograms themselves, because 3D wavefield simulations are able to synthesize phases that do not exist in 1D simulations or traditional travel-time curves.  These considerations led us to favor a signal processing approach to the problem of data selection, an approach which in turn led to the development of the FLEXWIN algorithm we have presented here.  
 
-Finally, we note that the use of FLEXWIN is predicated on the desire {\em not} to use the entire time series of each event in making a measurement between data and synthetics. If one were to simply take the waveform difference between two time series, then there would be no need for selecting time windows of interest. However, this ideal approach \citep[e.g.,][]{GauthierEtal1986} may only work in real applications if the noise in the observed seismograms is described well, which is rare.  The use of FLEXWIN selects particular time windows of interest; within the windows many misfit criteria are possible, including a waveform difference, but outside the windows the seismograms are ignored.
+Finally, we note that the design of FLEXWIN is predicated on the desire {\em not} to use the entire time series of each event when making a measurement between data and synthetics. If one were to simply take the waveform difference between two time series, then there would be no need for selecting time windows of interest. However, this ideal approach \citep[e.g.,][]{GauthierEtal1986} may only work in real applications if the noise in the observed seismograms is described well, which is rare.  Without an adequate description of the spectral characteristics of the noise, it is prudent to resort to the selection of time-windows even for waveform difference measurements. 
 
 %------------------------------
 
-\section{Summary}
-\label{sec:summary}
+\section{Summary
+\label{sec:summary}}
 
-The FLEXWIN algorithm is independent of input model, geographic scale and frequency range. Use of the FLEXWIN algorithm need not be limited to tomography studies, nor to studies using 3D synthetics. It is a configurable process that can be applied to different seismic scenarios by changing the parameters in Table~\ref{tb:params}.  We have configured the algorithm separately for each of the tomographic scenarios presented in Section~\ref{sec:results}.  The configuration process is data-driven: starting from the description of how each parameter influences the window selection (Section~\ref{sec:algorithm}), the user tunes the parameters using a representative subset of the full dataset until the algorithm produces an adequate set of windows, then applies the tuned algorithm to the full dataset. The choice of what makes an adequate set of windows remains subjective, as it depends strongly on the quality of the input model, the quality of the data, and the region of the Earth the tomographic inversion aims to constrain.  We consider the algorithm to be correctly tuned when false positives (windows around undesirable features of the seismogram) are minimized, and true positives (window around desirable features) are maximized.  For a given dataset, the set of tuned parameters (e.g., Table~\ref{tb:params}) and their user-defined time dependencies completely determine the window selection results. Finally, we envision that successive iterations of a particular tomographic model will require minor adjustments to the tuning parameters, as the fits improve between the synthetic and observed seismograms.
-%It can be made to mimic travel-time based selection algorithms by including the travel-time based moveouts into the time-dependence of the STA:LTA water-level $w_E$ and of the rejection parameters $CC_0$, $\Delta \tau_0$ and $\Delta\ln{A}_0$.
+The FLEXWIN algorithm is independent of input model, geographic scale and frequency range. Use of the FLEXWIN algorithm need not be limited to tomography studies, nor to studies using 3D synthetics. It is a configurable process that can be applied to different seismic scenarios by changing the parameters in Table~\ref{tb:params}.  We have configured the algorithm separately for each of the tomographic scenarios presented in Section~\ref{sec:results}.  The configuration process is data-driven: starting from the description of how each parameter influences the window selection (Section~\ref{sec:algorithm}), the user tunes the parameters using a representative subset of the full dataset until the algorithm produces an adequate set of windows, then applies the tuned algorithm to the full dataset. The choice of what makes an adequate set of windows remains subjective, as it depends strongly on the quality of the input model, the quality of the data, and the region of the Earth the tomographic inversion aims to constrain.  We consider the algorithm to be correctly tuned when false positives (windows around undesirable features of the seismogram) are minimized, and true positives (windows around desirable features) are maximized.  For a given dataset, the set of tuned parameters (Table~\ref{tb:params}) and their user-defined time dependencies completely determine the window selection results. Finally, we envision that successive iterations of a particular tomographic model may require minor adjustments to the tuning parameters, as the fits between the synthetic and observed seismograms improve, permitting higher frequency information to be used.
 
-Seismologists are seemingly always in an era where seismic records are becoming more complex with our desire to examine the effects of both increasingly detailed structure as well as complex source processes; furthermore, with increasing coverage and sampling rate, the available data becomes voluminous and challenging to manage. Our hope is that this software package will become a standard seismological tool for picking time windows for measurements between complex synthetics seismograms and observed ones. In essence, the onus is still on the seismologist to ``tune'' the input parameters of the code to pick ``appropriate'' time windows, but the automated procedure should eliminate some human bias in picking measurement windows, while expediting the process of analyzing tens to hundreds of thousands of records.
+The desire to study regions of detailed structure and to examine the effects of finite source processes leads seismology researchers to deal with increasingly complex seismic records.  Furthermore, with increasing coverage and sampling rate, the available data becomes voluminous and challenging to manage. Our hope is that the FLEXWIN software package will become a standard seismological tool for picking time windows for measurements between complex synthetic and observed seismograms. In using this package, the onus would still be on the seismologist to tune the algorithm parameters so as to pick time-windows appropriate for each specific study target. For a given dataset and a given set of tuning parameters, the time-window picking is entirely reproducible.  The automated and signal processing nature of the procedure should eliminate some of the human bias involved in picking measurement windows, while expediting the process of analyzing tens to hundreds of thousands of records.
 
-%Compare with data strategy used by Chen for LA basin crustal structure??
-%
-%Full records vs selected phases. 
-%The adjoint kernel calculation procedure allows us to measure and use for
-%tomographic inversion almost any part of the seismic signal.  We do not even
-%need to identify specific seismic phases, as the kernel will take care of
-%defining the sensitivities.
-%Relative phase weighting is one advantage of phase selection that has been used for some time (see \cite{LiRomanowicz1996} and \cite{MegninRomanowic1999}).  It also applies to us (the measurement kernels are already automatically weighted to enhance the contribution to the small amplitude phases and the less well fitting phases).  The noise argument again (only valid should we choose to use waveform difference on full waveforms).  Measurement ease.  Greater linearity of inversion (hence fewer expensive inversion steps) if we use observables that have more closely linear dependence on model parameters (each inversion step takes bigger steps down the misfit curve).
-%
-%
-%\begin{itemize}
-%
-%\item Do we need to compare our selection strategy explicitly with Po Chen's (his is not explicitly stated - he uses P and S direct waves and isolates them using isolation filters obtained by windowing the synthetics)?
-%
-%
-%\item Compare the three datasets.  What is the main point we want to make from such a comparison?
-%
-%
-%\item Ultimately, it should not be necessary to rotate horizontal components into the radial-transverse basis.  Synthetics are computed for east and north, and so are the data, so for extreme 3D structure, it might be best to run the windowing code simply on the east-north records.  Should this not go earlier in the paper?
-%
-%\end{itemize}

Modified: seismo/3D/automeasure/latex/figures_paper.tex
===================================================================
--- seismo/3D/automeasure/latex/figures_paper.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/figures_paper.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -43,7 +43,6 @@
 \multicolumn{7}{c}{Global} \\ \hline
 % CHECK THAT THE MOMENT IS LISTED IN N-M, NOT DYNE-CM
 % CARL HAS FORMULAS TO CONVERT FROM A MOMENT TENSOR TO M0 TO MW
-%021294B	& -10.83 	& -129.02 	& 15.0	& 1.07e19 & 6.6	& South Pacific Ocean \\
 101895B		& 28.06		& 130.18	& 18.5	& 5.68e19 & 7.1	& Ryukyu Islands \\ 
 050295B		& -3.77		& -77.07	& 112.8	& 1.27e19 & 6.7	& Northern Peru \\
 060994A		& -13.82	& -67.25	& 647.1	& 2.63e21 & 8.2	& Northern Bolivia \\
@@ -60,7 +59,6 @@
 \end{tabular}
 \caption{\label{tb:events}
 Example events used in this study.  The identifier refers to the CMT catalog for global events and Japan events, and refers to the Southern California Earthquake Data Center catalog for southern California events.
-%For each event, the CMT catalog identifier, hypocentral location, moment and geographical location are given.
 } 
 \end{table}
 
@@ -110,7 +108,6 @@
 \center \includegraphics[width=6in]{figures/050295B.050-150/ABKT_II_LHZ_seis_nowin.pdf}
 \caption{\label{fg:stalta}
 Synthetic seismogram and its corresponding STA:LTA timeseries.
-(CAN WE ALSO INCLUDE THE ENVELOPES HERE, TOO?)
 The seismogram was calculated using SPECFEM3D and the
 Earth model S20RTS \citep{RitsemaEtal2004} for the CMT catalog event
 050295B, whose details can be found in Table~\ref{tb:events}.  The
@@ -185,12 +182,12 @@
 Time dependent fit based criteria 
 for the 050295B event recorded at ABKT. The time-dependence of these criteria
 is given by the formulae in Appendix~\ref{ap:user_global}. The lower limit on
-acceptable cross-correlation value, $CC_0$, is
+acceptable cross-correlation value, $CC_0$ (solid line), is
 0.85 for most of the duration of the seismogram; it is lowered to 0.75 during
 the approximate surface wave window  defined by the group velocities 4.2\kmps\
 and 3.2\kmps, and is raised to 0.95 thereafter.  The upper limit on time lag,
-$\tau_0$, is 21~s for the whole seismogram.  The upper limit on amplitude
-ratio, $\Delta \ln A_0$, is 1.0 for most of the seismogram; it is reduced to
+$\tau_0$ (dotted line), is 21~s for the whole seismogram.  The upper limit on amplitude
+ratio, $\Delta \ln A_0$ (dashed line), is 1.0 for most of the seismogram; it is reduced to
 1/3 of this value after the end of the surface waves.  
 }
 \end{figure}
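
For reference, the time dependence described in this caption can be written as a small piecewise function.  The sketch below is illustrative only: the threshold values and group velocities are taken from the caption, but the function signature and the use of epicentral distance to convert group velocities to times are assumptions, not code from the FLEXWIN package.

def global_criteria(t, dist_km):
    """Return (CC_0, dtau_0, dlnA_0) at time t (s after origin) for the global scenario."""
    t_surf_start = dist_km / 4.2        # approximate surface-wave window from
    t_surf_end   = dist_km / 3.2        # group velocities of 4.2 and 3.2 km/s
    if t < t_surf_start:
        cc0, dlnA0 = 0.85, 1.0
    elif t <= t_surf_end:
        cc0, dlnA0 = 0.75, 1.0          # relaxed cross-correlation limit during the surface waves
    else:
        cc0, dlnA0 = 0.95, 1.0 / 3.0    # stricter limits after the surface waves
    dtau0 = 21.0                        # upper limit on time lag, constant over the whole record
    return cc0, dtau0, dlnA0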

Modified: seismo/3D/automeasure/latex/flexwin_paper.pdf
===================================================================
(Binary files differ)

Modified: seismo/3D/automeasure/latex/introduction.tex
===================================================================
--- seismo/3D/automeasure/latex/introduction.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/introduction.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -1,50 +1,24 @@
-\subsection*{Notes from CT to AM}
-
-\begin{itemize}
-
-\item I suggest significantly reducing the citations, especially in the case of long lists. This will limit the reference list to only the most pertinent. I think \cite{LiTanimoto93} is a good one for early kernels. I removed references to \citet{ChenEtal2007a} and \citet{PeterEtal2007}, but feel free to add them back in.
-
-\item There is no longer a ``Harvard'' CMT.  Technically it is GCMT now, I think.  All I did was remove ``Harvard'' from the text.
-
-\item I created a new command \verb+\trange+, so that our usage would be consistent (for example, \trange{6}{30}).  Alessia, you can change \verb+\trange+ as you like to change them all at once.
-
-\item At first, I wrote a concluding paragraph in the introduction, but then I moved it to the very end.  What do you think?
-
-\item I deleted the sentence in the Discussion starting with, ``It can be made to mimic travel-time based selection algorithms\ldots''. If you want to use it, then perhaps put it in Section~\ref{sec:algorithm}.
-
-\item With so much of the discussion devoted to adjoint tomography, we might consider the socal adjoint-source figure.  I am on the fence, because I don't want the paper to be too long either!
-
-\item Jeroen will advocate a succint conclusion (or summary), which is why I put it back in.
-
-\end{itemize}
-
-%======================================
-
 \section{Introduction}
 
 Seismic tomography - the process of imaging the 3D structure of the Earth using
 seismic recordings - has been transformed by recent advances in methodology.
 Ray-based approaches are being superseded by finite-frequency kernel-based
 approaches, and 1D reference models by 3D reference models.  These transitions
-are motivated by a greater understanding of the volumetric sensitivity of seismic measurements
-\citep{MarqueringEtal1999,ZhaoEtal2000,DahlenEtal2000}
-and by computational advances in the forward modelling of seismic wave propagation in fully 3D media
-\citep{KomatitschVilotte1998,KomatitschEtal2002,CapdevilleEtal2003}.  
-In the past decade we have learned to calculate analytic sensitivity kernels in 1D media
-\citep{LiTanimoto93,DahlenEtal2000,DahlenBaig2002,FavierChevrot2003,ZhouEtal2004,SieminskiEtal2004,CalvetChevrot2005,DahlenZhou2006,CalvetEtal2006,ZhaoJordan2006}
-and numeric sensitivity kernels in 3D media
-\citep{Capdeville2005,TrompEtal2005,ZhaoEtal2005,ZhaoEtal2006,LiuTromp2006,SieminskiEtal2007a,SieminskiEtal2007b,TianEtal2007}.
+are motivated by a greater understanding of the volumetric sensitivity of
+seismic measurements \citep{MarqueringEtal1999,ZhaoEtal2000,DahlenEtal2000} and by computational advances in the forward
+modelling of seismic wave propagation in fully 3D media \citep{KomatitschVilotte1998,KomatitschEtal2002,CapdevilleEtal2003}.  
+In the past decade we have learned to calculate analytic sensitivity kernels
+in 1D media \citep[e.g.][]{LiTanimoto1993,DahlenBaig2002,DahlenZhou2006} and numeric sensitivity kernels in 3D media \citep[e.g.][]{Capdeville2005,TrompEtal2005,ZhaoEtal2005}.
 The analytic kernels have been taken up rapidly by tomographers, and used to
-produce new 3D Earth models
-\citep{MontelliEtal2004a,ZhouEtal2005,ZhouEtal2006,MontelliEtal2006,Pollitz2007,MaroneEtal2007,Takeuchi2007}.
-The numeric kernels have opened up the possibility of `3D-3D' tomography, i.e.~seismic tomography based upon a 3D reference model, 3D numerical simulations of the seismic wavefield and finite-frequency sensitivity kernels \citep{TrompEtal2005,ChenEtal2007b}.
+produce new 3D Earth models \citep[e.g.][]{MontelliEtal2004a,ZhouEtal2006}.  The numeric kernels have
+opened up the possibility of `3D-3D' tomography, i.e.~seismic tomography based upon a 3D reference model, 3D numerical simulations of the seismic wavefield and finite-frequency sensitivity kernels \citep{TrompEtal2005,ChenEtal2007b}.
 
 The growing number of competing tomographic techniques all have at their core
 the following `standard operating procedure', which they share with all inverse
 problems in physics: make a guess about a set of model parameters; predict an
 observable from this guess (a travel-time, a dispersion curve, a full
 waveform); measure the difference (misfit) between the prediction and the
-observation; improve on the original guess.  This simple description of the
+observation; improve on the original guess.  This vague description of the
 tomographic problem hides a number of important assumptions, common to all
 tomographic approaches: firstly, that we are able to predict observables
 correctly (we can solve the forward problem); secondly, that the misfit is due
@@ -56,72 +30,48 @@
 
 In order to remain within a domain in which these assumptions are still valid,
 it is common practice in tomography to work only with certain subsets of the
-available seismic data.  For example, ray-based travel-time tomography deals
+available seismic data. Data choices are inextricably linked to tomographic method.  For example, ray-based travel-time tomography deals
 only with high frequency body wave arrivals, while great-circle
 surface wave tomography takes pains to satisfy the path-integral approximation,
 and only deals with surface waves that present no evidence of multipathing.
-Data choices are therefore inextricably linked to tomographic method.  The
+The
 emerging 3D-3D methods seem to be the best candidates for tomographic studies
 of regions with complex tectonics or structure. These methods take advantage of
 full wavefield simulations and numeric 3D finite-frequency kernels, the
 accuracy of which releases tomographers from many of the data restrictions
 required when using approximate forward modelling and simplified descriptions
-of sensitivity.  3D-3D tomographic methods therefore require different data selection strategies.
+of sensitivity.  3D-3D tomographic methods require their own specific data selection strategies.
 
-In this paper we present an automated data selection method for the 3D-3D adjoint tomography approach of \cite{TrompEtal2005,LiuTromp2006} and \cite{TapeEtal2007}, which build upon \citet{Tarantola84b}.  In adjoint tomography, the sensitivity kernels that tie variations
+In this paper we present an automated data selection method for the 3D-3D adjoint tomography approach of \cite{TrompEtal2005,LiuTromp2006} and \cite{TapeEtal2007}, which builds upon \cite{Tarantola1984}.  In adjoint tomography, the sensitivity kernels that tie variations
 in Earth model parameters to variations in the misfit are obtained by
 interference of the wavefield used to generate the synthetic seismograms (the
 direct wavefield) with an adjoint wavefield that obeys the same wave equation
 as the direct wavefield, but with a source term which is derived from the
-misfit measurements.  The computational cost of such kernel computations for use in seismic tomography depends only on the number of events, and not on the number of stations or on the number measurements made.  It is therefore to our advantage to use the greatest amount of information from each seismogram.
+misfit measurements.  The computational cost of such kernel computations for use in seismic tomography depends only on the number of events, and not on the number of receivers nor on the number of measurements made.  It is therefore to our advantage to use the greatest amount of information from each seismogram.
 
 The adjoint kernel calculation procedure allows us to measure and use for
-tomographic inversion almost any part of the seismic signal.  We do not even
+tomographic inversion almost any part of the seismic signal.  We do not
 need to identify specific seismic phases, as the kernel will take care of
-defining the sensitivities.  However, with great power comes great
+defining the relevant sensitivities.  However, with great power comes great
 responsibility: there is nothing in the adjoint method itself that prevents us
 from constructing an adjoint kernel from noise, thereby polluting our
-inversion process.  When quantifying the misfit between observed data and
-predictions, noise is usually defined as any signal not physically related to
-the system being simulated.  In earthquake seismology, we consider to be noise
+inversion process.   
+In earthquake seismology, we consider to be noise
 any seismic energy that is not caused directly or indirectly by the earthquake
 being simulated.  It is up to the data selection method to ensure such noise is
 avoided in the choice of the portions of the seismogram to be measured. 
 
 From a signal processing point of view, the simplest way to avoid serious
 contamination by noise is to select and measure strong signals, which in
-seismology correspond to seismic arrivals.  Our data selection strategy
-therefore selects windows on the synthetic seismogram in which the waveform
-contains a distinct energy arrival, then requires an adequate correspondence
+seismology correspond to seismic arrivals.  We select time-windows on the synthetic seismogram within which the waveform
+contains a distinct energy arrival, then require an adequate correspondence
 between observed and synthetic waveforms within these windows.  
-During the first step we need to analyse the character of the waveform itself, in
-order to isolate changes in amplitude or frequency content susceptible of being
-associated with distinct seismic phases.  This analysis is similar to that used
-in automated phase detection algorithms used in the routine location of
-earthquakes.  We have therefore taken a tool used in the detection process ---
+In order to isolate changes in amplitude or frequency content that may be
+associated with distinct energy arrivals, we need to analyse the character of the waveform itself.  This analysis is similar to that used
+in automated phase detection algorithms for the routine location of
+earthquakes.  We have taken a tool used in this detection process ---
 the long-term / short-term ratio --- and applied it to the definition of
-time-windows around distinct seismic phases.  Once these time-windows have been
-defined, we proceed to the second step, during which we reject those windows in
-which the observed and synthetic seismograms fail a set of quality criteria
-based on their cross-correlation, time-lag, amplitude ratio and signal-to-noise
-ratio.  The algorithm parameters for both steps are fully configurable,
-allowing fine control of the window selection process. Although we have designed the algorithm for use in adjoint tomography, its inherent flexibility should make it useful in many data-selection applications.
+time-windows around distinct seismic phases.  
+Although we have designed the algorithm for use in adjoint tomography, its inherent flexibility should make it useful in many data-selection applications.
 
 We have successfully applied our windowing algorithm, the details of which are described in Section~\ref{sec:algorithm}, to diverse seismological scenarios: local and near regional propagation in Southern California, regional subduction-zone propagation in Japan, and global propagation.  We present examples from each of these scenarios in Section~\ref{sec:results}, and we discuss the use of the algorithm in the context of adjoint tomography in Section~\ref{sec:discuss}.
-
-%\subsection{BASIC POINTS FROM CARL}
-
-%\begin{itemize}
-%\item Statistical properties of ``noise'' in seismograms is not well known.
-%\item Full waveform tomography was not a successful in practice as in theory.
-%\item We are interested in regions with complex tectonics, which lead to complex structures, and complex seismograms.
-%\item We are interested in matching the seismograms ``wiggle for wiggle'', not other quanta, for example, spectral content, peak acceleration (WHAT ELSE?).
-%\end{itemize}
-
-%\subsection{EXTRA ALESSIA TEXT -- WHERE DOES THIS FIT IN?}
-
-%Once noise-dominated data have been rejected, we are ready to consider defining preliminary measurement windows.  We choose to define these windows around seismic phase arrivals, for two reasons: (1) the presence of a phase arrival in a window reduces the impact of residual noise on any measurement; (2) distinct phase arrivals tend to have well-defined sensitivity kernels which are intuitively easier to interpret.  
-
-%A reasonably straightforward procedure to define data windows around seismic phases is to estimate their arrival time using a simplified Earth model, and to define windows around these times, allowing for the uncertainty in the travel time and the width of the seismic phase itself.  This kind of procedure is simple to implement when the phases of interest are well spaced on the seismogram, and forms the basis of window selection for many tomographic studies that use cross-correlation measurements of travel time delays (REFs).  When the phases of interest are spaced closely enough on the seismogram that the ideal windows for each phase overlap, different strategies have to be applied.  One such strategy is that implemented by {\bf Panning et al?}, and involves using the predicted travel-time curves in a reference Earth model to pre-define the extend and position of data windows as a function of epicentral distance.
-
-%Windowing strategies of this type have one major drawback, tied to the fact that the prominence of seismic phases depends strongly on focal mechanism and frequency range, and to a lesser extent on 3D Earth structure.  In order to avoid placing empty and or unnecessary windows, these travel-time based schemes may have to be modified to take at least focal mechanism and frequency range into account.  As the importance of these two factors varies as a function of the seismic phases of interest, taking them into account is more or less equivalent to calculating a full synthetic seismogram, and analyzing the shape and amplitude of the synthetic signal.  We argue that since a full synthetic seismogram contains all the travel time, frequency range and focal mechanism information required to assess the usefulness of a particular data window, the seismogram itself should be used to define the data windows.  Instead of using travel times, we choose to detect seismic phase arrivals directly on the synthetic seismograms. 

Modified: seismo/3D/automeasure/latex/method.tex
===================================================================
--- seismo/3D/automeasure/latex/method.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/method.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -1,10 +1,10 @@
 \section{The selection algorithm\label{sec:algorithm}}
 
-Our selection strategy aims to define measurement windows that
+Our selection strategy aims to define measurement time-windows that
 cover as much of a given seismogram as possible, whilst avoiding portions of
 the waveform that are dominated by noise.  We define noise as any signal that
 cannot be modeled using physically reasonable values of the simulation
-parameters.  It is important to note that by this definition, what we consider
+parameters.  It is important to note that by this definition what we consider
 to be noise varies with our simulation abilities.  For example: body waves are
 noise for surface wave simulations; short period signals are noise for long
 period simulations; multiple scattering and coda signals are noise for most
@@ -24,21 +24,20 @@
  Their acceptance into adjoint tomography inversions depends on the
 choice of measurement method: waveform difference measurements can capture the
 full complexity of the difference between observed and simulated composite
-phases, but lead to highly non-linear tomographic inversions, and are more
-sensitive to noise; more robust measurements such as cross-correlation
-travel-times can deal with composite phases only when the simulated and
+phases, but lead to highly non-linear tomographic inversions and are more
+sensitive to noise; measurements such as cross-correlation
+travel-times, which lead to less strongly non-linear tomographic inversions, can deal with composite phases only when the simulated and
 observed signals are similar in shape.  
 
-This discussion of some of the considerations that go into the window
-selection process illustrates how the choices made in this selection are
+These considerations illustrate how the choices made in time-window selection are
 interconnected with all other aspects of the tomographic inversion process,
 from the waveform simulation method (direct problem), through the choice of
-measurement, to the inversion itself, which necessarily depends on the method
+measurement, to the inversion itself, which necessarily depends on the method
 of obtaining sensitivity kernels.  
 
 % Therefore, we have built considerable flexibility into our windowing scheme, enabling it to function under a great variety of tomographic scenarios. 
 
-Our algorithm, called FLEXWIN to reflect its FLEXibility of picking time WINdows for measurement,  operates on pairs of
+Our algorithm, called FLEXWIN to reflect its FLEXibility in picking time WINdows for measurement,  operates on pairs of
 observed and synthetic single component seismograms.  There is no restriction
 on the type of simulation used to generate the synthetics, though realistic
 Earth models and more complete propagation theories yield waveforms that are more similar to the observed
@@ -61,7 +60,7 @@
 \subsection{Phase 0 \label{sec:phase0}}
 %{\em Parameters used: $T_{0,1}$.}
 The purpose of this phase is to pre-process input seismograms, to reject
-noisy records, and to set up a secondary waveform (the short-term / long-term average ratio) derived from the envelope of the synthetic seismogram.  This derived STA:LTA waveform will be used later to define preliminary
+noisy records, and to set up a secondary waveform (the short-term / long-term average ratio) derived from the envelope of the synthetic seismogram.  This STA:LTA waveform will be used later to define preliminary
 measurement windows.
 
 %----------------------
@@ -69,7 +68,7 @@
 \subsubsection{Pre-processing}
 
 We apply minimal and identical pre-processing to both observed and synthetic
-seismograms: band-pass filtering with an non-causal Butterworth
+seismograms: band-pass filtering with a non-causal Butterworth
 filter, whose
 short and long period corners we denote as $T_0$ and $T_1$ respectively. 
 Values of these corner periods should reflect the information content of the data,
@@ -99,13 +98,13 @@
 
 %----------------------
 
-\subsubsection{Construction of STA:LTA timeseries from synthetic seismogram}
+\subsubsection{Construction of STA:LTA timeseries}
 
 Detection of seismic phase arrivals is routinely performed by automated
 earthquake location algorithms.  We have taken a tool used in this
 standard detection process --- the short-term long-term average ratio (STA:LTA)
---- and adapted it to the task of defining data-windows around seismic phases.  Given a synthetic seismogram $s(t)$, we create a
-derived STA:LTA waveform using an iterative algorithm applied to its envelope.
+--- and adapted it to the task of defining time-windows around seismic phases.  Given a synthetic seismogram $s(t)$, we derive its
+STA:LTA timeseries by applying an iterative algorithm to its envelope.
 If we denote the Hilbert transform of the synthetic seismogram by
 $\mathcal{H}[s(t)]$, its envelope $e(t)$ is given by:
 \begin{equation}
@@ -127,8 +126,8 @@
 constants determines the sensitivity of the STA:LTA timeseries.  
 \citet{BaiKennett2001} used a similar timeseries to
 analyse the character of broad-band waveforms, and allowed the constants
-$C_S$ and $C_L$ to depend on the dominant period of the waveform to be
-analysed.  We have followed their lead in setting
+$C_S$ and $C_L$ to depend on the dominant period of the waveform under
+analysis.  We have followed their lead in setting
 \begin{equation}
 C_S = 10^{- \delta t / T_0} \qquad {\rm and} \qquad C_L = 10^{-\delta t / 12 T_0}, 
 \end{equation}
@@ -137,13 +136,13 @@
 
 An example of a synthetic seismogram and its corresponding STA:LTA timeseries $E(t)$ is
 shown in Figure~\ref{fg:stalta}.  Before the first arrivals on the synthetic
-seismogram, the $E(t)$ timeseries warms up, and rises to a plateau.  At each
+seismogram, the $E(t)$ timeseries warms up and rises to a plateau.  At each
 successive seismic arrival on the synthetic, $E(t)$ rises to a
 local maximum.  We can see from Figure~\ref{fg:stalta} that these local maxima
 correspond both in position and in width to the seismic phases in the
 synthetic, and that the local minima in $E(t)$ correspond to the
-transition between one phase and the next.  In the following sections, we shall
-explain how we use these correspondences to define data windows.
+transitions between one phase and the next.  In the following sections we shall
+explain how we use these correspondences to define time-windows.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
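
For illustration, a minimal sketch of the STA:LTA construction described in this section: the envelope is obtained from the Hilbert transform, and E(t) is built from short- and long-term running averages with the decay constants C_S and C_L given above.  The recursive form of the averages is an assumption in the spirit of Bai & Kennett (2001), not code copied from FLEXWIN.

import numpy as np
from scipy.signal import hilbert

def stalta(s, dt, T0):
    """Return the STA:LTA timeseries E(t) for a synthetic seismogram s sampled at dt."""
    e = np.abs(hilbert(s))              # envelope e(t) of the synthetic via the Hilbert transform
    Cs = 10.0 ** (-dt / T0)             # short-term decay constant
    Cl = 10.0 ** (-dt / (12.0 * T0))    # long-term decay constant
    sta, lta = 0.0, 0.0
    E = np.zeros(len(e))
    for i, ei in enumerate(e):
        sta = Cs * sta + ei             # assumed short-term running average
        lta = Cl * lta + ei             # assumed long-term running average
        if lta > 0.0:
            E[i] = sta / lta
    return E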
 
@@ -152,9 +151,9 @@
 
 The correspondence between local maxima in the STA:LTA waveform $E(t)$ and the
 position of the seismic phases in the synthetic seismogram suggests that we
-should center data windows around these local maxima.  The
+should center time-windows around these local maxima.  The
 correspondence between the local minima in $E(t)$ and the transition between
-successive phases suggests the data windows should start and end at these local
+successive phases suggests the time-windows should start and end at these local
 minima.  In the case of complex phases, there may be several local maxima and
 minima within a short time-span.  In order to correctly window these complex
 phases, we must determine rules for deciding when adjacent local maxima
@@ -171,15 +170,14 @@
 seismological scenario, it is not necessary to change $w_E$ for each
 seismogram.  This is also true of all the other parameters in
 Table~\ref{tb:params}: once the system has been tuned,
-these parameters remain unchanged, and can be used for all seismic events in the same scenario. The functional form of those
-parameters that are dependent on time is defined by the user, can depend on
-information about the earthquake source and the recording station, and also
-remains unchanged once the system has been tuned. 
+these parameters remain unchanged and are used for all seismic events in the same scenario. The functional forms of the time-dependent parameters are defined by the user, can depend on
+information about the earthquake source and the receiver, and also
+remain unchanged once the system has been tuned. 
 For the example in Figure~\ref{fg:stalta}, we have required the water level
 $w_E(t)$ to double after the end of the surface wave arrivals (as defined by
-the epicentral distance and a group velocity of $3.2$~\kmps), so as to avoid
-creating data windows after $R1$.  All local maxima that lie above $w_E(t)$
-are used for the creation of candidate data windows.
+the epicentral distance and a group velocity of $3.2$~\kmps) so as to avoid
+creating time-windows after $R1$.  All local maxima that lie above $w_E(t)$
+are used for the creation of candidate time-windows.
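
A schematic sketch of this candidate-window generation, as described above: local maxima of E(t) that rise above the water level w_E(t) act as seeds, and window limits are drawn from the local minima of E(t).  The helper names and the brute-force pairing of minima are illustrative only.

def local_extrema(E):
    """Indices of local maxima and minima of the discrete series E."""
    maxima = [i for i in range(1, len(E) - 1) if E[i - 1] < E[i] >= E[i + 1]]
    minima = [i for i in range(1, len(E) - 1) if E[i - 1] > E[i] <= E[i + 1]]
    return maxima, minima

def candidate_windows(E, t, w_E):
    """All (start, end) time pairs of local minima bracketing an acceptable seed maximum."""
    maxima, minima = local_extrema(E)
    windows = []
    for m in maxima:
        if E[m] <= w_E(t[m]):                       # seed maxima must exceed the water level
            continue
        starts = [i for i in minima if i < m]
        ends = [i for i in minima if i > m]
        windows.extend((t[i], t[j]) for i in starts for j in ends)
    return windows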
 
 We take each acceptable local maximum in turn as a seed maximum, and create all
 possible candidate windows that contain it, as illustrated by
@@ -197,33 +195,32 @@
 \subsection{Phase B \label{sec:phaseB}}
 %{\em Parameters used: $T_0$, $w_E(t)$, $c_{0-4}$.}
 
-After having created a complete suite of candidate data windows in the manner
+After having created a complete suite of candidate time-windows in the manner
 described above, we start the rejection process.  We reject windows based on
-two sets of criteria: those regarding the shape of the STA:LTA waveform $E(t)$,
-and those regarding the similarity of the observed and synthetic waveforms
+two sets of criteria concerning respectively the shape of the STA:LTA waveform $E(t)$,
+and the similarity of the observed and synthetic waveforms
 $d(t)$ and $s(t)$ within each window.   Here we describe the first set of
 criteria; the second set is described in the following section.
 
 The aim of shape-based window rejection is to retain the set of candidate
-data-windows that contain well-developed phases or groups of phases in the synthetic waveform $s(t)$.  The
+time-windows within which the synthetic waveform $s(t)$ contains well-developed seismic phases or groups of phases. The
 four rejection criteria described here are parameterized by the constants
-$c_{0-3}$ in Table~\ref{tb:params}, and scaled in time by $T_0$ and in
+$c_{0-3}$ in Table~\ref{tb:params}, and are scaled in time by $T_0$ and in
 amplitude by $w_E(t)$.  We apply these criteria sequentially.
 
 Firstly, we reject all windows that contain internal local minima of $E(t)$
 whose amplitude is less than $c_0 w_E(t)$.  We have seen above that local
 minima of $E(t)$ tend to lie on the transitions between seismic phases.  By
 rejecting windows that span deep local minima, we are in fact forcing partition
-of unequivocally distinct seismic phases into separate data windows (see Figure~\ref{fg:win_composite}b).
-Secondly, we reject data windows whose length is less than $c_1 T_0$.  By
-rejecting short windows, we are requiring that data windows be long enough to
+of unequivocally distinct seismic phases into separate time-windows (see Figure~\ref{fg:win_composite}b).
+Secondly, we reject windows whose length is less than $c_1 T_0$.  By
+rejecting short windows, we are requiring that time-windows be long enough to
 contain useful information.
-Thirdly, we reject data windows whose seed maximum $E(t_M)$ rises by less than
+Thirdly, we reject windows whose seed maximum $E(t_M)$ rises by less than
 $c_2 w_E(t)$ above either of its adjacent minima.  Subdued local maxima of
 this kind represent minor changes in waveform character, and should not be used
-to anchor data windows.  They may, however, be considered as part of a data
-window with a more prominent maximum (see Figure~\ref{fg:win_composite}c).
-Lastly, we reject data windows that contain at least
+to anchor time-windows.  They may, however, be considered as part of a time-window with a more prominent maximum (see Figure~\ref{fg:win_composite}c).
+Lastly, we reject windows that contain at least
 one strong phase arrival that is well separated in time from $t_M$.  The
 rejection is performed using the following criterion:
 \begin{equation}
@@ -280,7 +277,7 @@
 The next stage is to evaluate the degree of similarity between the observed and
 synthetic seismograms within these windows, and to reject
 those that fail basic fit-based criteria.  A similar kind of rejection is
-performed by most windowing based schemes.  
+performed by most windowing schemes.  
 
 The quantities we use to define well-behavedness of data within a window are
 signal
@@ -311,8 +308,8 @@
 quantities are the time dependent parameters $r_0(t)$, $CC_0(t)$, $\Delta
 \tau_0(t)$ and $\Delta \ln A_0(t)$ in Table~\ref{tb:params}.
 As for the STA:LTA water level $w_E(t)$ used in above, the functional form of
-these parameters is defined by the user, and can depend on source and station
-related parameters such as epicentral distance and earthquake depth.   
+these parameters is defined by the user, and can depend on source and receiver
+parameters such as epicentral distance and earthquake depth.   
 Figure~\ref{fg:criteria} shows the time
 dependence of $CC_0$ , $\Delta \tau_0$ and $\Delta \ln A_0$ for an example seismogram.  
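
As a rough illustration of these within-window quantities, the sketch below computes a normalized cross-correlation maximum CC, its lag, and an rms amplitude ratio, and tests them against the time-dependent limits.  The exact definitions used by FLEXWIN are not reproduced here; these formulas and the sign convention for the lag are stand-ins.

import numpy as np

def window_fit(d, s, dt):
    """Return (CC, dtau, dlnA) for observed d and synthetic s in one window (equal lengths)."""
    xc = np.correlate(d, s, mode='full')
    imax = int(np.argmax(xc))
    cc = xc[imax] / np.sqrt(np.dot(d, d) * np.dot(s, s))   # maximum normalized cross-correlation
    dtau = (imax - (len(s) - 1)) * dt                       # lag (s) at the cross-correlation maximum
    dlnA = 0.5 * np.log(np.dot(d, d) / np.dot(s, s))        # assumed rms amplitude ratio measure
    return cc, dtau, dlnA

def accept(cc, dtau, dlnA, cc0, dtau0, dlnA0):
    """Fit-based acceptance against the limits CC_0(t), Delta tau_0(t), Delta ln A_0(t)."""
    return cc >= cc0 and abs(dtau) <= dtau0 and abs(dlnA) <= dlnA0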
 
@@ -340,7 +337,7 @@
 
 After having rejected candidate data windows that fail any of the shape or
 similarity based criteria described above, we are left with a small number of
-windows, each of which taken singly would be an acceptable data window for
+windows, each of which taken singly would be an acceptable time-window for
 measurement.  As can be seen from Figure~\ref{fg:win_composite}d and the last
 panel of Figure~\ref{fg:win_rej_data}, the remaining windows may
 overlap partially or totally with their neighbours.  Such overlaps are
@@ -349,15 +346,15 @@
 portions.  Resolving this overlap problem is the last step in the
 windowing process.
 
-Overlap resolution should be seen as a set of choices leading to
-the determination of an optimal set of data windows.  What do we mean by
-optimal?  For our purposes, an optimal set of data windows contains only windows that
+Overlap resolution can be seen as a set of choices leading to
+the determination of an optimal set of time-windows.  What do we mean by
+optimal?  For our purposes, an optimal set of time-windows contains only windows that
 have passed all previous tests, that do not overlap with other windows in the set,
 and that cover as much of the seismogram as possible.  When choosing between
-candidate data windows, we favour those within which the
+candidate windows, we favour those within which the
 observed and synthetic seismograms are most similar (high values of $CC$).
 Furthermore, should we have the choice between two short windows and a longer,
-equally well-fitting one covering the same time-span, we would favour
+equally well-fitting one covering the same time-span, we may wish to favour
 the longer window. 
 
 The condition that optimal windows should have passed all previous tests
@@ -374,7 +371,7 @@
 
 We make this choice by constructing all possible non-overlapping subsets of
 candidate windows, and scoring each subset on three criteria: length of
-seismogram covered by windows, average cross-correlation value for the windows,
+seismogram covered by the windows, average cross-correlation value for the windows,
 and total number of windows.  These criteria often work against each other. For
 example, a long window may have a lower $CC$ than two shorter ones, if the two
 short ones have different time lags $\Delta\tau$.  An optimal weighting of the
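
As a schematic illustration of this overlap-resolution step, the sketch below enumerates non-overlapping subsets of the surviving candidate windows and scores each subset on seismogram coverage, mean cross-correlation and window count.  The brute-force enumeration and the weights w1, w2, w3 are placeholders; the optimal weighting referred to above is not reproduced here.

from itertools import combinations

def non_overlapping(subset):
    """True if no two windows in the subset overlap; windows are (t_start, t_end, cc) tuples."""
    wins = sorted(subset)
    return all(a[1] <= b[0] for a, b in zip(wins, wins[1:]))

def subset_score(subset, record_length, w1=1.0, w2=1.0, w3=1.0):
    """Weighted sum of coverage, mean cross-correlation and number of windows (placeholder weights)."""
    coverage = sum(t2 - t1 for t1, t2, _ in subset) / record_length
    mean_cc = sum(cc for _, _, cc in subset) / len(subset)
    return w1 * coverage + w2 * mean_cc + w3 * len(subset)

def best_subset(candidates, record_length):
    """Exhaustively pick the highest-scoring non-overlapping subset of candidate windows."""
    best, best_score = (), -1.0
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            if not non_overlapping(subset):
                continue
            s = subset_score(subset, record_length)
            if s > best_score:
                best, best_score = subset, s
    return best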

Modified: seismo/3D/automeasure/latex/results.tex
===================================================================
--- seismo/3D/automeasure/latex/results.tex	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/latex/results.tex	2008-04-07 18:47:42 UTC (rev 11759)
@@ -9,9 +9,9 @@
 a regional tomography of southern California, down to 60~km (\trange{2}{40}).
 For each of these scenarios, we compare
 observed seismograms to spectral-element synthetics, using our
-algorithm to select data windows on the pairs of time series.  
+algorithm to select time-windows on the pairs of time series.  
 
-As we have seen in the above description of the method, the windowing algorithm
+The windowing algorithm
 itself has little prior knowledge of seismology, other than in the most general
 terms: it considers a seismogram to be a succession of seismic phases indicated
 by changes in amplitude and frequency of the signal with time; it is based upon
@@ -26,10 +26,10 @@
 scenario-dependent information is encapsulated in the tuning parameters of
 Table~\ref{tb:params}.  
 
-We tuned the windowing algorithm separately for each of the three scenarios we present here, and we present examples based on the events listed in Table~\ref{tb:events}.  Tuning parameter values can be found in Table~\ref{tb:example_params}, while the functional forms of the time-dependent parameters can be found in Appendix~\ref{ap:user_fn}.  Once tuned for a scenario, the algorithm is applied to all the events without further modification. 
+We tuned the windowing algorithm separately for each of the three scenarios considered here, and we present examples based on the events listed in Table~\ref{tb:events}.  Tuning parameter values are listed in Table~\ref{tb:example_params}, and the functional forms of the time-dependent parameters are given in Appendix~\ref{ap:user_fn}.  Once tuned for a scenario, the algorithm is applied to all the events in that scenario without further modification. 
 
 
-\subsection{Global examples}
+\subsection{Global scale}
 \label{sec:globe}
 
 Our first scenario is a global scale, long-period tomographic study.
@@ -44,10 +44,10 @@
 good match to observed seismograms for periods longer than 25~s.  For our
 examples, we shall be working in the period range \trange{50}{150}.
 
-Here we discuss windowing results for shadow-zone seismograms of the three earthquakes listed
+Here we discuss windowing results for shadow-zone seismograms of three earthquakes listed
 in Table~\ref{tb:events}: a shallow event in the Ryukyu Islands, Japan
 (101895B), an intermediate depth event in Northern Peru (050295B), and a strong
-deep event in Northern Bolivia (060994A).  We focus on the shadow zone
+deep event in Northern Bolivia (060994A).  We focus on shadow-zone
 seismograms as these contain a large number of often poorly time-separated
 phases, and pose a greater windowing challenge than more commonly used
 teleseismic seismograms.
@@ -58,18 +58,16 @@
 well, indicating that the Earth model S20RTS+CRUST2.0 provides a good 3D image
 of how the Earth is seen by \trange{50}{150} seismic waves.  The fit is far from
 perfect, though, as is attested by the shape differences, time-lags and
-amplitude differences visible on many phases; these all indicate that there is
-room for improving of the Earth model and possibly of certain
+amplitude differences visible on many seismic phases; these indicate that there is
+room for improving the Earth model and possibly certain
 earthquake parameters even at these low frequencies.  
 
-The second observation we make is that our algorithm has placed data windows
+The second observation we make is that our algorithm has placed time-windows
 around most of the significant features that stand out in the STA:LTA
 timeseries $E(t)$ and in the seismograms themselves, and that the window limits
 also seem to be sensibly placed.  These windows were selected according to the
-purely signal processing algorithm described in the previous section, which
-makes choices by taking into consideration the shape of $E(t)$, and the
-similarity between observed and synthetic seismograms, but which has no
-knowledge of Earth structure, or of seismic phases and their travel-time curves.
+purely signal processing algorithm described in the previous section, which has no
+knowledge of Earth structure or of seismic phases and their travel-time curves.
 In order to demonstrate the ability of such an Earth-blind algorithm to set
 windows around actual seismic phases, we have identified the
 seismic arrivals contained within the chosen data windows, using standard
@@ -78,7 +76,7 @@
 to known seismic phases, which are listed in the
 corresponding figure captions.  We have also traced the body wave ray paths
 corresponding to these phases and show them in Figures~\ref{fg:res_abkt}b and
-~\ref{fg:examples}b,d;  these ray path plots serve to illustrate the large
+~\ref{fg:examples}b,d;  these ray path plots serve to illustrate the considerable
 amount of information contained in a single seismogram, even a long period
 seismogram, when all the usable seismic phases are considered.  
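
Since the selection is driven purely by the shape of the STA:LTA timeseries $E(t)$ and by waveform similarity, it is worth recalling how simply $E(t)$ can be computed. A generic recursive sketch, assuming the input is an envelope e(1:npts); the decay constants C_S and C_L are illustrative placeholders, not the values used in the paper:

   ! sketch only: generic recursive short-term / long-term average ratio
   subroutine sta_lta_ratio(npts, e, e_ratio)
     implicit none
     integer, intent(in) :: npts
     double precision, intent(in)  :: e(npts)        ! envelope of the seismogram
     double precision, intent(out) :: e_ratio(npts)  ! E(t)
     double precision, parameter :: C_S = 0.95d0     ! short-term decay (assumed)
     double precision, parameter :: C_L = 0.999d0    ! long-term decay (assumed)
     double precision :: sta, lta
     integer :: i
     sta = 0.d0
     lta = tiny(1.d0)
     do i = 1, npts
        sta = C_S*sta + e(i)
        lta = C_L*lta + e(i)
        e_ratio(i) = sta/lta
     enddo
   end subroutine sta_lta_ratio

Maxima of this ratio mark the energy arrivals around which candidate windows are proposed, as in the $P_{\rm diff}$ example discussed next.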
 
@@ -101,12 +99,10 @@
 
 We have described those seismic phases and other features in the seismograms
 that have been selected by our windowing algorithm.  Equally important are the
-phases that have been rejected.  Two such phases, 
-rejected for different reasons, are $P_{\rm diff}$ and $S4$ on the
+phases that have been rejected.  Two such phases are $P_{\rm diff}$ and $S4$ on the
 vertical component seismogram in Figure~\ref{fg:res_abkt}a.  We can identify
-the reasons for the rejection of these phases by comparing the selected data
-windows with the candidate windows at each stage in the rejection process
-(Figure~\ref{fg:win_rej_data}.  The $P_{\rm diff}$ phase, though small on the
+the reasons for the rejection of these phases by comparing the selected time-windows with the candidate windows at each stage in the rejection process
+(Figure~\ref{fg:win_rej_data}).  The $P_{\rm diff}$ phase, though small on the
 long period seismogram, gives rise to a strong maximum on the $E(t)$ timeseries
 and therefore to at least one candidate window.  Candidate windows
 containing $P_{\rm diff}$ disappear from Figure~\ref{fg:win_rej_data} at the
@@ -128,19 +124,19 @@
 summaries such as those in Figure~\ref{fg:composites}, which show at a glance
 the geographical path distribution of records containing acceptable windows,
 the distribution of $CC$, $\Delta\tau$ and $\Delta\ln A$ values within the
-accepted data windows, and data window record sections.  Comparison of the
+accepted time-windows, and time-window record sections.  Comparison of the
 summary plots for the shallow Ryukyu Islands event and the deep
 Bolivia event (Figure~\ref{fg:composites}b and~e respectively) shows that both
 have similar one-sided distributions of $CC$ values, strongly biased towards
 the higher degrees of similarity $CC>0.95$.  The two events also have similar
 two-sided $\Delta\ln A$ distributions that peak at $\Delta\ln A\simeq0.25$,
-indicating that on average, the synthetics underestimate the amplitude of the
+indicating that on average the synthetics underestimate the amplitude of the
 observed waveforms by 25\%.  We cannot know at this stage if this
 anomaly is due to an underestimation of the seismic moment of the events, or to an
 overestimation of the attenuation.  The $\Delta\tau$ distributions for the two
-events are also two-sided, though while the shallow event $\Delta\tau$ values
+events are also two-sided. The shallow event $\Delta\tau$ values
 peak between 0 and 4~s, indicating that the synthetics are moderately faster
-than the observed records, the deep event $\Delta\tau$ distribution peaks at
+than the observed records; the deep event $\Delta\tau$ distribution peaks at
 much higher time lags of 8--10~s.  Possible explanations for these large average
 time lags include an origin time error, and/or an overestimation of the seismic
 velocity at the source location.  
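
The 25\% figure quoted above follows directly if $\Delta\ln A$ is read as the logarithmic observed-to-synthetic amplitude ratio (a common convention; the paper's own definition is the one that applies), since to first order

   \[
   \Delta\ln A = \ln\!\left(\frac{A_{\rm obs}}{A_{\rm syn}}\right)
               \approx \frac{A_{\rm obs}-A_{\rm syn}}{A_{\rm syn}},
   \qquad
   \Delta\ln A \simeq 0.25 \;\Rightarrow\; A_{\rm obs} \approx 1.25\,A_{\rm syn}.
   \]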
@@ -243,7 +239,7 @@
 corresponding to $sS$.  The summary plot for the \trange{24}{120} period range shows a single branch of windows on the vertical component that splits up into separate $P$- and $S$-wavepackets at distances greater than 15\deg.  The same split is visible on the radial component, but occurs earlier (around 10\deg), while the transverse component windows form a single branch containing the merged $S+sS$ arrivals.
  
 Comparison of the histograms in Figures~\ref{fg:200511211536A_T06_rs} and~\ref{fg:200511211536A_T24_rs} shows that windows selected on the \trange{24}{120} seismograms tend to have higher degrees of waveform similarity than those selected on the \trange{6}{30} records.  
-$\Delta\tau$ values peak between $-5$~s to $0$~s in both period ranges, indicating that 
+$\Delta\tau$ values peak between $-5$~s and $0$~s in both period ranges, indicating that 
 the synthetics are slower than the observed records. 
 The particularly large peak at $-2$~s in the $\Delta\tau$ distribution
 of Figure~\ref{fg:200511211536A_T06_rs}c is probably due to  
@@ -269,7 +265,7 @@
 event 091502B.
 Comparing the statistics for these three events, we see that
 the degree of similarity $CC$ improves with increasing event 
-depth, implies the estimation 
+depth, implying that the estimation 
 of mantle structure is better than the estimation of crustal structure
 in the initial model.
 The $\Delta\ln A$ distributions of these three events 
@@ -278,7 +274,7 @@
 However, the $\Delta\tau$ distributions have very different features:
 the shallow event (051502B) has a large peak 
 at $-10$~s and another smaller peak at 8~s; the intermediate-depth 
-event (200511211536A) has very peak at -2~s; the deep event(091502B) has a more 
+event (200511211536A) has a sharp peak at $-2$~s; the deep event (091502B) has a more 
 distributed $\Delta\tau$ in the range $-2$ to $-10$~s.
 Possible explanations for these large average
 time lags include an origin time error, 
@@ -298,14 +294,5 @@
 The windowing algorithm tends to pick five windows on each set of three-component longer-period seismograms (Figures~\ref{fg:socal_CLC} and~\ref{fg:socal_rs_T06} are representative examples): on the vertical and radial components the first window corresponds to the body-wave arrival and the second to the Rayleigh wave, while windows on the transverse component capture the Love wave.  The shorter-period synthetic seismograms do not agree well with the observed seismograms, especially in the later part of the signal, leading to fewer picked windows. In Figure~\ref{fg:socal_CLC}e, only two windows are selected by the algorithm: a P arrival recorded on the radial component, and the combined S and Love-wave arrival on the transverse component. The P-wave arrival on the vertical component is rejected because the cross-correlation value within the time window does not exceed the specified minimum value of 0.85 (Table~\ref{tb:example_params}). 
 
 Figure~\ref{fg:socal_FMP} shows results for the same event as Figure~\ref{fg:socal_CLC}, but for a different station, FMP, situated 52~km from the event and within the Los Angeles basin. Comparison of the two figures highlights the characteristic resonance caused by the thick sediments within the basin.  This resonance is beautifully captured by the transverse component synthetics (Figure~\ref{fg:socal_FMP}d, record T), thanks to the inclusion of the basement layer in the crustal model \citep{KomatitschEtal2004}. In order to pick such long time windows with substantial frequency-dependent measurement differences, we are forced to lower the minimum cross-correlation value for the entire dataset (0.74 in Table~\ref{tb:example_params}) and increase $c_{4b}$ to capture the slow decay in the STA:LTA curves (Figure~\ref{fg:socal_FMP}d, record T). It is striking that although these arrivals look nothing like the energy packets typical of the global case, the windowing algorithm is still able to determine the proper start and end times for the windows.  In Figure~\ref{fg:socal_FMP}e, the windowing algorithm selects three short-period body-wave time windows with superb agreement between data and synthetics.
-%Finally, in Figure~\ref{fg:socal_adj} we present one example of a set of adjoint sources for one station for one event.  For a cross-correlation traveltime measurement, the adjoint source is simply a weighted version of the synthetic velocity.  The synthetic seismograms in the left column of Figure~\ref{fg:socal_adj} are displacement records.  The weight for each time window in an adjoint source contains three factors:
-%%
-%\begin{enumerate}
-%\item the cross-correlation traveltime measurement, $\Delta T$;
-%\item the estimated uncertainty associated with the measurement, $\sigma_T$;
-%\item a term representing the size of the synthetic pulse \citep[][Eq.~42]{TrompEtal2005}.
-%\end{enumerate}
-%%
-%The contributions act in a manner such that a large measurement with a low uncertainty estimate for a small pulse will have the largest weight for the adjoint source.
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
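
In PAR_FILE terms, the retuning described above amounts to adjusting the per-scenario base acceptance levels. A hypothetical fragment in the format of the PAR_FILE_T02 diff that follows; only TSHIFT_BASE actually appears in that diff, and the CC entry name is inferred from the CC_BASE constant read in user_functions.f90 (which period band each value belongs to is given by Table~\ref{tb:example_params}, not by this sketch):

   # -------------------------------------------------------------
   # TSHIFT
   TSHIFT_BASE                     = 5.0

   # -------------------------------------------------------------
   # limit on CC for window acceptance
   CC_BASE                         = 0.74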

Modified: seismo/3D/automeasure/user_files/socal_3D/PAR_FILE_T02
===================================================================
--- seismo/3D/automeasure/user_files/socal_3D/PAR_FILE_T02	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/user_files/socal_3D/PAR_FILE_T02	2008-04-07 18:47:42 UTC (rev 11759)
@@ -29,7 +29,7 @@
 
 # -------------------------------------------------------------
 # TSHIFT
-TSHIFT_BASE                     = 2.0
+TSHIFT_BASE                     = 5.0
 
 # -------------------------------------------------------------
 # limit on CC for window acceptance

Modified: seismo/3D/automeasure/user_files/socal_3D/user_functions.f90
===================================================================
--- seismo/3D/automeasure/user_files/socal_3D/user_functions.f90	2008-04-07 08:36:09 UTC (rev 11758)
+++ seismo/3D/automeasure/user_files/socal_3D/user_functions.f90	2008-04-07 18:47:42 UTC (rev 11759)
@@ -11,21 +11,31 @@
   ! for qinya's scsn picking
   double precision :: Pnl_start, S_end, Sw_start, Sw_end
  
+!===========================
 
+! -----------------------------------------------------------------
+! This is the basic version of the subroutine - no variation with time
+! -----------------------------------------------------------------
+   do i = 1, npts
+     time = b+(i-1)*dt
+     DLNA_LIMIT(i) = DLNA_BASE
+     CC_LIMIT(i) = CC_BASE
+     TSHIFT_LIMIT(i) = TSHIFT_BASE       ! previously WIN_MIN_PERIOD/2.0
+     STALTA_W_LEVEL(i) = STALTA_BASE
+     S2N_LIMIT(i) = WINDOW_AMP_BASE
+   enddo
 
-  if (.not. BODY_WAVE_ONLY) then
-     Pnl_start =  -5.0 + dist_km/7.8
-     Sw_start  = -15.0 + dist_km/3.5
-     Sw_end    =  35.0 + dist_km/3.1
-  else
-     Pnl_start =  P_pick - 5.0
-     S_end     =  S_pick + 5.0
-     Sw_start  = -15.0 + dist_km/3.5
-     Sw_end    =  35.0 + dist_km/3.1
-  endif
+!!$  if (.not. BODY_WAVE_ONLY) then
+!!$     Pnl_start =  -5.0 + dist_km/7.8
+!!$     Sw_start  = -15.0 + dist_km/3.5
+!!$     Sw_end    =  35.0 + dist_km/3.1
+!!$  else
+!!$     Pnl_start =  P_pick - 5.0
+!!$     S_end     =  S_pick + 5.0
+!!$     Sw_start  = -15.0 + dist_km/3.5
+!!$     Sw_end    =  35.0 + dist_km/3.1
+!!$  endif
 
-
-  
   ! regional (Qinya's formulation):
   ! -------------------------------------------------------------
   ! see Liu et al. (2004), p. 1755, but note that the PARENTHESES
@@ -55,15 +65,10 @@
      write(*,*) 'DEBUG : noise_end = ', sngl(noise_end)
   endif
 
-  ! loop over all seismogram points
+ ! --------------------------------
+ ! modulate criteria in time
   do i = 1, npts
-     ! calculate time
-     time = b+(i-1)*dt
-     ! set the values to base ones by default
-     DLNA_LIMIT(i)     = DLNA_BASE
-     TSHIFT_LIMIT(i)   = WIN_MIN_PERIOD/2.0
-     CC_LIMIT(i)       = CC_BASE
-     STALTA_W_LEVEL(i) = STALTA_BASE
+     time = b+(i-1)*dt     ! time
 
      ! set time- and distance-specific Tshift and DlnA to mimic Qinya's criteria
      ! (see Liu et al., 2004, p. 1755; note comment above)
@@ -81,14 +86,14 @@
         STALTA_W_LEVEL(i) = 2.*STALTA_BASE
      endif
 
-     ! raises STA/LTA water level after surface wave arrives. 
+     ! raises STA/LTA water level after surface wave arrives
      if (BODY_WAVE_ONLY) then
         if(time.gt.S_end) then
            STALTA_W_LEVEL(i) = 10.*STALTA_BASE
         endif
      endif
   
-     ! raises STA/LTA water level before p wave arrival.
+     ! raises STA/LTA water level before P wave arrival.
      if(time.lt.Pnl_start) then
         STALTA_W_LEVEL(i) = 10.*STALTA_BASE
      endif
@@ -96,17 +101,6 @@
   enddo
 
 ! The following is for check_window quality_s2n
-! -----------------------------------------------------------------
-! This is the basic version of the subroutine - no variation with time
-! -----------------------------------------------------------------
-   do i = 1, npts
-!     time = b+(i-1)*dt
-!     DLNA_LIMIT(i) = DLNA_BASE
-!     CC_LIMIT(i) = CC_BASE
-!     TSHIFT_LIMIT(i) = TSHIFT_BASE
-!    STALTA_W_LEVEL(i) = STALTA_BASE
-     S2N_LIMIT(i) = WINDOW_AMP_BASE
-   enddo
 
 ! -----------------------------------------------------------------
 ! Start of user-dependent portion
@@ -135,7 +129,5 @@
 ! End of user-dependent portion
 ! -----------------------------------------------------------------
 
-
-
 end subroutine set_up_criteria_arrays
 ! -------------------------------------------------------------


