[aspect-devel] How to grab the radius of every interior sphere made up from cell faces in the domain and some MPI questions (Ian Rose)

Thorsten Becker twb at usc.edu
Sat Oct 24 18:46:13 PDT 2015


Happy to help with this from the HC side, but I think Max has it covered.
Thanks! (Note that you can pull simple stuff (e.g. geoid, dynamic
topography) directly out of the SEATREE
<https://geosys.usc.edu/projects/seatree/> GUI, which comes fully installed
with a bunch of other stuff within UGESCE
<http://geodynamics.usc.edu/~becker/ugesce.html> as a VirtualBox
environment.)

We never tested HC much against other kinds of semi-analytical approaches
(because enthusiasm for the previous benchmarking exercise was low), but
folks keep using it and so far it seems to work (as long as it is not pushed
to spherical harmonic degrees that are too high, >~60; see the help page, a
known problem with the propagator approach). Results are internally
consistent with Steinberger's code, which is a recoding of the original
Hager & O'Connell code. We did test HC against CitcomS (or vice versa), and
that worked (e.g. geoid-wise). It might be interesting to get Petrunin (as
in Petrunin et al., 2013
<http://onlinelibrary.wiley.com/doi/10.1002/ggge.20226/pdf>) involved; they
have done LVV (lateral viscosity variation) tests using an update of the
Zhang and Christensen iterative approach.

Cheers

T


Thorsten W Becker
geodynamics.usc.edu <http://geodynamics.usc.edu/~becker>

On Sat, Oct 24, 2015 at 6:30 PM, Max Rudolph <maxwellr at gmail.com> wrote:

> For the benchmarking effort, it would be useful to also make a comparison
> to results obtained using the propagator matrix technique, and to calculate
> the geoid dynamic response function for radially localized density
> perturbations with lateral structure containing only a single spherical
> harmonic degree/order. I did this recently with HC to verify that I could
> reproduce older results by Hager and others and would be happy to try to
> provide the results of my comparison in a format that could be compared
> with results from ASPECT.
>
>
>
> On Fri, Oct 23, 2015 at 10:33 PM, <aspect-devel-request at geodynamics.org>
> wrote:
>
>>
>>
>> Message: 1
>> Date: Fri, 23 Oct 2015 22:33:13 -0700
>> From: Ian Rose <ian.rose at berkeley.edu>
>> To: aspect-devel <aspect-devel at geodynamics.org>
>> Subject: Re: [aspect-devel] How to grab the radius of every interior
>>         sphere made up from cell faces in the domain and some MPI
>> questions
>>
>> Thanks for summarizing, Rene.
>>
>> I would actually think that my approach is the more efficient one, since
>> it does not involve an expensive interpolation to a different grid, nor
>> binning into radial slices. All three approaches will ultimately involve a
>> big MPI sum over the local contributions to the expansion coefficients.
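>>
>> (Concretely, by "big MPI sum" I just mean the usual deal.II reduction; a
>> rough sketch with hypothetical names, not the actual branch code:)
>>
>>     // Each process integrates only over its locally owned cells and
>>     // accumulates its share of the expansion coefficients ...
>>     std::vector<double> local_coefficients (n_coefficients, 0.0);
>>     // ... fill local_coefficients ...
>>
>>     // ... and a single global reduction then combines them:
>>     std::vector<double> global_coefficients (n_coefficients);
>>     Utilities::MPI::sum (local_coefficients,
>>                          this->get_mpi_communicator(),
>>                          global_coefficients);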
>>
>> To answer your second question: my approach will allow you to visualize
>> the geoid on the surface and at the CMB, but it will not work for
>> visualizing gravity anomalies in the interior (at least not without
>> solving an additional Poisson equation with the geoid as the boundary
>> condition). I would note that making a visualization postprocessor is
>> potentially quite expensive in any approach. With the spherical harmonic
>> transform approach each evaluation would take O(n^2 m) multiplications,
>> where n is the maximum spherical harmonic degree and m is the number of
>> radial slices.
>>
>> My code is pretty close to producing output for benchmarking. The main
>> obstacles are (1) the difficulty of evaluating dynamic topography,
>> especially at lower resolutions, and (2) the lack of a good set of tests
>> to benchmark against.
>>
>> On Fri, Oct 23, 2015 at 3:45 PM, Rene Gassmoeller <rengas at gfz-potsdam.de>
>> wrote:
>>
>> > Hi Shangxin, Hi Ian,
>> >
>> > let me try to summarize the ideas of the three versions (as far as I
>> > understood them) to figure out how best to proceed, ordered by
>> > increasing complexity:
>> >
>> > (1) The version I put online seems to be the simplest in terms of
>> > computation (also because it is very similar to what CitcomS does);
>> > however, it is also the least accurate, because it simply integrates
>> > everything at the mesh points. But it already has a way to visualize the
>> > geoid as a visualization postprocessor by expanding the coefficients at
>> > the usual output mesh positions, something I would eventually like to
>> > have in whatever version we produce.
>> >
>> > (2) Shangxin's version improves the above approach by integrating the
>> > density contributions on a more reasonable grid, and tries to align the
>> > layers with the cell surfaces. This is more tricky with AMR and parallel
>> > computations; however, I think it could be done and would improve the
>> > accuracy (the crucial point would be the speed and scalability, though).
>> > You might need to check for every point of your new grid whether it is
>> > in the local domain by trying to look for the active cell surrounding it
>> > (GridTools::find_active_cell_around_point can do that for you). You
>> > might be able to speed this up a bit by only looking at the points that
>> > lie in the bounding box of the local triangulation. Afterwards you want
>> > to know the solution at the points that are in the local domain (e.g. by
>> > using FEFieldFunction or by manually generating quadrature rules for
>> > every point). This approach might be very slow, though.
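>> >
>> > Roughly, a sketch of that point lookup (hypothetical names, untested):
>> >
>> >     // Is the point p inside a cell that this process owns?
>> >     try
>> >       {
>> >         const std::pair<typename DoFHandler<dim>::active_cell_iterator,
>> >                         Point<dim> >
>> >           cell_and_ref_point
>> >             = GridTools::find_active_cell_around_point (this->get_mapping(),
>> >                                                         this->get_dof_handler(),
>> >                                                         p);
>> >         if (cell_and_ref_point.first->is_locally_owned())
>> >           {
>> >             // p belongs to this process; evaluate the solution there,
>> >             // e.g. by building a one-point quadrature rule from
>> >             // cell_and_ref_point.second and an FEValues object, or by
>> >             // using Functions::FEFieldFunction.
>> >           }
>> >       }
>> >     catch (...)
>> >       {
>> >         // p was not found in any cell this process knows about;
>> >         // another process will pick it up.
>> >       }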
>> >
>> > (3) Ian's version involves changing the internal computation to a
>> > multipole expansion, but might actually lead to the most accurate
>> > representation, and it also automatically circumvents the problem of the
>> > interpolation to different positions. I must admit that I did not look
>> > into the math exactly, but if we can show that this approach passes some
>> > kind of test and is not significantly slower, then I have absolutely no
>> > problem using it. Ian, just to be clear: in this approach we also
>> > calculate multipole expansion coefficients that could be queried by a
>> > visualization postprocessor and expanded again at arbitrary positions
>> > (like our output coordinates)? And since you use the same parallel
>> > implementation as the first approach, you simply calculate local
>> > coefficients and afterwards sum them globally via MPI::sum, so that
>> > parallel computation is inherently easy?
>> >
>> > So how to proceed best? Ian, how far is your code from producing some
>> > output that we could compare our plugins to? I must admit that I would
>> > be happy to skip the tedious interpolation of values to a different
>> > grid ;).
>> >
>> > Let me know what you two prefer,
>> > Rene
>> >
>> >
>> >
>> > On 10/23/2015 11:28 AM, Ian Rose wrote:
>> >
>> > Coming a little late to this discussion, but I am also interested in
>> > having a geoid postprocessor (in fact, I think it is a really important
>> > missing feature in ASPECT: the geoid is one of our best observables).
>> > During the hackathon I put a bunch of work into an implementation of
>> one,
>> > using Rene's as a starting point. Unfortunately, it has stalled a bit
>> from
>> > my end, but we should work to avoid duplicated effort if possible.
>> >
>> > I would like to put in a plug for an alternative approach to calculating
>> > the geoid, which should
>> > (a) work better in parallel, and
>> > (b) not require a huge amount of interpolation onto spherical surfaces.
>> >
>> > Shangxin, the formula you included comes from solving Laplace's equation
>> > using spherical multipole moments. Evaluating the multipole moments
>> > involves integrals over the volume of the domain. Typically what is done
>> > for planets is to do this spectrally using spherical harmonics (this is
>> > mathematically appealing, since spherical harmonics are deeply related
>> > to multipole moments). The volume integral is split up into integrals of
>> > spherical harmonics over spheres of constant radius, and then an
>> > integral in radius, resulting in the formula you gave.
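>> >
>> > Schematically (glossing over normalization conventions), the splitting is
>> >
>> >     q_{nm} = \int_V \delta\rho \, r^n \, Y_{nm}(\theta,\phi) \, dV
>> >            = \int_{r_c}^{r_s} \Big[ \oint \delta\rho \, Y_{nm} \, d\Omega \Big] r^{n+2} \, dr
>> >            = \int_{r_c}^{r_s} \delta\rho_{nm}(r) \, r^{n+2} \, dr
>> >
>> > i.e. an angular transform on each sphere of constant radius, followed by
>> > a radial integral.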
>> >
>> > Unfortunately this approach has a couple of drawbacks which make it a
>> bit
>> > more difficult to use with ASPECT, and as far as I understand, that is
>> what
>> > you are struggling with here.
>> > (1) Spherical harmonic transforms are usually done using quadrature
>> over a
>> > Gauss-Legendre grid. This involves a huge amount of interpolation.
>> > (2) You have to identify the values of your function at different radii,
>> > which is a pain in an unstructured grid, and doubly so when AMR is used.
>> >
>> > My suggestion for an alternative is to skip the whole spherical harmonic
>> > transform on spheres of constant radius. Instead, we could operate
>> > directly on the multipole expansion, doing volume integrals using the
>> > finite element grid (remember, finite elements are great at integrating
>> > over volumes). This has the advantage of being simpler and requiring no
>> > interpolation or radius-locating.
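>> >
>> > A rough sketch of what I have in mind (hypothetical names, not literally
>> > the code on my branch):
>> >
>> >     // Accumulate q_{nm} = \int \delta\rho \, r^n \, Y_{nm} \, dV over
>> >     // the locally owned cells, then combine the contributions of all
>> >     // ranks. real_spherical_harmonic() stands in for whatever Y_{nm}
>> >     // evaluation we end up using.
>> >     const QGauss<dim> quadrature (quadrature_degree);
>> >     FEValues<dim> fe_values (this->get_mapping(), this->get_fe(), quadrature,
>> >                              update_quadrature_points | update_JxW_values);
>> >
>> >     std::vector<double> local_moments (n_coefficients, 0.0);
>> >
>> >     for (const auto &cell : this->get_dof_handler().active_cell_iterators())
>> >       if (cell->is_locally_owned())
>> >         {
>> >           fe_values.reinit (cell);
>> >           std::vector<double> delta_rho (quadrature.size(), 0.0);
>> >           // ... fill delta_rho from the solution / material model ...
>> >
>> >           for (unsigned int q = 0; q < quadrature.size(); ++q)
>> >             {
>> >               const Point<dim> p = fe_values.quadrature_point (q);
>> >               unsigned int i = 0;
>> >               for (unsigned int n = 0; n <= max_degree; ++n)
>> >                 for (int m = -int(n); m <= int(n); ++m, ++i)
>> >                   local_moments[i] += delta_rho[q]
>> >                                       * std::pow (p.norm(), n)
>> >                                       * real_spherical_harmonic (n, m, p)
>> >                                       * fe_values.JxW (q);
>> >             }
>> >         }
>> >
>> >     std::vector<double> global_moments (n_coefficients);
>> >     Utilities::MPI::sum (local_moments, this->get_mpi_communicator(),
>> >                          global_moments);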
>> >
>> > I have a branch with an initial implementation of this scheme in it. It
>> is
>> > not quite working, and as Rene implied, benchmarking is difficult. I'll
>> > submit a PR in a moment.
>> >
>> > Best,
>> > Ian
>> >
>> > On Thu, Oct 22, 2015 at 1:22 PM, Shangxin Liu <sxliu at vt.edu> wrote:
>> >
>> >> Hi Rene,
>> >>
>> >> I tested my code in a simple degree 3, order 2 harmonic perturbation
>> >> case; I think you also do that in your code. In my code there is a main
>> >> execute template performing the computation of the geoid from spherical
>> >> harmonic expansions. There are three other templates, each a function
>> >> that returns the spherical harmonic coefficients of the surface topo,
>> >> CMB topo, and density anomaly. There is also another template, a
>> >> function that performs the transformation of the data to spherical
>> >> harmonic coefficients and returns the values. Maybe this is not the best
>> >> way, but for now it works on a single processor for the test.
>> >>
>> >> As for grabbing different layers' radii, you're right, my approach for
>> >> now is exactly
>> >>
>> >> vector R_sphere initialized with inner radius and outer radius, (r_CMB,
>> >> r_surface);
>> >> Then loop through all cells
>> >>   if (the radius of the midpoint of the upper face is already in the
>> >> vector)
>> >>     skip cell;
>> >>   else
>> >>     push_back radius into R_sphere;
>> >> reorder the elements in R_sphere from the smallest r_CMB to largest
>> >> r_surface.
>> >> Then perform the numerical integral of rho_l_m(r) through each layer.
>> >>
>> >> This approach now works fine on a single processor (I've taken care of
>> >> the floating point math in my code), but doesn't work in parallel, since
>> >> each processor only covers a sub-domain; in that case each processor
>> >> holds its own vector R_sphere, and each of them may not include all the
>> >> radius values throughout the mantle. At the beginning, I considered
>> >> directly computing the radii with an equal division,
>> >> 'minimal_radius + (n / number_of_layers) * model_depth' with n being the
>> >> index of the layer I'm in. However, I found that with 0 initial global
>> >> refinement the three output radii (3480, 4501, 6371) are not equally
>> >> spaced. This puzzled me a bit and made me resort to the above "awkward"
>> >> way. So how can I solve this problem in the parallel code?
>> >>
>> >> As for the adaptive refinement problem, yes, from the beginning I
>> >> realized that this "aligning layers with the cell boundaries" approach
>> >> will break down with AMR. But for now I still don't know the details of
>> >> the adaptive mesh refinement in ASPECT, so I've never used this
>> >> parameter in my global models. Maybe we should use a better approach
>> >> that also accommodates AMR.
>> >>
>> >> About the second problem, let me describe in detail what I'm doing:
>> >>
>> >> In my previous template (function) that computes the spherical harmonic
>> >> coefficients, to deal with the term sin(theta)*delta_theta*delta_phi
>> >> (theta is the polar angle and phi is the azimuth), I directly regard
>> >> each cell's upper face on the sphere as a small patch of the sphere
>> >> integral, and use the area of each cell's upper face divided by the
>> >> square of the current sphere's radius, i.e.,
>> >> sin(theta)*delta_theta*delta_phi = delta_area/(r_sphere^2). However,
>> >> this approach works fine at higher refinement levels but generates a
>> >> considerable error on coarse meshes. So now I have a new function that
>> >> generates a 2D grid in theta and phi on the sphere to compute the
>> >> surface integral (theta = 0:size:180; phi = 0:size:360). The size of the
>> >> grid is controlled by an input parameter. In this template, I need some
>> >> kind of 2D interpolation on the sphere to get the value at each new grid
>> >> point from the output values at the cell upper faces' midpoints. For now
>> >> I just use the simplest nearest-neighbor interpolation: find the cell
>> >> upper face midpoint closest to the new grid point and set the value at
>> >> the new grid point to the value at that midpoint. In this way, the
>> >> result at lower refinement levels indeed becomes more accurate. However,
>> >> this induces another MPI problem similar to the one above. Each
>> >> processor only covers part of the cells, so I need to further compare
>> >> between the processors to find the closest midpoint among all the local
>> >> closest results, and also to find out which processor owns that
>> >> "closest" cell upper face midpoint.
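>> >>
>> >> Roughly, what I imagine is needed here is something like the following
>> >> (just a sketch, untested, hypothetical names):
>> >>
>> >>     // Every process has computed local_min_distance together with the
>> >>     // closest midpoint and the value there. First reduce the distance:
>> >>     const double global_min_distance
>> >>       = Utilities::MPI::min (local_min_distance,
>> >>                              this->get_mpi_communicator());
>> >>
>> >>     // Determine which rank owns that minimum (smallest rank wins ties):
>> >>     const unsigned int my_rank
>> >>       = Utilities::MPI::this_mpi_process (this->get_mpi_communicator());
>> >>     const unsigned int owner
>> >>       = Utilities::MPI::min ((local_min_distance == global_min_distance)
>> >>                                ? my_rank
>> >>                                : numbers::invalid_unsigned_int,
>> >>                              this->get_mpi_communicator());
>> >>
>> >>     // The owner broadcasts its midpoint and value to everyone:
>> >>     double data[4] = {0., 0., 0., 0.};
>> >>     if (my_rank == owner)
>> >>       {
>> >>         for (unsigned int d = 0; d < 3; ++d)
>> >>           data[d] = closest_midpoint[d];
>> >>         data[3] = value_at_closest_midpoint;
>> >>       }
>> >>     MPI_Bcast (data, 4, MPI_DOUBLE, owner, this->get_mpi_communicator());
>> >>     // (MPI_Allreduce with MPI_MINLOC would be the other standard way.)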
>> >>
>> >> Hope this further clarifies the problems and my workflow,
>> >>
>> >> Best,
>> >> Shangxin
>> >>
>> >>>
>> >>>
>> >>> Message: 1
>> >>> Date: Thu, 22 Oct 2015 11:31:44 -0500
>> >>> From: Rene Gassmoeller <rengas at gfz-potsdam.de>
>> >>> To: aspect-devel at geodynamics.org
>> >>> Subject: Re: [aspect-devel] How to grab the radius of every interior
>> >>>         sphere made up from cell faces in the domain and some MPI
>> >>> questions
>> >>>
>> >>>
>> >>> Hi Shangxin,
>> >>> I think I can help you with some of your questions, but first:
>> >>> I think I had mentioned at the hackathon and also at one of the video
>> >>> user meetings earlier this year that I was already working together
>> >>> with some others on a geoid postprocessor. At the hackathon and
>> >>> afterwards Jacky, Ian, Anthony Osei-Tutu (a PhD student with us in
>> >>> Potsdam) and myself put something together that is already close to
>> >>> working. Since the project that gave us the incentive to write the
>> >>> postprocessor developed in a slightly different way, it was never
>> >>> brought into a shape where we opened a pull request for the main
>> >>> version (it was a branch on my github repository). I did not know you
>> >>> were working on something similar, otherwise I would have opened one.
>> >>> But I did that now, so you can have a look at:
>> >>>
>> >>> https://github.com/geodynamics/aspect/pull/648
>> >>>
>> >>> The code is not exactly finished; we never benchmarked it properly, and
>> >>> it gives some curious patterns at low resolutions, but it seems to work
>> >>> ok at higher resolutions. Maybe you have more experience with testing
>> >>> geoid output, or you can ask Scott for some test cases?
>> >>>
>> >>> From what I read in your mail, we might be able to merge the best of
>> >>> the two versions into something that actually makes it into the main
>> >>> code. Our approach was mostly to take the algorithm used in CitcomS and
>> >>> transfer it to ASPECT with as few content changes as possible, while
>> >>> still putting it into our usual code style. Instead of a lengthy
>> >>> description of our approach, let me just try to answer your questions
>> >>> to point out what we do to circumvent your problems.
>> >>>
>> >>> > After getting the topography value of each cell along the boundary,
>> >>> > there is another template in my code to transfer them to the form of
>> >>> > spherical harmonic coefficients.
>> >>>
>> >>> I am not sure what you mean by template here. In most places we use
>> only
>> >>> one template parameter for most of our classes, which is the
>> dimension.
>> >>> Do you mean function?
>> >>>
>> >>> >
>> >>> > As for the density anomaly, the formula needs to first get the
>> >>> > density anomaly value per cell of each depth layer (in my code, the
>> >>> > number of total sphere layers is determined by the initial refinement
>> >>> > level, i.e., there are 2 layers at 0 refinement, and each sphere layer
>> >>> > has 48 density values, one value per cell).
>> >>>
>> >>> I think this approach is nice for a uniform mesh, but it might break
>> >>> down with AMR. Even if you start with only global refinement, your mesh
>> >>> might get coarsened over time. In our plugin we therefore follow the
>> >>> approach of the depth average postprocessor, in which we make the
>> >>> number of layers an input parameter for the user. Most likely the user
>> >>> will know best what the 'average' resolution of the model will be, and
>> >>> what number of layers might therefore be reasonable. This of course
>> >>> poses the problem that some cells are partly in one layer and partly in
>> >>> another. For now, we decided to simply attribute a cell to the layer
>> >>> that contains its midpoint, roughly as sketched below. The same problem
>> >>> will occur in your approach when you switch on AMR.
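>> >>>
>> >>> In code, the attribution is essentially just (a sketch, not verbatim
>> >>> from our branch):
>> >>>
>> >>>     const double depth     = this->get_geometry_model().depth (cell->center());
>> >>>     const double max_depth = this->get_geometry_model().maximal_depth();
>> >>>     const unsigned int layer
>> >>>       = std::min (static_cast<unsigned int> (depth / max_depth * n_layers),
>> >>>                   n_layers - 1);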
>> >>>
>> >>> > Then I transfer the density anomaly values for each layer into
>> >>> > spherical harmonic coefficients, rho_l_m(r), and integrate the
>> >>> > coefficients from r_CMB to r_surface (at 0 refinement, the numerical
>> >>> > integral contains two layers, 3480 to 4501 km and 4501 to 6371 km). So
>> >>> > in this integral, I need to grab the radius value of each internal
>> >>> > sphere, i.e., the radius of the cells' upper faces globally (in the
>> >>> > simplest, refinement 0, case there are three radii: 3480, 4501, and
>> >>> > 6371). My current approach is to loop over all the cells and get the
>> >>> > location of each cell's upper face midpoint, then compute the distance
>> >>> > between the Earth's center and these midpoints, remove duplicate
>> >>> > distance values from cells at the same depth, and finally push_back
>> >>> > them into an "R_sphere vector" ((3480, 4501, 6371) at 0 refinement).
>> >>> > But I think this approach is not elegant and also will not work in
>> >>> > parallel, since each processor only holds a sub-domain. So is there a
>> >>> > fast way to directly grab the radius of each internal sphere
>> >>> > determined by the cell faces in deal.ii/ASPECT?
>> >>>
>> >>> This is indeed more complicated in parallel, and you also need to be
>> >>> careful about floating-point math here. As far as I understand, your
>> >>> approach is:
>> >>> for all cells
>> >>>   if (the radius of the midpoint of the upper face is already in the
>> >>>       vector)
>> >>>     skip cell
>> >>>   else
>> >>>     add radius
>> >>>
>> >>> However, the above-mentioned approach will break down with an adaptive
>> >>> mesh, because there might be more refined cells in only a part of the
>> >>> domain. I guess the problem you are encountering mostly stems from the
>> >>> fact that you want to align your layers with the cell boundaries
>> >>> (something we did not enforce, although it might be more accurate).
>> >>> But another approach could be the following: the cells divide the total
>> >>> model depth into equal parts. Therefore, the vector you are searching
>> >>> for is essentially 'minimal_radius + (n / number_of_layers) *
>> >>> model_depth' with n being the index of the layer you are in. No need
>> >>> for looping over cells or comparing radii of the face midpoints. This
>> >>> is the approach we used.
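>> >>>
>> >>> In other words, something like (sketch):
>> >>>
>> >>>     // number_of_layers layers => number_of_layers+1 bounding radii,
>> >>>     // equally spaced between the CMB and the surface
>> >>>     std::vector<double> layer_radii (number_of_layers + 1);
>> >>>     for (unsigned int n = 0; n <= number_of_layers; ++n)
>> >>>       layer_radii[n] = minimal_radius
>> >>>                        + (static_cast<double> (n) / number_of_layers)
>> >>>                          * model_depth;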
>> >>>
>> >>> >
>> >>> > My second problem is the comparison between the results of different
>> >>> > processors. To be concise, there is a loop through all the cells to
>> >>> > find which cell's upper face midpoint is closest to a fixed point.
>> >>>
>> >>> What is this fixed point? And what do you want to do there? If you
>> just
>> >>> want to evaluate the solution at that point, consider using the
>> >>> FEFieldFunction class of deal.II to get the solution at an arbitrary
>> >>> point in the domain. If you indeed need to find the cell that contains
>> >>> this point (and the closest face midpoint will always be the one of
>> this
>> >>> cell -- or one of the neighbouring cells, if the neighbours are more
>> >>> refined than the containing cell) then consider using the
>> >>> GridTools::find_active_cell_around_point function.
>> >>>
>> >>> Hope that helps. Best regards,
>> >>> Rene
>> >>>
>> >>> >
>> >>> > On Wed, Oct 21, 2015 at 5:34 PM, Wolfgang Bangerth <
>> bangerth at tamu.edu>
>> >>> > wrote:
>> >>> >
>> >>> >>
>> >>> >> Shangxin,
>> >>> >> I have difficulty following your questions, and I believe it is
>> >>> because
>> >>> >> you have already picked a specific algorithm and you are asking
>> how to
>> >>> >> implement it. But we know neither the algorithm, nor what the
>> problem
>> >>> is
>> >>> >> you are trying to solve.
>> >>> >>
>> >>> >> Can you give us an overview *in formulas* of what you are actually
>> >>> trying
>> >>> >> to compute, along with an overview of *how* you try to compute it?
>> >>> Maybe
>> >>> >> that already helps us give you suggestions how to avoid some of the
>> >>> >> problems you see.
>> >>> >>
>> >>> >> Best
>> >>> >>  W.
>> >>> >>
>> >>> >>
>> >>> >> On 10/21/2015 03:23 PM, Shangxin Liu wrote:
>> >>> >>
>> >>> >>> Hi;
>> >>> >>>
>> >>> >>> I'm now writing a new plugin in the postprocess directory to compute
>> >>> >>> the geoid anomaly. The code is finished, but it needs some further
>> >>> >>> modification to run in parallel. In doing this, I ran into these two
>> >>> >>> problems:
>> >>> >>>
>> >>> >>> 1. The geoid computation needs the integral of the density anomaly
>> >>> >>> from CMB to surface. For now, I do the integral based on cells. So to
>> >>> >>> get all the radii of the internal spheres, the code loops over all of
>> >>> >>> the cells, gets the radius of each cell's upper face, and push_backs
>> >>> >>> the radius into a vector after removing the duplicate values from the
>> >>> >>> cell faces at the same depth. But I suppose this solution is not very
>> >>> >>> elegant and also will not work with multiple MPI processes, since
>> >>> >>> each processor only covers part of the domain. So is there a fast way
>> >>> >>> in deal.ii/ASPECT to directly grab all the radii of the spheres
>> >>> >>> determined by the cell faces within the domain?
>> >>> >>>
>> >>> >>> 2. In the spherical harmonic coefficient computation, I need to
>> >>> >>> evaluate the values at each grid point I created for the sphere
>> >>> >>> integral from the output values at the center of each cell face. My
>> >>> >>> current solution is to loop over all the sphere cells and find the
>> >>> >>> cell face midpoint closest to the grid point (nearest point
>> >>> >>> interpolation). But since each processor only covers part of the
>> >>> >>> domain, I also need to compare among all the processors to find the
>> >>> >>> closest point and the corresponding density anomaly/topo values at
>> >>> >>> that point. I notice that there is a Utilities::MPI::max/min command
>> >>> >>> that can compare among different processors. But my purpose is not
>> >>> >>> only to get the closest distance; I also need to know which processor
>> >>> >>> provides this closest cell face midpoint and the corresponding
>> >>> >>> location of this midpoint, and hence get the density anomaly/topo
>> >>> >>> there. Are there any MPI commands in deal.ii/ASPECT to realize this?
>> >>> >>>
>> >>> >>> Best,
>> >>> >>> Shangxin
>> >>> >>>
>> >>> >>
>> >>> >>
>> >>> >> --
>> >>> >>
>> >>>
>> >>> >> ------------------------------------------------------------------------
>> >>> >> Wolfgang Bangerth               email: bangerth at math.tamu.edu
>> >>> >>                                 www:   http://www.math.tamu.edu/~bangerth/
>> >>> >>
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>>
>> >>>
>> >>> ------------------------------
>> >>>
>> >>> Message: 2
>> >>> Date: Thu, 22 Oct 2015 12:32:29 -0400
>> >>> From: Shangxin Liu <sxliu at vt.edu>
>> >>> To: Wolfgang Bangerth <bangerth at tamu.edu>
>> >>> Cc: aspect-devel <aspect-devel at geodynamics.org>
>> >>> Subject: Re: [aspect-devel] How to grab the radius of every interior
>> >>>         sphere made up from cell faces in the domain and some MPI
>> >>> questions
>> >>>
>> >>>
>> >>> Hi;
>> >>>
>> >>> To make the problems more explicit, here I attach the formula. After
>> >>> transferring the surface topo, CMB topo, and each depth's density
>> >>> anomaly into spherical harmonic coefficients, the geoid anomaly is
>> >>> computed in this way:
>> >>> way:
>> >>>
>> >>> [Inline image: geoid formula; the scrubbed attachment is archived at
>> >>> http://lists.geodynamics.org/pipermail/aspect-devel/attachments/20151022/21fce3d3/attachment.png]
>> >>> G is the gravitational constant, g is the gravitational acceleration,
>> >>> and n, m are the spherical harmonic degree and order. The first term is
>> >>> the integral, from the core mantle boundary r_c to the surface r_s, of
>> >>> the density anomaly spherical harmonic coefficient. The second term is
>> >>> the contribution from the surface dynamic topo; delta_rho_s is the
>> >>> density contrast between mantle and sea water. The third term is the
>> >>> contribution from the CMB dynamic topo; delta_rho_c is the density
>> >>> contrast across the core mantle boundary.
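>> >>>
>> >>> In plain-text form, the expression is roughly (modulo normalization
>> >>> conventions; see the attached image for the exact version):
>> >>>
>> >>>     \delta N_{nm} = \frac{4 \pi G r_s}{g (2n+1)}
>> >>>        \left[ \int_{r_c}^{r_s} \delta\rho_{nm}(r) \left(\frac{r}{r_s}\right)^{n+2} dr
>> >>>               + \delta\rho_s \, h^{s}_{nm}
>> >>>               + \delta\rho_c \, h^{c}_{nm} \left(\frac{r_c}{r_s}\right)^{n+2} \right]
>> >>>
>> >>> where h^{s}_{nm} and h^{c}_{nm} are the spherical harmonic coefficients
>> >>> of the surface and CMB dynamic topography.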
>> >>>
>> >>> For the numerical integral of the first term, my question is how to
>> >>> quickly grab the radius of each spherical layer determined by the
>> >>> cells' upper faces in ASPECT/deal.ii (with 0 initial refinement there
>> >>> are three values (3480, 4501, 6371); with 1 initial refinement there
>> >>> are 5 values...). I think my current approach, looping over every cell,
>> >>> removing duplicate upper face radii from cells at the same depth, and
>> >>> then putting all the radius values into a vector, is not efficient
>> >>> enough and will also not work in parallel.
>> >>>
>> >>> Best,
>> >>>
>> >>> Shangxin
>> >>>
>> >>> On Wed, Oct 21, 2015 at 10:41 PM, Shangxin Liu <sxliu at vt.edu> wrote:
>> >>>
>> >>> > Hi Wolfgang,
>> >>> >
>> >>> > Let me briefly describe my workflow, because computing the geoid in
>> >>> > spherical harmonics is a bit complicated in detail. There are three
>> >>> > parts that contribute to the geoid: surface dynamic topo, CMB dynamic
>> >>> > topo, and the density anomaly throughout the spherical domain. The
>> >>> > surface and CMB dynamic topography are easier to program. I just use
>> >>> > the results of the surface dynamic topography and change the if
>> >>> > condition to search for the core mantle boundary cells to compute the
>> >>> > CMB topo. After getting the topography value of each cell along the
>> >>> > boundary, there is another template in my code to transfer them to the
>> >>> > form of spherical harmonic coefficients.
>> >>> >
>> >>> > As for the density anomaly, the formula needs to first get the density
>> >>> > anomaly value per cell of each depth layer (in my code, the number of
>> >>> > total sphere layers is determined by the initial refinement level,
>> >>> > i.e., there are 2 layers at 0 refinement, and each sphere layer has 48
>> >>> > density values, one value per cell). Then I transfer the density
>> >>> > anomaly values for each layer into spherical harmonic coefficients,
>> >>> > rho_l_m(r), and integrate the coefficients from r_CMB to r_surface (at
>> >>> > 0 refinement, the numerical integral contains two layers, 3480 to 4501
>> >>> > km and 4501 to 6371 km). So in this integral, I need to grab the radius
>> >>> > value of each internal sphere, i.e., the radius of the cells' upper
>> >>> > faces globally (in the simplest, refinement 0, case there are three
>> >>> > radii: 3480, 4501, and 6371). My current approach is to loop over all
>> >>> > the cells and get the location of each cell's upper face midpoint, then
>> >>> > compute the distance between the Earth's center and these midpoints,
>> >>> > remove duplicate distance values from cells at the same depth, and
>> >>> > finally push_back them into an "R_sphere vector" ((3480, 4501, 6371) at
>> >>> > 0 refinement). But I think this approach is not elegant and also will
>> >>> > not work in parallel, since each processor only holds a sub-domain. So
>> >>> > is there a fast way to directly grab the radius of each internal sphere
>> >>> > determined by the cell faces in deal.ii/ASPECT?
>> >>> >
>> >>> > My second problem is the comparison between the results of different
>> >>> > processors. To be concise, there is a loop through all the cells to
>> >>> > find which cell's upper face midpoint is closest to a fixed point. When
>> >>> > running in parallel, each processor returns a local_min_distance value
>> >>> > and the location (x,y,z) of the cell face midpoint corresponding to
>> >>> > this value. I need to find the minimum distance among them and also the
>> >>> > location of the cell face midpoint corresponding to this min_distance.
>> >>> > I know that in ASPECT the namespace function Utilities::MPI::min can
>> >>> > return the minimum value among the results of all the active
>> >>> > processors, but I also need to grab the location of the cell face
>> >>> > midpoint corresponding to this min_distance, held by one of the
>> >>> > processors. I don't know how to do this. Is there any way to loop
>> >>> > through all the MPI processors to find both the min_distance and the
>> >>> > corresponding cell face midpoint? I wonder whether deal.ii/ASPECT has
>> >>> > some command like "variable_name(MPI_id)" to return the result of each
>> >>> > processor.
>> >>> >
>> >>> > Best,
>> >>> > Shangxin
>> >>> >