[aspect-devel] How to grab the radius of every interior sphere made up from cell faces in the domain and some MPI questions

Rene Gassmoeller rengas at gfz-potsdam.de
Thu Oct 22 09:31:44 PDT 2015

Hi Shangxin,
I think I can help you with some of your questions, but first:
I think I mentioned at the hackathon, and also at one of the video
user meetings earlier this year, that I was already working together
with some others on a geoid postprocessor. At the hackathon and
afterwards, Jacky, Ian, Anthony Osei-Tutu (a PhD student with us in
Potsdam) and I put something together that is already close to working.
Since the project that gave us the incentive to write the postprocessor
developed in a slightly different direction, it was never brought into a
shape where we opened a pull request for the main version (it remained a
branch on my github repository). I did not know you were working on
something similar, otherwise I would have opened one earlier. But I have
done that now, so you can have a look at:


The code is not exactly finished; we never benchmarked it properly, and
it gives some curious patterns at low resolutions, but it seems to work
well at higher resolutions. Maybe you have more experience with testing
geoid output, or you could ask Scott for some test cases?

From what I read in your mail, we might be able to merge the best of the
two versions into something that actually makes it into the main code.
Our approach was mostly to take the algorithm used in CitcomS and
transfer it to ASPECT with as few content changes as possible, while
still putting it into our usual code style. Instead of a lengthy
description of our approach, let me just try to answer your questions
and point out how we circumvent your problems.

> After getting the
> topography value of each cell along the boundary, there is a another
> template in my code to transfer them to the form of spherical harmonic
> coefficients.

I am not sure what you mean by template here. Most of our classes use
only one template parameter, which is the dimension. Do you mean a
function?

> As for the density anomaly, The formula needs to first get the density
> anomaly value per cell of each depth layer (in my code, the number of total
> sphere layers is determined by initial refinement degree, i.e, there are 2
> layers in 0 refinement degree, each sphere layer has 48 density values, one
> value per cell). 

I think this approach is fine for a uniform mesh, but it might break
down with AMR. Even if you start with only global refinement, your mesh
might get coarsened over time. In our plugin we therefore follow the
approach of the depth average postprocessor and make the number of
layers an input parameter. The user will most likely know best what the
'average' resolution of the model will be, and what number of layers is
reasonable for it. This of course poses the problem that some cells are
partly in one layer and partly in another. For now, we decided to simply
attribute each cell to the layer that contains its midpoint. The same
problem will occur in your approach when you switch on AMR.
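To make the midpoint attribution concrete, here is a minimal sketch (the
names are hypothetical, not the actual plugin code) of mapping a cell
midpoint's depth to a layer index when the number of layers is a user
parameter:

```cpp
#include <cassert>

// Hypothetical helper: attribute a cell to the depth layer that contains
// its midpoint, with 'n_layers' chosen by the user (as in the depth
// average postprocessor). This works regardless of how the mesh is
// refined or coarsened.
unsigned int layer_of_midpoint(const double midpoint_depth,
                               const double model_depth,
                               const unsigned int n_layers)
{
  const unsigned int idx =
    static_cast<unsigned int>(midpoint_depth / model_depth * n_layers);
  // A midpoint exactly at the maximal depth would otherwise index one
  // past the last layer, so clamp it.
  return (idx < n_layers) ? idx : n_layers - 1;
}
```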

> Then I transfer the density anomaly values for each layers
> into spherical harmonic coefficients, rho_l_m(r), and integral the
> coefficients from r_CMB to r_surface (in the 0 refinement degree, the
> numerical integral contains two layers, 3480 to 4501km and 4501 to 6371
> km). So in this integral, I need to grab the radius value of each internal
> sphere, i.e, the radius of the cell's upper face globally (in simplest
> refinement degree 0 case there are three radius 3480,4501,and 6371). My
> current approach is to loop over all the cells and get the location of each
> cell's upper face midpoint, then compute the distance between earth center
> and these midpoint, remove the same distance value of the cell at the same
> depth, finally push_back them to a "R_sphere vector" ((3480,4501,6371) in 0
> refinement degree). But I think this approach is not an elegant way and
> also will not work in parallel since each processor only holds a
> sub-domain. So is there a fast way to directly grab the radius of each
> internal sphere determined by the cell face in deal.ii/ASPECT?

This is indeed more complicated in parallel, and you also need to be
careful about floating-point comparisons here. As far as I understand,
you do:

for all cells
  if (the radius of the midpoint of the upper face is already in the vector)
    skip cell
  else
    add radius

However, the above-mentioned approach will break down with an adaptive
mesh, because there might be more refined cells in only a part of the
domain. I guess the problem you are encountering mostly stems from the
fact that you want to align your layers with the cell boundaries
(something we did not enforce, although it might be more accurate).
Another approach could be the following: the cells divide the total
model depth into equal parts. Therefore, the vector you are searching
for is essentially 'minimal_radius + (n / number_of_layers) *
model_depth', with n being the number of the layer you are in. There is
no need for looping over cells or comparing radii of the face midpoints.
This is the approach we use in our plugin.
> My second problem is the comparison between the results of different
> processor. To be concise, there is a loop through all the cells to find
> which cell's upper face midpoint is the closest to a fixed point. 

What is this fixed point? And what do you want to do there? If you just
want to evaluate the solution at that point, consider using the
FEFieldFunction class of deal.II to get the solution at an arbitrary
point in the domain. If you indeed need to find the cell that contains
this point (and the closest face midpoint will always be the one of this
cell -- or one of the neighbouring cells, if the neighbours are more
refined than the containing cell) then consider using the
GridTools::find_active_cell_around_point function.
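Regarding the second part of your question (knowing which processor owns
the closest point): MPI's MPI_MINLOC reduction does exactly this. Each
rank contributes a (value, rank) pair, and the reduction returns the
minimum value together with the rank that supplied it. Here is a sketch
of the semantics, emulated serially, with the shape of the real MPI call
shown in the comment:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Emulation of what MPI_MINLOC computes. With real MPI one would write:
//   struct { double dist; int rank; } local = {my_min_distance, my_rank}, global;
//   MPI_Allreduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MINLOC, comm);
// Afterwards global.rank is the processor owning the closest point; that
// rank can then broadcast the location and the density anomaly/topography
// value found there.
std::pair<double, int>
minloc(const std::vector<std::pair<double, int>> &per_rank_minima)
{
  // std::pair compares lexicographically, so the smallest distance wins;
  // ties are broken in favor of the lower rank, as MPI_MINLOC also does.
  return *std::min_element(per_rank_minima.begin(), per_rank_minima.end());
}
```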

Hope that helps. Best regards,

> On Wed, Oct 21, 2015 at 5:34 PM, Wolfgang Bangerth <bangerth at tamu.edu>
> wrote:
>> Shangxin,
>> I have difficulty following your questions, and I believe it is because
>> you have already picked a specific algorithm and you are asking how to
>> implement it. But we know neither the algorithm, nor what the problem is
>> you are trying to solve.
>> Can you give us an overview *in formulas* of what you are actually trying
>> to compute, along with an overview of *how* you try to compute it? Maybe
>> that already helps us give you suggestions how to avoid some of the
>> problems you see.
>> Best
>>  W.
>> On 10/21/2015 03:23 PM, Shangxin Liu wrote:
>>> Hi;
>>> I'm now making a new plugin into the postprocess directory to compute
>>> geoid anomaly. The code has been finished but it needs some further
>>> modification to run in parallel. To realize this, I run into these two
>>> problems:
>>> 1. The geoid computation needs the integral of the density anomaly from
>>> CMB to surface. For now, I do the integral based on cells. So to get all
>>> the radii of the internal spheres, the code loops over all of the cells
>>> and get the radius of each cell's upper face and push_back the radius
>>> into a vector after removing the same values from the cell faces at the
>>> same depth. But I suppose this solution is not so elegant and also will
>>> not work in the multiple MPI processors since each processor will only
>>> cover part of the domain. So is there a fast way in deal.ii/ASPECT to
>>> directly grab all the radii of the spheres determined by the cell faces
>>> within the domain?
>>> 2. In the spherical harmonic coefficients computation, I need to
>>> evaluate the values at each grid point I created for the sphere
>>> integral from the output values at the center of each cell face. My
>>> current solution is to loop over all the sphere cells and find the
>>> closest cell face midpoint to the grid point (nearest point
>>> interpolation). But since each processor will only cover part of the
>>> domain so I also need to compare among all the processors to find the
>>> closest point and the corresponding density anomaly/topo values at that
>>> point. I notice that there is a Utilities::MPI::max/min command can
>>> compare among different processors. But my purpose is not only to get
>>> the closest distance, but also to know which processor provides this
>>> closest cell face midpoint and the corresponding location of this closest
>>> cell midpoint, hence to get the density anomaly/topo there. Are there any
>>> MPI commands in deal.ii/ASPECT to realize this?
>>> Best,
>>> Shangxin
>> --
>> ------------------------------------------------------------------------
>> Wolfgang Bangerth               email:            bangerth at math.tamu.edu
>>                                 www: http://www.math.tamu.edu/~bangerth/
> _______________________________________________
> Aspect-devel mailing list
> Aspect-devel at geodynamics.org
> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/aspect-devel
