[aspect-devel] AGU meeting feedback

Wolfgang Bangerth bangerth at math.tamu.edu
Thu Dec 26 10:03:52 PST 2013


Cedric & all:
thanks for getting this excellent list of things together. It's a long list 
for sure, but I'm sure we can make progress on several of these if we 
coordinate some of the work.

I will be offline for the next 11 days, but let me at least start the 
conversation with some observations so that others can chime in.



> - C.T. showed in his talk (hereby attached) that the Rayleigh-Taylor
> experiment as
> presented in van Keken et al. (JGR 1997) run with ASPECT yields
> abnormal results. It is of course understood that this experiment
> does not constitute a true benchmark since no analytical solution is
> known, but having a look at ASPECT's results vs. the results obtained
> with many other codes over the past 15 years leads to the inescapable
> conclusion that ASPECT (as provided to the user in version r1889)
> is the most 'off' in terms of second Vrms peak location and height.
> Even more worrying is the lack of convergence to stable results
> when resolution is increased.
> Despite varying many parameters this puzzling effect could not be eliminated.
> Since many users are currently busy studying plume dynamics, it is our
> belief that this problem should be addressed within the briefest delay.

I first saw this problem when Elvira Mulyukova showed it to me. Like you, we 
played with a number of parameters but could not identify what causes it. I 
didn't, at the time, realize how widely used the benchmark is, and
had convinced myself that it must be due to the fact that the benchmark simply 
exhibits an instability where the exact solution is not completely 
deterministic and depends on parameters that are non-physical (such as the 
mesh resolution). I came to this conclusion because the first peak is (correct 
me if I'm wrong) actually pretty well compatible with the previous data.

Your comparison makes me rethink this. If all different codes produce 
essentially the same numbers, then either (i) all of these codes use 
essentially the same method and consequently all share the same bias, or (ii) 
Aspect is wrong. I tend to think (ii) is the case, but I have absolutely no 
idea why that might be so.

You show that you've played with a great many parameters. Were all of them 
using the entropy artificial viscosity stabilization? Have you tried to go 
back to the "traditional" first order viscosity? You should be able to get 
there by setting the cR parameter to essentially infinity (say, to 10^300). 
The entropy viscosity method takes the minimum of the "traditional" 
stabilization and the term that is proportional to cR.


> - During an informal conversation around a poster, a researcher himself busy
> implementing compositional fields as a way to track materials told C.T. that
> after careful testing he concluded that the entropy viscosity method was not
> the best out there, which somehow goes against the philosophy of the code.
> Given that the dynamics of lithospheric deformation is driven
> more by density differences between materials than temperature effects,
> having a thoroughly tested and validated stabilisation scheme seems to be a
> pressing task.

Like you mention, it's a complex issue. We've done many comparisons and in 
many cases, the method turned out to be better (in Aspect as well as other 
codes we've written for other equations). That said, I am willing to believe 
that there are cases where it is not. Would the researcher mentioned be 
willing to provide a test case that demonstrates this?


> - The cookbooks provided in the manual/distribution are useful but new ones
> should be added (such as the R-T experiment when fixed), especially ones which
> would involve large deformation of multiple materials. All users happily
> agreed to
> provide some in the near future.

Yes. I would most definitely be happy to work with anyone who's interested in 
helping out with this effort. These cookbooks look deceptively simple, but 
they're a fair amount of work because one has to find the right setup, create 
pictures that try to illuminate the issues one wants to demonstrate with a 
given cookbook, etc. We would definitely be very appreciative of any help we 
can get!


> - Given that ASPECT relies on many libraries (DEAL.II, trilinos, p4est, blas,
> ...),
> and that users install and run it on rather different systems,
> few of us were certain that the code was running at
> peak performance. We therefore suggested that all cookbooks should come with
> an indicative run time (provided by us) for first order comparison.

That's an excellent suggestion. I've made a margin note in the cookbooks 
chapter to this effect.

I'm sure you've already noted that the manual now also contains a section on 
making Aspect run faster.


> - The virtual machine should be better advertised (in the manual, and online
> too), as it is
> an easy way to test the code before switching to ASPECT.

Yes. We've brought out a new deal.II release a couple of days ago and will 
follow up with a new Aspect release after the new year. I'm sure Timo will be 
willing to also update the virtual machine and we can make sure we link better 
to it from the website.


> - Nonlinear solvers (type and properties) should be better described and
> thoroughly
> checked. Pertaining parameters to said solvers should also be made available
> to the user
> with advice on how to tune them.

Right. These nonlinear solvers have been around for a while in the code, but 
they weren't well tested (and still have a bit to go). I've fixed a number of 
bugs in the past couple of months. When I get around to fully testing them, 
then their parameters will also be better described and they will earn their 
own cookbooks.


> - The issue of stress boundary conditions was brought up by those of us concerned
> with lithospheric-scale experiments. Could these be implemented ? if so in
> which delay ?
> Could so-called open boundary conditions be implemented too ? Anne would like
> to cooperate on those.

We'd need to agree on what exactly "stress boundary conditions" and "open 
boundary conditions" refer to :-) I presume that the first is a prescribed 
force on the boundary? And the latter would then simply be zero force? Zero 
force is already the default if you don't specify anything else for a 
particular boundary. A prescribed, nonzero force should not be terribly 
difficult to implement.

Anne -- can you elaborate?

(Parenthetically, what would be more difficult to implement is a "floating" 
boundary condition where the force is provided by the hydrostatic pressure at 
a given depth because we then have to have a way of prescribing this pressure.)


> - Parameters regarding the pre-conditioner and the solver are at the moment
> hidden
> from the user. It is probably a good thing for beginners, but could there be some
> sort of expert mode which would allow to tune these algorithms from the input
> file
> for more advanced users ?

Possibly. Which parameters in particular would you be interested in? The 
details of the AMG preconditioner?


> - While Courant condition-based timesteps are useful, we would also like to
> have the possibility
> of using constant timesteps without having to resort to hard coding.

That, too, should not be too difficult. If someone wanted to write a little 
patch that
   - introduces a new parameter, say, "Time step determination" with values
     "fixed at xxx" and "CFL-based" where "xxx" can be any number (in
     source/simulator/parameters.cc)
   - reads this from the input file (same place)
   - uses it in the place where we currently set the time step
then I'm sure we'll be happy to accept such a patch.
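For illustration, the input file section such a patch would enable might look something like the following. (These parameter names are hypothetical; they do not exist in the current code and merely sketch the proposal above.)

```
# Hypothetical input file fragment -- these parameters do not exist
# in Aspect as of r1889; they only sketch what the patch would add.
set Time step determination = fixed at 1e5
# ...or, to keep the current behavior:
# set Time step determination = CFL-based
```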

> - Dynamic mesh refinement is a very great feature of the code but it was our
> opinion
> that the coarsening sometimes yields very very coarse meshes and a minimum
> refinement level
> (overriding the coarsening algorithm if necessary) would be greatly appreciated.
> Juliane has been working on this issue recently and seems to be ready to
> incorporate this
> in the main version.

Same here -- patches are definitely welcome. The place to modify is 
core.cc:refine_mesh().


> - The issue of being able to switch between trilinos and petsc in a simple way
> was brought up.
> How much effort would this take ? Could it be done ?

Timo did a lot of work in deal.II to allow for such experiments. I will let 
him comment on this.


> - Some of us find the code slow. Given its unique features and algorithms, this
> feeling has to be backed up by hard facts. Could CIG demonstrate that for
> comparable simulations,
> it does indeed do better than CITCOM ? Katrina is willing to look at this
> point with
> the help of CIG. How could we define an objective metric for performance so
> that one can compare his or
> her code with ASPECT ?

I think it would be great to have such a comparison, preferably in terms of 
error as a function of run time. I don't know how it will turn out (though I 
hope it helps that Aspect is now at least 50% faster than it was 6 months 
ago). I'll use the opportunity to again point at the new section in the 
manual on how to make Aspect run faster.


> - CITCOM and many other codes of the past 20 years use Q1P0 elements. While we
> understand
> that this element is not without issues, its stencil and its first order
> nature would most likely
> yield smaller matrices which are faster to solve. However it should be
> supplemented with
> a pressure smoothing algorithm. What is the developer stance on this element,
> and are you willing
> to implement a pressure filtering scheme ? (C.T. has some experience with
> these schemes.)

I don't know enough about these schemes. Given the length of the TODO list, 
it's not close to the top for me. The Q1P0 element is known to be very 
diffusive and to have stability issues, which makes me rather reluctant to 
use it.

Simply running it would probably not be terribly difficult -- I think you may 
almost be able to choose it from the input file (choose polynomial degree = 1, 
locally conservative = on, or similar).
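If I remember the parameter names correctly, that would be something like the fragment below -- but this is from memory and untested, so please double-check against the manual before relying on it:

```
# Untested sketch -- parameter names from memory; verify against the
# manual before using.
subsection Discretization
  set Stokes velocity polynomial degree       = 1
  set Use locally conservative discretization = true
end
```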


> - At the moment, the general 'simple' material model can only deal with two
> materials (a background one
> and an additional composition). It is our opinion that a simple material model
> should
> allow for an indefinite number of compositions and that the averaging scheme
> equations should be
> modified accordingly. At the same time, the manual should mention and
> justify why the geometric averaging for viscosity has been chosen.
> Anne has such a model and is willing to provide it in the near future.
> This would allow CT for instance to provide the setup for the Schmeling et al.
> 2008 experiment
> or the Crameri et al. 2012 experiment as additional cookbooks.

The "simple" model at one point really was "simple", but it's gained all sorts 
of bells and whistles by now. I'm not too fond of this. My preference would be 
to define a different class that has a different name and does what you 
propose. I see no reason to not add such a model if anyone wanted to 
contribute it!


> - Ian has been looking at the implementation of a real free surface +
> stabilisation and is willing to
> introduce it in the release code with the help/supervision of the developers.

Great.

> - The manual shows results of the Blankenbach et al. (1989) experiment for all
> three
> Rayleigh numbers but surprisingly results are truncated in time and the system
> has not yet reached steady state.

I assume you refer to the "play time" examples of section 6.1.1? Can you 
elaborate?

(In general, I am used to writing deal.II tutorial programs where the focus is 
more on showing "how it can be done" than what the actual results are. I 
suppose this is what was the driving force here as well -- I didn't care that 
the results are truncated, because the section shows how you can obtain the 
results you are looking for. But this can of course be changed if we wanted to 
include not just the "method" but also the "results".)



> - There were informal talks these past months on the forum about level set
> methods
> and active tracers. Could the developers clarify their intentions with regards to
> these approaches ?

My impression from talking to the community is that active tracers become very 
expensive when you move from 2d to 3d. My personal thought is that active 
fields are a much better choice, but I recognize that the mantle convection 
community has a historical perspective that favors particles over fields.

As far as level sets are concerned -- you can have them already today. You 
just need to define your material in a way so that c>0 means one material and 
c<0 means another. The 'simple' material model already does that.

Or was the question whether we want to add any sharpening/renormalization 
steps when a field is to be interpreted as a level set?


> - Tutorial videos could also be beneficial to new users.

Yes. This is difficult. I seem to have lost my sponsor for the videos. Our TV 
station, KAMU, recorded the deal.II videos as part of its educational mission. 
I've inquired whether they'd be willing to record more of these videos, but 
haven't been able to get anywhere other than hearing "let me see if we can 
find the money for this". I recognize that I'm not particularly high on their 
list of priorities. I will need to ask this again and see whether they're 
willing to record and produce more.


> We also thought that in order for ASPECT to be accepted by the community
> more pro-active steps should be taken at EGU, AGU and similar meetings:
> ASPECT developers should make sure they come to these meetings and promote
> their work while users show applications and extensions. Mini-workshops and
> users social activities could also be organised during these meetings to promote
> cooperation. Perhaps in Europe, the users could help organise these.
>
> The deadline for EGU is January 16. Would it be a good idea that all
> ASPECT users submit their abstracts in the same session to provide a strong
> front?

I, for one, would think that's a great idea, but I won't be able to go myself.

Eric had proposed CIDER (California) and SEDI (Japan) as venues for tutorials 
next summer and I might consider SEDI. The timing for CIDER is difficult for 
me to reconcile with other travel.


> Although realising this is a long list of points, we look forward to your
> response and a fruitful cooperation.

I do too. Thanks for bringing these issues up.

I would like to throw in one point in this direction as well. Right now, the 
number of people who are working on developing Aspect is relatively small -- 
together less than one person-equivalent. My lesson from deal.II is that what 
it takes for a project to succeed is that we build a *community* of people who 
work together, send patches, and help out with questions. I hope that meetings 
like the one you had at AGU can help start such a community (and for this 
alone I wish I had been able to come to AGU). I'm sure I'm speaking for both 
Timo and myself in saying that we would love to work with others on 
incorporating whatever patches they have. The point is that you had 9 people 
at your meeting; if everyone had patches worth only 10% of their development 
time, then we'd have twice the progress in Aspect. Let's see if we can't make 
that happen!

Cheers
  W.

-- 
------------------------------------------------------------------------
Wolfgang Bangerth               email:            bangerth at math.tamu.edu
                                 www: http://www.math.tamu.edu/~bangerth/


