[aspect-devel] Aspect 2.0 tests failure question

Juliane Dannberg judannberg at gmail.com
Mon May 14 11:15:08 PDT 2018


Hi all,

Just one more point here: Ravi, you were talking about looking at which
tests fail and seeing how significant the differences are when
integrated over time.

I just wanted to mention that many tests use a very coarse resolution
to make them run fast, since we run them for every pull request, and
one of their main purposes is to let us see whether a change in the
code results in any change in the model output. So even if there are
significant differences over time, that may simply mean the problem is
under-resolved, and in some cases a small perturbation due to, for
example, a different iteration count or time step size will grow over
time. This is something you would expect in a test, but not in a
production run, where you would run sensitivity tests to make sure
that your problem is properly resolved.

So if you want a point of comparison for the model output on your
system, I think it would be a more useful exercise to look at the
benchmark cases (the input files in the benchmark folder) and check
whether they reproduce the benchmark solutions. That also has the
advantage that you can pick benchmarks for the features you are
specifically interested in.

Best,
Juliane


On 05/14/2018 11:00 AM, Rene Gassmoeller wrote:
> On 05/14/2018 05:35 AM, Wolfgang Bangerth wrote:
>> That's not quite correct: we give numdiff both a relative and an
>> absolute tolerance. Numbers are considered equal if they satisfy
>> either of these criteria. I don't quite remember what the absolute
>> tolerance is, but let's say it is 1e-6, then the numbers
>>    1.234e-9
>> and
>>    2.345e-11
>> are considered equal, even though their *relative* difference is quite
>> large.
>>
> Oh, you are right, we use an absolute tolerance for diffs of 1e-6 and a
> relative tolerance of 1e-8. Still, there are tests that, for example,
> change their number of solver iterations from 6 to 7 (i.e., a ~15%
> difference). We could add special cases for these columns, but since
> the results will change slightly, subsequent time step sizes will be
> different and everything will change. Maybe we should instead collect a
> list of tests that are likely to succeed on all systems (such as
> direct-solver and single-timestep tests) and add them to the list of
> quick tests.
>
>
> Best,
> Rene
>
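
For what it's worth, the either/or comparison described above behaves
roughly as follows. This is just a minimal C++ sketch, not numdiff's
actual implementation; the function name is made up, and the 1e-6/1e-8
defaults are simply the tolerances Rene quotes:

  #include <algorithm>
  #include <cmath>
  #include <cstdio>

  // Two numbers "match" if they are close in EITHER the absolute OR
  // the relative sense -- satisfying one criterion is enough.
  bool numbers_match(double a, double b,
                     double abs_tol = 1e-6, double rel_tol = 1e-8)
  {
    const double diff = std::fabs(a - b);
    if (diff <= abs_tol)   // absolute criterion
      return true;
    // relative criterion, scaled by the larger magnitude
    return diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));
  }

  int main()
  {
    // Wolfgang's example: two tiny numbers pass the absolute test
    // even though their relative difference is large.
    std::printf("%d\n", numbers_match(1.234e-9, 2.345e-11)); // prints 1

    // Rene's example: 6 vs. 7 solver iterations fail both criteria.
    std::printf("%d\n", numbers_match(6.0, 7.0));            // prints 0
  }

With these values, an entry only shows up in the diff if it differs by
more than 1e-6 absolutely *and* by more than 1e-8 relatively, which is
why the 6 vs. 7 iteration counts get flagged.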

----------------------------------------------------------------------
Juliane Dannberg
Project Scientist, UC Davis
jdannberg.github.io <https://jdannberg.github.io/>

