[aspect-devel] Aspect 2.0 tests failure question

Ravi Kanda kanda.vs.ravi at gmail.com
Mon May 14 11:55:29 PDT 2018


Juliane,

Oh, I see.  Yes, it is important to do spatio-temporal
sensitivity/resolution tests to make sure the mechanism of interest is
well resolved and stably reproducible.  Trying to reproduce appropriate
benchmarks before moving on to production runs is indeed a good idea -
that is what I have done in the past (with other codes) to check the
basic integrity of my installation.

Thanks,
Ravi.
=========================

On 05/14/2018 12:15 PM, Juliane Dannberg wrote:
>
> Hi all,
>
> just one more point here: Ravi, you were talking about looking at 
> which tests fail, and seeing how significant the differences are when 
> integrated over time.
>
> So I just wanted to mention that many tests use a very coarse
> resolution to make them run fast, since we run them for each pull
> request, and one of their main purposes is to let us see whether a
> change in the code results in any change in the model output. So even
> if there are significant differences over time, that might just be
> because the problem is under-resolved, and in some cases a small
> perturbation due to, for example, a different iteration count or time
> step size may grow over time. That is something you would expect in a
> test, but not in a production run, where you would run sensitivity
> tests to make sure that your problem is properly resolved.
>
> So if you want a point of comparison for the model output on your
> system, I think it would be a more useful exercise to look at the
> benchmark cases (the input files in the benchmark folder) and see
> whether they reproduce the benchmark solution. That would also have
> the advantage that you can look at the benchmarks for the features
> you are specifically interested in.
>
> Best,
> Juliane
>
>
> On 05/14/2018 11:00 AM, Rene Gassmoeller wrote:
>> On 05/14/2018 05:35 AM, Wolfgang Bangerth wrote:
>>> That's not quite correct: we give numdiff both a relative and an
>>> absolute tolerance. Numbers are considered equal if they satisfy
>>> either of these criteria. I don't quite remember what the absolute
>>> tolerance is, but let's say it is 1e-6; then the numbers
>>>    1.234e-9
>>> and
>>>    2.345e-11
>>> are considered equal, even though their *relative* difference is quite
>>> large.
>>>
>> Oh, you are right: we use an absolute tolerance of 1e-6 for diffs and
>> a relative tolerance of 1e-8. Still, there are tests that, for
>> example, change their number of solver iterations from 6 to 7 (i.e.
>> ~15% difference). We could add special cases for these columns, but
>> since the results will change slightly, subsequent time step sizes
>> will be different and everything will change. Maybe instead we should
>> collect a list of tests that are likely to succeed on all systems
>> (such as direct-solver or single-time-step tests) and add them to the
>> list of quick tests.
>>
>>
>> Best,
>> Rene
>>
>
> ----------------------------------------------------------------------
> Juliane Dannberg
> Project Scientist, UC Davis
> jdannberg.github.io <https://jdannberg.github.io/>

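As an aside on the tolerance discussion quoted above, here is a minimal
Python sketch of the "equal if either criterion is satisfied" rule that
Wolfgang describes, using the 1e-6 absolute / 1e-8 relative values from
Rene's reply. It is only an illustration of the idea, not numdiff's
actual implementation, and the name roughly_equal is made up for the
example:

    # Sketch only -- not numdiff's actual implementation. Tolerance
    # values are the ones quoted in Rene's reply.
    ABS_TOL = 1e-6   # absolute tolerance on the difference
    REL_TOL = 1e-8   # relative tolerance on the difference

    def roughly_equal(a, b, abs_tol=ABS_TOL, rel_tol=REL_TOL):
        """Equal if the absolute OR the relative criterion holds."""
        diff = abs(a - b)
        if diff <= abs_tol:                   # absolute criterion
            return True
        return diff <= rel_tol * max(abs(a), abs(b))  # relative criterion

    # Wolfgang's example: both numbers are tiny, so the absolute
    # criterion accepts them despite the large *relative* difference.
    print(roughly_equal(1.234e-9, 2.345e-11))   # prints True

Under the same rule, a solver iteration count going from 6 to 7 is
flagged, since the difference of 1 exceeds both tolerances by many
orders of magnitude.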
-- 
---------------------------------------------------------------------
Ravi Kanda
Research Fellow
Department of Geology
Utah State University
4505 Old Main Hill
Logan UT 84322-4505
--------------------------
Web Page: https://ravi-vs-kanda.github.io

----------------------------------------------------------------------
For a human being, the unexamined life is not worth living - SOCRATES
----------------------------------------------------------------------
