[aspect-devel] Aspect 2.0 tests failure question

Ravi Kanda kanda.vs.ravi at gmail.com
Fri May 11 11:13:49 PDT 2018


Thanks, Rene, for the detailed response.

So, it seems to me that the tolerance for comparison - i.e., at what 
significant digit the user install and the reference install 'diverge' - 
might be user- and problem-dependent.  As Wolfgang mentioned, if, for 
example, mesh refinement is sensitive to small floating point 
differences, then there may be no "one tolerance fits all" solution. 
Personally, I am not as concerned about the percentage of tests "passed" 
- rather, I am more interested in how significant the differences are 
when integrated over time.  So, as long as I can check against the ref. 
tester results, I can figure this out.

I am now thinking it will be a good learning exercise for me to scan 
through the "failures" for my current install and check for any 
pattern(s).  This might be helpful in debugging future simulations with 
this install, and at the very least, may provide me with a "watch-out" 
list when interpreting output.

Best,
Ravi.
=================


On 05/11/2018 10:27 AM, Rene Gassmoeller wrote:
> Hi all,
>
> let me add to this. I agree with Wolfgang that a relaxed tolerance would
> probably need to be around O(1), because we have test output that should
> be technically zero, but due to floating point imprecision, and due to
> the nature of the iterative solvers, is around 1e-18. The slightest
> change in iteration count can change this to 2e-18 or 0.5e-18, which
> technically is still zero, but drastically different from the reference
> result. Thus, I do not think a relaxed tolerance will be helpful. One
> option we already have is to run all of the tests and check for
> crashes without comparing the output results. You can enable this by
> running cmake with the '-D ASPECT_RUN_ALL_TESTS=ON -D
> ASPECT_COMPARE_TEST_RESULTS=OFF' flags. This will at least tell you if
> all tests run. We can of course discuss making this the default option
> if 'make tests' is run, but I also think this can be misleading, because
> 'running the tests' then actually does not include any comparison. What
> is your opinion?
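
For reference, the reconfigure step with the two flags mentioned above might look as follows (the build directory path is a placeholder; only the flag names come from the post):

```shell
# Reconfigure an existing ASPECT build so the test suite runs every test
# but skips comparing the output against the reference results.
# (Flags as given in the post; /path/to/aspect/build is a placeholder.)
cd /path/to/aspect/build
cmake -D ASPECT_RUN_ALL_TESTS=ON -D ASPECT_COMPARE_TEST_RESULTS=OFF .
```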
>
> @Ravi: 35% failures is actually a pretty normal number, I have seen
> clusters with more failures. To get to 100% passes you can also run the
> official ASPECT tester on your local machine if you have docker
> installed. Take a look at the script
> 'cmake/compile_and_update_tests.sh'. But this will only test the ASPECT
> code, not your local installation, because it does everything inside of
> the docker container.
>
> Best,
>
> Rene
>
>
>
> On 05/11/2018 05:36 AM, Wolfgang Bangerth wrote:
>> Hi Bob,
>>
>>> Wolfgang, do you think it might be valuable to add a relaxed
>>> tolerance to the tests for end users, to suppress the non critical
>>> failures? We could always keep the pure diff when testing on Timo's
>>> server.
>>>
>>> I could play with this at the start of the Hackathon if you wanted,
>>> in lieu of the annual typo hunt? My OCD always plays up when I see
>>> failed tests!
>> I think we all feel that way :-)
>>
>> We already do allow for a tolerance if you have the 'numdiff' program
>> installed, which compares numbers based on their relative size. (If
>> you don't have 'numdiff', the tests just use 'diff', which does a plain
>> text match and doesn't know anything about numbers at all.) So at the
>> very least one would need to have 'numdiff' installed to make this useful.
>>
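
To make the 'diff' versus 'numdiff' behavior concrete, here is a toy example; the file contents are invented, and the numdiff call is left as a comment since it may not be installed:

```shell
# Two outputs that are both numerically zero for all practical purposes,
# differing only in floating-point noise (invented contents):
printf 'RMS velocity: 1.0e-18\n' > reference.txt
printf 'RMS velocity: 2.0e-18\n' > output.txt

# Plain 'diff' does a text match, so it flags the test as failed:
diff reference.txt output.txt || echo "plain diff: test fails"

# 'numdiff' compares numbers instead of text; with an absolute
# tolerance it would accept the difference, e.g.:
#   numdiff -a 1e-12 reference.txt output.txt
```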
>> I don't know by how much one would need to relax the tolerance. I
>> think it would make for an interesting experiment to find that out. It
>> may be that there are some tests where small floating point
>> differences lead to different refinement decisions -- in which case we
>> enter the realm of O(1) differences. Either way, it would be
>> interesting to play with a setup where cmake sets a default (loose)
>> tolerance unless one passes a stricter tolerance requirement to cmake
>> -- which is what one would do on the "official" tester.
>>
>> Nice project for sure!
>>
>> Cheers
>>   W.
>>

-- 
---------------------------------------------------------------------
Ravi Kanda
Research Fellow
Department of Geology
Utah State University
4505 Old Main Hill
Logan UT 84322-4505
--------------------------
Web Page: https://ravi-vs-kanda.github.io

----------------------------------------------------------------------
For a human being, the unexamined life is not worth living - SOCRATES
----------------------------------------------------------------------


