<div dir="ltr">he Rene,<div><br></div><div>what is the status of getting dealii to compile with the cray compiler?</div><div>last year we still needed a pre-release of the compiler to get it to work</div><div>its possible to build with shared libraries as well (gcc)</div>
Let me know if you need help with setting that up. The instructions I have
are from before the CMake migration, so I will have to do a quick check to
adapt the hacks to the new build system.

Cheers
Thomas

On Mon, Jan 13, 2014 at 6:05 PM, Rene Gassmoeller <rengas@gfz-potsdam.de> wrote:
Dear all,
I have a follow-up question on the Input/Output issue Wolfgang had. I am
currently doing scaling tests on a Cray XC30 on up to 4000 cores and I am
getting the same error messages. I am writing to a designated $WORK
directory that is intended for data output; however, there is also a fast
local $TMPDIR directory on each compute node.
My question is: is setting "Number of grouped files = 1" always useful for
a large parallel computation on a system with working MPI I/O, or is this
cluster specific (in which case I will just contact the system
administrators for help)?
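For reference, this is where I am setting it in the parameter file
(assuming I have the location right):

  subsection Postprocess
    subsection Visualization
      set Number of grouped files = 1
    end
  end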
Another thing I would like to mention is that this system only allows
static linking. With my limited knowledge I was only able to compile
ASPECT by commenting out the option to dynamically load external libraries
specified by the user. Could somebody who introduced the possibility to
dynamically load libraries at runtime comment on the work it would take to
make this a compile-time switch (roughly along the lines of the sketch
below)? I don't know much about this, otherwise I would search for a
solution myself. In case this creates a longer discussion I will open a
new thread on the mailing list.
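To illustrate what I mean, a compile-time switch could perhaps look
roughly like this; the macro and function names are made up by me, so
please treat this only as a sketch, not as working code from ASPECT:

  #include <deal.II/base/exceptions.h>
  #include <string>
  #include <vector>

  #ifdef ASPECT_HAVE_DLOPEN
  #  include <dlfcn.h>   // needs -ldl at link time
  #endif

  // Try to load the shared libraries the user listed in the input file.
  // On machines that only support static linking the dlopen() code is
  // compiled out, and we abort if the user asks for plugins anyway.
  void load_user_plugins (const std::vector<std::string> &libraries)
  {
  #ifdef ASPECT_HAVE_DLOPEN
    for (unsigned int i = 0; i < libraries.size(); ++i)
      {
        void *handle = dlopen (libraries[i].c_str(),
                               RTLD_LAZY | RTLD_GLOBAL);
        AssertThrow (handle != NULL,
                     dealii::ExcMessage ("Could not load <" + libraries[i]
                                         + ">: " + dlerror()));
      }
  #else
    AssertThrow (libraries.empty(),
                 dealii::ExcMessage ("This build was configured without "
                                     "support for loading shared libraries "
                                     "at run time."));
  #endif
  }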
Thanks for comments and suggestions,
Rene

On 10/14/2013 05:02 PM, Wolfgang Bangerth wrote:
> On 10/11/2013 01:32 PM, Timo Heister wrote:
>>> I'm running this big computation and I'm getting these errors:
>>
>> For a large parallel computation I would use MPI I/O using
>> set Number of grouped files = 1
>
> Good point. I had forgotten about this flag.
>
>>
>>> ***** ERROR: could not move /tmp/tmp.gDzuQS to
>>> output/solution-00001.0007.vtu *****
>>
>> Is this on brazos? Are you writing/moving into your slow NFS ~/? I got
>> several GB/s writing with MPI I/O to the parallel filesystem (was it
>> called data or fdata?).
>
> Yes. I was just writing to something under $HOME. Maybe not very smart...
>
>
>> retry and otherwise abort the computation I would say.
>
> OK, that's what I'm doing now. It shouldn't just fail any more now.
>
> Best
> W.
>
>
_______________________________________________
Aspect-devel mailing list
Aspect-devel@geodynamics.org
http://geodynamics.org/cgi-bin/mailman/listinfo/aspect-devel