<div dir="ltr">he Timo,<div><br></div><div>certainly true but i do not observe a large spread in time spend in these collectives among mpi processes.</div><div>i will put in a few barriers to see if i can track down the time sink better.</div>
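Roughly along these lines, I think (just a sketch, not actual code; names such as system_matrix, mpi_communicator and pcout are placeholders for whatever the real members are called):

  // barrier first, so that imbalance from the work before compress() does
  // not get charged to compress() itself, then time the call on every rank
  MPI_Barrier (mpi_communicator);
  const double t0 = MPI_Wtime ();

  system_matrix.compress (dealii::VectorOperation::add);
  // (or the argument-less compress(), depending on the deal.II version)

  const double t1 = MPI_Wtime ();
  const double t_max = dealii::Utilities::MPI::max (t1 - t0, mpi_communicator);
  pcout << "compress(): max over ranks = " << t_max << " s" << std::endl;

If compress() is still slow with the barrier in place, the time really is spent inside compress(); otherwise it was just some ranks arriving late.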
cheers
Thomas

On Fri, Sep 6, 2013 at 5:18 PM, Timo Heister <heister@math.tamu.edu> wrote:
Hey Thomas,

one thing to keep in mind is that if some function takes a long time (like compress() or an MPI sum), it is often because some of the processors arrive late. You want to put an MPI_Barrier() in front of the blocks that you want to analyse.
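The pattern is roughly this (a minimal, self-contained plain-MPI sketch; the empty spot in the middle stands for whatever block you want to measure, e.g. compress() or the pressure normalization):

  #include <mpi.h>
  #include <cstdio>

  int main (int argc, char **argv)
  {
    MPI_Init (&argc, &argv);

    // ... possibly imbalanced work happens here ...

    // line up all ranks first, so that late arrivals are charged to the
    // code above instead of to the block we want to measure
    MPI_Barrier (MPI_COMM_WORLD);
    const double t0 = MPI_Wtime ();

    // block to analyse, e.g. compress() or the pressure normalization

    const double t1 = MPI_Wtime ();
    int rank;
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    std::printf ("rank %d: block took %g s\n", rank, t1 - t0);

    MPI_Finalize ();
    return 0;
  }

Without the barrier, the slowest rank's earlier work leaks into the measurement of the block; with it, the measured time is the block itself.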
<div class="HOEnZb"><div class="h5"><br>
<br>
<br>

On Fri, Sep 6, 2013 at 11:06 AM, Thomas Geenen <geenen@gmail.com> wrote:
> It seems to have been a glitch in the system.
> I reran the experiments, and the walltime spent in the pressure_normalization
> has been reduced a lot;
> I now see timings for all cores similar to the lowest timings in the
> previous run.
>
> For that run I observed large outliers in the walltime for several cores,
> up to 30 times higher, with a lot of time spent in the global MPI routines
> (MPI::sum etc.).
>
> Next on the list is the call to compress:
> dealii::BlockMatrixBase<dealii::TrilinosWrappers::SparseMatrix>::compress
>
> cheers
> Thomas
> PS: I was running between 100 and 200K DoFs per core for the timing runs.
>
>
> On Fri, Aug 30, 2013 at 8:24 PM, Timo Heister <heister@clemson.edu> wrote:
>>
>> > Interesting. The function contains one explicit MPI call, one creation
>> > of a completely distributed vector (should not be very expensive) plus
>> > one copy from a completely distributed to a ghosted vector (should also
>> > not be very expensive). Can you break down which part of the function is
>> > expensive?
>>
>> It might be one of three things:
>> 1. Thomas found a work imbalance in this function (as he said, some
>> processors might not have anything to do). This could show up in his
>> instrumentation as processors being idle (but it does not mean the function
>> takes a significant amount of the total runtime).
>> 2. It is instead a work imbalance/issue in the computation that
>> happens before the normalization, and the timers are not synchronized
>> correctly.
>> 3. He has only very few unknowns per processor, which skews the timings.
>>
>> > That said, I always tried to follow the principle of least surprise, which
>> > would mean making sure that the pressure is normalized or that the
>> > linear systems are indeed solved to sufficient accuracy.
>>
>> I agree.
>>
>> > Instead of globally relaxing tolerances or switching off pressure
>> > normalization, how about having a section in the manual in which we list
>> > ways to make the code faster if you know what you are doing? I'll be
>> > happy to write this section.
>>
>> Sounds good. Some more ideas:
>> - use optimized mode :-)
>> - lower the order of the temperature/compositional discretization
>> - disable postprocessing if it is not needed
>>
>> --
>> Timo Heister
>> http://www.math.clemson.edu/~heister/

--
Timo Heister
http://www.math.tamu.edu/~heister/