[aspect-devel] pressure normalization
Thomas Geenen
geenen at gmail.com
Fri Sep 6 08:54:04 PDT 2013
Hey Timo,
Certainly true, but I do not observe a large spread in the time spent in
these collectives among the MPI processes.
I will put in a few barriers to see if I can track down the time sink
better.
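
For reference, a minimal sketch of the barrier-then-time pattern under
discussion (plain MPI; do_block() is just a placeholder for the code under
investigation, e.g. the pressure normalization):

#include <mpi.h>
#include <cstdio>

void do_block();  // placeholder, not an actual ASPECT function

// The barrier absorbs the wait for late-arriving ranks, so the timed
// region reflects the block's own cost, and the wait time measures the
// arrival skew on each rank.
void profile_block(MPI_Comm comm)
{
  const double t0 = MPI_Wtime();
  MPI_Barrier(comm);            // late arrivals make everyone wait here
  const double t1 = MPI_Wtime();
  do_block();
  const double t2 = MPI_Wtime();

  int rank;
  MPI_Comm_rank(comm, &rank);
  std::printf("rank %d: wait %.3fs  block %.3fs\n", rank, t1 - t0, t2 - t1);
}
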
cheers
Thomas
On Fri, Sep 6, 2013 at 5:18 PM, Timo Heister <heister at math.tamu.edu> wrote:
> Hey Thomas,
>
> one thing to keep in mind is that if some function takes a long time
> (like compress() or MPI_sum), it may simply be because some of the
> processors arrive late. You want to put MPI_Barrier() in front of blocks
> that you want to analyse.
>
>
>
> On Fri, Sep 6, 2013 at 11:06 AM, Thomas Geenen <geenen at gmail.com> wrote:
> > It seems to have been a glitch in the system. I reran the experiments,
> > and the walltime spent in the pressure_normalization has dropped a lot:
> > I now see timings for all cores similar to the lowest timings in the
> > previous run.
> >
> > For that run I had observed large outliers in the walltime for several
> > cores, up to 30 times higher, with a lot of time spent in the global MPI
> > routines (MPI::sum etc.).
> >
> > Next on the list is the call to compress:
> > dealii::BlockMatrixBase<dealii::TrilinosWrappers::SparseMatrix>::compress
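
A note on where that time typically comes from: in deal.II's parallel
assembly pattern, entries destined for rows owned by other processors are
queued locally and only exchanged when compress() is called, so assembly
imbalance naturally surfaces inside compress(). A sketch of that pattern
(fragment only; dof_handler, constraints, cell_matrix, local_dof_indices
and system_matrix are the usual deal.II names, not ASPECT's actual code):

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      // ... assemble cell_matrix and local_dof_indices on this cell ...
      constraints.distribute_local_to_global(cell_matrix,
                                             local_dof_indices,
                                             system_matrix);
    }
// collective: ships the queued off-processor entries; ranks that finish
// assembly late make everyone else wait inside compress()
system_matrix.compress(dealii::VectorOperation::add);
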
> >
> > cheers
> > Thomas
> > PS: I was running between 100 and 200K DoFs per core for the timing runs.
> >
> >
> > On Fri, Aug 30, 2013 at 8:24 PM, Timo Heister <heister at clemson.edu> wrote:
> >>
> >> > Interesting. The function contains one explicit MPI call, one creation
> >> > of a completely distributed vector (should not be very expensive) plus
> >> > one copy from a completely distributed to a ghosted vector (should also
> >> > not be very expensive). Can you break down which part of the function
> >> > is expensive?
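
One way to get that breakdown is to barrier-synchronize and time each of
the three parts separately, then look at the spread across ranks. A
sketch with an illustrative helper (assumes C++11; the usage lines use
made-up names, with 'ghosted = distributed' standing in for the copy
inside the function):

#include <deal.II/base/mpi.h>
#include <mpi.h>
#include <functional>

// Time one phase in isolation: the barrier keeps stragglers from the
// previous phase from being charged to this one.
double time_phase(MPI_Comm comm, const std::function<void ()> &phase)
{
  MPI_Barrier(comm);
  const double start = MPI_Wtime();
  phase();
  return MPI_Wtime() - start;
}

// usage, e.g. for the ghosted copy:
//   const double t = time_phase(comm, [&]{ ghosted = distributed; });
//   const dealii::Utilities::MPI::MinMaxAvg s =
//     dealii::Utilities::MPI::min_max_avg(t, comm);
//   // s.max much larger than s.min points at an imbalance in this phase
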
> >>
> >> It might be one of three things:
> >> 1. Thomas found a work imbalance in this function (as he said, some
> >> processors might not have anything to do). This could show up in his
> >> instrumentation as processors being idle (but it does not mean the
> >> function takes a significant amount of total runtime).
> >> 2. It is instead a work imbalance/issue in the computation that happens
> >> before the normalization, and the timers are not synchronized correctly.
> >> 3. He has only very few unknowns per processor, which skews the timings.
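
On point 2, one way to rule out unsynchronized timers is deal.II's
MPI-aware Timer, which can synchronize the wall time across ranks when
stopped (the flag was called sync_wall_time in the deal.II of that era;
a sketch, with comm standing in for the run's communicator):

#include <deal.II/base/timer.h>

dealii::Timer timer(comm, /*sync_wall_time=*/true);
// ... the computation that happens before the normalization ...
timer.stop();                              // does an extra synchronization
const double seconds = timer.wall_time();  // consistent across all ranks
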
> >>
> >> > That said, I always tried to follow the principle of least surprise,
> >> > which would mean making sure that the pressure is normalized or that
> >> > the linear systems are indeed solved to sufficient accuracy.
> >>
> >> I agree.
> >>
> >> > Instead of globally relaxing tolerances or switching off pressure
> >> > normalization, how about having a section in the manual in which we
> >> > list ways to make the code faster if you know what you are doing? I'll
> >> > be happy to write this section.
> >>
> >> Sounds good. Some more ideas:
> >> - use optimized mode :-)
> >> - lower order of temperature/compositional discretization
> >> - disable postprocessing if not needed
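
The last two ideas map directly onto the parameter file. A hypothetical
excerpt (parameter names as in ASPECT's Discretization and Postprocess
sections; check the manual of your version for exact spellings and
defaults, and remember to build deal.II/ASPECT in optimized rather than
debug mode for the first idea):

# lower-order temperature/composition discretization (the default degree is 2)
subsection Discretization
  set Temperature polynomial degree = 1
  set Composition polynomial degree = 1
end

# keep only the postprocessors you actually need
subsection Postprocess
  set List of postprocessors = velocity statistics
end
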
> >>
> >> --
> >> Timo Heister
> >> http://www.math.clemson.edu/~heister/
> >
> >
>
>
>
> --
> Timo Heister
> http://www.math.tamu.edu/~heister/
>