[aspect-devel] ASPECT scaling on a Cray XC30
heister at clemson.edu
Wed Feb 5 15:44:37 PST 2014
Very nice results. A couple of comments:
- the Stokes solver is not scaling well weakly, but looks okay in
strong scaling (I wonder why that is happening)
- it looks like scaling with >50k DoFs/core is very good and would
probably continue beyond 6k cores - very nice!
- the scaling on a single node is surprisingly good. I wonder if this
is due to the architecture or due to cpu binding (what MPI library are
you using and what options do you specify for mpirun?)
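For reference, here is what the binding options might look like; on a Cray XC30 the launcher is usually aprun rather than mpirun, and with Open MPI one can print the actual binding. The flags below are my guesses at what is relevant, with placeholder rank counts, not taken from your runs:

```shell
# Cray XC30: aprun handles placement; -cc cpu binds each rank to a core
# (-n total ranks, -N ranks per node; numbers are placeholders)
aprun -n 48 -N 24 -cc cpu ./aspect input.prm

# Open MPI (1.7+): bind ranks to cores and report the binding that was used
mpirun -np 48 --bind-to core --report-bindings ./aspect input.prm
```

If the single-node runs were bound to cores while the multi-node runs were not (or vice versa), that alone could explain the surprisingly good on-node scaling.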
- the Stokes solver always needs something like 30+6 iterations, which
is not ideal. Can you try switching the cheap iterations off and
compare the runtime (a single data point would be enough)?
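For the single data point, the switch would go in the .prm; the parameter name below is from my memory of ASPECT's solver section, so please check it against the manual:

```
# Sketch: disable the cheap (early, low-accuracy) Stokes iterations so the
# solver goes straight to the expensive phase. Parameter name assumed, not
# copied from a verified input file.
subsection Solver parameters
  set Number of cheap Stokes solver steps = 0
end
```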
- can you please post the .prm you used?
- I wonder if it would be worthwhile to look at the second timestep to
ignore effects like MPI buffer creation.
On Wed, Feb 5, 2014 at 4:08 AM, Rene Gassmoeller <rengas at gfz-potsdam.de> wrote:
>> In other words, you see *strong* scaling all the way to 6k cores (and
>> better-than-perfect scaling to 3072 cores). In other words, the fact
>> that my computation is slow is because I'm not using enough cores and I
>> could run the whole thing at ~7 times faster (1 day instead of 1 week!)
>> if I used 8 times as many cores. That's good to know!
> Yes I was quite happy about this as well.
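As a quick sanity check on the estimate above (my arithmetic, not numbers from the runs): 8x the cores at ~7x the speed is about 87% parallel efficiency, and a one-week run would indeed shrink to roughly a day.

```python
# Sanity check of the quoted strong-scaling estimate:
# 8x the cores yielding ~7x the speedup.
cores_factor = 8
speedup = 7

# Parallel efficiency = speedup / core increase
efficiency = speedup / cores_factor

# A one-week run (in hours) at 7x speed
week_hours = 7 * 24
new_runtime_hours = week_hours / speedup

print(efficiency)         # 0.875
print(new_runtime_hours)  # 24.0 (i.e., about one day)
```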
>> PS: What is the meaning of the lines where you strike through the numbers?
> I struck through the models that were run on a single node. They do
> not fall far out of line, but they still scale a bit differently
> than the others (because they do not need inter-node communication).