[CIG-SEISMO] Reasonable problem size for one machine

Chen Zou chenzou at uchicago.edu
Fri Feb 9 06:54:45 PST 2018


Hi all,

I am trying to understand the cache performance of a single machine in the
cluster/supercomputer running an sw4 job. To get meaningful results, I need
to find a reasonable problem size for each machine to solve.

I am aware of the SCEC test suite and sample inputs such as LOH.1-h100.in
<https://github.com/geodynamics/sw4lite/blob/master/tests/loh1/LOH.1-h100.in>.
But what is the rule of thumb for choosing the number of machines to run a
job on so that it performs reasonably? Say 1000 DOF per machine?
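
For concreteness, this is the kind of back-of-envelope estimate I have in
mind (a rough sketch only; the domain extents and rank count below are
placeholders in the general range of the LOH.1 benchmark, not values read
from the input file):

# Rough sizing sketch: grid points per MPI rank for a uniform grid.
# The domain extents are illustrative placeholders; adjust to the
# actual values in the chosen input file.

def points_per_rank(x, y, z, h, ranks):
    """Total grid points (x/h+1)*(y/h+1)*(z/h+1) divided across ranks."""
    nx, ny, nz = int(x / h) + 1, int(y / h) + 1, int(z / h) + 1
    return nx * ny * nz / ranks

# Example: ~30 km x 30 km x 17 km domain at h = 100 m on 16 ranks.
print(points_per_rank(30000.0, 30000.0, 17000.0, 100.0, ranks=16))

Is a target along these lines (points or DOFs per rank) the right way to
think about it, or is there a better sizing metric?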

There is another thing I would like to confirm. To my understanding, the
job running on each node is essentially a batch of similar jobs over many
DOFs, with limited data reuse between them. If this understanding is
correct, does it mean that cache performance will not vary much with
problem size, as long as each machine gets a reasonably large number of
DOFs to solve? My current experiments with different problem sizes do
support this.
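
To make that concrete, my mental model is that each time step is roughly a
streaming stencil sweep, so the working set per rank is only a few grid
planes plus the per-point state; once that exceeds the last-level cache,
the miss rate should be roughly independent of total problem size. A small
sketch of that estimate (the cache size, plane count, and bytes per point
are assumptions, not measured values):

# Sketch: does the working set of a streaming 3D stencil sweep fit in cache?
# Assumptions (not measured): ~5 xy-planes live at once, 3 state arrays per
# point, 8-byte doubles, 32 MiB last-level cache.

def stencil_working_set_bytes(nx, ny, planes=5, arrays=3, bytes_per_value=8):
    """Approximate bytes touched while sweeping one xy-plane of the grid."""
    return nx * ny * planes * arrays * bytes_per_value

llc_bytes = 32 * 1024 * 1024  # assumed last-level cache size
for n in (100, 200, 400, 800):  # points per horizontal dimension
    ws = stencil_working_set_bytes(n, n)
    print(n, ws, "fits in LLC" if ws <= llc_bytes else "exceeds LLC")

Does that match how sw4 actually behaves, or is there more reuse across
DOFs than I am assuming?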


Thanks in advance.


Regards,

Chen