[CIG-SHORT] elapsed time concerns

Brad Aagaard baagaard at usgs.gov
Fri Nov 18 15:04:58 PST 2016


Alberto,

The PETSc log summary provides important performance information.

Use these settings to see what is happening in the solver and the 
performance (as used in examples/3d/hex8/pylithapp.cfg):


[pylithapp.petsc]
ksp_monitor = true
ksp_view = true
snes_monitor = true
snes_view = true
log_view = true
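
The same options can typically also be given on the command line for a
one-off run; a sketch, where mysim.cfg is a placeholder for whatever
parameter file you normally run:

pylith mysim.cfg \
    --petsc.ksp_monitor=true \
    --petsc.ksp_view=true \
    --petsc.snes_monitor=true \
    --petsc.snes_view=true \
    --petsc.log_view=true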

Regards,
Brad


On 11/18/16 2:24 PM, alberto cominelli wrote:
> Dear All,
>
> I am using PyLith for a convergence study on a 12-core Xeon box
> with Intel(R) Xeon(R) E5-2643 v2 CPUs running at 3.50GHz and 64 GB of
> memory.
> The problem at hand is a 3D domain consisting of two layers: the upper
> one dry, with a density of 25000 kg/m3, and the lower one water saturated
> with 20% porosity. Apart from the difference in saturation conditions,
> the rock is characterised as an elastic, isotropic and homogeneous material.
> The domain is discretised by means of hexahedral elements using a
> tartan-type grid developed around a 20% sloping fault. The fault
> rheology is very simple: a friction model with a friction coefficient of 0.6.
>
> To simulate a consolidation problem, fluid pressure is included in the
> model using initial stress on a cell basis, assuming that the pressure
> is constant inside each cell.
> This means I input an initial_stress.spatialdb file containing data for
> ncells * 8 quadrature points.
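
For reference, a spatial database of this kind in the SimpleDB ASCII format
looks roughly like the sketch below; the coordinates and stress values are
purely illustrative, and the value names assume the six 3D initial stress
components:

#SPATIAL.ascii 1
SimpleDB {
  num-values = 6
  value-names = stress-xx stress-yy stress-zz stress-xy stress-yz stress-xz
  value-units = Pa Pa Pa Pa Pa Pa
  num-locs = 386880
  data-dim = 3
  space-dim = 3
  cs-data = cartesian {
    to-meters = 1.0
    space-dim = 3
  }
}
// x(m)  y(m)  z(m)  stress-xx  stress-yy  stress-zz  stress-xy  stress-yz  stress-xz
0.0  0.0  -10.0  -1.0e+06  -1.0e+06  -1.0e+06  0.0  0.0  0.0
// one data line per quadrature-point location, num-locs lines in total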
> I am a bit surprised by the elapsed time values I get during my
> convergence study.
> For instance, one case consists of 52731 nodes and 48630 elements. To
> properly initialise the model I give initial stress values at 386880
> points. The two time steps take 48 minutes, with most of the time spent
> in the integrators, as far as I understand.
>
> With "Integrators" I mean what is labelled by these lines in pylith output:
> -- Initializing integrators.
>  >>
> /home/comi/Pylith2.1.3/lib64/python2.7/site-packages/pylith/problems/Formulation.py:474:_initialize
> I guess this step means building the residuals and stiffness matrices, but
> I am not sure about that. Notably, in the second step I do not change
> anything, and I then get very few linear/nonlinear iterations in that step.
>
> I wonder if this time is reasonable in your experience and whether it is
> worth going parallel to improve computational efficiency. I intend to
> run much more complex cases, up to a few million nodes, and I wonder
> how far I can go using only one core.
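
For reference, PyLith can be launched on several processes directly from
the command line; a minimal sketch, assuming an MPI installation and a
pylithapp.cfg in the working directory:

pylith --nodes=4    # run the simulation on 4 processes

Comparing the PETSc log summaries of the serial and parallel runs then
shows where the time actually goes.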
> Regards,
>  Alberto.
>
> I am attaching a snapshot of one simulation log (not for the entire
> case) in case it may help.
> Regards,
>   Alberto.
>
>
>
>
> _______________________________________________
> CIG-SHORT mailing list
> CIG-SHORT at geodynamics.org
> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-short
>


