[CIG-SHORT] Change in dt means new Jacobian

Brad Aagaard baagaard at usgs.gov
Mon Sep 3 14:32:45 PDT 2012


Birendra,

Matt's point about something odd in your problem setup is the first 
avenue I would investigate. Some comments on the options you proposed 
are below.

On 9/2/12 10:24 AM, Birendra jha wrote:
> I am trying to run my PyLith simulation faster. Right now it takes about 3 min for the linear solve (using the default KSP solver, about 960 iterations) and 3 min for integrating the residual. I have about 190,000 hex cells (the mesh is very refined in the center and becomes coarse outward). No faults. Elastic materials. I used non-uniform, user-specified time steps.
>
> I thought of the following options:
>
> (1) Try a different solver. I am thinking of using a field split preconditioner with multigrid, as in tet4/step02.cfg.

Field split with ML (algebraic multigrid) for a problem without faults 
should be better than ASM (additive Schwarz) for all but very small 
problems. This is especially true for large problems, because the 
number of iterations with ASM grows quite rapidly with problem size, 
while AMG is much less sensitive to problem size.
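
For reference, here is a sketch of the settings involved, modeled on 
the tet4/step02.cfg example. Treat it as illustrative rather than 
exact; option names can vary between PyLith versions, so compare 
against the step02.cfg that ships with your release.

  [pylithapp.timedependent.formulation]
  # Split the solution into fields and use an AIJ matrix so PETSc
  # can apply field split preconditioning.
  split_fields = True
  matrix_type = aij

  [pylithapp.petsc]
  # Field split preconditioner with ML (algebraic multigrid) on the
  # displacement block.
  fs_pc_type = fieldsplit
  fs_pc_fieldsplit_type = multiplicative
  fs_fieldsplit_displacement_pc_type = ml
  fs_fieldsplit_displacement_ksp_type = preonly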

> (2) Switch from VTK to HDF5 output.

You can look at the log summary and see how much time is spent in the 
poststep stage. If it is significant, then switching to HDF5 output 
will reduce the runtime. If you do any post-processing, HDF5 is also 
much easier to use, because MATLAB and many other tools (we now 
include h5py in the binary) can access data in HDF5 files with just a 
few lines of code.
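
If you switch, the writer change is one line per output, and reading 
the results back is short. The component and dataset names below are 
what I would expect from the manual, but check the version you have 
installed; the filename is a placeholder.

  # In your .cfg file: use the HDF5 data writer instead of VTK.
  [pylithapp.problem.formulation.output.output]
  writer = pylith.meshio.DataWriterHDF5
  writer.filename = output/mysim.h5

and then in Python:

  # Read the displacement field with h5py.
  import h5py
  with h5py.File("output/mysim.h5", "r") as f:
      time = f["time"][:]  # time stamps for each time step
      disp = f["vertex_fields/displacement"][:]  # (nsteps, nvertices, 3)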

> (3) Run in parallel. I am looking into running it using SGE, or just running it on one multicore machine.

Running in parallel should help. Our weak-scaling benchmark results 
show that memory-bandwidth limits tend to kick in at around 4 
processes on dual quad-core and dual six-core machines, so you can get 
pretty good speedup using 2-4 cores. Using a cluster with multiple 
compute nodes also gives reasonable speedup for large problems with 
millions of DOF.
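
As a rough sketch, running on multiple cores from the command line 
looks like the following (mysim.cfg is a placeholder; the underlying 
MPI launch depends on your installation and scheduler):

  # Run on 4 processes; PyLith partitions the mesh automatically.
  pylith mysim.cfg --nodes=4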

> (4) Increase the linear solve tolerance from 1E-12 to 1E-11.

What is the uncertainty in the observations you are using to constrain 
your simulations? Why 1.0e-12 and not 1.0e-8? Does using a larger 
tolerance affect your results in any significant way?
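
If you do relax it, it is a one-line change. A sketch, assuming the 
usual PETSc relative-tolerance option:

  [pylithapp.petsc]
  # Relative convergence tolerance for the Krylov solver. Try a looser
  # value and verify the solution is unchanged at the level of your
  # observational uncertainty.
  ksp_rtol = 1.0e-8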

> (5) Fix the time step.
>
> Are these good options to consider?
>
> I have a question on (5). The manual says:
> Warning: Changing the time step requires recomputing the Jacobian of the system, which can greatly increase the runtime if the time-step size changes frequently.
>
> Where does this happen in the program, i.e., does a change in dt set needNewJacobian=true? I ask because I could not find a message on stdout about reforming the Jacobian when using non-uniform dt (it is possible that I missed it; the time step remains constant for a while, then changes, then becomes constant again).


The Jacobian is reformed when you change the time step only if it 
needs to be, and that depends on the bulk constitutive model. For 
purely elastic materials the Jacobian does not depend on dt, so no 
reform is needed; for viscoelastic models the time step enters the 
effective stiffness, so changing dt forces a reform. When the Jacobian 
does need to be reformed, there is a clear tradeoff between changing 
the time step (and paying to reform the Jacobian) and keeping the time 
step fixed (computing more time steps but never reforming the 
Jacobian). See Charles's previous response for the cases in which this 
happens.
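
As an illustrative sketch of the logic, not PyLith's actual code (the 
names here are made up), the decision when dt changes amounts to:

  # Hypothetical sketch of the decision; not PyLith source code.
  def need_new_jacobian(material, dt, dt_old):
      if material.is_elastic:
          # Elastic stiffness is independent of dt: no reform needed,
          # which is why you see no message for elastic materials.
          return False
      # Viscoelastic effective stiffness embeds dt, so a change in dt
      # invalidates the Jacobian.
      return dt != dt_old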

Regards,
Brad


