[CIG-SHORT] Intermediate Results

BOK10 at pitt.edu
Mon Mar 11 09:48:13 PDT 2013


I believe I forgot to mention that my friction zero_tolerance is 1e-12.

I'll raise the ksp_gmres_restart from 50 to 100-200.
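
For reference, a rough sketch of the revised settings I plan to try. The section names are just my guess at the usual pylithapp.cfg layout (a single fault interface named 'fault'), and the relaxed rtol values follow the advice below about not expecting meaningful information below about 1.0e-12:

# Sketch only -- section names assume a standard pylithapp.cfg layout.
[pylithapp.timedependent.interfaces.fault]
# Friction zero tolerance mentioned above.
zero_tolerance = 1.0e-12

[pylithapp.petsc]
# Relax rtol from 1.0e-20 toward the ~1.0e-12 limit noted below.
ksp_rtol = 1.0e-12
ksp_atol = 1.0e-13
# Larger GMRES restart, per the 100-200 suggestion.
ksp_gmres_restart = 150
snes_rtol = 1.0e-12
snes_atol = 1.0e-11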


Bobby

> On Mon, Mar 11, 2013 at 12:33 PM, <BOK10 at pitt.edu> wrote:
>
>> So, I just checked and both the linear and nonlinear solutions are
>> converging. I'm not sure what you mean by different solution settings.
>> Do you mean the following:
>>
>> # Preconditioner settings.
>> pc_type = asm
>> sub_pc_factor_shift_type = nonzero
>>
>> # Convergence parameters.
>> ksp_rtol = 1.0e-20
>>
>
> These kinds of tolerances usually mean that something is not scaled right.
> You cannot really get meaningful information below 1.0e-12.
>
> Brad, does this have something to do with the friction solve?
>
>
>> ksp_atol = 1.0e-13
>> ksp_max_it = 1000000
>> ksp_gmres_restart = 50
>>
>
> This restart is too small. If you have more memory, increase it to
> 100-200.
>
>
>> # Linear solver monitoring options.
>> ksp_monitor = true
>> ksp_view = true
>> ksp_converged_reason = true
>>
>> # Nonlinear solver monitoring options.
>> snes_rtol = 1.0e-20
>>
>
> Again, this seems way too small.
>
>   Thanks,
>
>     Matt
>
>
>> snes_atol = 1.0e-11
>> snes_max_it = 1000000
>> snes_monitor = true
>> snes_view = true
>> snes_converged_reason = true
>>
>> # PETSc summary -- useful for performance information.
>> log_summary = true
>>
>> Bobby
>>
>> > Hi Bobby,
>> >
>> > Are all of the solutions converging (both linear and nonlinear)?  I
>> > would have to look more at the different solution settings to see which
>> > of the two you've shown is more reasonable.
>> >
>> > Cheers,
>> > Charles
>> >
>> >
>> > On 9/03/2013, at 6:11 AM, BOK10 at pitt.edu wrote:
>> >
>> >> Thanks! I tried a mixture of those suggestions, and I was able to
>> >> reduce the runtime by half.
>> >>
>> >> I do have a question regarding the most suitable ksp/snes tolerances
>> >> though:
>> >>
>> >> I ran two simulations. The first had the following:
>> >>
>> >> for the fault zero_tolerance = 1e-12
>> >> ksp_rtol/snes_rtol = 1e-20
>> >> ksp_atol = 1e-13
>> >> snes_atol = 1e-11
>> >>
>> >> The second had this:
>> >>
>> >> for the fault zero_tolerance = 1e-14
>> >> ksp_rtol/snes_rtol = 1e-20
>> >> ksp_atol = 1e-15
>> >> snes_atol = 1e-13
>> >>
>> >> What I'm trying to do is look at the time it takes for certain portions
>> >> of my fault to rupture >1 m. Running both simulations, I got wildly
>> >> different results, and I'm not sure which to rely on at this point. Is
>> >> there any insight you might be able to give me regarding the best
>> >> tolerances to settle on?
>> >>
>> >> Bobby
>> >>
>> >>
>> >>> Hmm.  It sounds to me as though you need to play with your parameters
>> >>> a bit.  I'm assuming you're using a frictional fault, which can
>> >>> definitely take a while to converge; however, you can do a few things
>> >>> to speed things up:
>> >>>
>> >>> 1.  Follow all of the suggestions Brad had from his previous e-mail.
>> >>> 2.  Make sure you have the highest quality mesh you can get.  Just one
>> >>> poorly formed element, especially on the fault, can really slow things
>> >>> down.
>> >>> 3.  Possibly try reducing your time step size.  It's possible that your
>> >>> load increment per timestep is too high for reasonable convergence.
>> >>>
>> >>> When you have a chance, I would see if you can build PyLith from
>> >>> source on your cluster.  In addition to allowing parallel runs, this
>> >>> will let you take advantage of any machine-specific tools (e.g.,
>> >>> optimized math libraries, etc.).
>> >>>
>> >>> Let us know whether any of this helps.
>> >>>
>> >>> Cheers,
>> >>> Charles
>> >>>
>> >>>
>> >>> On 7/03/2013, at 12:15 PM, BOK10 at pitt.edu wrote:
>> >>>
>> >>>> Hi Charles,
>> >>>>
>> >>>> I'm not running in parallel (I couldn't get the model to process in
>> >>>> parallel on the cluster). Each time step takes ~80 minutes with loose
>> >>>> tolerances, and about 2 hours for tighter tolerances.
>> >>>>
>> >>>> The mesh itself is about 6,000 cells (2D). The faults have a 2 km
>> >>>> discretization, and the boundaries have a 5 km discretization. I'm
>> >>>> using ElasticPlaneStrain.
>> >>>>
>> >>>> Bobby
>> >>>>
>> >>>>> What sort of machine are you running on, and are you running in
>> >>>>> parallel?  I'm not sure what your problem size is, but 80 time steps
>> >>>>> shouldn't take that long to run, unless it's a very nonlinear
>> >>>>> problem.  How large is your mesh, and what sort of rheology are you
>> >>>>> using?
>> >>>>>
>> >>>>> Cheers,
>> >>>>> Charles
>> >>>>>
>> >>>>>
>> >>>>> On 7/03/2013, at 11:36 AM, BOK10 at pitt.edu wrote:
>> >>>>>
>> >>>>>> Hi Charles,
>> >>>>>>
>> >>>>>> It takes pretty long for a simulation to finish processing, and I
>> >>>>>> was hoping to split the simulation up into parts so I can come back
>> >>>>>> to it later. It's not a necessity, but more a convenience issue.
>> >>>>>>
>> >>>>>> I think I'll just continue on with running it overnight.
>> >>>>>>
>> >>>>>> Thanks,
>> >>>>>> Bobby
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>>> Hi Bobby,
>> >>>>>>>
>> >>>>>>> I'm not quite sure what you have in mind.  If you're running any
>> >>>>>>> sort of viscoelastic problem, you would need to save the entire
>> >>>>>>> state at the end of each run.  I don't see what benefit there would
>> >>>>>>> be from doing this, since you would still need to finish each run
>> >>>>>>> to get all the state variables at the end of each chunk, and then
>> >>>>>>> feed them into the next simulation as initial state variables.
>> >>>>>>>
>> >>>>>>> If your problem is completely elastic, I suppose you could run
>> >>>>>>> them in the way you're suggesting, and then use linear
>> >>>>>>> superposition to obtain the final result.  What is your reason for
>> >>>>>>> wanting to break up the simulation?
>> >>>>>>>
>> >>>>>>> Cheers,
>> >>>>>>> Charles
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On 7/03/2013, at 11:22 AM, BOK10 at pitt.edu wrote:
>> >>>>>>>
>> >>>>>>>> Is it possible to split a simulation into parts? I'm running my
>> >>>>>>>> model for 400 years at 5-year time intervals, but is it possible
>> >>>>>>>> to split it into 100-year chunks and run them serially?
>> >>>>>>>>
>> >>>>>>>> Bobby
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >>
>> >
>> > Charles A. Williams
>> > Scientist
>> > GNS Science
>> > 1 Fairway Drive, Avalon
>> > PO Box 30368
>> > Lower Hutt  5040
>> > New Zealand
>> > ph (office): 0064-4570-4566
>> > fax (office): 0064-4570-4600
>> > C.Williams at gns.cri.nz
>> >
>> >
>> >
>>
>>
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener



