[CIG-SHORT] Elastoplastic material question

Eric Lindsey elindsey at ucsd.edu
Fri Nov 22 14:54:17 PST 2013


Hi Brad,

Thanks a lot for looking at this. After following your suggestions,
though, I'm still having a problem.

First, sorry about the strange boundary conditions on the sides -- I
think I originally had the correct conditions for simple shear, but
took them out while testing to see what was causing the issue and
never put them back. I've fixed them as you suggest: in the elastic
case each stress component is now constant throughout the domain at
every timestep, and no location ever goes into absolute tension. I've
also used the no-fault PETSc solver settings you suggested.

However, I still get the same "Infeasible stress state" error when
running the plastic case unless I allow tensile yield. In either case
the solution looks strange, with significant spatial variations in
stress even before any shear is applied. The problem starts at the
corners, which go into absolute tension because the sides have
extended outwards, despite the normal-traction conditions that
prevented this in the elastic case.

Is there some difference in how the Neumann boundary conditions need
to be applied for the DruckerPrager material? I'm wondering if they
are being interpreted differently in the two cases -- or perhaps I
have a problem with the solver, but I don't see any indication of that
in the log; convergence is fine up until I get the error message.
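For what it's worth, here's my mental model of the yield check that's
failing -- just a rough sketch of a Drucker-Prager criterion with
made-up parameter values, not PyLith's actual implementation:

```python
import numpy as np

def dp_yield(stress, alpha, k):
    # Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k,
    # with compression negative, so confinement (I1 < 0) lowers f.
    # Yield occurs when f >= 0.
    I1 = np.trace(stress)
    dev = stress - (I1 / 3.0) * np.eye(3)   # deviatoric stress
    J2 = 0.5 * np.sum(dev * dev)            # second deviatoric invariant
    return alpha * I1 + np.sqrt(J2) - k

# Isotropic compression at 10 MPa: comfortably inside the yield surface.
compression = dp_yield(np.diag([-10e6, -10e6, -10e6]), alpha=0.25, k=1e6)

# The same magnitude in absolute tension: outside the surface, with no
# shear component for the return mapping to relax.
tension = dp_yield(np.diag([10e6, 10e6, 10e6]), alpha=0.25, k=1e6)
```

If that picture is right, it would be consistent with the error being
triggered by the corners that go into absolute tension.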

Thanks,
Eric

On Tue, Nov 19, 2013 at 4:22 PM, Brad Aagaard <baagaard at usgs.gov> wrote:
> Eric,
>
> When I run the elastic case, I see the xx and yy (axial) components of
> the stress tensor becoming tensile near the corners as the +y (top)
> boundary moves to the right (+x direction). The domain is not under
> pure shear because there are no shear tractions on the +x and -x sides
> (the shear stress is zero on those boundaries); as a result, there is
> deformation in the y-direction on those sides.
>
> In the plastic case, the areas with tensile stresses are where the
> plastic strain first appears. Soon afterwards, the plastic strain
> shows up in three bands across the middle of the domain, where the
> shear strain is largest. These bands are one cell wide, suggesting
> their width is not well resolved. As the shear strain increases, the
> bands grow. This appears consistent with your comments. I don't see
> any indication of bugs in the code or of the solver not working as
> intended.
>
> Also note that in a nearly uniform resolution mesh with slight
> irregularities associated with the triangular discretization, there will be
> small spatial variations in the accuracy of the solution that can lead to
> localization of the yielding.
>
> To generate pure shear, I recommend
> u_x = u_y = 0 on -y boundary
> u_y = 0 on +y boundary
> T_shear = f(t) on -x, +x, and +y boundaries
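>
> In cfg form that would look something like this (a sketch only -- the
> BC names match the node sets in your mesh, and the Neumann
> spatial-database setup for the shear tractions is indicated, not
> complete):
>
> [pylithapp.timedependent]
> bc = [face_bot, face_top, shear_west, shear_east]
> bc.shear_west = pylith.bc.Neumann
> bc.shear_east = pylith.bc.Neumann
>
> [pylithapp.timedependent.bc.face_bot]
> label = face_bot
> bc_dof = [0, 1]   ; u_x = u_y = 0 on the -y boundary
>
> [pylithapp.timedependent.bc.face_top]
> label = face_top
> bc_dof = [1]      ; u_y = 0 on the +y boundary
>
> with additional Neumann conditions supplying T_shear = f(t) on the
> -x, +x, and +y boundaries (e.g., via db_rate on each Neumann BC).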
>
> For PETSc solver settings, when you don't have a fault you should use
>
> [pylithapp.timedependent.formulation]
> matrix_type = aij
>
> [pylithapp.petsc]
> pc_type = ml
>
> When you have a fault, use
>
> [pylithapp.timedependent.formulation]
> split_fields = True
> use_custom_constraint_pc = True
> matrix_type = aij
>
> [pylithapp.petsc]
> fs_pc_type = fieldsplit
> fs_pc_use_amat = true
> fs_pc_fieldsplit_type = multiplicative
> fs_fieldsplit_0_pc_type = ml
> fs_fieldsplit_1_pc_type = jacobi
> fs_fieldsplit_0_ksp_type = preonly
> fs_fieldsplit_1_ksp_type = preonly
>
> For more info on the solver parameters, you may want to review Matt's
> presentation and slides from the June online tutorial
> (http://www.geodynamics.org/cig/community/workinggroups/short/workshops/cdm2013/agenda).
> There is also a comparison of some combinations of solver parameters in our
> JGR paper (http://dx.doi.org/10.1002/jgrb.50217).
>
> Regards,
>
> Brad
>
>
> On 11/18/2013 05:01 PM, Eric Lindsey wrote:
>>
>> I'm having trouble understanding the DruckerPragerPlaneStrain
>> material. I'm using a simple homogeneous domain with no faults, and a
>> simple Dirichlet BC imposing shear on the top/bottom. I've imposed an
>> initial isotropic compression, then I add the shear displacement
>> gradually until it should exceed the yield stress. On the sides I
>> have a Neumann condition to maintain the normal stress; for the
>> moment I just set the shear tractions to zero, but this value doesn't
>> affect the results I'm getting. I expected to see uniform plastic
>> shear throughout the domain, but instead get the message:
>>
>> RuntimeError: Infeasible stress state - cannot project back to yield
>> surface.
>>
>> If I include "allow_tensile_yield = True", the model runs, but
>> instead of homogeneous strain I'm getting the attached result. This
>> is strange because at no point should the material be under absolute
>> tension; the maximum shear stress in the elastic case is never larger
>> than the magnitude of the compressive stress. I think I am
>> misunderstanding the setup of this material somehow; hopefully it's a
>> simple error? Input files are attached; the elastic case works just
>> fine. Any suggestions would be much appreciated.
>>
>> Thanks,
>> Eric
>>
>> Relevant lines from the spatial databases:
>>
>>
>>
>> _______________________________________________
>> CIG-SHORT mailing list
>> CIG-SHORT at geodynamics.org
>> http://geodynamics.org/cgi-bin/mailman/listinfo/cig-short
>>
>
-------------- next part --------------
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/utils/PetscManager.py:64:initialize
 -- petsc(info)
 -- Initialized PETSc.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/meshio/MeshIOObj.py:55:read
 -- meshiocubit(info)
 -- Reading finite-element mesh
 >> meshio/MeshIOCubit.cc:142:<unknown>
 -- meshiocubit(info)
 -- Reading 3709 vertices.
 >> meshio/MeshIOCubit.cc:197:<unknown>
 -- meshiocubit(info)
 -- Reading 7156 cells in 1 blocks.
 >> meshio/MeshIOCubit.cc:257:<unknown>
 -- meshiocubit(info)
 -- Found 10 node sets.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'fault' with id 10 containing 101 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'face_top' with id 11 containing 101 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'bndry_west' with id 12 containing 31 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'bndry_east' with id 13 containing 31 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'face_bot' with id 14 containing 101 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'subfault' with id 15 containing 17 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'bndry_west_notopbot' with id 16 containing 29 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'bndry_east_notopbot' with id 17 containing 29 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'bndry_west_nofault' with id 18 containing 28 nodes.
 >> meshio/MeshIOCubit.cc:285:<unknown>
 -- meshiocubit(info)
 -- Reading node set 'bndry_east_nofault' with id 19 containing 28 nodes.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:92:preinitialize
 -- timedependent(info)
 -- Pre-initializing problem.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:390:_setupMaterials
 -- implicit(info)
 -- Pre-initializing materials.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:404:_setupMaterials
 -- implicit(info)
 -- Added elasticity integrator for material 'Drucker Prager elastoplastic crust material'.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:420:_setupBC
 -- implicit(info)
 -- Pre-initializing boundary conditions.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:439:_setupBC
 -- implicit(info)
 -- Added boundary condition 'face_bot' as a constraint.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:439:_setupBC
 -- implicit(info)
 -- Added boundary condition 'face_top' as a constraint.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/feassemble/FIATLagrange.py:407:initialize
 -- fiatlagrange(info)
 -- Cell geometry: 
 -- <pylith.feassemble.CellGeometry.GeometryLine2D; proxy of <Swig Object of type 'pylith::feassemble::GeometryLine2D *' at 0x1ef10f0> >
 -- Vertices: 
 -- [[-1.]
 [ 1.]]
 -- Quad pts:
 -- [[-0.57735027]
 [ 0.57735027]]
 -- Quad wts:
 -- [ 1.  1.]
 -- Basis fns @ quad pts ):
 -- [[ 0.78867513  0.21132487]
 [ 0.21132487  0.78867513]]
 -- Basis fn derivatives @ quad pts:
 -- [[[-0.5]
  [ 0.5]]

 [[-0.5]
  [ 0.5]]]
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:433:_setupBC
 -- implicit(info)
 -- Added boundary condition 'bndry_west' as an integrator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/feassemble/FIATLagrange.py:407:initialize
 -- fiatlagrange(info)
 -- Cell geometry: 
 -- <pylith.feassemble.CellGeometry.GeometryLine2D; proxy of <Swig Object of type 'pylith::feassemble::GeometryLine2D *' at 0x1f00690> >
 -- Vertices: 
 -- [[-1.]
 [ 1.]]
 -- Quad pts:
 -- [[-0.57735027]
 [ 0.57735027]]
 -- Quad wts:
 -- [ 1.  1.]
 -- Basis fns @ quad pts ):
 -- [[ 0.78867513  0.21132487]
 [ 0.21132487  0.78867513]]
 -- Basis fn derivatives @ quad pts:
 -- [[[-0.5]
  [ 0.5]]

 [[-0.5]
  [ 0.5]]]
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:433:_setupBC
 -- implicit(info)
 -- Added boundary condition 'bndry_east' as an integrator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:459:_setupInterfaces
 -- implicit(info)
 -- Pre-initializing interior interfaces.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:188:preinitialize
 -- implicit(info)
 -- Pre-initializing output.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Problem.py:150:verifyConfiguration
 -- timedependent(info)
 -- Verifying compatibility of problem configuration.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:117:initialize
 -- timedependent(info)
 -- Initializing problem.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:500:_initialize
 -- implicit(info)
 -- Initializing integrators.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/bc/Neumann.py:134:initialize
 -- timedependent(info)
 -- Initializing Neumann boundary 'bndry_west'.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/bc/Neumann.py:134:initialize
 -- timedependent(info)
 -- Initializing Neumann boundary 'bndry_east'.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:510:_initialize
 -- implicit(info)
 -- Initializing constraints.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:516:_initialize
 -- implicit(info)
 -- Setting up solution output.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:525:_initialize
 -- implicit(info)
 -- Creating solution field.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:136:initialize
 -- implicit(info)
 -- Creating other fields.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:159:initialize
 -- implicit(info)
 -- Creating Jacobian matrix.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:169:initialize
 -- implicit(info)
 -- Initializing solver.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:131:run
 -- timedependent(info)
 -- Solving problem.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:137:run
 -- timedependent(info)
 -- Preparing for prestep with elastic behavior.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:266:prestepElastic
 -- implicit(info)
 -- Setting constraints.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:582:_reformJacobian
 -- implicit(info)
 -- Integrating Jacobian operator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:152:run
 -- timedependent(info)
 -- Computing prestep with elastic behavior.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:607:_reformResidual
 -- implicit(info)
 -- Integrating residual term in operator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:215:step
 -- implicit(info)
 -- Solving equations.
  0 SNES Function norm 2.655070958436e-19 
Nonlinear solve converged due to CONVERGED_FNORM_ABS iterations 0
SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=500, maximum function evaluations=10000
  tolerances: relative=1e-10, absolute=1e-09, solution=1e-08
  total number of linear solver iterations=0
  total number of function evaluations=1
  SNESLineSearch Object:   1 MPI processes
    type: shell
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=1
  KSP Object:   1 MPI processes
    type: gmres
      GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=1000, initial guess is zero
    tolerances:  relative=1e-09, absolute=1e-13, divergence=10000
    left preconditioning
    using DEFAULT norm type for convergence test
  PC Object:   1 MPI processes
    type: ml
    PC has not been set up so information may be incomplete
      MG: type is MULTIPLICATIVE, levels=0 cycles=unknown
        Cycles per PCApply=0
        Using Galerkin computed coarse grid matrices
    linear system matrix = precond matrix:
    Matrix Object:     1 MPI processes
      type: seqaij
      rows=7014, cols=7014
      total: nonzeros=96108, allocated nonzeros=96108
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 3507 nodes, limit used is 5
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:158:run
 -- timedependent(info)
 -- Finishing prestep with elastic behavior.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:250:poststep
 -- implicit(info)
 -- Writing solution fields.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:176:run
 -- timedependent(info)
 -- Main time loop, current time is t=0*s
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:187:run
 -- timedependent(info)
 -- Preparing to advance solution from time t=0*s to t=3.15576e+07*s.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:186:prestep
 -- implicit(info)
 -- Setting constraints.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:582:_reformJacobian
 -- implicit(info)
 -- Integrating Jacobian operator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:193:run
 -- timedependent(info)
 -- Advancing solution from t=0*s to t=3.15576e+07*s.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:607:_reformResidual
 -- implicit(info)
 -- Integrating residual term in operator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:215:step
 -- implicit(info)
 -- Solving equations.
  0 SNES Function norm 4.905706665107e-05 
    0 KSP Residual norm 4.976363736363e-04 
    1 KSP Residual norm 7.008169859551e-05 
    2 KSP Residual norm 3.821162995370e-05 
    3 KSP Residual norm 3.342506480468e-05 
    4 KSP Residual norm 2.704294483859e-05 
    5 KSP Residual norm 2.103872962743e-05 
    6 KSP Residual norm 1.693921368109e-05 
    7 KSP Residual norm 1.270473544699e-05 
    8 KSP Residual norm 9.059888340330e-06 
    9 KSP Residual norm 6.592876012537e-06 
   10 KSP Residual norm 4.905339648975e-06 
   11 KSP Residual norm 3.655491376243e-06 
   12 KSP Residual norm 2.553741996562e-06 
   13 KSP Residual norm 1.830613497933e-06 
   14 KSP Residual norm 1.233141174717e-06 
   15 KSP Residual norm 8.189464876128e-07 
   16 KSP Residual norm 5.845056364090e-07 
   17 KSP Residual norm 4.031335789027e-07 
   18 KSP Residual norm 2.523990319113e-07 
   19 KSP Residual norm 1.692341147038e-07 
   20 KSP Residual norm 1.131382926607e-07 
   21 KSP Residual norm 7.415279009114e-08 
   22 KSP Residual norm 5.447715904295e-08 
   23 KSP Residual norm 4.413353818321e-08 
   24 KSP Residual norm 3.463518692135e-08 
   25 KSP Residual norm 2.998803439987e-08 
   26 KSP Residual norm 2.668548613048e-08 
   27 KSP Residual norm 2.159591841429e-08 
   28 KSP Residual norm 1.625520235814e-08 
   29 KSP Residual norm 1.179822696421e-08 
   30 KSP Residual norm 8.776244609351e-09 
   31 KSP Residual norm 6.488896667558e-09 
   32 KSP Residual norm 4.486770641432e-09 
   33 KSP Residual norm 3.354213870920e-09 
   34 KSP Residual norm 2.427897064421e-09 
   35 KSP Residual norm 1.710784280256e-09 
   36 KSP Residual norm 1.246020906032e-09 
   37 KSP Residual norm 9.997540204755e-10 
   38 KSP Residual norm 7.940426076077e-10 
   39 KSP Residual norm 5.588749794954e-10 
   40 KSP Residual norm 3.460632486662e-10 
   41 KSP Residual norm 1.945028917495e-10 
   42 KSP Residual norm 1.181699739164e-10 
   43 KSP Residual norm 7.696546817141e-11 
   44 KSP Residual norm 5.212527608718e-11 
   45 KSP Residual norm 3.140129113829e-11 
   46 KSP Residual norm 1.768561321710e-11 
   47 KSP Residual norm 1.029421450335e-11 
   48 KSP Residual norm 5.096855648087e-12 
   49 KSP Residual norm 2.663233325812e-12 
   50 KSP Residual norm 1.693409316220e-12 
   51 KSP Residual norm 1.366472235395e-12 
   52 KSP Residual norm 1.017421010823e-12 
   53 KSP Residual norm 6.125267177761e-13 
   54 KSP Residual norm 4.066841244614e-13 
  Linear solve converged due to CONVERGED_RTOL iterations 54
KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1000, initial guess is zero
  tolerances:  relative=1e-09, absolute=1e-13, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: ml
    MG: type is MULTIPLICATIVE, levels=5 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     1 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        using diagonal shift on blocks to prevent zero pivot
        matrix ordering: nd
        factor fill ratio given 5, needed 1
          Factored matrix follows:
            Matrix Object:             1 MPI processes
              type: seqaij
              rows=1, cols=1
              package used to perform factorization: petsc
              total: nonzeros=1, allocated nonzeros=1
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=1, cols=1
        total: nonzeros=1, allocated nonzeros=1
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 1 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_2_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=32, cols=32
        total: nonzeros=320, allocated nonzeros=320
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (mg_levels_3_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_3_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=406, cols=406
        total: nonzeros=4302, allocated nonzeros=4302
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object:    (mg_levels_4_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_4_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=7014, cols=7014
        total: nonzeros=96108, allocated nonzeros=96108
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 3507 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=7014, cols=7014
    total: nonzeros=96108, allocated nonzeros=96108
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 3507 nodes, limit used is 5
      Line search: Using full step: fnorm 4.905706665107e-05 gnorm 7.240719426006e-06
  1 SNES Function norm 7.240719426006e-06 
    0 KSP Residual norm 5.294113276037e-05 
    1 KSP Residual norm 9.656238864781e-06 
    2 KSP Residual norm 4.675948845449e-06 
    3 KSP Residual norm 3.866367418068e-06 
    4 KSP Residual norm 2.773288984585e-06 
    5 KSP Residual norm 2.267747545690e-06 
    6 KSP Residual norm 1.941153328387e-06 
    7 KSP Residual norm 1.443432905483e-06 
    8 KSP Residual norm 1.073669558294e-06 
    9 KSP Residual norm 8.424375208132e-07 
   10 KSP Residual norm 6.429216319425e-07 
   11 KSP Residual norm 4.915220684937e-07 
   12 KSP Residual norm 3.758233241339e-07 
   13 KSP Residual norm 2.740042884346e-07 
   14 KSP Residual norm 1.861850713300e-07 
   15 KSP Residual norm 1.201943771784e-07 
   16 KSP Residual norm 8.385583252466e-08 
   17 KSP Residual norm 6.239548445312e-08 
   18 KSP Residual norm 4.309657196587e-08 
   19 KSP Residual norm 2.796318961404e-08 
   20 KSP Residual norm 1.990864082979e-08 
   21 KSP Residual norm 1.310331780774e-08 
   22 KSP Residual norm 9.131664532090e-09 
   23 KSP Residual norm 7.451240591315e-09 
   24 KSP Residual norm 5.520456088665e-09 
   25 KSP Residual norm 4.301247552484e-09 
   26 KSP Residual norm 3.811849792600e-09 
   27 KSP Residual norm 3.201641773806e-09 
   28 KSP Residual norm 2.679343004471e-09 
   29 KSP Residual norm 2.068016599042e-09 
   30 KSP Residual norm 1.503902879068e-09 
   31 KSP Residual norm 1.105595918704e-09 
   32 KSP Residual norm 7.685692608421e-10 
   33 KSP Residual norm 4.834810339055e-10 
   34 KSP Residual norm 3.279853380560e-10 
   35 KSP Residual norm 2.254448902248e-10 
   36 KSP Residual norm 1.537507330723e-10 
   37 KSP Residual norm 1.097929456969e-10 
   38 KSP Residual norm 8.725882583402e-11 
   39 KSP Residual norm 6.722847579026e-11 
   40 KSP Residual norm 5.177279055527e-11 
   41 KSP Residual norm 3.868676315419e-11 
   42 KSP Residual norm 2.623433877587e-11 
   43 KSP Residual norm 1.633775305500e-11 
   44 KSP Residual norm 1.129698542129e-11 
   45 KSP Residual norm 8.238437234381e-12 
   46 KSP Residual norm 5.426009840718e-12 
   47 KSP Residual norm 3.034430674728e-12 
   48 KSP Residual norm 1.788614656136e-12 
   49 KSP Residual norm 1.072685284477e-12 
   50 KSP Residual norm 6.809337845597e-13 
   51 KSP Residual norm 5.497168475996e-13 
   52 KSP Residual norm 4.266148885500e-13 
   53 KSP Residual norm 2.824161221599e-13 
   54 KSP Residual norm 1.936768103337e-13 
   55 KSP Residual norm 1.282337987972e-13 
   56 KSP Residual norm 8.363824254864e-14 
  Linear solve converged due to CONVERGED_ATOL iterations 56
KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1000, initial guess is zero
  tolerances:  relative=1e-09, absolute=1e-13, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: ml
    MG: type is MULTIPLICATIVE, levels=5 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     1 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        using diagonal shift on blocks to prevent zero pivot
        matrix ordering: nd
        factor fill ratio given 5, needed 1
          Factored matrix follows:
            Matrix Object:             1 MPI processes
              type: seqaij
              rows=1, cols=1
              package used to perform factorization: petsc
              total: nonzeros=1, allocated nonzeros=1
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=1, cols=1
        total: nonzeros=1, allocated nonzeros=1
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 1 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_2_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=32, cols=32
        total: nonzeros=318, allocated nonzeros=318
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (mg_levels_3_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_3_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=406, cols=406
        total: nonzeros=4300, allocated nonzeros=4300
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object:    (mg_levels_4_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_4_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=7014, cols=7014
        total: nonzeros=96108, allocated nonzeros=96108
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 3507 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=7014, cols=7014
    total: nonzeros=96108, allocated nonzeros=96108
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 3507 nodes, limit used is 5
      Line search: Using full step: fnorm 7.240719426006e-06 gnorm 3.541175431497e-07
  2 SNES Function norm 3.541175431497e-07 
    0 KSP Residual norm 1.412624714408e-06 
    1 KSP Residual norm 2.813232122417e-07 
    2 KSP Residual norm 1.455964380052e-07 
    3 KSP Residual norm 1.069810553175e-07 
    4 KSP Residual norm 7.078089113911e-08 
    5 KSP Residual norm 5.805096587093e-08 
    6 KSP Residual norm 5.101502191302e-08 
    7 KSP Residual norm 4.241607117722e-08 
    8 KSP Residual norm 3.187867121229e-08 
    9 KSP Residual norm 2.190132612138e-08 
   10 KSP Residual norm 1.608748912759e-08 
   11 KSP Residual norm 1.234594523118e-08 
   12 KSP Residual norm 9.341231017130e-09 
   13 KSP Residual norm 6.885015260533e-09 
   14 KSP Residual norm 5.141831707127e-09 
   15 KSP Residual norm 3.504036905176e-09 
   16 KSP Residual norm 2.332218088592e-09 
   17 KSP Residual norm 1.704401704989e-09 
   18 KSP Residual norm 1.213998560140e-09 
   19 KSP Residual norm 7.530830844572e-10 
   20 KSP Residual norm 5.270955816367e-10 
   21 KSP Residual norm 3.643874775674e-10 
   22 KSP Residual norm 2.547989824915e-10 
   23 KSP Residual norm 2.025998901446e-10 
   24 KSP Residual norm 1.627058075514e-10 
   25 KSP Residual norm 1.351524653343e-10 
   26 KSP Residual norm 1.218323431447e-10 
   27 KSP Residual norm 1.000924546464e-10 
   28 KSP Residual norm 8.113350182582e-11 
   29 KSP Residual norm 6.388776647250e-11 
   30 KSP Residual norm 4.786589641438e-11 
   31 KSP Residual norm 3.464188972708e-11 
   32 KSP Residual norm 2.405376983006e-11 
   33 KSP Residual norm 1.498263809154e-11 
   34 KSP Residual norm 1.007594140941e-11 
   35 KSP Residual norm 7.301428856673e-12 
   36 KSP Residual norm 5.169463679592e-12 
   37 KSP Residual norm 3.179613419353e-12 
   38 KSP Residual norm 2.210760597335e-12 
   39 KSP Residual norm 1.765986109601e-12 
   40 KSP Residual norm 1.370368852689e-12 
   41 KSP Residual norm 1.059001233765e-12 
   42 KSP Residual norm 7.961140559923e-13 
   43 KSP Residual norm 5.312716655608e-13 
   44 KSP Residual norm 3.346338115902e-13 
   45 KSP Residual norm 2.229411775276e-13 
   46 KSP Residual norm 1.511595858994e-13 
   47 KSP Residual norm 9.380036810481e-14 
  Linear solve converged due to CONVERGED_ATOL iterations 47
      Line search: Using full step: fnorm 3.541175431497e-07 gnorm 1.912605206971e-09
  3 SNES Function norm 1.912605206971e-09 
    0 KSP Residual norm 2.467393983204e-09 
    1 KSP Residual norm 5.276052081460e-10 
    2 KSP Residual norm 2.754657513325e-10 
    3 KSP Residual norm 1.697304168543e-10 
    4 KSP Residual norm 1.016635134503e-10 
    5 KSP Residual norm 7.243901948662e-11 
    6 KSP Residual norm 5.737326136559e-11 
    7 KSP Residual norm 4.762179620268e-11 
    8 KSP Residual norm 4.178132054677e-11 
    9 KSP Residual norm 3.248699060288e-11 
   10 KSP Residual norm 2.115204338389e-11 
   11 KSP Residual norm 1.481590366743e-11 
   12 KSP Residual norm 1.027350732310e-11 
   13 KSP Residual norm 7.096086662948e-12 
   14 KSP Residual norm 5.097840086684e-12 
   15 KSP Residual norm 3.726378707247e-12 
   16 KSP Residual norm 2.636292540220e-12 
   17 KSP Residual norm 1.798346057409e-12 
   18 KSP Residual norm 1.300561238453e-12 
   19 KSP Residual norm 9.106528344458e-13 
   20 KSP Residual norm 6.906284731787e-13 
   21 KSP Residual norm 5.273764680097e-13 
   22 KSP Residual norm 4.000951739794e-13 
   23 KSP Residual norm 2.986704186287e-13 
   24 KSP Residual norm 2.396150393068e-13 
   25 KSP Residual norm 2.033153129741e-13 
   26 KSP Residual norm 1.820377729041e-13 
   27 KSP Residual norm 1.519964988644e-13 
   28 KSP Residual norm 1.167875890435e-13 
   29 KSP Residual norm 8.556448385070e-14 
  Linear solve converged due to CONVERGED_ATOL iterations 29
      Line search: Using full step: fnorm 1.912605206971e-09 gnorm 7.934744631846e-14
  4 SNES Function norm 7.934744631846e-14 
Nonlinear solve converged due to CONVERGED_FNORM_ABS iterations 4
SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=500, maximum function evaluations=10000
  tolerances: relative=1e-10, absolute=1e-09, solution=1e-08
  total number of linear solver iterations=186
  total number of function evaluations=9
  SNESLineSearch Object:   1 MPI processes
    type: shell
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=1
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:200:run
 -- timedependent(info)
 -- Finishing advancing solution from t=0*s to t=3.15576e+07*s.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:250:poststep
 -- implicit(info)
 -- Writing solution fields.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:176:run
 -- timedependent(info)
 -- Main time loop, current time is t=3.15576e+07*s
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:187:run
 -- timedependent(info)
 -- Preparing to advance solution from time t=3.15576e+07*s to t=6.31152e+07*s.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:186:prestep
 -- implicit(info)
 -- Setting constraints.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:582:_reformJacobian
 -- implicit(info)
 -- Integrating Jacobian operator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:193:run
 -- timedependent(info)
 -- Advancing solution from t=3.15576e+07*s to t=6.31152e+07*s.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py:607:_reformResidual
 -- implicit(info)
 -- Integrating residual term in operator.
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py:215:step
 -- implicit(info)
 -- Solving equations.
  0 SNES Function norm 8.857420269813e-05 
    0 KSP Residual norm 1.263105748341e-04 
    1 KSP Residual norm 2.356577059794e-05 
    2 KSP Residual norm 1.228552952190e-05 
    3 KSP Residual norm 7.968529786094e-06 
    4 KSP Residual norm 5.876937173455e-06 
    5 KSP Residual norm 4.233337553742e-06 
    6 KSP Residual norm 3.279012326004e-06 
    7 KSP Residual norm 2.795594723702e-06 
    8 KSP Residual norm 2.269115734102e-06 
    9 KSP Residual norm 1.879403249756e-06 
   10 KSP Residual norm 1.405219542331e-06 
   11 KSP Residual norm 8.301221490726e-07 
   12 KSP Residual norm 4.551238650449e-07 
   13 KSP Residual norm 2.741103317449e-07 
   14 KSP Residual norm 1.682108371537e-07 
   15 KSP Residual norm 1.120514010803e-07 
   16 KSP Residual norm 7.639497567672e-08 
   17 KSP Residual norm 4.938934338546e-08 
   18 KSP Residual norm 3.463186316261e-08 
   19 KSP Residual norm 2.403433225551e-08 
   20 KSP Residual norm 1.830589656843e-08 
   21 KSP Residual norm 1.409941706859e-08 
   22 KSP Residual norm 9.623723494257e-09 
   23 KSP Residual norm 6.207514348167e-09 
   24 KSP Residual norm 4.332600700499e-09 
   25 KSP Residual norm 2.684171800702e-09 
   26 KSP Residual norm 1.571763849832e-09 
   27 KSP Residual norm 1.093491551414e-09 
   28 KSP Residual norm 8.434599430505e-10 
   29 KSP Residual norm 6.758676028354e-10 
   30 KSP Residual norm 4.803475805662e-10 
   31 KSP Residual norm 3.410069218394e-10 
   32 KSP Residual norm 2.556685257800e-10 
   33 KSP Residual norm 1.890577539938e-10 
   34 KSP Residual norm 1.501258446549e-10 
   35 KSP Residual norm 1.190608351078e-10 
   36 KSP Residual norm 7.677414508006e-11 
   37 KSP Residual norm 5.459629387056e-11 
   38 KSP Residual norm 3.534941337171e-11 
   39 KSP Residual norm 2.337360294485e-11 
   40 KSP Residual norm 1.405634063641e-11 
   41 KSP Residual norm 7.261064115985e-12 
   42 KSP Residual norm 4.191854166867e-12 
   43 KSP Residual norm 2.700190194458e-12 
   44 KSP Residual norm 1.924544554425e-12 
   45 KSP Residual norm 1.242462914162e-12 
   46 KSP Residual norm 7.110709319886e-13 
   47 KSP Residual norm 3.999659947368e-13 
   48 KSP Residual norm 2.462107467815e-13 
   49 KSP Residual norm 1.537635839611e-13 
   50 KSP Residual norm 8.832602721816e-14 
  Linear solve converged due to CONVERGED_ATOL iterations 50
KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=1000, initial guess is zero
  tolerances:  relative=1e-09, absolute=1e-13, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: ml
    MG: type is MULTIPLICATIVE, levels=5 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     1 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     1 MPI processes
      type: lu
        LU: out-of-place factorization
        tolerance for zero pivot 2.22045e-14
        using diagonal shift on blocks to prevent zero pivot
        matrix ordering: nd
        factor fill ratio given 5, needed 1
          Factored matrix follows:
            Matrix Object:             1 MPI processes
              type: seqaij
              rows=1, cols=1
              package used to perform factorization: petsc
              total: nonzeros=1, allocated nonzeros=1
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=1, cols=1
        total: nonzeros=1, allocated nonzeros=1
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=3, cols=3
        total: nonzeros=9, allocated nonzeros=9
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 1 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_2_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=32, cols=32
        total: nonzeros=320, allocated nonzeros=320
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (mg_levels_3_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_3_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=406, cols=406
        total: nonzeros=4302, allocated nonzeros=4302
        total number of mallocs used during MatSetValues calls =0
          not using I-node routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object:    (mg_levels_4_)     1 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_4_)     1 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=7014, cols=7014
        total: nonzeros=96108, allocated nonzeros=96108
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 3507 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=7014, cols=7014
    total: nonzeros=96108, allocated nonzeros=96108
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 3507 nodes, limit used is 5
      Line search: Using full step: fnorm 8.857420269813e-05 gnorm 7.581623880530e-14
  1 SNES Function norm 7.581623880530e-14 
Nonlinear solve converged due to CONVERGED_FNORM_ABS iterations 1
SNES Object: 1 MPI processes
  type: newtonls
  maximum iterations=500, maximum function evaluations=10000
  tolerances: relative=1e-10, absolute=1e-09, solution=1e-08
  total number of linear solver iterations=50
  total number of function evaluations=3
  SNESLineSearch Object:   1 MPI processes
    type: shell
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=1
  KSP Object:   1 MPI processes
    type: gmres
      GMRES: restart=50, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=1000, initial guess is zero
    tolerances:  relative=1e-09, absolute=1e-13, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object:   1 MPI processes
    type: ml
      MG: type is MULTIPLICATIVE, levels=5 cycles=v
        Cycles per PCApply=1
        Using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object:      (mg_coarse_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (mg_coarse_)       1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot
          matrix ordering: nd
          factor fill ratio given 5, needed 1
            Factored matrix follows:
              Matrix Object:               1 MPI processes
                type: seqaij
                rows=1, cols=1
                package used to perform factorization: petsc
                total: nonzeros=1, allocated nonzeros=1
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
        linear system matrix = precond matrix:
        Matrix Object:         1 MPI processes
          type: seqaij
          rows=1, cols=1
          total: nonzeros=1, allocated nonzeros=1
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object:      (mg_levels_1_)       1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (mg_levels_1_)       1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object:         1 MPI processes
          type: seqaij
          rows=3, cols=3
          total: nonzeros=9, allocated nonzeros=9
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 1 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object:      (mg_levels_2_)       1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (mg_levels_2_)       1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object:         1 MPI processes
          type: seqaij
          rows=32, cols=32
          total: nonzeros=320, allocated nonzeros=320
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object:      (mg_levels_3_)       1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (mg_levels_3_)       1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object:         1 MPI processes
          type: seqaij
          rows=406, cols=406
          total: nonzeros=4302, allocated nonzeros=4302
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 4 -------------------------------
      KSP Object:      (mg_levels_4_)       1 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object:      (mg_levels_4_)       1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Matrix Object:         1 MPI processes
          type: seqaij
          rows=7014, cols=7014
          total: nonzeros=96108, allocated nonzeros=96108
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 3507 nodes, limit used is 5
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Matrix Object:     1 MPI processes
      type: seqaij
      rows=7014, cols=7014
      total: nonzeros=96108, allocated nonzeros=96108
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 3507 nodes, limit used is 5
 >> /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py:200:run
 -- timedependent(info)
 -- Finishing advancing solution from t=3.15576e+07*s to t=6.31152e+07*s.
Traceback (most recent call last):
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/apps/PetscApplication.py", line 65, in onComputeNodes
    self.main(*args, **kwds)
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/apps/PyLithApp.py", line 126, in main
    self.problem.run(self)
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/TimeDependent.py", line 202, in run
    self.formulation.poststep(t, dt)
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Implicit.py", line 245, in poststep
    Formulation.poststep(self, t, dt)
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/problems/Formulation.py", line 281, in poststep
    integrator.poststep(t, dt, self.fields)
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/feassemble/Integrator.py", line 102, in poststep
    self.updateStateVars(t, fields)
  File "/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/lib/python2.7/site-packages/pylith/feassemble/feassemble.py", line 421, in updateStateVars
    def updateStateVars(self, *args): return _feassemble.IntegratorElasticity_updateStateVars(self, *args)
RuntimeError: Infeasible stress state - cannot project back to yield surface.
  alphaYield:       0.23094
  alphaFlow:        0.23094
  beta:             2e-07
  trialMeanStress:  -6.73175e-06
  stressInvar2:     1.78544e-06
  yieldFunction:    -3.07845e-06
  feasibleFunction: -3.07845e-06

Fatal error. Calling MPI_Abort() to abort PyLith application.
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/bin/nemesis: mpirun: exit 255
/home/class239/software/pylith/pylith-1.9.0-linux-x86_64/bin/pylith: /home/class239/software/pylith/pylith-1.9.0-linux-x86_64/bin/nemesis: exit 1
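
As a quick sanity check on the numbers in the error message: the logged values are consistent with a Drucker-Prager yield function of the form f = 3*alphaYield*trialMeanStress + stressInvar2 - beta, where stressInvar2 is evidently already the square root of the second deviatoric stress invariant. This is a sketch inferred from the printed values, not taken from PyLith's source:

```python
# Values copied from the "Infeasible stress state" error above.
alpha_yield       = 0.23094
beta              = 2e-07
trial_mean_stress = -6.73175e-06   # mean (volumetric) trial stress
stress_invar2     = 1.78544e-06    # apparently sqrt(J2') of the trial deviatoric stress

# Assumed yield-function form (inferred from the logged numbers):
yield_function = 3.0 * alpha_yield * trial_mean_stress + stress_invar2 - beta
print(yield_function)  # ≈ -3.07845e-06, matching the logged yieldFunction
```

Since alphaFlow equals alphaYield here, feasibleFunction evaluates to the same value, which also matches the log; the yield function is negative (elastic regime), yet the projection still reports an infeasible state.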
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pylithapp.cfg
Type: application/octet-stream
Size: 7222 bytes
Desc: not available
URL: <http://geodynamics.org/pipermail/cig-short/attachments/20131122/7e6d739f/attachment-0003.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: plastic.cfg
Type: application/octet-stream
Size: 1608 bytes
Desc: not available
URL: <http://geodynamics.org/pipermail/cig-short/attachments/20131122/7e6d739f/attachment-0004.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: elastic.cfg
Type: application/octet-stream
Size: 1384 bytes
Desc: not available
URL: <http://geodynamics.org/pipermail/cig-short/attachments/20131122/7e6d739f/attachment-0005.obj>


More information about the CIG-SHORT mailing list