[CIG-SHORT] Pylith-running mpi on cluster
Niloufar Abolfathian
niloufar.abolfathian at gmail.com
Wed Feb 7 19:29:44 PST 2018
Hi, thanks for your replies. Here are my answers to your questions.
1. What size of problem are you running?
I am running a quasi-static model to simulate a vertically dipping
strike-slip fault with static friction that is loaded by tectonic forces.
The boundary conditions include a far-field velocity of 1 cm/yr and an
initial displacement of 0.1 m applied normal to the fault surface to
maintain a compressive stress on the fault. I want to run this simple model
for thousands of years. The first issue is that the model gives a run-time
error after ~1800 years. The second problem is that each run takes more than
two days! That is why I am trying to use multiple cores, hoping it will run
faster. From Matt's link, I understand that I should not expect the program
to run faster with multiple cores on my own Mac, but I have also tried 24
nodes on the cluster and it took the same time as on my Mac.
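For reference, here is roughly how I launched the runs (the .cfg filename and
process count below are placeholders; as I understand it, PyLith's --nodes
flag sets the total number of MPI processes its launcher requests):

```shell
# Baseline serial run on my Mac:
pylith mymodel.cfg

# Parallel run on the cluster; --nodes sets the total number of
# MPI processes that PyLith's launcher starts via mpirun.
pylith mymodel.cfg --nodes=24
```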
2. What solver settings are you using?
pylith.problems.SolverNonlinear
3. Is this a linear or nonlinear problem?
A nonlinear problem.
4. Is this a 2D or 3D problem?
A 3D problem.
5. What does the run log show? This will include convergence information
and a PETSc summary of calls, etc.
I did not save the PETSc summary for those runs, so I made a new run for only
200 years; its summary is attached as a text file. Here is my PETSc
configuration:
# Set the solver options.
[pylithapp.petsc]
malloc_dump =
# Preconditioner settings.
pc_type = asm
sub_pc_factor_shift_type = nonzero
# Convergence parameters.
ksp_rtol = 1.0e-8
ksp_atol = 1.0e-12
ksp_max_it = 500
ksp_gmres_restart = 50
# Linear solver monitoring options.
ksp_monitor = true
#ksp_view = true
ksp_converged_reason = true
ksp_error_if_not_converged = true
# Nonlinear solver monitoring options.
snes_rtol = 1.0e-8
snes_atol = 1.0e-12
snes_max_it = 100
snes_monitor = true
snes_linesearch_monitor = true
#snes_view = true
snes_converged_reason = true
snes_error_if_not_converged = true
I hope this information helps. Please let me know if you need any other
information.
Thanks,
Niloufar
On Wed, Feb 7, 2018 at 4:01 AM, Matthew Knepley <knepley at rice.edu> wrote:
> On Wed, Feb 7, 2018 at 2:24 PM, Charles Williams <willic3 at gmail.com>
> wrote:
>
>> Dear Niloufar,
>>
>> It is hard to diagnose your problem without more information.
>> Information that would be helpful includes:
>>
>> 1. What size of problem are you running?
>> 2. What solver settings are you using?
>> 3. Is this a linear or nonlinear problem?
>> 4. Is this a 2D or 3D problem?
>> 5. What does the run log show? This will include convergence
>> information and a PETSc summary of calls, etc.
>>
>> There are probably other things it would be good to know, but this should
>> get us started.
>>
>
> In addition to the points Charles makes, it is very useful to understand
> how performance is affected by architecture.
> The advantages of multiple cores are very often oversold by vendors. Here
> is a useful reference:
>
> http://www.mcs.anl.gov/petsc/documentation/faq.html#computers
>
> I recommend running the streams program, which can be found in the PETSc
> installation.
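> For example, from the top of the PETSc source tree (the exact invocation
> may differ between PETSc versions, so check your installation):

```shell
# Build and run the MPI STREAMS memory-bandwidth benchmark on up to
# 24 processes; if the total bandwidth plateaus after a few processes,
# memory-bandwidth-bound sparse solvers will stop speeding up there too.
cd $PETSC_DIR
make streams NPMAX=24
```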
>
> Thanks,
>
> Matt
>
>
>>
>> Cheers,
>> Charles
>>
>>
>> On 7/02/2018, at 1:06 PM, Niloufar Abolfathian <
>> niloufar.abolfathian at gmail.com> wrote:
>>
>> Hi,
>>
>> I am trying to run my code on the cluster but I have not gotten any
>> improvements when using multiple cores.
>>
>> What I have tried:
>>
>> Downloaded the binaries for both Mac and Linux. Single core vs. multiple
>> cores (2 on the Mac and 24 on Linux) takes the same amount of
>> time.
>>
>> Compiled from source: no speedup with either shared memory or MPI, even
>> though the correct number of mpinemesis processes shows up on multiple nodes.
>>
>> I would appreciate any help with running MPI on the cluster.
>>
>> Thanks,
>> Niloufar
>> _______________________________________________
>> CIG-SHORT mailing list
>> CIG-SHORT at geodynamics.org
>> http://lists.geodynamics.org/cgi-bin/mailman/listinfo/cig-short
>>
>>
>>
>> Charles Williams | Geodynamic Modeler
>> GNS Science | Te Pū Ao
>> 1 Fairway Drive, Avalon 5010, PO Box 30368, Lower Hutt 5040, New Zealand
>> Ph 0064-4-570-4566 | Mob 0064-22-350-7326 | Fax 0064-4-570-4600
>> http://www.gns.cri.nz/ | Email: C.Williams at gns.cri.nz
>>
>>
>
>
-------------- next part --------------
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
/Users/Niloufar/Google Drive/USC_PhD/USC_Summer2017/pylith/pylith-2.2.1rc1-darwin-10.11.6/bin/mpinemesis on a arch-pylith named Niloufars-MacBook-Pro-3.local with 1 processor, by Niloufar Wed Feb 7 19:20:07 2018
Using Petsc Development GIT revision: v3.7.6-4562-g9402bf8 GIT Date: 2017-06-24 14:12:39 -0500
Max Max/Min Avg Total
Time (sec): 4.030e+02 1.00000 4.030e+02
Objects: 6.757e+04 1.00000 6.757e+04
Flop: 6.675e+11 1.00000 6.675e+11 6.675e+11
Flop/sec: 1.656e+09 1.00000 1.656e+09 1.656e+09
Memory: 2.735e+09 1.00000 2.735e+09
MPI Messages: 4.545e+02 1.00000 4.545e+02 4.545e+02
MPI Message Lengths: 9.193e+08 1.00000 2.023e+06 9.193e+08
MPI Reductions: 0.000e+00 0.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flop
and VecAXPY() for complex vectors of length N --> 8N flop
Summary of Stages: ----- Time ------ ----- Flop ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 1.1543e+00 0.3% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
1: Meshing: 2.5443e+00 0.6% 4.8057e+05 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
2: Setup: 1.5769e+01 3.9% 4.5848e+08 0.1% 7.000e+00 1.5% 2.469e+02 0.0% 0.000e+00 0.0%
3: Reform Jacobian: 9.9000e+00 2.5% 1.3096e+10 2.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
4: Reform Residual: 6.5109e+00 1.6% 6.2358e+09 0.9% 2.350e+01 5.2% 1.864e+05 9.2% 0.000e+00 0.0%
5: Solve: 3.6296e+02 90.1% 6.4509e+11 96.6% 4.240e+02 93.3% 1.836e+06 90.8% 0.000e+00 0.0%
6: Prestep: 6.7638e-02 0.0% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
7: Step: 1.5462e-01 0.0% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
8: Poststep: 3.8809e+00 1.0% 2.5778e+09 0.4% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
9: Finalize: 5.8307e-02 0.0% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flop: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flop in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flop --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
--- Event Stage 1: Meshing
MeIm create 1 1.0 2.5382e+00 1.0 4.81e+05 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 100100 0 0 0 0
MeIm adjTopo 1 1.0 7.1196e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 28 0 0 0 0 0
VecScale 1 1.0 3.7001e-04 1.0 4.81e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0100 0 0 0 1299
VecSet 3 1.0 5.3721e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DMPlexInterp 1 1.0 1.2991e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 51 0 0 0 0 0
DMPlexStratify 5 1.0 4.5603e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 18 0 0 0 0 0
Refin refine 1 1.0 3.4580e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
--- Event Stage 2: Setup
BuildTwoSided 2 1.0 1.8327e-04 1.0 0.00e+00 0.0 1.0e+00 4.0e+00 0.0e+00 0 0 0 0 0 0 0 14 0 0 0
VecView 14 1.0 1.1305e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
VecMDot 3 1.0 8.0598e-03 1.0 1.13e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 2 0 0 0 1401
VecNorm 6 1.0 2.1101e-03 1.0 5.65e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 1 0 0 0 2676
VecScale 16 1.0 6.6108e-03 1.0 6.21e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 1 0 0 0 939
VecCopy 10 1.0 4.2552e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 142 1.0 1.0306e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
VecMAXPY 3 1.0 6.7083e-03 1.0 1.13e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 2 0 0 0 1684
VecAssemblyBegin 4 1.0 6.5940e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 4 1.0 9.5170e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 6 1.0 5.6601e-03 1.0 8.47e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 2 0 0 0 1497
MatAssemblyBegin 1 1.0 4.0000e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 1 1.0 5.3788e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 1 1.0 2.0291e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DMPlexStratify 2 1.0 2.9484e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DMPlexPrealloc 1 1.0 1.1117e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 71 0 0 0 0 0
SFSetGraph 17 1.0 8.9686e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
SFBcastBegin 2 1.0 1.7391e-05 1.0 0.00e+00 0.0 2.0e+00 2.2e+04 0.0e+00 0 0 0 0 0 0 0 29 40 0 0
SFBcastEnd 2 1.0 1.3004e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin 2 1.0 2.7882e-04 1.0 0.00e+00 0.0 5.0e+00 1.3e+04 0.0e+00 0 0 1 0 0 0 0 71 60 0 0
SFReduceEnd 2 1.0 2.4699e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
TSIm preinit 1 1.0 1.2674e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
TSIm verify 1 1.0 2.0687e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
DtUn preinit 1 1.0 4.8810e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DtUn verify 1 1.0 6.0880e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DtUn init 1 1.0 4.1944e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIm verify 3 1.0 2.0549e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
ElIm init 3 1.0 2.6480e+00 1.0 4.26e+08 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 17 93 0 0 0 161
MaEl3D verify 3 1.0 1.1722e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
OutM init 6 1.0 5.8060e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
OutM open 10 1.0 1.5957e+00 1.0 3.38e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 10 1 0 0 0 2
OutM writeInfo 6 1.0 7.6441e-01 1.0 2.30e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 5 5 0 0 0 30
DiBC verify 3 1.0 5.3668e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DiBC init 3 1.0 8.0582e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
CoDy verify 1 1.0 9.2391e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
CoDy init 1 1.0 2.8816e-02 1.0 4.42e+05 1.0 7.0e+00 1.6e+04 0.0e+00 0 0 2 0 0 0 0100100 0 15
--- Event Stage 3: Reform Jacobian
MatAssemblyBegin 1 1.0 7.5990e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 1 1.0 2.6249e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 1 1.0 1.8877e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIJ setup 3 1.0 4.2196e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 43 0 0 0 0 0
ElIJ compute 3 1.0 5.6263e+00 1.0 1.31e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 57100 0 0 0 2328
FaIJ setup 1 1.0 7.1750e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIJ compute 1 1.0 1.2802e-03 1.0 3.37e+03 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3
--- Event Stage 4: Reform Residual
BuildTwoSided 1 1.0 4.0084e-05 1.0 0.00e+00 0.0 5.0e-01 4.0e+00 0.0e+00 0 0 0 0 0 0 0 2 0 0 0
VecSet 22 1.0 5.4811e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph 1 1.0 1.1778e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin 11 1.0 1.7974e-02 1.0 0.00e+00 0.0 1.1e+01 3.8e+06 0.0e+00 0 0 2 5 0 0 0 47 49 0 0
SFBcastEnd 11 1.0 1.2349e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin 11 1.0 2.0268e-02 1.0 0.00e+00 0.0 1.2e+01 3.5e+06 0.0e+00 0 0 3 5 0 0 0 53 51 0 0
SFReduceEnd 11 1.0 1.4183e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIR setup 33 1.0 8.4337e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 13 0 0 0 0 0
ElIR compute 33 1.0 5.3070e+00 1.0 6.23e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 82100 0 0 0 1174
FaIR setup 11 1.0 6.9916e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIR compute 11 1.0 7.8808e-04 1.0 1.48e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 188
--- Event Stage 5: Solve
BuildTwoSided 4 1.0 1.5944e-04 1.0 0.00e+00 0.0 2.0e+00 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecDot 11 1.0 8.3608e-03 1.0 1.04e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1238
VecMDot 2371 1.0 2.1345e+01 1.0 5.42e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 8 0 0 0 6 8 0 0 0 2540
VecNorm 2536 1.0 7.2948e-01 1.0 2.33e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3187
VecScale 2459 1.0 1.6161e+00 1.0 1.16e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 716
VecCopy 143 1.0 5.8122e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 5535 1.0 1.3702e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 132 1.0 1.3306e-01 1.0 1.25e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 939
VecWAXPY 11 1.0 9.4807e-03 1.0 5.18e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 546
VecMAXPY 2426 1.0 2.7285e+01 1.0 5.65e+10 1.0 0.0e+00 0.0e+00 0.0e+00 7 8 0 0 0 8 9 0 0 0 2069
VecScatterBegin 4984 1.0 4.0724e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecReduceArith 22 1.0 1.4248e-02 1.0 2.07e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1453
VecReduceComm 11 1.0 8.0480e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 2492 1.0 2.3032e+00 1.0 3.43e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 1487
MatMult 2426 1.0 1.0767e+02 1.0 1.76e+11 1.0 0.0e+00 0.0e+00 0.0e+00 27 26 0 0 0 30 27 0 0 0 1634
MatSolve 2492 1.0 9.8050e+01 1.0 1.76e+11 1.0 0.0e+00 0.0e+00 0.0e+00 24 26 0 0 0 27 27 0 0 0 1794
MatLUFactorNum 77 1.0 8.9635e+00 1.0 1.62e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 3 0 0 0 1806
MatILUFactorSym 2 1.0 5.1486e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 33155 1.0 3.8566e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 33155 1.0 6.1064e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetValues 33000 1.0 5.0141e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 2 1.0 4.7010e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCreateSubMats 143 1.0 1.1689e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0
MatGetOrdering 2 1.0 9.9418e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatIncreaseOvrlp 2 1.0 8.3985e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 44 1.0 2.3071e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DMPlexPrealloc 1 1.0 6.5604e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph 3 1.0 1.3583e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin 286 1.0 2.7548e-01 1.0 0.00e+00 0.0 2.9e+02 2.0e+06 0.0e+00 0 0 64 63 0 0 0 68 70 0 0
SFBcastEnd 286 1.0 1.8289e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin 132 1.0 1.3772e-01 1.0 0.00e+00 0.0 1.4e+02 1.9e+06 0.0e+00 0 0 30 27 0 0 0 32 30 0 0
SFReduceEnd 132 1.0 9.2549e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIR setup 99 1.0 1.1048e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIR compute 99 1.0 1.5481e+01 1.0 1.87e+10 1.0 0.0e+00 0.0e+00 0.0e+00 4 3 0 0 0 4 3 0 0 0 1207
ElIJ setup 33 1.0 4.7824e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIJ compute 33 1.0 5.9398e+01 1.0 1.44e+11 1.0 0.0e+00 0.0e+00 0.0e+00 15 22 0 0 0 16 22 0 0 0 2425
FaIR setup 33 1.0 2.4487e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIR compute 33 1.0 3.0646e-03 1.0 4.44e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 145
FaIJ setup 11 1.0 8.8675e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIJ compute 11 1.0 1.1670e-02 1.0 3.70e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3
SoNl solve 11 1.0 3.6287e+02 1.0 6.45e+11 1.0 4.1e+02 1.9e+06 0.0e+00 90 97 91 86 0 100100 97 95 0 1778
SoNl scatter 11 1.0 3.6932e-02 1.0 0.00e+00 0.0 1.1e+01 3.8e+06 0.0e+00 0 0 2 5 0 0 0 3 5 0 0
SNESSolve 11 1.0 3.6286e+02 1.0 6.45e+11 1.0 4.1e+02 1.9e+06 0.0e+00 90 97 91 86 0 100100 97 95 0 1778
SNESFunctionEval 33 1.0 2.4471e+01 1.0 1.88e+10 1.0 4.0e+02 1.9e+06 0.0e+00 6 3 88 82 0 7 3 95 90 0 769
SNESJacobianEval 11 1.0 6.0496e+01 1.0 1.44e+11 1.0 1.1e+01 3.8e+06 0.0e+00 15 22 2 5 0 17 22 3 5 0 2381
SNESLineSearch 11 1.0 1.6848e+01 1.0 1.34e+10 1.0 2.6e+02 1.9e+06 0.0e+00 4 2 58 54 0 5 2 62 60 0 795
KSPGMRESOrthog 2371 1.0 4.7530e+01 1.0 1.08e+11 1.0 0.0e+00 0.0e+00 0.0e+00 12 16 0 0 0 13 17 0 0 0 2281
KSPSetUp 154 1.0 1.0987e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 77 1.0 2.7743e+02 1.0 4.81e+11 1.0 0.0e+00 0.0e+00 0.0e+00 69 72 0 0 0 76 75 0 0 0 1735
PCSetUp 154 1.0 1.5414e+01 1.0 1.62e+10 1.0 0.0e+00 0.0e+00 0.0e+00 4 2 0 0 0 4 3 0 0 0 1050
PCSetUpOnBlocks 77 1.0 9.4891e+00 1.0 1.62e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 3 3 0 0 0 1706
PCApply 2492 1.0 1.0340e+02 1.0 1.76e+11 1.0 0.0e+00 0.0e+00 0.0e+00 26 26 0 0 0 28 27 0 0 0 1701
--- Event Stage 6: Prestep
VecSet 11 1.0 2.3280e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 3 0 0 0 0 0
TSIm timestep 11 1.0 9.3724e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
--- Event Stage 7: Step
--- Event Stage 8: Poststep
VecView 34 1.0 2.4672e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 6 0 0 0 0 0
VecCopy 33 1.0 9.3598e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 46 1.0 1.8402e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 5 0 0 0 0 0
VecAXPY 11 1.0 1.3210e-02 1.0 1.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 803
VecAssemblyBegin 34 1.0 5.0279e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 34 1.0 5.3507e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph 5 1.0 4.0706e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
TSIm poststep 11 1.0 4.7453e-03 1.0 2.22e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 47
TSIm write 11 1.0 3.0060e+00 1.0 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 77100 0 0 0 854
ElIm poststep 33 1.0 4.1326e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIm write 33 1.0 2.8444e+00 1.0 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 73100 0 0 0 902
OutM writeData 66 1.0 3.8507e+00 1.0 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 99100 0 0 0 667
CoDy poststep 11 1.0 3.2440e-03 1.0 2.22e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 68
CoDy write 11 1.0 1.6090e-01 1.0 1.36e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 4 0 0 0 0 1
--- Event Stage 9: Finalize
TSIm finalize 1 1.0 5.8271e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 100 0 0 0 0 0
OutM close 6 1.0 2.5403e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 44 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Container 0 70 39760 0.
Vector 0 162 553667880 0.
Vector Scatter 0 2 1312 0.
Matrix 0 9 1337681024 0.
Matrix Null Space 0 1 688 0.
Distributed Mesh 0 116 538056 0.
GraphPartitioner 0 3 1812 0.
Index Set 0 58 145122728 0.
IS L to G Mapping 0 2 1936440 0.
Section 0 180 125280 0.
Star Forest Bipartite Graph 0 121 96224 0.
Discrete System 0 116 103936 0.
SNES 0 1 1356 0.
SNESLineSearch 0 1 1000 0.
DMSNES 0 1 664 0.
Krylov Solver 0 4 66432 0.
DMKSP interface 0 1 648 0.
Preconditioner 0 4 4000 0.
--- Event Stage 1: Meshing
Container 1 0 0 0.
Vector 3 2 3848864 0.
Matrix 3 2 5536 0.
Distributed Mesh 9 7 34616 0.
GraphPartitioner 6 5 3020 0.
Index Set 143 139 15355124 0.
Section 22 17 11832 0.
Star Forest Bipartite Graph 18 15 11880 0.
Discrete System 9 7 6272 0.
Viewer 1 0 0 0.
--- Event Stage 2: Setup
Container 78 25 14200 0.
Vector 122 44 110251336 0.
Matrix 3 0 0 0.
Matrix Null Space 1 0 0 0.
Distributed Mesh 109 20 92480 0.
GraphPartitioner 2 0 0 0.
Index Set 80 41 887296 0.
IS L to G Mapping 1 0 0 0.
Section 173 45 31320 0.
Star Forest Bipartite Graph 220 128 101376 0.
Discrete System 109 20 17920 0.
Viewer 10 4 3104 0.
SNES 1 0 0 0.
SNESLineSearch 1 0 0 0.
DMSNES 1 0 0 0.
Krylov Solver 1 0 0 0.
Preconditioner 1 0 0 0.
--- Event Stage 3: Reform Jacobian
Index Set 5 0 0 0.
Section 11 6 4176 0.
--- Event Stage 4: Reform Residual
Index Set 1 0 0 0.
Section 1 0 0 0.
--- Event Stage 5: Solve
Container 7 0 0 0.
Vector 78 2 3780320 0.
Vector Scatter 2 0 0 0.
Matrix 33005 33000 167772000 0.
Distributed Mesh 11 0 0 0.
Index Set 33017 33008 28751144 0.
IS L to G Mapping 1 0 0 0.
Section 99 75 52200 0.
Star Forest Bipartite Graph 24 12 9504 0.
Discrete System 11 0 0 0.
Krylov Solver 3 0 0 0.
DMKSP interface 1 0 0 0.
Preconditioner 3 0 0 0.
--- Event Stage 6: Prestep
--- Event Stage 7: Step
--- Event Stage 8: Poststep
Container 13 4 2272 0.
Vector 24 11 11214176 0.
Distributed Mesh 22 8 36992 0.
Index Set 1 1 102392 0.
Section 33 16 11136 0.
Star Forest Bipartite Graph 44 30 23760 0.
Discrete System 22 8 7168 0.
--- Event Stage 9: Finalize
Vector 0 6 9936 0.
Viewer 0 6 4656 0.
========================================================================================================================
Average time to get PetscTime(): 3.73009e-08
#PETSc Option Table entries:
-friction_ksp_converged_reason
-friction_ksp_gmres_restart 30
-friction_ksp_max_it 25
-friction_pc_type asm
-friction_sub_pc_factor_shift_type nonzero
-ksp_atol 1.0e-12
-ksp_converged_reason
-ksp_error_if_not_converged
-ksp_gmres_restart 50
-ksp_max_it 500
-ksp_monitor
-ksp_rtol 1.0e-8
-log_summary
-log_view
-malloc_dump
-pc_type asm
-snes_atol 1.0e-12
-snes_converged_reason
-snes_error_if_not_converged
-snes_linesearch_monitor
-snes_max_it 100
-snes_monitor
-snes_rtol 1.0e-8
-sub_pc_factor_shift_type nonzero
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/Volumes/Tools/unix/pylith-binary/dist --with-c2html=0 --with-x=0 --with-clanguage=C --with-mpicompilers=1 --with-debugging=0 --with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 --download-chaco=1 --download-ml --with-fc=0 --with-hwloc=0 --with-ssl=0 --with-x=0 --with-c2html=0 --with-lgrind=0 --with-blas-lib=/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current/libBLAS.dylib --with-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current/libLAPACK.dylib --with-hdf5=1 --with-hdf5-dir=/Volumes/Tools/unix/pylith-binary/dist --with-zlib=1 --LIBS=-lz --with-fc=0 CPPFLAGS="-I/Volumes/Tools/unix/pylith-binary/dist/include " LDFLAGS="-L/Volumes/Tools/unix/pylith-binary/dist/lib " CFLAGS="-g -O2" CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS= PETSC_DIR=/Volumes/Tools/unix/pylith-binary/build/petsc-pylith PETSC_ARCH=arch-pylith
-----------------------------------------
Libraries compiled on Sun Jun 25 13:03:16 2017 on evpn-mp-10-212-56-101.wr.usgs.gov
Machine characteristics: Darwin-15.6.0-x86_64-i386-64bit
Using PETSc directory: /Volumes/Tools/unix/pylith-binary/build/petsc-pylith
Using PETSc arch: arch-pylith
-----------------------------------------
Using C compiler: mpicc -g -O2 -I/Volumes/Tools/unix/pylith-binary/dist/include ${COPTFLAGS} ${CFLAGS}
-----------------------------------------
Using include paths: -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/include -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/include -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/include -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/include -I/Volumes/Tools/unix/pylith-binary/dist/include
-----------------------------------------
Using C linker: mpicc
Using libraries: -Wl,-rpath,/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/lib -L/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/lib -lpetsc -Wl,-rpath,/Volumes/Tools/unix/pylith-binary/dist/lib -L/Volumes/Tools/unix/pylith-binary/dist/lib -Wl,-rpath,/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current -L/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/8.0.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/8.0.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.0.0/lib/darwin -lclang_rt.osx -lml -lLAPACK -lBLAS -lhdf5_hl -lhdf5 -lchaco -lclang_rt.osx -lmpicxx -lc++ -ldl -lz -lmpi -lpmpi -lSystem -lclang_rt.osx -ldl -lz
-----------------------------------------
WARNING: -log_summary is being deprecated; switch to -log_view
TSIm preinit 1 1.0 1.2674e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
TSIm verify 1 1.0 2.0687e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
DtUn preinit 1 1.0 4.8810e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DtUn verify 1 1.0 6.0880e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DtUn init 1 1.0 4.1944e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIm verify 3 1.0 2.0549e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
ElIm init 3 1.0 2.6480e+00 1.0 4.26e+08 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 17 93 0 0 0 161
MaEl3D verify 3 1.0 1.1722e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
OutM init 6 1.0 5.8060e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
OutM open 10 1.0 1.5957e+00 1.0 3.38e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 10 1 0 0 0 2
OutM writeInfo 6 1.0 7.6441e-01 1.0 2.30e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 5 5 0 0 0 30
DiBC verify 3 1.0 5.3668e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DiBC init 3 1.0 8.0582e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
CoDy verify 1 1.0 9.2391e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
CoDy init 1 1.0 2.8816e-02 1.0 4.42e+05 1.0 7.0e+00 1.6e+04 0.0e+00 0 0 2 0 0 0 0100100 0 15
--- Event Stage 3: Reform Jacobian
MatAssemblyBegin 1 1.0 7.5990e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 1 1.0 2.6249e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 1 1.0 1.8877e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIJ setup 3 1.0 4.2196e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 43 0 0 0 0 0
ElIJ compute 3 1.0 5.6263e+00 1.0 1.31e+10 1.0 0.0e+00 0.0e+00 0.0e+00 1 2 0 0 0 57100 0 0 0 2328
FaIJ setup 1 1.0 7.1750e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIJ compute 1 1.0 1.2802e-03 1.0 3.37e+03 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3
--- Event Stage 4: Reform Residual
BuildTwoSided 1 1.0 4.0084e-05 1.0 0.00e+00 0.0 5.0e-01 4.0e+00 0.0e+00 0 0 0 0 0 0 0 2 0 0 0
VecSet 22 1.0 5.4811e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph 1 1.0 1.1778e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin 11 1.0 1.7974e-02 1.0 0.00e+00 0.0 1.1e+01 3.8e+06 0.0e+00 0 0 2 5 0 0 0 47 49 0 0
SFBcastEnd 11 1.0 1.2349e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin 11 1.0 2.0268e-02 1.0 0.00e+00 0.0 1.2e+01 3.5e+06 0.0e+00 0 0 3 5 0 0 0 53 51 0 0
SFReduceEnd 11 1.0 1.4183e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIR setup 33 1.0 8.4337e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 13 0 0 0 0 0
ElIR compute 33 1.0 5.3070e+00 1.0 6.23e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 82100 0 0 0 1174
FaIR setup 11 1.0 6.9916e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIR compute 11 1.0 7.8808e-04 1.0 1.48e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 188
--- Event Stage 5: Solve
BuildTwoSided 4 1.0 1.5944e-04 1.0 0.00e+00 0.0 2.0e+00 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecDot 11 1.0 8.3608e-03 1.0 1.04e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1238
VecMDot 2371 1.0 2.1345e+01 1.0 5.42e+10 1.0 0.0e+00 0.0e+00 0.0e+00 5 8 0 0 0 6 8 0 0 0 2540
VecNorm 2536 1.0 7.2948e-01 1.0 2.33e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3187
VecScale 2459 1.0 1.6161e+00 1.0 1.16e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 716
VecCopy 143 1.0 5.8122e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 5535 1.0 1.3702e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 132 1.0 1.3306e-01 1.0 1.25e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 939
VecWAXPY 11 1.0 9.4807e-03 1.0 5.18e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 546
VecMAXPY 2426 1.0 2.7285e+01 1.0 5.65e+10 1.0 0.0e+00 0.0e+00 0.0e+00 7 8 0 0 0 8 9 0 0 0 2069
VecScatterBegin 4984 1.0 4.0724e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecReduceArith 22 1.0 1.4248e-02 1.0 2.07e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1453
VecReduceComm 11 1.0 8.0480e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 2492 1.0 2.3032e+00 1.0 3.43e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 1487
MatMult 2426 1.0 1.0767e+02 1.0 1.76e+11 1.0 0.0e+00 0.0e+00 0.0e+00 27 26 0 0 0 30 27 0 0 0 1634
MatSolve 2492 1.0 9.8050e+01 1.0 1.76e+11 1.0 0.0e+00 0.0e+00 0.0e+00 24 26 0 0 0 27 27 0 0 0 1794
MatLUFactorNum 77 1.0 8.9635e+00 1.0 1.62e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 3 0 0 0 1806
MatILUFactorSym 2 1.0 5.1486e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyBegin 33155 1.0 3.8566e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 33155 1.0 6.1064e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetValues 33000 1.0 5.0141e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 2 1.0 4.7010e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCreateSubMats 143 1.0 1.1689e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0
MatGetOrdering 2 1.0 9.9418e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatIncreaseOvrlp 2 1.0 8.3985e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 44 1.0 2.3071e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DMPlexPrealloc 1 1.0 6.5604e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph 3 1.0 1.3583e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin 286 1.0 2.7548e-01 1.0 0.00e+00 0.0 2.9e+02 2.0e+06 0.0e+00 0 0 64 63 0 0 0 68 70 0 0
SFBcastEnd 286 1.0 1.8289e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin 132 1.0 1.3772e-01 1.0 0.00e+00 0.0 1.4e+02 1.9e+06 0.0e+00 0 0 30 27 0 0 0 32 30 0 0
SFReduceEnd 132 1.0 9.2549e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIR setup 99 1.0 1.1048e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIR compute 99 1.0 1.5481e+01 1.0 1.87e+10 1.0 0.0e+00 0.0e+00 0.0e+00 4 3 0 0 0 4 3 0 0 0 1207
ElIJ setup 33 1.0 4.7824e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIJ compute 33 1.0 5.9398e+01 1.0 1.44e+11 1.0 0.0e+00 0.0e+00 0.0e+00 15 22 0 0 0 16 22 0 0 0 2425
FaIR setup 33 1.0 2.4487e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIR compute 33 1.0 3.0646e-03 1.0 4.44e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 145
FaIJ setup 11 1.0 8.8675e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
FaIJ compute 11 1.0 1.1670e-02 1.0 3.70e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3
SoNl solve 11 1.0 3.6287e+02 1.0 6.45e+11 1.0 4.1e+02 1.9e+06 0.0e+00 90 97 91 86 0 100100 97 95 0 1778
SoNl scatter 11 1.0 3.6932e-02 1.0 0.00e+00 0.0 1.1e+01 3.8e+06 0.0e+00 0 0 2 5 0 0 0 3 5 0 0
SNESSolve 11 1.0 3.6286e+02 1.0 6.45e+11 1.0 4.1e+02 1.9e+06 0.0e+00 90 97 91 86 0 100100 97 95 0 1778
SNESFunctionEval 33 1.0 2.4471e+01 1.0 1.88e+10 1.0 4.0e+02 1.9e+06 0.0e+00 6 3 88 82 0 7 3 95 90 0 769
SNESJacobianEval 11 1.0 6.0496e+01 1.0 1.44e+11 1.0 1.1e+01 3.8e+06 0.0e+00 15 22 2 5 0 17 22 3 5 0 2381
SNESLineSearch 11 1.0 1.6848e+01 1.0 1.34e+10 1.0 2.6e+02 1.9e+06 0.0e+00 4 2 58 54 0 5 2 62 60 0 795
KSPGMRESOrthog 2371 1.0 4.7530e+01 1.0 1.08e+11 1.0 0.0e+00 0.0e+00 0.0e+00 12 16 0 0 0 13 17 0 0 0 2281
KSPSetUp 154 1.0 1.0987e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 77 1.0 2.7743e+02 1.0 4.81e+11 1.0 0.0e+00 0.0e+00 0.0e+00 69 72 0 0 0 76 75 0 0 0 1735
PCSetUp 154 1.0 1.5414e+01 1.0 1.62e+10 1.0 0.0e+00 0.0e+00 0.0e+00 4 2 0 0 0 4 3 0 0 0 1050
PCSetUpOnBlocks 77 1.0 9.4891e+00 1.0 1.62e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 3 3 0 0 0 1706
PCApply 2492 1.0 1.0340e+02 1.0 1.76e+11 1.0 0.0e+00 0.0e+00 0.0e+00 26 26 0 0 0 28 27 0 0 0 1701
--- Event Stage 6: Prestep
VecSet 11 1.0 2.3280e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 3 0 0 0 0 0
TSIm timestep 11 1.0 9.3724e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
--- Event Stage 7: Step
--- Event Stage 8: Poststep
VecView 34 1.0 2.4672e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 6 0 0 0 0 0
VecCopy 33 1.0 9.3598e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 46 1.0 1.8402e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 5 0 0 0 0 0
VecAXPY 11 1.0 1.3210e-02 1.0 1.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 803
VecAssemblyBegin 34 1.0 5.0279e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 34 1.0 5.3507e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetGraph 5 1.0 4.0706e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
TSIm poststep 11 1.0 4.7453e-03 1.0 2.22e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 47
TSIm write 11 1.0 3.0060e+00 1.0 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 77100 0 0 0 854
ElIm poststep 33 1.0 4.1326e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
ElIm write 33 1.0 2.8444e+00 1.0 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 73100 0 0 0 902
OutM writeData 66 1.0 3.8507e+00 1.0 2.57e+09 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 99100 0 0 0 667
CoDy poststep 11 1.0 3.2440e-03 1.0 2.22e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 68
CoDy write 11 1.0 1.6090e-01 1.0 1.36e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 4 0 0 0 0 1
--- Event Stage 9: Finalize
TSIm finalize 1 1.0 5.8271e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 100 0 0 0 0 0
OutM close 6 1.0 2.5403e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 44 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------
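[Editor's note: the event tables above can be ranked programmatically to find hotspots such as MatMult and MatSolve. A minimal sketch below parses one event line; `parse_event` is a hypothetical helper, not part of PyLith or PETSc, and it assumes a single-token event name (PyLith's multi-word events like "ElIJ compute" would need extra handling).]

```python
# Hypothetical helper (not a PyLith/PETSc API): split one -log_view event line
# into its leading columns so events can be sorted by time or %T.
def parse_event(line):
    fields = line.split()
    # Column order: name, count(max), count ratio, time(max), time ratio,
    # flop(max), flop ratio, Mess, Avg len, Reduct, then %T in --- Global ---.
    return {
        "event": fields[0],
        "count": int(fields[1]),
        "time_max": float(fields[3]),   # max time over all processors (sec)
        "pct_time": int(fields[10]),    # %T: percent of total run time
    }

# Example line copied from the "Solve" stage table above.
sample = ("MatMult 2426 1.0 1.0767e+02 1.0 1.76e+11 1.0 "
          "0.0e+00 0.0e+00 0.0e+00 27 26 0 0 0 30 27 0 0 0 1634")
info = parse_event(sample)
print(info["event"], info["time_max"], info["pct_time"])
```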
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Container 0 70 39760 0.
Vector 0 162 553667880 0.
Vector Scatter 0 2 1312 0.
Matrix 0 9 1337681024 0.
Matrix Null Space 0 1 688 0.
Distributed Mesh 0 116 538056 0.
GraphPartitioner 0 3 1812 0.
Index Set 0 58 145122728 0.
IS L to G Mapping 0 2 1936440 0.
Section 0 180 125280 0.
Star Forest Bipartite Graph 0 121 96224 0.
Discrete System 0 116 103936 0.
SNES 0 1 1356 0.
SNESLineSearch 0 1 1000 0.
DMSNES 0 1 664 0.
Krylov Solver 0 4 66432 0.
DMKSP interface 0 1 648 0.
Preconditioner 0 4 4000 0.
--- Event Stage 1: Meshing
Container 1 0 0 0.
Vector 3 2 3848864 0.
Matrix 3 2 5536 0.
Distributed Mesh 9 7 34616 0.
GraphPartitioner 6 5 3020 0.
Index Set 143 139 15355124 0.
Section 22 17 11832 0.
Star Forest Bipartite Graph 18 15 11880 0.
Discrete System 9 7 6272 0.
Viewer 1 0 0 0.
--- Event Stage 2: Setup
Container 78 25 14200 0.
Vector 122 44 110251336 0.
Matrix 3 0 0 0.
Matrix Null Space 1 0 0 0.
Distributed Mesh 109 20 92480 0.
GraphPartitioner 2 0 0 0.
Index Set 80 41 887296 0.
IS L to G Mapping 1 0 0 0.
Section 173 45 31320 0.
Star Forest Bipartite Graph 220 128 101376 0.
Discrete System 109 20 17920 0.
Viewer 10 4 3104 0.
SNES 1 0 0 0.
SNESLineSearch 1 0 0 0.
DMSNES 1 0 0 0.
Krylov Solver 1 0 0 0.
Preconditioner 1 0 0 0.
--- Event Stage 3: Reform Jacobian
Index Set 5 0 0 0.
Section 11 6 4176 0.
--- Event Stage 4: Reform Residual
Index Set 1 0 0 0.
Section 1 0 0 0.
--- Event Stage 5: Solve
Container 7 0 0 0.
Vector 78 2 3780320 0.
Vector Scatter 2 0 0 0.
Matrix 33005 33000 167772000 0.
Distributed Mesh 11 0 0 0.
Index Set 33017 33008 28751144 0.
IS L to G Mapping 1 0 0 0.
Section 99 75 52200 0.
Star Forest Bipartite Graph 24 12 9504 0.
Discrete System 11 0 0 0.
Krylov Solver 3 0 0 0.
DMKSP interface 1 0 0 0.
Preconditioner 3 0 0 0.
--- Event Stage 6: Prestep
--- Event Stage 7: Step
--- Event Stage 8: Poststep
Container 13 4 2272 0.
Vector 24 11 11214176 0.
Distributed Mesh 22 8 36992 0.
Index Set 1 1 102392 0.
Section 33 16 11136 0.
Star Forest Bipartite Graph 44 30 23760 0.
Discrete System 22 8 7168 0.
--- Event Stage 9: Finalize
Vector 0 6 9936 0.
Viewer 0 6 4656 0.
========================================================================================================================
Average time to get PetscTime(): 3.08006e-08
#PETSc Option Table entries:
-friction_ksp_converged_reason
-friction_ksp_gmres_restart 30
-friction_ksp_max_it 25
-friction_pc_type asm
-friction_sub_pc_factor_shift_type nonzero
-ksp_atol 1.0e-12
-ksp_converged_reason
-ksp_error_if_not_converged
-ksp_gmres_restart 50
-ksp_max_it 500
-ksp_monitor
-ksp_rtol 1.0e-8
-log_summary
-log_view
-malloc_dump
-pc_type asm
-snes_atol 1.0e-12
-snes_converged_reason
-snes_error_if_not_converged
-snes_linesearch_monitor
-snes_max_it 100
-snes_monitor
-snes_rtol 1.0e-8
-sub_pc_factor_shift_type nonzero
#End of PETSc Option Table entries
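[Editor's note: the option table above includes friction-solver settings (friction_pc_type, friction_ksp_max_it, ...) that are not shown in the .cfg excerpt at the top of this post. A sketch of how such options are typically set in a PyLith parameter file, using only options that appear in the table above; section name matches the excerpt in this post:]

```ini
# Sketch only: PETSc options from the table above as PyLith .cfg settings.
[pylithapp.petsc]
log_view = true
friction_pc_type = asm
friction_sub_pc_factor_shift_type = nonzero
friction_ksp_max_it = 25
friction_ksp_gmres_restart = 30
```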
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/Volumes/Tools/unix/pylith-binary/dist --with-c2html=0 --with-x=0 --with-clanguage=C --with-mpicompilers=1 --with-debugging=0 --with-shared-libraries=1 --with-64-bit-points=1 --with-large-file-io=1 --download-chaco=1 --download-ml --with-fc=0 --with-hwloc=0 --with-ssl=0 --with-x=0 --with-c2html=0 --with-lgrind=0 --with-blas-lib=/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current/libBLAS.dylib --with-lapack-lib=/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current/libLAPACK.dylib --with-hdf5=1 --with-hdf5-dir=/Volumes/Tools/unix/pylith-binary/dist --with-zlib=1 --LIBS=-lz --with-fc=0 CPPFLAGS="-I/Volumes/Tools/unix/pylith-binary/dist/include " LDFLAGS="-L/Volumes/Tools/unix/pylith-binary/dist/lib " CFLAGS="-g -O2" CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS= PETSC_DIR=/Volumes/Tools/unix/pylith-binary/build/petsc-pylith PETSC_ARCH=arch-pylith
-----------------------------------------
Libraries compiled on Sun Jun 25 13:03:16 2017 on evpn-mp-10-212-56-101.wr.usgs.gov
Machine characteristics: Darwin-15.6.0-x86_64-i386-64bit
Using PETSc directory: /Volumes/Tools/unix/pylith-binary/build/petsc-pylith
Using PETSc arch: arch-pylith
-----------------------------------------
Using C compiler: mpicc -g -O2 -I/Volumes/Tools/unix/pylith-binary/dist/include ${COPTFLAGS} ${CFLAGS}
-----------------------------------------
Using include paths: -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/include -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/include -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/include -I/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/include -I/Volumes/Tools/unix/pylith-binary/dist/include
-----------------------------------------
Using C linker: mpicc
Using libraries: -Wl,-rpath,/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/lib -L/Volumes/Tools/unix/pylith-binary/build/petsc-pylith/arch-pylith/lib -lpetsc -Wl,-rpath,/Volumes/Tools/unix/pylith-binary/dist/lib -L/Volumes/Tools/unix/pylith-binary/dist/lib -Wl,-rpath,/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current -L/System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/8.0.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/8.0.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.0.0/lib/darwin -lclang_rt.osx -lml -lLAPACK -lBLAS -lhdf5_hl -lhdf5 -lchaco -lclang_rt.osx -lmpicxx -lc++ -ldl -lz -lmpi -lpmpi -lSystem -lclang_rt.osx -ldl -lz
-----------------------------------------