[CIG-SHORT] Pylith Running on cluster

Matthew Knepley knepley at mcs.anl.gov
Mon Dec 15 14:46:37 PST 2014


On Mon, Dec 15, 2014 at 4:32 PM, Xiao Ma <xiaoma5 at illinois.edu> wrote:

> Hi Matt,
> I have successfully run the example problem /barshearwave/quad4 with 3
> nodes, 12 cores per node. I am wondering about the "fork" warning; does
> it have something to do with the hang in my large simulation?
>
>

Again, I do not think it was hanging, just calculating. You could run a
smaller example
and extrapolate the time (residual evaluation is nearly linear).
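
If you want to put a number on it, below is a minimal back-of-the-envelope
sketch in Python. All the values in it are hypothetical placeholders --
substitute the step count, cell count, and wall time from your own short
run -- and it assumes the cost scales roughly linearly in both the number
of time steps and the number of cells, which is the approximation I mean:

    # All values here are hypothetical placeholders; measure your own run.
    small_steps = 240        # time steps in the short run
    small_cells = 10000      # cells in the short-run mesh
    small_time = 47.0        # wall-clock seconds for the short run

    full_steps = 24000       # time steps in the full simulation
    full_cells = 1000000     # cells in the full mesh

    # Residual evaluation is nearly linear, so scale both factors.
    est = small_time * (full_steps / float(small_steps)) \
                     * (full_cells / float(small_cells))
    print("Estimated wall time: %.1f hours" % (est / 3600.0))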

  Thanks,

    Matt


> Here's the stderr file:
> *****************************************
>
> ----------------------------------------
>
> Begin Torque Prologue (Mon Dec 15 16:24:20 2014)
>
> Job ID:           1820108.cc-mgmt1.campuscluster.illinois.edu
>
> Username:         xiaoma5
>
> Group:            cee_elbanna
>
> Job Name:         YOURJOBNAME
>
> Limits:
>       ncpus=1,neednodes=3:ppn=12,nodes=3:ppn=12,walltime=04:00:00
>
> Job Queue:        secondary
>
> Account:          cee_elbanna
>
> Nodes:            taub335 taub339 taub349
>
> End Torque Prologue
>
> ----------------------------------------
>
> --------------------------------------------------------------------------
>
> An MPI process has executed an operation involving a call to the
>
> "fork()" system call to create a child process.  Open MPI is currently
>
> operating in a condition that could result in memory corruption or
>
> other system errors; your MPI job may hang, crash, or produce silent
>
> data corruption.  The use of fork() (or system() or other calls that
>
> create child processes) is strongly discouraged.
>
>
>
> The process that invoked fork was:
>
>
>
>   Local host:          taub339 (PID 8722)
>
>   MPI_COMM_WORLD rank: 12
>
>
>
> If you are *absolutely sure* that your application will successfully
>
> and correctly survive a call to fork(), you may disable this warning
>
> by setting the mpi_warn_on_fork MCA parameter to 0.
>
> --------------------------------------------------------------------------
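
A note on the fork() warning above: with PyLith, the fork() calls most
likely come from the Python layer spawning helper processes, and in
practice they are usually benign. Once you have verified that a run
completes correctly, you can silence the warning exactly as the message
says, e.g. by adding "--mca mpi_warn_on_fork 0" to the mpirun line or by
exporting OMPI_MCA_mpi_warn_on_fork=0 in the job script. That is purely
cosmetic; it does not address the SEGV reported below.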
>
> [12]PETSC ERROR:
> ------------------------------------------------------------------------
>
> [12]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
>
> [12]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>
> [12]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>
> [12]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac
> OS X to find memory corruption errors
>
> [12]PETSC ERROR: configure using --with-debugging=yes, recompile, link,
> and run
>
> [12]PETSC ERROR: to get more information on the crash.
>
> [12]PETSC ERROR: --------------------- Error Message
> --------------------------------------------------------------
>
> [12]PETSC ERROR: Signal received
>
> [12]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
>
> [12]PETSC ERROR: Petsc Development GIT revision:
> c754102742ddd04620cbeb83ad83e0546d180516  GIT Date: 2014-07-16 16:18:38
> -0500
>
> [12]PETSC ERROR: /home/xiaoma5/project-cse/pylith/bin/mpinemesis on a
> arch-pylith named taub339 by xiaoma5 Mon Dec 15 16:24:24 2014
>
> [12]PETSC ERROR: Configure options
> --prefix=/home/xiaoma5/project-cse/pylith --with-c2html=0 --with-x=0
> --with-clanguage=C --with-mpicompilers=1 --with-debugging=0
> --with-shared-libraries=1 --download-chaco=1 --download-ml=1
> --download-fblaslapack=1 --with-hdf5=1
> --with-hdf5-include=/home/xiaoma5/project-cse/pylith/include
> --with-hdf5-lib=/home/xiaoma5/project-cse/pylith/lib/libhdf5.dylib
> --LIBS=-lz CPPFLAGS="-I/home/xiaoma5/project-cse/pylith/include "
> LDFLAGS="-L/home/xiaoma5/project-cse/pylith/lib " CFLAGS="-g -O2"
> CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS="-g -O2"
> PETSC_DIR=/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith
> PETSC_ARCH=arch-pylith
>
> [12]PETSC ERROR: #1 User provided function() line 0 in  unknown file
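
On the SEGV itself: the PETSc hints above can be passed straight through
PyLith. A minimal sketch, assuming a standard PyLith parameter file (the
section name and option spelling here are from memory, so check the PyLith
manual), is to add

    [pylithapp.petsc]
    start_in_debugger = true

to your .cfg file, or to rerun a small case under valgrind as the message
suggests; that is usually the fastest way to pin down an out-of-range
memory access.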
>
> WARNING: No initial state given for friction model 'Slip weakening'. Using
> default value of zero.
>
> [This warning is emitted once per MPI rank; in the raw log the remaining
> copies appear repeatedly, some of them interleaved mid-line.]
>
> [taub349:25960] 35 more processes have sent help message
> help-mpi-runtime.txt / mpi_init:warn-fork
>
> [taub349:25960] Set MCA parameter "orte_base_help_aggregate" to 0 to see
> all help / error messages
>
> ----------------------------------------
>
> Begin Torque Epilogue (Mon Dec 15 16:25:12 2014)
>
> Job ID:           1820108.cc-mgmt1.campuscluster.illinois.edu
>
> Username:         xiaoma5
>
> Group:            cee_elbanna
>
> Job Name:         YOURJOBNAME
>
> Session:          25928
>
> Limits:
> ncpus=1,neednodes=3:ppn=12,nodes=3:ppn=12,walltime=04:00:00
>
> Resources:
> cput=00:15:43,energy_used=0,mem=3103924kb,vmem=15945788kb,walltime=00:00:52
>
> Job Queue:        secondary
>
> Account:          cee_elbanna
>
> Nodes:            taub335 taub339 taub349
>
> End Torque Epilogue
>
> ----------------------------------------
>
> *******************************************
>
> Log file:
>
> ********************************************
> ... Memory usage: 66.70 MB
>
>  -- [8] CPU time: 00:00:32, Memory usage: 83.98 MB
>  >>
> /home/xiaoma5/project-cse/pylith/lib/python2.7/site-packages/pylith/problems/Formulation.py:282:finalize
>  -- explicit(debug)
>  -- [0] CPU time: 00:00:28, Memory usage: 94.72 MB
>  >>
> /home/xiaoma5/project-cse/pylith/lib/python2.7/site-packages/pylith/utils/PetscManager.py:75:finalize
>  -- petsc(info)
>  -- Finalizing PETSc.
>
> ************************************************************************************************************************
> ***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r
> -fCourier9' to print this document            ***
>
> ************************************************************************************************************************
>
> ---------------------------------------------- PETSc Performance Summary:
> ----------------------------------------------
>
> /home/xiaoma5/project-cse/pylith/bin/mpinemesis on a arch-pylith named
> taub349 with 36 processors, by xiaoma5 Mon Dec 15 16:25:11 2014
> Using Petsc Development GIT revision:
> c754102742ddd04620cbeb83ad83e0546d180516  GIT Date: 2014-07-16 16:18:38
> -0500
>
>                          Max       Max/Min        Avg      Total
> Time (sec):           4.720e+01      1.00131   4.718e+01
> Objects:              1.915e+03      1.01970   1.881e+03
> Flops:                5.677e+05      1.71888   4.357e+05  1.569e+07
> Flops/sec:            1.204e+04      1.72009   9.236e+03  3.325e+05
> MPI Messages:         2.819e+04     10.26753   5.824e+03  2.097e+05
> MPI Message Lengths:  6.475e+05     12.45007   2.070e+01  4.340e+06
> MPI Reductions:       3.624e+03      1.00000
>
> Flop counting convention: 1 flop = 1 real number operation of type
> (multiply/divide/add/subtract)
>                             e.g., VecAXPY() for real vectors of length N
> --> 2N flops
>                             and VecAXPY() for complex vectors of length N
> --> 8N flops
>
> Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
>                         Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
>  0:      Main Stage: 1.0368e-01   0.2%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
>  1:         Meshing: 3.3931e-01   0.7%  4.7200e+02   0.0%  1.116e+03   0.5%  2.961e-01        1.4%  3.400e+01   0.9%
>  2:           Setup: 3.5577e+00   7.5%  2.5808e+04   0.2%  3.112e+03   1.5%  1.982e-01        1.0%  2.890e+02   8.0%
>  3: Reform Jacobian: 1.0409e-02   0.0%  2.2680e+04   0.1%  3.640e+02   0.2%  4.752e-02        0.2%  3.000e+00   0.1%
>  4: Reform Residual: 5.9408e-01   1.3%  1.2995e+07  82.8%  6.588e+04  31.4%  7.018e+00       33.9%  6.000e+00   0.2%
>  6:         Prestep: 1.9647e-01   0.4%  1.9200e+03   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  1.000e+00   0.0%
>  7:            Step: 1.1146e+01  23.6%  1.8517e+06  11.8%  4.398e+04  21.0%  4.684e+00       22.6%  8.000e+00   0.2%
>  8:        Poststep: 3.0956e+01  65.6%  7.8912e+05   5.0%  9.522e+04  45.4%  8.453e+00       40.8%  3.282e+03  90.6%
>  9:        Finalize: 2.7305e-01   0.6%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
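
Reading the stage summary above: Poststep, which includes writing the
output files, accounts for about 66% of the wall time and 91% of the
reductions, while Step is about 24% and Reform Residual only about 1% of
the time (although 83% of the flops). In other words, this short run
spends most of its time writing output, not solving, and the epilogue's
walltime of 00:00:52 confirms the job completed rather than hung.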
>
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on
> interpreting output.
> Phase summary info:
>    Count: number of times phase was executed
>    Time and Flops: Max - maximum over all processors
>                    Ratio - ratio of maximum to minimum over all processors
>    Mess: number of messages sent
>    Avg. len: average message length (bytes)
>    Reduct: number of global reductions
>    Global: entire computation
>    Stage: stages of a computation. Set stages with PetscLogStagePush() and
> PetscLogStagePop().
>       %T - percent time in this phase         %F - percent flops in this
> phase
>       %M - percent messages in this phase     %L - percent message lengths
> in this phase
>       %R - percent reductions in this phase
>    Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time
> over all processors)
>
> ------------------------------------------------------------------------------------------------------------------------
> Event                Count      Time (sec)     Flops
>       --- Global ---  --- Stage ---   Total
>                    Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len
> Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
>
> ------------------------------------------------------------------------------------------------------------------------
>
> --- Event Stage 0: Main Stage
>
>
> --- Event Stage 1: Meshing
>
> MeIm create            1 1.0 3.2108e-01 1.2 1.60e+01 1.3 1.1e+03 5.6e+01
> 3.4e+01  1  0  1  1  1  87100100100100     0
> MeIm adjTopo           1 1.0 2.1299e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 5.0e+00  0  0  0  0  0   5  0  0  0 15     0
> VecScale               1 1.0 1.9569e-0316.4 1.60e+01 1.3 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0100  0  0  0     0
> DMPlexInterp           1 1.0 4.7829e-0338.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DMPlexPartition        1 1.0 7.3867e-02 1.1 0.00e+00 0.0 3.6e+02 3.6e+01
> 7.0e+00  0  0  0  0  0  21  0 32 21 21     0
> DMPlexDistribute       1 1.0 1.4583e-01 1.1 0.00e+00 0.0 1.1e+03 5.6e+01
> 2.7e+01  0  0  1  1  1  42  0100100 79     0
> DMPlexDistCones        1 1.0 3.1788e-03 2.8 0.00e+00 0.0 3.2e+02 6.4e+01
> 2.0e+00  0  0  0  0  0   1  0 29 33  6     0
> DMPlexDistLabels       1 1.0 1.2003e-02 9.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 1.4e+01  0  0  0  0  0   1  0  0  3 41     0
> DMPlexDistribSF        1 1.0 9.6750e-03172.7 0.00e+00 0.0 1.1e+02 1.3e+02
> 0.0e+00  0  0  0  0  0   3  0 10 23  0     0
> DMPlexDistField        1 1.0 5.4257e-02 1.0 0.00e+00 0.0 3.2e+02 3.7e+01
> 4.0e+00  0  0  0  0  0  16  0 29 20 12     0
> DMPlexStratify         5 1.2 8.7202e-03 8.8 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
> SFSetGraph             8 1.0 2.2359e-0317.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFBcastBegin          13 1.0 2.4010e-0232.6 0.00e+00 0.0 8.3e+02 5.7e+01
> 5.0e+00  0  0  0  1  0   4  0 74 76 15     0
> SFBcastEnd            13 1.0 3.2347e-021085.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   5  0  0  0  0     0
> SFReduceBegin          2 1.0 1.2650e-0310.5 0.00e+00 0.0 1.4e+02 8.4e+01
> 1.0e+00  0  0  0  0  0   0  0 13 20  3     0
> SFReduceEnd            2 1.0 9.5749e-03331.9 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFFetchOpBegin         1 1.0 6.3119e-02 1.1 0.00e+00 0.0 1.1e+02 4.0e+00
> 1.0e+00  0  0  0  0  0  18  0 10  1  3     0
> SFFetchOpEnd           1 1.0 2.9888e-03272.5 0.00e+00 0.0 3.6e+01 4.0e+00
> 0.0e+00  0  0  0  0  0   0  0  3  0  0     0
> Dist distribute        1 1.0 1.4688e-01 1.1 0.00e+00 0.0 1.1e+03 5.6e+01
> 2.7e+01  0  0  1  1  1  42  0100100 79     0
> Refin refine           1 1.0 7.1526e-06 1.9 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
>
> --- Event Stage 2: Setup
>
> VecView                3 1.0 9.3205e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   2  0  0  0  0     0
> VecScale               2 1.0 3.8147e-06 0.0 8.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     3
> VecCopy                9 1.0 1.1010e-021003.9 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecSet                57 1.0 3.8862e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecAssemblyBegin       1 1.0 3.6640e-03 3.1 0.00e+00 0.0 0.0e+00 0.0e+00
> 3.0e+00  0  0  0  0  0   0  0  0  0  1     0
> VecAssemblyEnd         1 1.0 2.1458e-06 2.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> MatAssemblyBegin       1 1.0 1.1921e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> MatAssemblyEnd         1 1.0 1.3553e-0221.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> MatGetRow             36 0.0 3.0994e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DMPlexStratify         3 1.0 4.7922e-0522.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFSetGraph             9 1.0 2.4319e-05 2.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFBcastBegin          94 1.0 8.8076e-0210.4 0.00e+00 0.0 2.6e+03 1.3e+01
> 2.0e+00  0  0  1  1  0   1  0 82 78  1     0
> SFBcastEnd            94 1.0 1.2429e-012896.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
> SFReduceBegin         12 1.0 1.3537e-03 5.0 0.00e+00 0.0 1.4e+02 1.7e+01
> 8.0e+00  0  0  0  0  0   0  0  4  6  3     0
> SFReduceEnd           12 1.0 1.0816e-022668.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> TSEx preinit           1 1.0 3.2516e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 6.0e+00  1  0  0  0  0   9  0  0  0  2     0
> TSEx verify            1 1.0 6.1519e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> TSEx init              1 1.0 3.2335e+00 1.0 9.48e+02 2.0 3.1e+03 1.3e+01
> 2.8e+02  7  0  1  1  8  89100100100 98     0
> DtUn preinit           1 1.0 1.0014e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DtUn verify            1 1.0 1.0014e-05 1.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DtUn init              1 1.0 5.5075e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> ElEx verify            1 1.0 3.9580e-03 3.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> ElEx init              1 1.0 6.2873e-01 1.1 9.48e+02 3.0 1.8e+03 1.3e+01
> 5.1e+01  1  0  1  1  1  17 98 58 57 18     0
> MaPlSn verify          1 1.0 3.5861e-03 6.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> OutM init              3 1.0 3.0861e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> OutM open              5 1.0 1.1633e+00 1.0 8.00e+00 0.0 2.6e+01 9.5e+00
> 3.4e+01  2  0  0  0  1  32  0  1  1 12     0
> OutM writeInfo         3 1.0 2.0797e+00 1.0 2.16e+02 2.3 1.7e+03 1.3e+01
> 1.2e+02  4  0  1  1  3  58 22 53 53 42     0
> AbBC verify            2 1.0 1.8461e-0332.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> AbBC init              2 1.0 2.5861e-03 1.6 1.12e+02 0.0 0.0e+00 0.0e+00
> 1.0e+01  0  0  0  0  0   0  1  0  0  3     0
> DiBC verify            1 1.0 1.0610e-04 3.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DiBC init              1 1.0 6.8124e-0257.2 0.00e+00 0.0 2.3e+02 1.3e+01
> 1.0e+01  0  0  0  0  0   1  0  7  7  3     0
> CoDy verify            1 1.0 5.9795e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> CoDy init              1 1.0 1.9730e+00 1.0 1.54e+02 0.0 2.6e+02 1.7e+01
> 1.7e+02  4  0  0  0  5  55  1  8 11 60     0
>
> --- Event Stage 3: Reform Jacobian
>
> VecSet                 2 1.0 8.1062e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFSetGraph             1 1.0 7.1526e-06 2.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFBcastBegin           1 1.0 4.0770e-0521.4 0.00e+00 0.0 9.1e+01 4.2e+01
> 0.0e+00  0  0  0  0  0   0  0 25 39  0     0
> SFBcastEnd             1 1.0 3.3319e-03 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   6  0  0  0  0     0
> SFReduceBegin          1 1.0 3.4111e-0355.0 0.00e+00 0.0 2.7e+02 2.2e+01
> 1.0e+00  0  0  0  0  0  26  0 75 61 33     0
> SFReduceEnd            1 1.0 2.0981e-0522.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> ElIJ setup             1 1.0 2.0981e-05 4.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> ElIJ compute           1 1.0 7.4148e-05 2.9 8.40e+02 3.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0 99  0  0  0   302
> AdIJ setup             2 1.0 1.3018e-04 2.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
> AdIJ compute           2 1.0 3.2187e-05 0.0 1.40e+02 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  1  0  0  0     9
> FaIJ setup             1 1.0 2.1458e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> FaIJ compute           1 1.0 9.5367e-07 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
>
> --- Event Stage 4: Reform Residual
>
> VecSet               480 1.0 1.0936e-03 3.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFSetGraph             2 1.0 1.3113e-05 2.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFBcastBegin         240 1.0 1.4002e-03 3.8 0.00e+00 0.0 2.2e+04 2.2e+01
> 0.0e+00  0  0 10 11  0   0  0 33 33  0     0
> SFBcastEnd           240 1.0 6.1437e-014552.8 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0  30  0  0  0  0     0
> SFReduceBegin        480 1.0 4.9629e-03 1.8 0.00e+00 0.0 4.4e+04 2.2e+01
> 2.0e+00  0  0 21 23  0   1  0 67 67 33     0
> SFReduceEnd          480 1.0 6.6732e-011913.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  1  0  0  0  0  60  0  0  0  0     0
> ElIR setup           240 1.0 3.2091e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> ElIR compute         240 1.0 5.5752e-03 1.5 4.58e+05 3.0 0.0e+00 0.0e+00
> 0.0e+00  0 78  0  0  0   1 94  0  0  0  2190
> AdIR setup           480 1.0 2.5492e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> AdIR compute         480 1.0 4.8966e-03103.2 4.42e+04 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  1  0  0  0   0  1  0  0  0    18
> FaIR setup           240 1.0 4.1564e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
> FaIR compute         240 1.0 2.7990e-04 6.5 7.68e+03 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0    55
>
> --- Event Stage 5: Unknown
>
>
> --- Event Stage 6: Prestep
>
> TSEx timestep        240 1.0 1.0135e-0155.1 7.20e+01 3.0 0.0e+00 0.0e+00
> 1.0e+00  0  0  0  0  0  34100  0  0100     0
> TSEx prestep         240 1.0 1.4796e-01 1.7 8.40e+02 3.0 3.6e+02 2.7e+01
> 3.0e+00  0  0  0  0  0  601181  0  0300     0
>
> --- Event Stage 7: Step
>
> VecSet               481 1.0 1.4012e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecAXPY              240 1.0 5.3819e-02148.5 9.60e+03 1.7 0.0e+00 0.0e+00
> 0.0e+00  0  1  0  0  0   0 12  0  0  0     4
> SFSetGraph             1 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFBcastBegin         242 1.0 5.0166e-03 7.6 0.00e+00 0.0 2.2e+04 2.2e+01
> 0.0e+00  0  0 10 11  0   0  0 50 50  0     0
> SFBcastEnd           242 1.0 2.3892e+0014565.2 0.00e+00 0.0 0.0e+00
> 0.0e+00 0.0e+00  2  0  0  0  0   7  0  0  0  0     0
> SFReduceBegin        240 1.0 2.5533e-0152.1 0.00e+00 0.0 2.2e+04 2.2e+01
> 1.0e+00  0  0 11 11  0   0  0 50 50 12     0
> SFReduceEnd          240 1.0 1.7751e+004097.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  1  0  0  0  0   6  0  0  0  0     0
> FaAS setup           240 1.0 3.3627e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> FaAS compute         240 1.0 6.9878e-03325.7 7.43e+04 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  1  0  0  0   0  8  0  0  0    21
> SoLu setup           240 1.0 2.1734e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SoLu solve           240 1.0 9.5439e-04 1.4 3.84e+03 1.3 0.0e+00 0.0e+00
> 0.0e+00  0  1  0  0  0   0  6  0  0  0   119
> SoLu adjust          240 1.0 2.6357e+00 4.9 1.30e+05 3.2 4.4e+04 2.2e+01
> 8.0e+00  3 11 21 23  0  13 94100100100     1
>
> --- Event Stage 8: Poststep
>
> VecView              240 1.0 3.6254e+00 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  7  0  0  0  0  10  0  0  0  0     0
> VecCopy             1704 1.0 2.8782e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecSet               488 1.0 3.1400e-04 1.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecAXPY              240 1.0 5.7673e-04 3.2 9.60e+03 1.7 0.0e+00 0.0e+00
> 0.0e+00  0  1  0  0  0   0 29  0  0  0   399
> VecAssemblyBegin     240 1.0 7.7466e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 7.2e+02  0  0  0  0 20   0  0  0  0 22     0
> VecAssemblyEnd       240 1.0 4.9138e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFBcastBegin         808 1.0 3.2768e-03 2.2 0.00e+00 0.0 4.7e+04 1.3e+01
> 0.0e+00  0  0 22 14  0   0  0 49 34  0     0
> SFBcastEnd           808 1.0 5.1820e-011300.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> SFReduceBegin        720 1.0 3.9566e-03 3.7 0.00e+00 0.0 2.2e+03 2.1e+01
> 0.0e+00  0  0  1  1  0   0  0  2  3  0     0
> SFReduceEnd          720 1.0 3.7780e-01886.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> TSEx poststep        240 1.0 3.2119e+01 1.1 4.09e+04 2.2 9.5e+04 1.9e+01
> 3.3e+03 66  5 45 41 91 100100100100100     0
> TSEx write           240 1.0 2.8331e+01 1.0 2.22e+04 1.7 1.1e+04 1.9e+01
> 2.3e+03 59  3  5  5 63  89 68 12 12 70     0
> ElEx poststep        240 1.0 4.8757e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> ElEx write           240 1.0 1.8991e-01 1.1 1.96e+04 3.0 8.8e+03 1.8e+01
> 1.4e+02  0  3  4  4  4   1 66  9  9  4     3
> OutM writeData       720 1.0 3.2015e+01 1.1 2.22e+04 1.7 9.5e+04 1.9e+01
> 3.3e+03 65  3 45 41 91 100 68100100100     0
> AbBC poststep        480 1.0 8.2965e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> AbBC write           480 1.0 2.9407e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> CoDy poststep        240 1.0 1.1666e-02 2.2 9.12e+03 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0   0  2  0  0  0     2
> CoDy write           240 1.0 2.8130e+01 1.0 9.12e+03 0.0 2.2e+03 2.1e+01
> 2.2e+03 58  0  1  1 60  89  2  2  3 66     0
>
> --- Event Stage 9: Finalize
>
> TSEx finalize          1 1.0 2.8769e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  1  0  0  0  0 100  0  0  0  0     0
> OutM close             3 1.0 8.8396e-02 3.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00  0  0  0  0  0  14  0  0  0  0     0
>
> ------------------------------------------------------------------------------------------------------------------------
>
> Memory usage is given in bytes:
>
> Object Type          Creations   Destructions     Memory  Descendants' Mem.
> Reports information only for process 0.
>
> --- Event Stage 0: Main Stage
>
>            Container     0             52        30160     0
>               Vector     0             76       118528     0
>     Distributed Mesh     0             81       357552     0
> Star Forest Bipartite Graph     0             85        70032     0
>      Discrete System     0             81        64800     0
>            Index Set     0             14        11112     0
>              Section     0            230       155480     0
>               Viewer     1              0            0     0
>
> --- Event Stage 1: Meshing
>
>            Container     2              2         1160     0
>               Vector     4              3         8712     0
>     Distributed Mesh    11              9        41376     0
> Star Forest Bipartite Graph    29             26        21608     0
>      Discrete System    11              9         7200     0
>            Index Set    45             45        40828     0
>    IS L to G Mapping     1              1         1972     0
>              Section    25             21        14196     0
>
> --- Event Stage 2: Setup
>
>            Container   106             58        33640     0
>               Vector    85             15        23184     0
>               Matrix     1              1         5436     0
>     Distributed Mesh    82              8        35200     0
> Star Forest Bipartite Graph   164             87        71104     0
>      Discrete System    82              8         6400     0
>            Index Set    23             13        10404     0
>              Section   259             73        49348     0
>               Viewer     3              2         1432     0
>
> --- Event Stage 3: Reform Jacobian
>
>              Section     6              0            0     0
>
> --- Event Stage 4: Reform Residual
>
>            Index Set     4              0            0     0
>              Section    16              0            0     0
>
> --- Event Stage 5: Unknown
>
>
> --- Event Stage 6: Prestep
>
>
> --- Event Stage 7: Step
>
>            Container     6              3         1740     0
>               Vector     2              0            0     0
>     Distributed Mesh     2              0            0     0
> Star Forest Bipartite Graph     4              2         1632     0
>      Discrete System     2              0            0     0
>              Section    11              2         1352     0
>
> --- Event Stage 8: Poststep
>
>            Container    18             11         6380     0
>               Vector    16              4         6248     0
>     Distributed Mesh    13              2         8800     0
> Star Forest Bipartite Graph    26             15        12240     0
>      Discrete System    13              2         1600     0
>            Index Set    25             25        19992     0
>              Section   685            654       442104     0
>               Viewer   132            132        95040     0
>
> --- Event Stage 9: Finalize
>
>            Container     0              6         3480     0
>               Vector     0              9        14192     0
>     Distributed Mesh     0              8        35200     0
> Star Forest Bipartite Graph     0              8         6528     0
>      Discrete System     0              8         6400     0
>              Section     0             22        14872     0
>               Viewer     0              1          712     0
>
> ========================================================================================================================
> Average time to get PetscTime(): 0
> Average time for MPI_Barrier(): 2.81811e-05
> Average time for zero size MPI_Send(): 1.47687e-06
> #PETSc Option Table entries:
> -log_summary
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
> sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure options: --prefix=/home/xiaoma5/project-cse/pylith
> --with-c2html=0 --with-x=0 --with-clanguage=C --with-mpicompilers=1
> --with-debugging=0 --with-shared-libraries=1 --download-chaco=1
> --download-ml=1 --download-fblaslapack=1 --with-hdf5=1
> --with-hdf5-include=/home/xiaoma5/project-cse/pylith/include
> --with-hdf5-lib=/home/xiaoma5/project-cse/pylith/lib/libhdf5.dylib
> --LIBS=-lz CPPFLAGS="-I/home/xiaoma5/project-cse/pylith/include "
> LDFLAGS="-L/home/xiaoma5/project-cse/pylith/lib " CFLAGS="-g -O2"
> CXXFLAGS="-g -O2 -DMPICH_IGNORE_CXX_SEEK" FCFLAGS="-g -O2"
> PETSC_DIR=/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith
> PETSC_ARCH=arch-pylith
> -----------------------------------------
> Libraries compiled on Sun Dec 14 15:39:18 2014 on taubh2
> Machine characteristics:
> Linux-2.6.32-431.29.2.el6.x86_64-x86_64-with-centos-6.5-Final
> Using PETSc directory:
> /home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith
> Using PETSc arch: arch-pylith
> -----------------------------------------
>
> Using C compiler: mpicc -g -O2 -fPIC
> -I/home/xiaoma5/project-cse/pylith/include ${COPTFLAGS} ${CFLAGS}
> Using Fortran compiler: mpif90  -fPIC -Wall -Wno-unused-variable
> -ffree-line-length-0 -O  -I/home/xiaoma5/project-cse/pylith/include
> ${FOPTFLAGS} ${FFLAGS} -I/home/xiaoma5/project-cse/pylith/include
> -----------------------------------------
>
> Using include paths:
> -I/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/include
> -I/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/include
> -I/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/include
> -I/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/include
> -I/projects/cse/shared/xiaoma5/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/include
> -I/home/xiaoma5/project-cse/pylith/include
> -I/usr/local/mpi/openmpi-1.6.5-gcc-4.7.1/include
> -----------------------------------------
>
> Using C linker: mpicc
> Using Fortran linker: mpif90
> Using libraries:
> -Wl,-rpath,/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/lib
> -L/home/xiaoma5/project-cse/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/lib
> -lpetsc
> -Wl,-rpath,/projects/cse/shared/xiaoma5/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/lib
> -L/projects/cse/shared/xiaoma5/src/pylith/pylith-installer-2.0.3-0/petsc-pylith/arch-pylith/lib
> -lml -Wl,-rpath,/home/xiaoma5/project-cse/pylith/lib
> -L/home/xiaoma5/project-cse/pylith/lib
> -Wl,-rpath,/usr/local/torque-releases/torque-4.2.6/lib
> -L/usr/local/torque-releases/torque-4.2.6/lib
> -Wl,-rpath,/usr/local/mpi/openmpi-1.6.5-gcc-4.7.1/lib
> -L/usr/local/mpi/openmpi-1.6.5-gcc-4.7.1/lib
> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7
> -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -lmpi_cxx -lstdc++ -lflapack
> -lfblas -lpthread -lssl -lcrypto -lchaco -lhdf5 -lmpi_f90 -lmpi_f77
> -lgfortran -lm -lmpi_cxx -lstdc++ -ldl -lz -lmpi -libverbs -lrt -lnsl
> -lutil -ltorque -lgcc_s -lpthread -ldl -lz
> -----------------------------------------
> *************************************
>
> --
>
> -------------------------------------------------------------------------------------
> Xiao Ma
> Graduate Research Assistant
> MS program in Structural Engineering
> University of Illinois at Urbana-Champaign
> E-mail: xiaoma5 at illinois.edu
>
> --------------------------------------------------------------------------------------
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener