[CIG-SHORT] Understanding the memory usage

Matthew Knepley knepley at mcs.anl.gov
Wed Aug 1 11:11:29 PDT 2012


On Wed, Aug 1, 2012 at 10:35 AM, Hongfeng Yang <hyang at whoi.edu> wrote:

> Thanks, Brad and Charles. That is very helpful.
>
> I turned on the debug option in the cfg file and read the memory usage
> output. One core consistently needs nearly three times as much memory as
> the other cores at the different stages. Is that normal?
>

Yes. Since there are no truly parallel mesh generators, we do all mesh
manipulation on process 0 at the beginning of the program. Because that data
is not the whole problem, process 0 does not need P times the per-process
memory, but it does use more memory than any other process. In the numbers
below, for example, process 0 reports about 406 MB while the other processes
report roughly 134-148 MB, close to the factor of three you are seeing.

   Matt


>   -- [12] CPU time: 00:03:16, Memory usage: 148.12 MB
>   -- [11] CPU time: 00:03:16, Memory usage: 145.62 MB
>   -- [1] CPU time: 00:03:17, Memory usage: 133.54 MB
>   -- [2] CPU time: 00:03:16, Memory usage: 138.90 MB
>   -- [3] CPU time: 00:03:17, Memory usage: 137.79 MB
>   -- [7] CPU time: 00:03:17, Memory usage: 136.34 MB
>   -- [9] CPU time: 00:03:17, Memory usage: 142.91 MB
>   -- [13] CPU time: 00:03:17, Memory usage: 144.40 MB
>   -- [10] CPU time: 00:03:16, Memory usage: 142.25 MB
>   -- [5] CPU time: 00:03:16, Memory usage: 138.32 MB
>   -- [8] CPU time: 00:03:16, Memory usage: 147.87 MB
>   -- [6] CPU time: 00:03:17, Memory usage: 137.40 MB
>   -- [4] CPU time: 00:03:16, Memory usage: 138.12 MB
>   -- [0] CPU time: 00:03:04, Memory usage: 405.86 MB
>
>   Hongfeng
>
> On 08/01/2012 11:06 AM, Brad Aagaard wrote:
> > Hongfeng,
> >
> > You do not need to recompile to get total memory usage printed to stdout
> > at various stages of a run. Just add the following to your cfg file:
> >
> > [pylithapp.journal.debug]
> > timedependent = 1
> > implicit = 1
> > petsc = 1
> > solverlinear = 1
> > meshiocubit = 1
> > implicitelasticity = 1
> > faultcohesivekin = 1
> > fiatlagrange = 1
> > pylithapp = 1
> > materials = 1
> >
> > NOTE: If you are using explicit time stepping, replace "implicit" with
> > "explicit", as sketched below.
> >
> > The command used to get this information is the Unix shell command ps,
> > with options that report memory usage; it is equivalent to what "top"
> > shows.
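> >
> > For example, a command along these lines reports a process's resident and
> > virtual memory (the exact ps invocation PyLith uses may differ):
> >
> >   ps -p <pid> -o pid,rss,vsz,comm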
> >
> > To get much more detailed information, you will need to rebuild with the
> > memory logging options
> >
> > PETSc:
> > --with-sieve-memory-logging=1
> >
> > PyLith:
> > --enable-memory-logging
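> >
> > For example (the configure lines below are illustrative only; add your
> > usual options):
> >
> >   # PETSc
> >   ./configure --with-sieve-memory-logging=1 [your other PETSc options]
> >
> >   # PyLith
> >   ./configure --enable-memory-logging [your other PyLith options]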
> >
> > You would also need command-line arguments to enable the performance
> > memory logger, which is still experimental and incomplete; these options
> > are intended for development use.
> >
> > Regards,
> > Brad
> >
> >
> >
> > On 7/31/12 8:35 PM, Charles Williams wrote:
> >> Hi Hongfeng,
> >>
> >> I'm not sure about the PETSc stuff; however, PyLith does have memory
> >> logging that we use for debugging.  If you have compiled both PETSc and
> >> PyLith with these options turned on, you will be able to use it.  For
> >> PETSc, you need to have configured with:
> >>
> >> --with-sieve-memory-logging=1
> >>
> >> For PyLith, you need to have configured with:
> >>
> >> --enable-memory-logging
> >>
> >> Then, to get memory usage, you can turn on the journal debug facility
> >> for the relevant components, e.g.:
> >>
> >> [pylithapp.journal.debug]
> >> timedependent = 1
> >> implicit = 1
> >> petsc = 1
> >> solverlinear = 1
> >> meshiocubit = 1
> >> implicitelasticity = 1
> >> faultcohesivekin = 1
> >> fiatlagrange = 1
> >> pylithapp = 1
> >> materials = 1
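> >>
> >> These settings can go in your pylithapp.cfg or in any .cfg file you pass
> >> on the pylith command line, e.g. (the filename is just an example):
> >>
> >>   pylith memdebug.cfg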
> >>
> >>
> >> I'm not sure if this is what you need or not.
> >>
> >> Cheers,
> >> Charles
> >>
> >>
> >> On 1/08/2012, at 8:00 AM, Hongfeng Yang wrote:
> >>
> >>> Hi all,
> >>>
> >>> After a simulation, I read the following memory usage summary from the
> >>> run's output:
> >>>
> >>> Memory usage is given in bytes:
> >>>
> >>> Object Type          Creations   Destructions     Memory  Descendants' Mem.
> >>> Reports information only for process 0.
> >>>
> >>> --- Event Stage 0: Main Stage
> >>>
> >>>                Vector     0             30     14829024     0
> >>>        Vector Scatter     0              6         6216     0
> >>>                Viewer     1              0            0     0
> >>>
> >>> --- Event Stage 1: Meshing
> >>>
> >>>
> >>> --- Event Stage 2: Setup
> >>>
> >>>                Vector    76             51      9908736     0
> >>>        Vector Scatter    12              8         8288     0
> >>>                Viewer     8              3         2040     0
> >>>             Index Set    24             24        17856     0
> >>>
> >>> --- Event Stage 3: Reform Jacobian
> >>>
> >>>
> >>> --- Event Stage 4: Reform Residual
> >>>
> >>>
> >>> --- Event Stage 5: Unknown
> >>>
> >>>
> >>> --- Event Stage 6: Prestep
> >>>
> >>>
> >>> --- Event Stage 7: Step
> >>>
> >>>                Vector     2              0            0     0
> >>>
> >>> --- Event Stage 8: Poststep
> >>>
> >>>                Vector    20             12      8623544     0
> >>>        Vector Scatter     4              2         2072     0
> >>>             Index Set     8              8         5952     0
> >>>
> >>> --- Event Stage 9: Finalize
> >>>
> >>>                Vector     0              5         7520     0
> >>>                Viewer     0              5         3400     0
> >>>
> >>> The accumulated memory usage is about 34 MB according to the info above.
> >>> However, according to "top" on my machine, each run takes about 3% of the
> >>> total memory (24 GB), which gives an estimate of roughly 720 MB per core,
> >>> a big difference from the PyLith output.
> >>>
> >>> Which one tells me the real memory usage of PyLith? Or am I reading
> >>> something wrong?
> >>>
> >>> Thanks,
> >>>
> >>> Hongfeng
> >>>
> >>>
> >>>
> >>> --
> >>> Postdoc Investigator
> >>> Woods Hole Oceanographic Institution
> >>> Dept. Geology and Geophysics
> >>> 360 Woods Hole Rd, MS 24
> >>> Woods Hole, MA 02543
> >>>
> >> Charles A. Williams
> >> Scientist
> >> GNS Science
> >> 1 Fairway Drive, Avalon
> >> PO Box 30368
> >> Lower Hutt  5040
> >> New Zealand
> >> ph (office): 0064-4570-4566
> >> fax (office): 0064-4570-4600
> >> C.Williams at gns.cri.nz <mailto:C.Williams at gns.cri.nz>
> >>
> >>
> >>
> >
>
>
> --
> Postdoc Investigator
> Woods Hole Oceanographic Institution
> Dept. Geology and Geophysics
> 360 Woods Hole Rd, MS 24
> Woods Hole, MA 02543
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener