[aspect-devel] Writing output in parallel
Thomas Geenen
geenen at gmail.com
Tue Feb 28 07:42:14 PST 2012
I agree that for bigger machines your solution is better; most (all) of
these machines do not even have local storage.
Yes, indeed: the low-tech solution I suggested/implemented is to just
execute a system call during the run that moves the files in the
background from the local scratch storage to the network drive.
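To make that concrete, here is a minimal sketch of that system call (the
paths and the helper name are made up for illustration, not the code I
actually have):

  #include <cstdlib>
  #include <string>

  // Kick off a background move of finished output files from the fast local
  // scratch disk to the slow network drive.  std::system() hands the command
  // to /bin/sh, and the trailing '&' puts the mv in the background, so the
  // call returns immediately and the solver keeps running.
  void start_background_move(const std::string &src_pattern,
                             const std::string &dest_dir)
  {
    const std::string cmd =
      "mv " + src_pattern + " " + dest_dir + " >/dev/null 2>&1 &";
    std::system(cmd.c_str());
  }

  // e.g. after a visualization step has been written:
  //   start_background_move("/scratch/output/solution-00042.*", "/nfs/run42/");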
What I suggest is to provide both: something that is good for large
systems and something that will work for your average Joe's cluster.
With slow network storage and fast local storage, the MPI I/O solution
will be suboptimal, I guess. Having the user start messing around with
symlinks pointing to drives on different nodes etc. is probably also not
very fail-safe.
cheers
Thomas
On Tue, Feb 28, 2012 at 3:47 PM, Timo Heister <heister at math.tamu.edu> wrote:
> First: the solution of using MPI I/O to merge output files is the only
> way to scale to bigger machines. You cannot run with 10'000 cores and
> write out 10'000 files per timestep.
> Second: the merging of files is optional; it is a runtime parameter
> you can set. You might want to generate one file per node (instead of
> one file per core, as now), or you can leave it as it is today.
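> A rough sketch of the merged write, just to fix the idea (the function
> and variable names are invented here, not the actual patch): every rank
> contributes its chunk of bytes to one shared file via MPI I/O, with the
> offsets coming from an exclusive prefix sum.
>
>   #include <mpi.h>
>   #include <string>
>   #include <vector>
>
>   void write_merged_file(MPI_Comm comm,
>                          const std::string &filename,
>                          const std::vector<char> &local_chunk)
>   {
>     // Where does my chunk start inside the shared file?
>     MPI_Offset local_size = local_chunk.size();
>     MPI_Offset my_offset  = 0;
>     MPI_Exscan(&local_size, &my_offset, 1, MPI_OFFSET, MPI_SUM, comm);
>     int rank;
>     MPI_Comm_rank(comm, &rank);
>     if (rank == 0)
>       my_offset = 0;   // MPI_Exscan leaves rank 0's result undefined
>
>     // All ranks write their pieces collectively into the same file.
>     MPI_File fh;
>     MPI_File_open(comm, filename.c_str(),
>                   MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
>     MPI_File_write_at_all(fh, my_offset, local_chunk.data(),
>                           static_cast<int>(local_chunk.size()),
>                           MPI_CHAR, MPI_STATUS_IGNORE);
>     MPI_File_close(&fh);
>   }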
> Third: in your setup I would just symlink the output/ directory to the
> local scratch space and copy the files to the central NFS at the end
> of the computation (you can do this in your job file). If you want the
> copying to overlap with the computation, you can execute a shell script
> after each visualization step that does the mv in the background.
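> Written out in C++ for illustration (in practice the job script would do
> this; the paths below are placeholders), that amounts to something like:
>
>   #include <cstdlib>
>   #include <filesystem>
>
>   namespace fs = std::filesystem;
>
>   // Before the run: make output/ point at the fast local scratch space.
>   // Assumes output/ is currently a symlink or an empty directory.
>   void redirect_output_to_scratch()
>   {
>     if (fs::exists(fs::symlink_status("output")))
>       fs::remove("output");
>     fs::create_directories("/scratch/aspect-output");
>     fs::create_directory_symlink("/scratch/aspect-output", "output");
>   }
>
>   // At the end of the run: one blocking copy to the central NFS.
>   void copy_results_to_nfs()
>   {
>     std::system("cp -r /scratch/aspect-output /nfs/project/output");
>   }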
>
> On Tue, Feb 28, 2012 at 8:32 AM, Thomas Geenen <geenen at gmail.com> wrote:
> > I am not sure whether this will be very efficient on the type of cluster
> > we have.
> >
> > We have a cluster with a bunch of nodes that have fast local I/O, are
> > interconnected with InfiniBand, and have an Ethernet connection to the
> > master node for I/O etc. On the master node we have our network drive
> > (a large, slow beast, NFS).
> >
> > In the proposed solution we will be using the InfiniBand for the I/O
> > during the computation (assuming the I/O happens in the background). How
> > will that affect the speed of the solver? How large are the MPI buffers
> > needed for this type of I/O, and is that pinned memory? Do we have enough
> > memory left for the application?
> >
> > If it is not doing the I/O in the background, this will be a bottleneck
> > for us, since we have a very basic disk setup on the master node.
> >
> > Locally (on the compute nodes) we have something like 500 MB/s of
> > throughput, so for a typical run on 10-20 nodes we have an effective
> > aggregate bandwidth of 5-10 GB/s.
> >
> > I would be in favor of implementing a few I/O strategies and leaving it
> > to the user to pick the one that is most efficient for his/her hardware
> > setup.
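> > Just to sketch what a user-selectable strategy could look like (the
> > parameter values and names below are invented for illustration, not
> > existing ASPECT options):
> >
> >   #include <stdexcept>
> >   #include <string>
> >
> >   enum class OutputStrategy
> >   {
> >     merged_mpi_io,            // collective write into a few shared files
> >     one_file_per_process,     // the current behavior
> >     local_scratch_then_move   // write locally, move to NFS in the background
> >   };
> >
> >   OutputStrategy parse_output_strategy(const std::string &value)
> >   {
> >     if (value == "merged mpi io")           return OutputStrategy::merged_mpi_io;
> >     if (value == "one file per process")    return OutputStrategy::one_file_per_process;
> >     if (value == "local scratch then move") return OutputStrategy::local_scratch_then_move;
> >     throw std::runtime_error("Unknown output strategy: " + value);
> >   }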
> >
> > The low-tech option I proposed before (write to the fast local storage
> > and mv the files in the background over the Ethernet connection to the
> > slow network drive) will probably work best for me.
> >
> > cheers
> > Thomas
> >
> >
> >
> > On Mon, Feb 27, 2012 at 6:40 PM, Wolfgang Bangerth <bangerth at math.tamu.edu> wrote:
> >>
> >>
> >> > Oh, one thing I forgot to mention is that I am not sure whether we want
> >> > one big file per time step. It might happen that ParaView in parallel is
> >> > less efficient at reading one big file. One solution would be to write
> >> > something like n*0.05 files, where n is the number of compute
> >> > processes.
> >>
> >> Yes, go with that. Make the reduction factor (here, 20) a run-time
> >> parameter so that people can choose whatever is convenient for them when
> >> visualizing. For single-processor visualization, one could set it equal
> >> to the number of MPI jobs, for example.
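> >> A tiny illustration of that grouping (names invented, not actual ASPECT
> >> code): rank r writes into file number r / reduction_factor, and each group
> >> gets its own sub-communicator that could then drive a merged MPI I/O write
> >> like the one sketched further up:
> >>
> >>   #include <mpi.h>
> >>
> >>   MPI_Comm split_into_output_groups(MPI_Comm comm, const int reduction_factor)
> >>   {
> >>     int rank;
> >>     MPI_Comm_rank(comm, &rank);
> >>     const int file_index = rank / reduction_factor;  // which merged file I write to
> >>
> >>     // One communicator per output file; setting reduction_factor equal to
> >>     // the number of MPI processes collapses everything into a single file.
> >>     MPI_Comm file_comm;
> >>     MPI_Comm_split(comm, /*color=*/file_index, /*key=*/rank, &file_comm);
> >>     return file_comm;
> >>   }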
> >>
> >> Cheers
> >> W.
> >>
> >> ------------------------------------------------------------------------
> >> Wolfgang Bangerth              email: bangerth at math.tamu.edu
> >>                                www:   http://www.math.tamu.edu/~bangerth/
> >>
> --
> Timo Heister
> http://www.math.tamu.edu/~heister/
>