I am not sure if this will be very efficient on the type of cluster we have.

We have a cluster with a number of nodes that have fast local I/O and are interconnected with InfiniBand; each node also has an Ethernet connection to the master node for I/O etc. On the master node we have our network drive (a large, slow beast: NFS).
In the proposed solution we would be using the InfiniBand for the I/O during the computation (assuming the I/O happens in the background). How will that affect the speed of the solver? How large are the MPI buffers needed for this kind of I/O, and is that pinned memory? Do we have enough memory left for the application?
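Just to make sure we are talking about the same thing: below is a minimal sketch of what I imagine by "I/O in the background" over MPI, using nonblocking MPI-IO. The file name, offsets and buffer handling are made up for illustration and are not meant as the actual implementation.

    // Rough sketch only: overlap writing with computation using nonblocking MPI-IO.
    #include <mpi.h>
    #include <vector>

    void write_in_background (MPI_Comm comm, const std::vector<double> &data)
    {
      int rank;
      MPI_Comm_rank (comm, &rank);

      // hypothetical output file; in practice the name would come from the output code
      MPI_File fh;
      MPI_File_open (comm, "solution-00042.dat",
                     MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

      // each rank writes its chunk at its own offset (assumes equal chunk sizes)
      const MPI_Offset offset
        = static_cast<MPI_Offset>(rank) * data.size() * sizeof(double);

      MPI_Request request;
      MPI_File_iwrite_at (fh, offset, data.data(),
                          static_cast<int>(data.size()), MPI_DOUBLE, &request);

      // ...the solver would keep computing here while the write is in flight...

      MPI_Wait (&request, MPI_STATUS_IGNORE);  // block only when the data really has to be out
      MPI_File_close (&fh);
    }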
If the I/O does not happen in the background, it will be a bottleneck for us, since we have only a very basic disk setup on the master node.

Locally (on the compute nodes) we have something like 500 MB/s throughput, so for a typical run on 10-20 nodes we have an effective aggregate bandwidth of 5-10 GB/s.
I would be in favor of implementing a few I/O strategies and leaving it to the user to pick the one that is most efficient for his/her hardware setup.

The low-tech option I proposed before (write to fast local storage and then move the files in the background over the Ethernet connection to the slow network drive) will probably work best for me; I have put a rough sketch of what I mean below.
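A minimal sketch of that strategy, assuming a node-local scratch directory and an NFS mount; the paths and the detached std::thread are only for illustration, not a proposal for the actual implementation:

    // Write the output file to fast node-local disk, then ship it to the slow
    // NFS drive in a background thread; paths are placeholders.
    #include <filesystem>
    #include <fstream>
    #include <string>
    #include <thread>

    namespace fs = std::filesystem;

    void write_and_move (const std::string &filename, const std::string &data)
    {
      const fs::path local  = fs::path("/scratch/local") / filename;      // fast local disk
      const fs::path remote = fs::path("/nfs/project/output") / filename; // slow network drive

      // 1. write at local-disk speed so the solver is not held up
      {
        std::ofstream out (local);
        out << data;
      }

      // 2. move the file over the Ethernet connection in the background
      std::thread ([local, remote] ()
      {
        fs::copy_file (local, remote, fs::copy_options::overwrite_existing);
        fs::remove (local);   // a plain rename() would fail across file systems
      }).detach();
    }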
Cheers,
Thomas

On Mon, Feb 27, 2012 at 6:40 PM, Wolfgang Bangerth <bangerth@math.tamu.edu> wrote:
> > Oh, one thing I forgot to mention is that I am not sure if we want one
> > big file per time step. It might happen that ParaView in parallel is
> > less efficient reading one big file. One solution would be to write
> > something like n*0.05 files, where n is the number of compute
> > processes.
>
> Yes, go with that. Make the reduction factor (here, 20) a run-time
> parameter so that people can choose whatever is convenient for them when
> visualizing. For single-processor visualization, one could set it equal
> to the number of MPI jobs, for example.
>
> Cheers
> W.
>
> ------------------------------------------------------------------------
> Wolfgang Bangerth               email: bangerth@math.tamu.edu
>                                 www:   http://www.math.tamu.edu/~bangerth/