[aspect-devel] Postprocessing scripts
Max Rudolph
maxrudolph at ucdavis.edu
Tue Oct 30 13:35:17 PDT 2018
I am happy to open a pull request containing my python scripts to
interpolate output. They won't work for all conceivable cases, especially
for DG elements, where nodal values can be multiply defined.
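For reference, the core of the approach is roughly along these lines (a minimal
sketch with placeholder data rather than the actual scripts; in practice the
nodal coordinates and values would first be read from ASPECT's VTK output):

```
import numpy as np
from scipy.interpolate import griddata

# Placeholder nodal data standing in for coordinates/values read from the
# VTK output; shapes are (n_nodes, 3) and (n_nodes,).
rng = np.random.default_rng(0)
points = rng.random((1000, 3))
values = points[:, 2] ** 2

# Hypothetical locations at which interpolated values are wanted
query = np.array([[0.5, 0.5, 0.5],
                  [0.25, 0.25, 0.75]])
print(griddata(points, values, query, method='linear'))
```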
Max
On Tue, Oct 30, 2018 at 1:29 PM John Naliboff <jbnaliboff at ucdavis.edu>
wrote:
> Hi all,
>
> I agree with Jonathan that a repository containing post processing scripts
> (Paraview, Visit, Python, etc, etc) would be incredibly useful. In fact,
> Max Rudolph raised this question on the mailing list with respect to post
> processing scripts developed at CIDER. I believe the consensus was that we
> should include this type of script collection in the main repository.
>
> Marine - If you prefer to work outside of Paraview, you can also import
> unstructured VTK data into numpy arrays in python.
>
> Here is a link to an example of doing this data conversion (VTK to numpy):
>
> https://stackoverflow.com/questions/23138112/vtk-to-maplotlib-using-numpy
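>
> Roughly, the conversion looks like this (a minimal sketch, assuming the vtk
> Python package is installed; the file name and the field name 'T' are
> placeholders that depend on your output):
>
> ```
> import vtk
> from vtk.util.numpy_support import vtk_to_numpy
>
> # Read one parallel VTU file written by ASPECT
> reader = vtk.vtkXMLPUnstructuredGridReader()
> reader.SetFileName('output/solution/solution-00000.pvtu')
> reader.Update()
> data = reader.GetOutput()
>
> # Node coordinates as an (n_points, 3) numpy array
> coords = vtk_to_numpy(data.GetPoints().GetData())
> # A point field by name, e.g. temperature
> temperature = vtk_to_numpy(data.GetPointData().GetArray('T'))
> ```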
>
> Happy to send over some of my own scripts as well.
>
> However, as Wolfgang noted, for very large data sets this will be
> prohibitively time consuming and memory intensive. Perhaps there are ways to
> speed this up through cython and python's openmp functionality, but I have
> not looked into it. Paraview can be run in parallel and it really is the
> most efficient way to process and explore large data sets.
>
> Cheers,
> John
>
> On 10/30/18 11:32 AM, Jonathan Perry-Houts wrote:
>
> Perhaps a repository of paraview filters would be useful. It's possible
> to save a "pipeline" as a filter, and expose certain variables to the
> user through it.
>
> Marine: I agree with Wolfgang that this is not an efficient way to do
> things, but it does come in handy sometimes. For what it's worth, here's
> a script to convert vtk output to csv, which you can then handle with
> the usual python tools:
>
> ```
> import glob
> from paraview import simple as pvsimple
>
> # Convert each parallel VTU file in the output directory to CSV
> for ifname in glob.glob('output/solution/*.pvtu'):
>     ofname = ifname.rsplit('.', 1)[0] + '.csv'
>
>     reader = pvsimple.OpenDataFile(ifname)
>     writer = pvsimple.CreateWriter(ofname, reader)
>     writer.FieldAssociation = 'Points'
>     writer.UpdatePipeline()
> ```
>
> I also highly recommend compressing those files, because they'll
> probably be enormous:
>
> ```
> import gzip, os, shutil
>
> with open(ofname, 'rb') as f_in, \
>      gzip.open(ofname+'.gz', 'wb') as f_out:
>     shutil.copyfileobj(f_in, f_out)
>
> os.remove(ofname)
> ```
>
> numpy (and probably Pandas and whatnot) automatically recognize gzipped
> data and treat it transparently as a plain csv file.
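>
> For example, with pandas (a quick sketch; the file name is just a
> placeholder for one of the converted files):
>
> ```
> import pandas as pd
>
> # gzip compression is inferred from the '.gz' extension
> df = pd.read_csv('output/solution/solution-00000.csv.gz')
> print(df.columns)
> ```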
>
> Hope that helps someone,
> -JPH
>
> On 10/30/18 10:11 AM, Wolfgang Bangerth wrote:
>
> Marine,
> I'm not sure anyone ever answered this email -- so let me do this 3
> months late...
>
>
>
> But to be able to do good comparisons with existing results, I'll need
> things like horizontal averages (of composition, temperature,
> velocities) and the difference between the field and the horizontal
> average (of composition, temperature). But the horizontal average here is
> over a sphere, so I am guessing the post-processors are not so easy (to
> get average radial profiles).
>
>
> This is actually what our 'depth average' postprocessor does -- it averages
> over all points at a certain depth, where depth is defined differently
> for each geometry.
>
>
>
> I'm definitely not used to paraview, and I'm more used to doing all my data
> processing by hand... So a good tutorial on how to implement that with
> paraview or equivalent, with scripting (I usually have several dozen
> runs to process at a time), would be nice :-) But so far, I think that
> a way to obtain a list of the points and values in the volume (without
> duplicates), and maybe something to do the interpolation at required
> points, would be nice.
>
>
> This isn't going to be efficient. There may be tens of millions of
> points in a simulation, and they are stored by maybe hundreds of
> processors, and the interpolation between individual points is really
> quite complicated if you have a non-trivial geometry. There are really
> only two efficient ways to deal with this much data: by outputting the
> data in specialized file formats for visualization, and by implementing
> postprocessors within ASPECT. I know that you'd like to do this by hand
> in some scripting language, but don't do it -- you won't be happy this
> way if the data just becomes too large.
>
> Best
> W.
>
>