[CIG-CS] Particle problems and Semi-Lagrangian Schemes for Gamr
Walter Landry
walter at geodynamics.org
Wed Jun 1 15:45:43 PDT 2011
Matthew Knepley <knepley at mcs.anl.gov> wrote:
> On Wed, Jun 1, 2011 at 3:06 AM, Walter Landry <walter at geodynamics.org>wrote:
>> Rather, it is not clear to me how you can identify and track an
>> arbitrary parcel of material through its history. For example,
>> consider a simulation with a single type of material. At time 0 we
>> have tracers at points p0(0), p1(0), p2(0). We take one time step and
>> they are now at points p0(1), p1(1), p2(1). Another time step and
>> they are at p0(2), p1(2), p2(2). At each time step, we have a place
>> to put the tracers' current position, pressure and temperature. At
>> the end of the simulation, we can look at where the tracers end up
>> and examine the history of an individual tracer. With
>> semi-Lagrangian, there is no place to put that information.
>
>
> Of course there is. If you need history, save the field at each
> step. It's still much smaller than the particle info at each step.
So if I understand you correctly, we would, at the end of a
simulation, integrate backwards or forwards in time to get the history
of any particular point? Given that, I have a few thoughts.
1) We may want to concentrate particles in places of interest. For
example, we may want to see how material from one place gets
entrained and mixes. For that particular case, we can just define
a fictitious material interface and track that. Alternatively, we
can advect a color function.
However, to get the P-T traces of that material, we still have to
save the fields everywhere and do some post-processing. I do not
know how often that problem comes up.
On the flip side, if we decide after the run is completed that we
want to look in more detail at one region, we do not have to run
the simulation again.
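To make the color-function option concrete, here is a minimal 1D sketch
(Python, not Gamr code) of a semi-Lagrangian advection step for a color
field: each grid node traces backwards along the velocity and interpolates
linearly from the old field. The grid size, velocity, and CFL number of 1
are made-up illustration values.

```python
# Minimal 1D semi-Lagrangian advection of a "color" function that tags
# material from a region of interest. Not Gamr code; constant velocity
# and linear interpolation are the simplest possible choices.

def semi_lagrangian_step(color, velocity, dx, dt):
    """Advance the color field one step by tracing each node backwards
    along the velocity and interpolating linearly."""
    n = len(color)
    new_color = [0.0] * n
    for i in range(n):
        # Departure point: where the material now at node i came from.
        x = i * dx - velocity * dt
        j = int(x // dx)
        frac = x / dx - j
        # Clamp to the grid and interpolate between the two nearest nodes.
        j0 = min(max(j, 0), n - 1)
        j1 = min(max(j + 1, 0), n - 1)
        new_color[i] = (1.0 - frac) * color[j0] + frac * color[j1]
    return new_color

# Tag material initially in the left half of the domain ...
color = [1.0] * 10 + [0.0] * 10
# ... and watch it drift right at one cell per step (velocity*dt/dx = 1).
for _ in range(5):
    color = semi_lagrangian_step(color, velocity=1.0, dx=1.0, dt=1.0)
```

After five steps the material interface has moved five cells to the right,
with the left boundary treated as inflow of tagged material.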
2) We do not have to save the current state of the particle at each
time step. We can save it every 50th time step. To integrate
back in time, we will have to save the velocities at every time
step.
So let's do an estimate of the relative storage costs of each
method. Let's assume we have P particles, N elements, and we save
every dt steps. In D dimensions, after T time steps, that is
P*D*T/dt doubles to store the positions of the particles. To store
the velocities, that is N*D*T doubles. The cost to store M other
numbers, such as the pressure and temperature, is M*P*T/dt for
particles and N*M*T/dt for fields.
Storage of particles = (P*T/dt) * (D + M)
Storage of fields = (N*T) * (D + M/dt)
To get some sense of numbers, if we look at Cookbook 7 in the
CitcomS manual, P/N=20, dt=5, D=3, and M=1, making the storage
ratio
Particles/Fields = ((20/5)*(3+1)) / (3 + 1/5) = 16/3.2 = 5
So fields look pretty good for that example.
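The estimate can be checked with a few lines of throwaway Python; the
total step count T below is an arbitrary stand-in, since it cancels in
the ratio.

```python
# Storage-cost estimate from the text; every quantity is a count of
# doubles. T is arbitrary here because it cancels in the ratio.

def particle_storage(P, T, dt, D, M):
    # Positions plus M extra numbers per particle, saved every dt steps.
    return (P * T / dt) * (D + M)

def field_storage(N, T, dt, D, M):
    # Velocities at every step (needed for back-integration) plus M
    # extra fields saved every dt steps.
    return N * T * (D + M / dt)

# Cookbook 7 from the CitcomS manual: P/N = 20, dt = 5, D = 3, M = 1.
ratio = particle_storage(P=20, T=100, dt=5, D=3, M=1) \
        / field_storage(N=1, T=100, dt=5, D=3, M=1)
print(ratio)  # ratio of particle to field storage; about 5 here
```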
3) It is a serious error if, in the process of integrating back in
time, one material turns into another. To conserve mass, we have
to apply fixes to the interface after advection. This will not be
reflected in the velocity field, so material may be converted by
these fixes. But particles will have this problem as well: even
though a particle starts out as a certain material, these same
interface fixes may move it out of that material.
On the other hand, these fixes should improve with resolution. So
maybe we should just look at any material conversion as an
indicator of error.
4) The machinery to integrate back in time will require some work. If
we are running on 100,000 cores, the relevant information will be
scattered amongst files owned by several different processors. Now
it is true that particles will also require work, so maybe we
just have to pick our poison.
The difference is that I know I can implement particles in a
simple, though memory-hungry way. I can just add 60 fields to each
element, and that will give me up to 20 particles per element. I
do not have to know anything more about deal.II/SAMRAI than I
already know. It will definitely work. Resizable arrays would be
a better way, but I would have to navigate through the internals of
deal.II/SAMRAI.
In contrast, reconstructing trajectories by reading in grids which
have been adapted over time would probably require navigating
through internals. It is not a typical use case for these
frameworks.
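As an illustration of the memory-hungry fallback described above, here is
a small Python sketch (not deal.II/SAMRAI code) of fixed-capacity particle
storage: 20 slots of D=3 coordinates, i.e. the 60 scalar fields per
element, with NaN marking an empty slot.

```python
# Sketch of the simple, memory-hungry scheme: every element carries a
# fixed block of 60 scalar fields, i.e. 20 particle slots times 3
# coordinates in 3D. NaN marks an unused slot. Not framework code.
import math

D = 3                 # dimensions
MAX_PARTICLES = 20
FIELDS_PER_ELEMENT = D * MAX_PARTICLES  # the 60 fields from the text

def make_element():
    return [math.nan] * FIELDS_PER_ELEMENT

def add_particle(element, position):
    """Store a particle's position in the first free slot; fail if full."""
    for slot in range(MAX_PARTICLES):
        if math.isnan(element[slot * D]):
            element[slot * D:slot * D + D] = position
            return True
    return False  # element already holds its 20 particles

elem = make_element()
add_particle(elem, [0.1, 0.2, 0.3])
```

The fixed capacity is exactly the drawback mentioned above: once 20
particles occupy an element, further particles are simply dropped, which
is what resizable arrays would avoid.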
5) The workflow will be different. At the end of a run, rather than
having all of the data available, we will have to do an additional
step of integrating back in time. To automate this, we could
enable it with a parameter in the input file, so it should not be
confusing for the user. However, if the simulation terminates
early (e.g. because it ran out of time), we would still need a way
to do the post-processing manually.
Also, integrating back in time from element nodes means that, after
every time step, the picture of tracers will change. This could be
disconcerting. We could remedy that by integrating forwards in
time. At that point, it feels like we are just reinventing
particles, but we have more flexibility.
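The back-integration step itself could be sketched as follows: trace a
point backwards through the velocities saved at every time step, here with
a plain explicit Euler step. In a real run the velocities would be read
from checkpoint files scattered across processors; the stand-in list of
velocity functions and the uniform velocity of 2 are illustration values.

```python
# Post-processing sketch: reconstruct a trajectory by integrating a point
# backwards through velocities saved at every time step. Not Gamr code;
# saved_velocities stands in for data read back from checkpoint files.

def backtrack(x_final, saved_velocities, dt):
    """Walk from the last step to the first; return positions oldest-first."""
    x = x_final
    history = [x]
    for velocity in reversed(saved_velocities):
        x = x - velocity(x) * dt   # simple explicit Euler back in time
        history.append(x)
    history.reverse()
    return history

# Uniform velocity u = 2: the point now at x = 10 was 2*dt further left
# at each earlier step.
saved = [lambda x: 2.0 for _ in range(5)]
path = backtrack(x_final=10.0, saved_velocities=saved, dt=1.0)
print(path)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
```

A production version would use at least a midpoint rule rather than Euler,
but the bookkeeping (reading velocities newest-to-oldest, emitting the
history oldest-first) is the same.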
So as long as there are not a lot of people who want to use a small
number of tracers with large runs, I think a pure field-based approach
should work. I am a bit concerned about implementation issues, but we
can examine those more closely when we face them.
Cheers,
Walter Landry
walter at geodynamics.org