[CIG-MC] Re: More advection questions

Eh Tan tan2 at geodynamics.org
Wed Sep 19 14:02:29 PDT 2007



Norm Sleep wrote:
> Sorry, but I am confused again by pointers.  I cannot find where v dot
> grad T is actually evaluated.
>
> Where is div grad T evaluated for conduction?
>

For CitcomCU, the advection term is evaluated at lines 649 and 656 of
Advection_diffusion.c as:
    v1[i] * tx1[i] + v2[i] * tx2[i] + v3[i] * tx3[i]
where tx1, tx2, and tx3 are the x1, x2, and x3 components of grad(T).
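
As a self-contained sketch of that dot product (an illustration only;
the array sizes and values below are made up, not CitcomCU's actual
data):

    #include <stdio.h>

    int main(void)
    {
        /* v and grad(T) at 2 integration points (illustrative values) */
        double v1[2]  = {1.0, 0.8}, v2[2]  = {0.5, 0.4}, v3[2]  = {-0.2, 0.1};
        double tx1[2] = {0.3, 0.2}, tx2[2] = {0.1, 0.0}, tx3[2] = { 0.7, 0.6};

        for (int i = 0; i < 2; i++) {
            /* the advection term v . grad(T), as in the line quoted above */
            double adv = v1[i] * tx1[i] + v2[i] * tx2[i] + v3[i] * tx3[i];
            printf("point %d: v.grad(T) = %g\n", i, adv);
        }
        return 0;
    }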

The conduction term is evaluated at the same lines as:
    E->gNX[el].vpt[GNVXINDEX(0, j, i)] * tx1[i] +
    E->gNX[el].vpt[GNVXINDEX(1, j, i)] * tx2[i] +
    E->gNX[el].vpt[GNVXINDEX(2, j, i)] * tx3[i]
where gNX holds the derivatives of the shape functions.
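
A similar standalone sketch of the indexing for grad(N_j) . grad(T)
(the flat array and the IDX macro below are illustrative stand-ins for
E->gNX[el].vpt and GNVXINDEX, not their actual layout):

    #include <stdio.h>

    /* stand-in index: direction d, shape function j, integration point i,
       with 2 shape functions and 2 integration points (made up) */
    #define IDX(d, j, i) (((d) * 2 + (j)) * 2 + (i))

    int main(void)
    {
        double gnx[12];                        /* 3 directions x 2 x 2 */
        double tx1[2] = {0.3, 0.2}, tx2[2] = {0.1, 0.0}, tx3[2] = {0.7, 0.6};

        for (int k = 0; k < 12; k++)           /* made-up derivative values */
            gnx[k] = 0.05 * (k + 1);

        for (int j = 0; j < 2; j++)
            for (int i = 0; i < 2; i++) {
                double c = gnx[IDX(0, j, i)] * tx1[i]
                         + gnx[IDX(1, j, i)] * tx2[i]
                         + gnx[IDX(2, j, i)] * tx3[i];
                printf("j=%d, i=%d: grad(N_j).grad(T) = %g\n", j, i, c);
            }
        return 0;
    }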


> Confirm: The code to find velocity uses velocity from the previous time
> step, so that fine time steps do not greatly increase running time.

If you want to reach a certain model time, using a finer step size
increases the number of steps. The running time of each step is not
affected, but the total running time increases proportionally: halving
the step size roughly doubles the total running time.
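
A back-of-the-envelope sketch of that scaling (all numbers made up;
only the trend matters):

    #include <stdio.h>

    int main(void)
    {
        double model_time   = 1.0;   /* target model time (arbitrary units) */
        double sec_per_step = 2.0;   /* assumed wall-clock cost per step */

        for (double dt = 1e-2; dt > 2e-3; dt /= 2.0) {
            double nsteps = model_time / dt;
            printf("dt=%g: %.0f steps, ~%.0f s total\n",
                   dt, nsteps, nsteps * sec_per_step);
        }
        return 0;
    }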


>
> I can do this in FORTRAN trivially.  div grad T is already centered on
> a Cartesian 3-D grid, so the explicit derivatives are obvious if I can
> find the neighbors.

In the single-processor case, one can access all nodes directly. The six
neighboring nodes of node n are:

n+1  (+ z-direction)
n-1  (- z-direction)
n+E->lmesh.noz  (+ x-direction)
n-E->lmesh.noz  (- x-direction)
n+E->lmesh.noz*E->lmesh.nox  (+ y-direction)
n-E->lmesh.noz*E->lmesh.nox  (- y-direction)


These node numbers must be in the range [1, E->lmesh.nno] to be valid.
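
Putting those offsets to use, a minimal sketch (assuming a regular mesh
with uniform spacing h and a flat, 1-based temperature array; the mesh
sizes below stand in for E->lmesh.noz, nox, and noy and are made up):

    #include <stdio.h>

    int main(void)
    {
        int noz = 4, nox = 4, noy = 4;            /* illustrative sizes */
        int nno = noz * nox * noy;
        double h = 1.0;                           /* assumed uniform spacing */
        double T[nno + 1];                        /* 1-based node numbers */

        for (int k = 1; k <= nno; k++)            /* placeholder field */
            T[k] = (double) (k * k) / nno;

        int n = 2 + noz + noz * nox;              /* an interior node */

        /* the six neighbors, using the offsets listed above */
        int zp = n + 1,         zm = n - 1;
        int xp = n + noz,       xm = n - noz;
        int yp = n + noz * nox, ym = n - noz * nox;

        /* explicit 7-point stencil for div grad T */
        double lap = (T[zp] + T[zm] + T[xp] + T[xm] + T[yp] + T[ym]
                      - 6.0 * T[n]) / (h * h);
        printf("div grad T at node %d = %g\n", n, lap);
        return 0;
    }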


In the multi-processor case, there is no easy way to access ALL
neighboring nodes: a neighboring node might be owned by another
processor. Some parallel communication (via MPI calls) would be
required, and I won't go into the details here.


-- 
Eh Tan
Staff Scientist
Computational Infrastructure for Geodynamics
2750 E. Washington Blvd. Suite 210
Pasadena, CA 91107
(626) 395-1693
http://www.geodynamics.org


