Hello Eh (and everyone else following the saga),

We (I mean Bill Broadley) determined that this issue is somehow related to the compilers, either openmpi-1.3.3 or gcc-4.4. Bill recompiled on the new machine using openmpi-1.3.1 and gcc-4.3, and CitcomCU is no longer hanging.

Apparently, we are not the first to run into this issue with the compiler (there are reports on the web about it - I will send the specifics when I hear back from Bill; he was up until 4 am debugging, so I don't have the details yet).

Thanks for everyone's feedback and help!
And thanks to Jinshui for identifying the other (unknown) race condition.

I've recapped below, since some of the communication was done off the CIG list.

Magali

To recap:

Symptoms:
1) A CitcomCU test run on a new cluster using 8 processors hangs in either MPI_Isend or MPI_Waitall, but in a random way: sometimes the first time these calls are made, sometimes not until they have been made many times. The calls live in the functions that exchange information between processors, so they are reached during initialization when the mass matrix is defined, and later during the solver iterations via the gauss_seidel function. This exact same version of CitcomCU runs on another cluster without issues.

2) CitcomS was compiled on the same cluster with the same compilers and it seems to run fine.

3) On the new cluster the code was compiled with openmpi-1.3 and gcc-4.4. On the old cluster it was compiled with openmpi-1.2.6 and gcc-4.3.

Possibilities:
1) Race condition in the code - always a possibility, but probably not the source of this issue (see the sketch just after this list):
	- MPI_Isend is always paired with an MPI_Irecv and is always followed by an MPI_Waitall.
	- In the test we were running, no markers were being used, so E->parallel.mst1, mst2, and mst3 were not being used (although it's certainly good to have found this problem, and I will update my code). E->parallel.mst is used, but that array is initialized in parallel_domain_decomp1.
	- The mst array was also big enough (100 x 100), as the test was only on 8 processors.

2) Machine hardware (chipset + gigabit ethernet) - ugh, daunting.

3) Compilers.
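For anyone skimming the recap, the exchange pattern in question boils down to something like the sketch below. This is not the actual CitcomCU code - the buffer names, tag, and neighbor list are made up - it is just meant to show the Isend/Irecv pairing that a single MPI_Waitall then completes:

/* Minimal sketch of the pattern described in Possibility 1; not CitcomCU
 * source. Buffers, counts, the tag, and the neighbor list are illustrative. */
#include <mpi.h>

void exchange_with_neighbors(double *sendbuf[], double *recvbuf[],
                             int *count, int *neighbor_rank, int nneighbors)
{
    MPI_Request req[2 * nneighbors];   /* one Irecv + one Isend per neighbor */
    int k;

    for (k = 0; k < nneighbors; k++) {
        /* every MPI_Isend is paired with an MPI_Irecv... */
        MPI_Irecv(recvbuf[k], count[k], MPI_DOUBLE, neighbor_rank[k],
                  0, MPI_COMM_WORLD, &req[2 * k]);
        MPI_Isend(sendbuf[k], count[k], MPI_DOUBLE, neighbor_rank[k],
                  0, MPI_COMM_WORLD, &req[2 * k + 1]);
    }

    /* ...and the exchange always finishes with a single MPI_Waitall. */
    MPI_Waitall(2 * nneighbors, req, MPI_STATUSES_IGNORE);
}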
On Nov 18, 2009, at 12:13 AM, Eh Tan wrote:

> Hi Magali,
>
> Like Shijie said, the function exchange_id_d20() in CitcomCU is very
> similar to regional_exchange_id_d() in CitcomS. I don't have an
> immediate answer why one works but the other doesn't.
>
> BTW, in your earlier email, you mentioned that the code died inside
> function mass_matrix(). In this email, the code died inside function
> gauss_seidel(). Did the code die at different places randomly?
>
> Eh
>
> Magali Billen wrote:
>> Hello Eh,
>>
>> This is a run on 8 processors. If I print the stack I get:
>>
>> (gdb) bt
>> #0  0x00002b943e3c208a in opal_progress () from
>>     /share/apps/openmpisb-1.3/gcc-4.4/lib/libopen-pal.so.0
>> #1  0x00002b943def5c85 in ompi_request_default_wait_all () from
>>     /share/apps/openmpisb-1.3/gcc-4.4/lib/libmpi.so.0
>> #2  0x00002b943df229d3 in PMPI_Waitall () from
>>     /share/apps/openmpisb-1.3/gcc-4.4/lib/libmpi.so.0
>> #3  0x0000000000427ef5 in exchange_id_d20 ()
>> #4  0x00000000004166f3 in gauss_seidel ()
>> #5  0x000000000041884b in multi_grid ()
>> #6  0x0000000000418c44 in solve_del2_u ()
>> #7  0x000000000041b151 in solve_Ahat_p_fhat ()
>> #8  0x000000000041b9a1 in solve_constrained_flow_iterative ()
>> #9  0x0000000000411ca6 in general_stokes_solver ()
>> #10 0x0000000000409c21 in main ()
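(An aside while recapping: when a run wedges inside MPI_Waitall like this, one way to see which requests never complete is to temporarily swap the blocking wait for a polling loop. The sketch below is only a debugging idea, not code from CitcomCU; "req", "nreq", and "rank" are placeholders for whatever the exchange routine actually uses.)

/* Debugging sketch only (not CitcomCU code): poll instead of blocking in
 * MPI_Waitall, so a hang at least reports which requests are still pending. */
#include <mpi.h>
#include <stdio.h>

static void waitall_with_report(MPI_Request *req, int nreq, int rank)
{
    int flag = 0;
    long polls = 0;

    while (!flag) {
        /* MPI_Testall keeps the progress engine (opal_progress) running. */
        MPI_Testall(nreq, req, &flag, MPI_STATUSES_IGNORE);

        if (!flag && ++polls % 1000000 == 0) {
            int i, done;
            fprintf(stderr, "rank %d: still waiting after %ld polls, pending:",
                    rank, polls);
            for (i = 0; i < nreq; i++) {
                MPI_Test(&req[i], &done, MPI_STATUS_IGNORE);
                if (!done)
                    fprintf(stderr, " req[%d]", i);
            }
            fprintf(stderr, "\n");
        }
    }
}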
>>
>> I've attached the version of Parallel_related.c that is used... I have
>> not modified it in any way from the CIG release of CitcomCU.
>>
>> Luckily, there are commented fprintf statements in just that part of
>> the code... we'll continue to dig...
>>
>> Oh, and just to eliminate the new cluster from suspicion, we
>> downloaded, compiled, and ran the CitcomS example1.cfg on the same
>> cluster with the same compilers, and there was no problem.
>>
>> Maybe this is the sign that I'm supposed to finally switch from
>> CitcomCU to CitcomS... :-(
>>
>> Magali
>>
>> On Nov 17, 2009, at 5:02 PM, Eh Tan wrote:
>>
>>> Hi Magali,
>>>
>>> How many processors are you using? If more than 100 processors are
>>> used, you are seeing this bug:
>>> http://www.geodynamics.org/pipermail/cig-mc/2008-March/000080.html
>>>
>>> Eh
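(Another aside: the bug Eh links to comes from statically sized rank tables. A guard along the lines below - hypothetical, not present in CitcomCU, and assuming the tables really are declared 100 x 100 as noted in the recap - would at least turn a silent overrun into a clean abort when too many processors are used.)

/* Hypothetical guard (not in CitcomCU): if the per-processor rank tables
 * have a fixed bound, refuse to run on more processors than they can hold.
 * MST_DIM = 100 is the figure mentioned in the discussion above. */
#include <mpi.h>
#include <stdio.h>

#define MST_DIM 100

static void check_mst_capacity(MPI_Comm comm)
{
    int nproc;

    MPI_Comm_size(comm, &nproc);
    if (nproc > MST_DIM) {
        fprintf(stderr, "rank tables are %d x %d but the job uses %d "
                "processors; enlarge the tables or use fewer processors\n",
                MST_DIM, MST_DIM, nproc);
        MPI_Abort(comm, 1);
    }
}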
>>>
>>> Magali Billen wrote:
>>>> One correction to the e-mail below: we've been compiling CitcomCU
>>>> using openmpi on our old cluster, so the compiler on the new cluster
>>>> is the same. The big difference is that the new cluster is about
>>>> twice as fast as the 5-year-old cluster. This suggests that the
>>>> change to a much faster cluster may have exposed an existing race
>>>> condition in CitcomCU??
>>>>
>>>> Magali
>>>>
>>>> Begin forwarded message:
>>>>
>>>>> From: Magali Billen <mibillen@ucdavis.edu>
>>>>> Date: November 17, 2009 4:23:45 PM PST
>>>>> To: cig-mc@geodynamics.org
>>>>> Subject: [CIG-MC] MPI_Isend error
>>>>>
>>>>> Hello,
>>>>>
>>>>> I'm using CitcomCU and am having a strange problem, with the run
>>>>> either hanging (no error, it just doesn't go anywhere) or dying with
>>>>> an MPI_Isend error (see below). I seem to recall having problems with
>>>>> the MPI_Isend command and the lam-mpi version of MPI, but I've not
>>>>> had any problems with mpich-2. On the new cluster we are compiling
>>>>> with openmpi instead of MPICH-2.
>>>>>
>>>>> The MPI_Isend error seems to occur during initialization in the call
>>>>> to the function mass_matrix, which then calls exchange_node_f20,
>>>>> which is where the call to MPI_Isend is.
>>>>>
>>>>> --snip--
>>>>> ok14: parallel shuffle element and id arrays
>>>>> ok15: construct shape functions
>>>>> [farm.caes.ucdavis.edu:27041] *** An error occurred in MPI_Isend
>>>>> [farm.caes.ucdavis.edu:27041] *** on communicator MPI_COMM_WORLD
>>>>> [farm.caes.ucdavis.edu:27041] *** MPI_ERR_RANK: invalid rank
>>>>> [farm.caes.ucdavis.edu:27041] *** MPI_ERRORS_ARE_FATAL (your MPI job
>>>>> will now abort)
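(A note on that error, for the archive: MPI_ERR_RANK means the destination rank handed to MPI_Isend was not a valid rank of MPI_COMM_WORLD, which is exactly the failure an uninitialized or out-of-range entry in a neighbor/rank table would produce. A throwaway check like the one below, which is not part of CitcomCU and uses a made-up "dest" variable, can confirm the destination before the send is posted.)

/* Throwaway diagnostic (not in CitcomCU): verify the destination rank is
 * valid for the communicator before posting the MPI_Isend. "dest" stands
 * in for whatever neighbor/mst entry the exchange routine uses. */
#include <mpi.h>
#include <stdio.h>

static void check_dest_rank(int dest, MPI_Comm comm)
{
    int size, me;

    MPI_Comm_size(comm, &size);
    MPI_Comm_rank(comm, &me);
    if (dest < 0 || dest >= size)
        fprintf(stderr, "rank %d: bad destination rank %d (communicator size %d)\n",
                me, dest, size);
}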
>>>>>
>>>>> Has this type of error occurred for other versions of Citcom that
>>>>> use MPI_Isend (it seems that CitcomS uses this command also)? I'm
>>>>> not sure how to debug this error, especially since sometimes it
>>>>> just hangs with no error.
>>>>>
>>>>> Any advice you have would be helpful,
>>>>> Magali
>>>
>>> --
>>> Eh Tan
>>> Staff Scientist
>>> Computational Infrastructure for Geodynamics
>>> California Institute of Technology, 158-79
>>> Pasadena, CA 91125
>>> (626) 395-1693
>>> http://www.geodynamics.org

-----------------------------
Associate Professor, U.C. Davis
Department of Geology/KeckCAVEs
Physical & Earth Sciences Bldg, rm 2129
Davis, CA 95616
-----------------
mibillen@ucdavis.edu
(530) 754-5696
-----------------------------
** Note new e-mail, building, office
   information as of Sept. 2009 **
-----------------------------