Hi Jonathan,

The output of your dry scheduler run looks normal to me. However, your cluster might have special requirements for the job script. You will need to compare your dry scheduler run with your colleague's job script.
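For example, many clusters expect the MPI module to be loaded inside the job script itself rather than in the login environment. A rough sketch of what your colleague's script might contain beyond the generated one (the module name and directives here are only placeholders; the real script will show what your site actually needs):

#!/bin/sh
#PBS -q generic
#PBS -l nodes=3:ppn=4
cd $PBS_O_WORKDIR
# load the cluster's MPI module inside the job, not only in ~/.profile
module load mpi
mpirun -np 12 ...

Whatever extra lines appear in the working script are probably what your cluster expects.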

Eh

On Tue, Mar 5, 2013 at 4:21 AM, Jonathan Perry-Houts <jperryhouts@gmail.com> wrote:
<div class="im">-----BEGIN PGP SIGNED MESSAGE-----<br>
Hash: SHA1<br>
<br>
Oh, of course! I can't believe I didn't think of that. Thanks, Eh.

As a follow-up question, I'm now getting the following error when I try to run the cookbook1 example:

!!!! # of requested CPU is incorrect (expected: 12 got: 1)

My ~/.pyre/CitcomS/CitcomS.cfg file looks like:
[CitcomS]
scheduler = pbs
[CitcomS.job]
queue = generic
[CitcomS.launcher]
command = mpirun --mca btl_tcp_if_include torbr -np ${nodes}
[CitcomS.pbs]
ppn = 4

and on dry launcher runs I get reasonable-looking output:
mpirun --mca btl_tcp_if_include torbr -np 12
/home13/username/packages/CitcomS-3.2.0/bin/mpipycitcoms --pyre-start
...paths... pythia mpi:mpistart CitcomS.SimpleApp:SimpleApp
cookbook1.cfg --nodes=12 --macros.nodes=12 --macros.job.name=
--macros.job.id=227074.hn1
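That -np 12 is what I'd expect: cookbook1 needs 12 processors total, and with ppn = 4 the scheduler should split that into 3 nodes. A quick shell sanity check of that arithmetic (nothing CitcomS-specific, just ceiling division):

TOTAL=12; PPN=4
echo "nodes=$(( (TOTAL + PPN - 1) / PPN )):ppn=$PPN"   # prints nodes=3:ppn=4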

Likewise, the dry scheduler run gives reasonable-looking output:
#!/bin/sh
#PBS -S /bin/sh
#PBS -N jobname
#PBS -q generic
#PBS -o stdout.txt
#PBS -e stderr.txt
#PBS -l nodes=3:ppn=4
cd $PBS_O_WORKDIR
...
# ~~~~ submit command ~~~~
# qsub < [script]

It appears that lots of jobs are somehow being submitted with one processor allocated to each, rather than one job with lots of processors: the program spits out 12 identical pid*.cfg files.
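In case it helps narrow this down, here is a throwaway test job I can submit to see whether mpirun inside a PBS job actually sees all 12 slots (the module name is just a placeholder for whatever `module load mpi...` I run interactively, and this assumes the MPI is PBS-aware):

#!/bin/sh
#PBS -q generic
#PBS -l nodes=3:ppn=4
cd $PBS_O_WORKDIR
# load the same MPI module I use interactively (placeholder name)
module load mpi
which mpirun
# each allocated slot should print its hostname, 12 lines total;
# a single line would mean mpirun is not picking up the PBS allocation
# (if the MPI is not PBS-aware, add: -hostfile $PBS_NODEFILE)
mpirun -np 12 hostname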

Thanks again for your help!

Cheers,
Jonathan

On 03/04/2013 01:16 AM, tan2 wrote:
> Hi Jonathan,
>
> You can add 'module load mpi...' in your ~/.profile (if using bash)
> or ~/.login (if using tcsh).
>
>
> Eh
>
> On Fri, Mar 1, 2013 at 5:36 AM, Jonathan Perry-Houts
> <jperryhouts@gmail.com> wrote:
>
</div><div class="im">> Hi all,<br>
><br>
> I was wondering if there's a simple way to modify the script that<br>
> gets run when I run the citcoms executable. The cluster I'm using<br>
> has several versions of MPI installed and I need to run `module<br>
> load mpi...` at the start of any session to set the appropriate<br>
> environment variables for the MPI version I want.<br>
><br>
> The problem I'm having is that once the citcom job submits its self<br>
> to the PBS queue, it tries to use MPI without loading the<br>
> appropriate module first. Is there a way to easily change this<br>
> behavior?<br>
><br>
> Thanks in advance for any help!<br>
><br>
> Cheers, Jonathan Perry-Houts<br>
</div><div class="im">>> _______________________________________________ CIG-MC mailing<br>
>> list <a href="mailto:CIG-MC@geodynamics.org">CIG-MC@geodynamics.org</a><br>
>> <a href="http://geodynamics.org/cgi-bin/mailman/listinfo/cig-mc" target="_blank">http://geodynamics.org/cgi-bin/mailman/listinfo/cig-mc</a><br>
>><br>
><br>
><br>
><br>
> _______________________________________________ CIG-MC mailing<br>
> list <a href="mailto:CIG-MC@geodynamics.org">CIG-MC@geodynamics.org</a><br>
> <a href="http://geodynamics.org/cgi-bin/mailman/listinfo/cig-mc" target="_blank">http://geodynamics.org/cgi-bin/mailman/listinfo/cig-mc</a><br>
><br>
</div><div class="im">-----BEGIN PGP SIGNATURE-----<br>
Version: GnuPG v1.4.11 (GNU/Linux)<br>
Comment: Using GnuPG with undefined - <a href="http://www.enigmail.net/" target="_blank">http://www.enigmail.net/</a><br>
<br>
</div>iQEcBAEBAgAGBQJRNQJaAAoJEGe6xJ1FYRpRr5sIALvndHrCjzZHm3VgL8qRziyc<br>
odkPl0rAuUmfQ6SpW2lNuU7QpGrgEEack/WNBVBxcm+jyGjeKseUQQCMJ4Yp1pdr<br>
pwBHbSmi5Dfi+H0GGIpctx2XnU9+jCig9N4OcMiRpE1zTbWDFFnA/l3gTsirIjQZ<br>
IyhkcEb1quBKdvo7a48IU2fs2nrhJb7n5+n/9FZrQlHiNJM+pQLVD8UQM55k5NTt<br>
dAhv3ZdtX293OlTH327JS41xuRZtvqK8Y4vb2YYkZ+bSO9CVU5xW2aTpezbU0Qfc<br>
Y12VgIObN9yZmraJlIZs1J0u4uzTco+7QanP+GeR8ZB9vR+h3HKqFLdamrMOGs4=<br>
=tu8K<br>
<div class="HOEnZb"><div class="h5">-----END PGP SIGNATURE-----<br>
_______________________________________________
CIG-MC mailing list
CIG-MC@geodynamics.org
http://geodynamics.org/cgi-bin/mailman/listinfo/cig-mc