[CIG-MC] CitcomS launcher modify batch script
tan2
tan2tan2 at gmail.com
Mon Mar 4 17:44:03 PST 2013
Hi Jonathan,
The output of your dry scheduler run looks normal to me. However, your
cluster might have special requirements for the job script. You will need to
compare your dry scheduler run with your colleague's job script.
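
For example, a working script on your cluster may contain lines that the
generated script lacks. Here is a minimal sketch of such a script; the #PBS -V
line and the module name are only guesses, so copy whatever your colleague's
script actually does:

#!/bin/sh
#PBS -S /bin/sh
#PBS -N jobname
#PBS -q generic
#PBS -l nodes=3:ppn=4
#PBS -V                     # export the submitting shell's environment to the job
cd $PBS_O_WORKDIR
module load mpi             # load the same MPI module that CitcomS was built with
mpirun --mca btl_tcp_if_include torbr -np 12 ...

If lines like the last two are missing from your generated script, that is the
kind of difference to look for.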
Eh
On Tue, Mar 5, 2013 at 4:21 AM, Jonathan Perry-Houts
<jperryhouts at gmail.com> wrote:
>
> Oh, of course! I can't believe I didn't think of that. Thanks Eh.
>
> As a followup question, now I'm getting the following error when I try
> to run the cookbook1 file:
> !!!! # of requested CPU is incorrect (expected: 12 got: 1)
>
> My ~/.pyre/CitcomS/CitcomS.cfg file looks like:
> [CitcomS]
> scheduler = pbs
> [CitcomS.job]
> queue = generic
> [CitcomS.launcher]
> command = mpirun --mca btl_tcp_if_include torbr -np ${nodes}
> [CitcomS.pbs]
> ppn = 4
>
> and on dry launcher runs I get a reasonable looking output:
> mpirun --mca btl_tcp_if_include torbr -np 12
> /home13/username/packages/CitcomS-3.2.0/bin/mpipycitcoms --pyre-start
> ...paths... pythia mpi:mpistart CitcomS.SimpleApp:SimpleApp
> cookbook1.cfg --nodes=12 --macros.nodes=12 --macros.job.name=
> --macros.job.id=227074.hn1
>
> Likewise, the dry scheduler run gives a reasonable looking output:
> #!/bin/sh
> #PBS -S /bin/sh
> #PBS -N jobname
> #PBS -q generic
> #PBS -o stdout.txt
> #PBS -e stderr.txt
> #PBS -l nodes=3:ppn=4
> cd $PBS_O_WORKDIR
> ...
> # ~~~~ submit command ~~~~
> # qsub < [script]
>
> It appears that somehow lots of jobs are being submitted with one
> processor allocated to each, rather than one job with lots of processors;
> the program spits out 12 identical pid*.cfg files.
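>
> (For reference, here is a minimal check of the MPI toolchain, assuming
> OpenMPI given the --mca flag and using a placeholder module name; a
> hello-world built against the loaded module should report the full world
> size, not 1.)
>
> cat > mpi_check.c <<'EOF'
> #include <mpi.h>
> #include <stdio.h>
> int main(int argc, char **argv) {
>     int rank, size;
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>     printf("rank %d of %d\n", rank, size);
>     MPI_Finalize();
>     return 0;
> }
> EOF
> module load mpi                # placeholder module name
> mpicc mpi_check.c -o mpi_check
> mpirun -np 4 ./mpi_check       # ranks reporting "of 1" instead of "of 4" would match the symptom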
>
> Thanks again for your help!
>
> Cheers,
> Jonathan
>
> On 03/04/2013 01:16 AM, tan2 wrote:
> > Hi Jonathan,
> >
> > You can add 'module load mpi...' in your ~/.profile (if using bash)
> > or ~/.login (if using tcsh).
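> >
> > For example (the module name here is only a placeholder; `module avail`
> > lists the exact names on your cluster):
> >
> > # at the end of ~/.profile (bash) or ~/.login (tcsh); `module load` works in both shells
> > module load mpi    # replace 'mpi' with the exact module name on your cluster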
> >
> >
> > Eh
> >
> > On Fri, Mar 1, 2013 at 5:36 AM, Jonathan Perry-Houts
> > <jperryhouts at gmail.com> wrote:
> >
> > Hi all,
> >
> > I was wondering if there's a simple way to modify the script that
> > gets run when I run the citcoms executable. The cluster I'm using
> > has several versions of MPI installed and I need to run `module
> > load mpi...` at the start of any session to set the appropriate
> > environment variables for the MPI version I want.
> >
> > The problem I'm having is that once the citcom job submits itself
> > to the PBS queue, it tries to use MPI without loading the
> > appropriate module first. Is there a way to easily change this
> > behavior?
> >
> > Thanks in advance for any help!
> >
> > Cheers, Jonathan Perry-Houts
> _______________________________________________
> CIG-MC mailing list
> CIG-MC at geodynamics.org
> http://geodynamics.org/cgi-bin/mailman/listinfo/cig-mc
>