[CIG-MC] CitcomS launcher modify batch script
Jonathan Perry-Houts
jperryhouts at gmail.com
Mon Mar 11 10:50:19 PDT 2013
Thanks, Eh.
I ran in circles with this issue for a while and decided in the end to
just use the pure C code rather than the Python front end. Thanks for
your help.
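
In case it's useful to anyone searching the archives later: by "the
pure C code" I mean calling the compiled solver directly instead of
going through Pyre, along these lines (the regional solver and the
input file name here are just illustrative):

  mpirun --mca btl_tcp_if_include torbr -np 12 \
      /home13/username/packages/CitcomS-3.2.0/bin/CitcomSRegional input.sample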
Cheers,
Jonathan
On 03/04/2013 05:44 PM, tan2 wrote:
> Hi Jonathan,
>
> The output of your dry scheduler run looks normal to me. However,
> your cluster might have special requirements for its job scripts. You
> will need to compare the script from your dry scheduler run with your
> colleague's job script.
>
> Eh
>
> On Tue, Mar 5, 2013 at 4:21 AM, Jonathan Perry-Houts
> <jperryhouts at gmail.com> wrote:
>
> Oh, of course! I can't believe I didn't think of that. Thanks, Eh.
>
> As a follow-up question, I'm now getting the following error when I
> try to run the cookbook1 file:
>
>   !!!! # of requested CPU is incorrect (expected: 12 got: 1)
> My ~/.pyre/CitcomS/CitcomS.cfg file looks like:
>
>   [CitcomS]
>   scheduler = pbs
>
>   [CitcomS.job]
>   queue = generic
>
>   [CitcomS.launcher]
>   command = mpirun --mca btl_tcp_if_include torbr -np ${nodes}
>
>   [CitcomS.pbs]
>   ppn = 4
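>
> (One thing I haven't ruled out is a mismatch between that mpirun and
> the MPI library CitcomS was linked against. Assuming mpipycitcoms is
> dynamically linked, something like this should show whether they
> agree:
>
>   which mpirun
>   ldd /home13/username/packages/CitcomS-3.2.0/bin/mpipycitcoms | grep -i mpi
>
> If those point at different MPI installations, each process can come
> up as rank 0 of its own size-1 MPI_COMM_WORLD.)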
>
> and on dry launcher runs I get reasonable-looking output:
>
>   mpirun --mca btl_tcp_if_include torbr -np 12
>     /home13/username/packages/CitcomS-3.2.0/bin/mpipycitcoms
>     --pyre-start ...paths... pythia mpi:mpistart
>     CitcomS.SimpleApp:SimpleApp cookbook1.cfg --nodes=12
>     --macros.nodes=12 --macros.job.name= --macros.job.id=227074.hn1
>
> Likewise, the dry scheduler run gives reasonable-looking output
> (note that nodes=3:ppn=4 is 3 x 4 = 12 processors, consistent with
> -np 12 above):
>
>   #!/bin/sh
>   #PBS -S /bin/sh
>   #PBS -N jobname
>   #PBS -q generic
>   #PBS -o stdout.txt
>   #PBS -e stderr.txt
>   #PBS -l nodes=3:ppn=4
>   cd $PBS_O_WORKDIR
>   ...
>
>   # ~~~~ submit command ~~~~ #
>   qsub < [script]
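>
> (As a stopgap I suppose I could capture that script, add the module
> line by hand, and submit it myself; the flag and module names below
> are my guesses for this setup:
>
>   citcoms cookbook1.cfg --scheduler.dry > job.pbs
>   # edit job.pbs: add "module load mpi" after the #PBS directives,
>   # using whatever module name the cluster actually provides
>   qsub job.pbs
>
> but I'd rather fix the launcher itself.)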
>
> It appears that somehow many jobs are being submitted with one
> processor allocated to each, rather than one job with many
> processors: the program spits out 12 identical pid*.cfg files.
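>
> (Assuming the mpirun here is OpenMPI's, given the --mca flag, one
> quick check that it really spawns a single 12-process job rather
> than a pile of singletons is:
>
>   mpirun --mca btl_tcp_if_include torbr -np 4 sh -c \
>     'echo rank $OMPI_COMM_WORLD_RANK of $OMPI_COMM_WORLD_SIZE'
>
> That should print ranks 0 through 3, each "of 4"; copies of
> "rank 0 of 1" would match the symptom above.)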
>
> Thanks again for your help!
>
> Cheers, Jonathan
>
> On 03/04/2013 01:16 AM, tan2 wrote:
>>>> Hi Jonathan,
>>>>
>>>> You can add 'module load mpi...' in your ~/.profile (if using
>>>> bash) or ~/.login (if using tcsh).
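>>>>
>>>> For example, in ~/.profile (the module name and the init-script
>>>> path are site-specific; check with your admins):
>>>>
>>>>   # make the "module" command available in non-interactive shells
>>>>   [ -f /etc/profile.d/modules.sh ] && . /etc/profile.d/modules.sh
>>>>   module load mpi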
>>>>
>>>>
>>>> Eh
>>>>
>>>> On Fri, Mar 1, 2013 at 5:36 AM, Jonathan Perry-Houts
>>>> <jperryhouts at gmail.com> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I was wondering if there's a simple way to modify the script
>>>> that gets run when I run the citcoms executable. The cluster
>>>> I'm using has several versions of MPI installed and I need to
>>>> run `module load mpi...` at the start of any session to set
>>>> the appropriate environment variables for the MPI version I
>>>> want.
>>>>
>>>> The problem I'm having is that once the citcom job submits
>>>> itself to the PBS queue, it tries to use MPI without loading
>>>> the appropriate module first. Is there a way to easily change
>>>> this behavior?
>>>>
>>>> Thanks in advance for any help!
>>>>
>>>> Cheers, Jonathan Perry-Houts