[molpro-user] run molpro in parallel with sge

Javier Díaz Montes javier.diaz at uclm.es
Tue Dec 4 10:06:49 GMT 2007


Hi,

Thank you. I have compiled using MPI, as Shenggang Li suggested, and now I
can run molpro with the command that Reuti gave me. Molpro runs fine and the
jobs finish.

Regards


On Fri, 30 Nov 2007 19:47:12 +0100, Reuti <reuti at staff.uni-marburg.de>
wrote:

> Hi,
>
> On 30.11.2007 at 17:58, Javier Díaz Montes wrote:
>
>> I am trying to run molpro in parallel, i.e. to execute a molpro job on
>> two nodes of a cluster using MPI. However, I get an error when I execute
>> my batch file. Could someone help me?
>> I have installed molpro from the rpm molpro-mpp-2006.1-102.p4.rpm.
>>
>>
>> This is the error:
>> --------------------------------------------------------------------------
>> mpirun was unable to determine how many processes to launch for the
>> following process:
>>
>>     /home/programs/molpro/molpro-mpp-Linux-i686-i4-2006.1/molprop_2006_1_i4_p4_tcgmsg.exe
>>
>> You must specify how many processes to launch, via the -np
>> argument.
>> --------------------------------------------------------------------------
>>
>> This is my batch file:
>>
>> #!/bin/bash
>> #$ -S /bin/bash
>> #$ -V
>> #$ -cwd
>> #$ -pe mpich 2
>>
>> /home/programs/molpro/bin/molpro is2qrtbt.com
>
> we use:
>
> molprop -n $NSLOTS --mpirun-machinefile $TMPDIR/machines < infile > outfile
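
For illustration, a complete submission script built around that call could
look roughly like the following. This is only a sketch: it assumes the
molprop wrapper is installed next to the molpro one under
/home/programs/molpro/bin, that the PE writes the machine file to
$TMPDIR/machines, and the output file name is just an example.

#!/bin/bash
#$ -S /bin/bash
#$ -V
#$ -cwd
#$ -pe mpich 2

# $NSLOTS and $TMPDIR are provided by SGE for this job
/home/programs/molpro/bin/molprop -n $NSLOTS \
  --mpirun-machinefile $TMPDIR/machines \
  < is2qrtbt.com > is2qrtbt.out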
>
>> And this is the molpro execution file:
>>
>> #!/bin/sh
>> TMPDIR=/state/partition2/$JOB_ID
>
> $TMPDIR is already set by SGE (i.e., SGE creates and deletes a dedicated
> directory for each job on all involved nodes). The only thing to note is
> that you should set up your PE so that it is attached to only one queue,
> because the queue name becomes part of the directory name, and the path
> should be the same on all nodes.
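
In other words, the execution file below does not need its own TMPDIR
definition or mkdir; a job can simply use the directory SGE hands it. A
minimal sketch (the directory name is only an example of the usual
<job_id>.<task_id>.<queue> pattern):

#!/bin/sh
# No TMPDIR=... and no mkdir needed: SGE has already created a per-job
# scratch directory on every node and exported its name.
echo "scratch directory from SGE: $TMPDIR"   # e.g. /tmp/4711.1.all.q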
>
> For this to work, you will need a so-called Tight Integration of your
> parallel application. With a properly set up PE, the start_proc_args
> defined in SGE will create a link to an rsh-wrapper. Was GA compiled with
> rsh or ssh? Did you use the MPI template in your SGE setup?
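
For reference, a tightly integrated PE following SGE's MPICH template
usually looks roughly like this (shown as "qconf -sp mpich" output; the
start/stop script paths depend on where SGE is installed, and the slot
count is only an example):

pe_name            mpich
slots              16
user_lists         NONE
xuser_lists        NONE
start_proc_args    /usr/sge/mpi/startmpi.sh -catch_rsh $pe_hostfile
stop_proc_args     /usr/sge/mpi/stopmpi.sh
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE

control_slaves TRUE together with -catch_rsh is what makes the rsh-wrapper
available to the job; whether job_is_first_task should be TRUE or FALSE
depends on whether the job script itself runs one of the parallel
processes, so check the howto linked below for your MPI flavour.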
>
>> mkdir $TMPDIR
>> MOLPRO_CONFIG="\
>> -G 8000000 \
>> -d$TMPDIR \
>> -I$TMPDIR \
>> -W$SGE_O_WORKDIR \
>> -m7272727 \
>> -K131072 \
>> -B64 \
>> --tcgmsg \
>
> If you compiled GA to use TCGMSG over MPI, this should read: --mpi \
>
>> -lmpirun \
>> --environment MOLPRO_COMPILER=ifort9.1 \
>> --environment MOLPRO_MXMBLK=0064 \
>> --environment MOLPRO_MXMBLN=0064 \
>> --environment MOLPRO_CACHE=0012288 \
>> --environment MOLPRO_UNROLL=2 \
>> --environment MOLPRO_MINDGM=5000 \
>> --environment MOLPRO_MINDGC=5000 \
>> --environment MOLPRO_MINDGR=5000 \
>> --environment MOLPRO_MINDGL=5000 \
>> --environment MOLPRO_MINDGV=5000 \
>> --environment MOLPRO_FLOPMXM=001961 \
>> --environment MOLPRO_FLOPDGM=001961 \
>> --environment MOLPRO_FLOPMXV=001302 \
>> --environment MOLPRO_FLOPDGV=001302 \
>> -x /home/programs/molpro/molpro-mpp-Linux-i686-i4-2006.1/molprop_2006_1_i4_p4_tcgmsg.exe \
>
> We use TCGMSG over MPI and get molprop_2006_1_i4_p4_mpi.exe
>
>> -L /home/programs/molpro/molpro-mpp-Linux-i686-i4-2006.1/ \
>> ";
>> export MOLPRO_CONFIG
>> exec /home/programs/molpro/molpro-mpp-Linux-i686-i4-2006.1/molpro $*
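
Taking Reuti's two remarks together, the execution file would change in two
places, roughly as sketched here. The exact executable name depends on what
the MPI build installed, the -m/-K/-B and --environment options stay as
before (omitted for brevity), and the TMPDIR=/state/partition2/$JOB_ID and
mkdir $TMPDIR lines can be dropped in favour of SGE's own $TMPDIR:

MOLPRO_CONFIG="\
-G 8000000 \
-d$TMPDIR \
-I$TMPDIR \
-W$SGE_O_WORKDIR \
--mpi \
-lmpirun \
-x /home/programs/molpro/molpro-mpp-Linux-i686-i4-2006.1/molprop_2006_1_i4_p4_mpi.exe \
-L /home/programs/molpro/molpro-mpp-Linux-i686-i4-2006.1/ \
";
export MOLPRO_CONFIG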
>
> Additional hints if you use MPICH1 in SGE:
>
> http://gridengine.sunsource.net/howto/mpich-integration.html
>
> and their mailing lists:
>
> http://gridengine.sunsource.net/maillist.html
>
> especially "users".
>
> -- Reuti



-- 
+---------------------------------------------------------------+
Javier Diaz Montes
PhD Candidate
Grupo de Quimica Computacional y Computacion de Alto Rendimiento.
Departamento de Tecnologias y Sistemas de Informacion.
Escuela Superior de Informatica.
Universidad de Castilla-La Mancha.
Paseo de la Universidad, 4; 13071 Ciudad Real; SPAIN
Tel.: 34-926295300; Ext: 3724
e-mail: javier.diaz at uclm.es
+---------------------------------------------------------------+



