[molpro-user] Shared Memory & MPI conflict

tapang at barc.gov.in
Mon Feb 12 12:28:11 GMT 2007


Dear Prof. Reuti,
Thanks a lot for your reply. Now I am facing a problem
while submitting jobs through the PBS queuing system.
I am using the command: pqsub -n no_of_proc -l mpich jobscript &

In the jobscript I am using the following (with the MPI compiled on our Linux machine):

/usr/local/molpro/bin/molpro -d /scratch -n4/1:2 -machinefile $PBS_NODEFILE testjob.com &

However, it is not producing any output. Would you please help me?
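One possible reason for the missing output is the trailing & in the
jobscript: the script then exits immediately, and PBS can kill the job
before Molpro writes anything, so molpro should run in the foreground
there. A minimal jobscript sketch (the resource line, paths, and input
name are assumptions for illustration, not taken from the thread):

  #!/bin/sh
  #PBS -l nodes=2:ppn=2
  # run from the directory the job was submitted from
  cd $PBS_O_WORKDIR
  # Molpro starts its own parallel processes, so no mpirun wrapper is
  # needed. Run it in the foreground: PBS ends the job as soon as this
  # script exits. Under PBS the -machinefile option may be redundant
  # (see Reuti's note below).
  /usr/local/molpro/bin/molpro -d /scratch -n 4 -machinefile $PBS_NODEFILE testjob.com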

With best regards,

Tapan Ghanty


Quoting Reuti <reuti at staff.uni-marburg.de>:

> Hi,
> 
> On 05.02.2007, at 10:51, tapang at barc.gov.in wrote:
> 
> >
> > Dear Molpro Users,
> >
> > We have compiled Molpro-2006.1 on our Linux cluster
> > (with two processors per node) using the MPI option.
> > While running it with the command
> >
> > /usr/local/molpro/bin/molpro -d /scratch -n4/2:1 -N user1:node01:2,user1:node02:2 h2o.com >h2o.out &
> 
> are you using a queuing system? There is some automatic handling in
> Molpro which will prepare a proper processgroup file (its format is a
> little different from the machinefile, since the program to be
> executed is also listed there).
> 
> If this file is already prepared, the -machinefile option and/or node
> list is redundant.
> 
> -- Reuti
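
For illustration, a ch_p4 processgroup file of the kind Molpro prepares
looks roughly like this (the hostnames, process counts, and executable
path are placeholders, not taken from the thread). Each line names a
host, the number of processes to start there, and, unlike a machinefile
entry, the full path of the program to run, with an optional login name
as the last field; the first line refers to the host where the job is
started:

  local 0
  node01 2 /usr/local/molpro/bin/molpro.exe user1
  node02 2 /usr/local/molpro/bin/molpro.exe user1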
> 
> 
> > it is giving the following error:
> >
> > -p4pg and -np are mutually exclusive; -np 4 being ignored.
> > p0_31507: (0.000000) Specified multiple processes sharing memory without configuring for shared memory.
> > p0_31507: (0.000000) Check the users manual for more information.
> > p0_31507:  p4_error: read_procgroup: 0
> > P4 procgroup file is /tmp/procgrp.00031457
> >
> >
> > Would anyone please let me know what is to be configured?
> >
> > I have tried many ways, even using mpirun as well as
> > --mpirun-machinefile nodefile and so on, but could not succeed.
> >
> > With thanks,
> >
> > Tapan Ghanty
> >
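The p4_error quoted above ("Specified multiple processes sharing memory
without configuring for shared memory") usually means the processgroup
file starts more than one process on a host while the MPICH-1 library
was built without shared-memory communication. A plausible fix,
assuming MPICH-1 with the ch_p4 device (the install prefix is a
placeholder, and the exact option spelling should be checked against
configure -help for the version in use), is to rebuild MPICH with
shared memory enabled and then relink Molpro against it:

  ./configure --with-device=ch_p4 --with-comm=shared --prefix=/usr/local/mpich
  make
  make install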


