[molpro-user] Molpro 2006.1 on cluster with infiniband

Kirk Peterson kipeters at wsu.edu
Mon Jan 8 02:41:21 GMT 2007


Hi,

try something like:

/opt/chem/molpro/bin/molpro -n$NCPU -d$JOBDIR --mpirun-machinefile \
    $PBS_NODEFILE $JOB.com
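
For reference, a complete TORQUE submission script built around that command might look like the sketch below. The install path matches the one in this thread, but the resource request, scratch location, and input-file name are placeholders you would adapt to your site:

```shell
#!/bin/bash
#PBS -l nodes=4:ppn=2
#PBS -N molpro_job
# Sketch of a TORQUE script using the --mpirun-machinefile flag above.
# Resource requests and paths are site-specific assumptions.

cd $PBS_O_WORKDIR

# $PBS_NODEFILE lists one line per allocated CPU, so counting its
# lines gives the processor count TORQUE assigned to this job.
NCPU=$(wc -l < $PBS_NODEFILE)

JOBDIR=/scratch/$USER/$PBS_JOBID   # hypothetical scratch directory
JOB=my_input                       # hypothetical input file my_input.com

# Hand the TORQUE node file to Molpro so the mpirun_rsh wrapper
# receives it as its machine file at run time.
/opt/chem/molpro/bin/molpro -n$NCPU -d$JOBDIR --mpirun-machinefile \
    $PBS_NODEFILE $JOB.com
```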


-Kirk


On Jan 7, 2007, at 6:01 PM, Jyh-Shyong Ho wrote:

> Hi,
>
> I got an Infiniband/OpenIB version of the GA library and built
> Molpro 2006.1
> on our Opteron cluster.  However, I don't know how to set the
> WRAPPER_mpi variable correctly in order to run a parallel Molpro job
> under the TORQUE environment.  Perhaps someone can provide me with
> some help?
>
> Normally, the command to run an MPI job over the Infiniband
> interconnect on our
> cluster is
>
> /opt/mvapich/pathscale/bin/mpirun_rsh -ssh -hostfile $PBS_NODEFILE   
> -np $nproc $executable
>
> where $PBS_NODEFILE is a temporary file that includes the names of
> the nodes assigned
> by TORQUE for this job, and $nproc is the number of processors
> required.
>
> I defined WRAPPER_mpi in the file CONFIG as
> WRAPPER_mpi="/opt/mvapich/pathscale/bin/mpirun_rsh \-ssh"
> since neither the $PBS_NODEFILE nor the $nproc variable can be
> defined at compilation time.
> I am not able to run the compiled Molpro 2006.1 with the following
> command in a TORQUE script file:
>
> /opt/chem/molpro/bin/molpro -n$NCPU -d$JOBDIR $JOB.com
>
> I always get the error message:
>
> Cannot find MPIRUN machine file for machine ch_gen2 and  
> architecture LINUX .
> (No device specified)
>
> which means that the machine list in the file $PBS_NODEFILE was not read.
>
> Any suggestion on how to solve this problem?
>
> Jyh-Shyong Ho, Ph.D.
> Research Scientist
> National Center for High Performance Computing
> Hsinchu, Taiwan, ROC
>
>