[molpro-user] Problems running molpro in parallel (no speedup)

Fogarty, Richard M r.fogarty14 at imperial.ac.uk
Fri Sep 15 17:29:31 CEST 2017


Dear Molpro Users,

I'm having some trouble getting Molpro to run properly in parallel and was hoping someone could help.

The Problem:
Specifically, when I run Molpro on multiple cores it gives every indication that it is running in parallel, but my jobs get no real-time speedup (in fact they often take longer when run in parallel).

For example, running (on a cluster) "molpro -n $cores -v input_file.in" with this input_file.in:

"memory,700,m
GDIRECT
symmetry,nosym
{basis set info removed to save space}
{Geometry removed removed to save space}
{rhf}"

leads to Real times of 7575.33/6218.91/6029.07 seconds for 12/4/1 cores respectively (i.e. more cores = slower).
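To put a number on the lack of speedup, taking the serial run as the baseline:

 speedup(4 cores)  = T(1 core) / T(4 cores)  = 6029.07 / 6218.91 ~ 0.97
 speedup(12 cores) = T(1 core) / T(12 cores) = 6029.07 / 7575.33 ~ 0.80

so both parallel runs are actually slower than the serial one, rather than approaching the ideal speedups of 4 and 12.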

Relevant (I think) info:
I'm using Molpro Version 2015.1 and submitting with this PBS script:

"# batch processing commands
#PBS -l walltime=04:00:00
#PBS -lselect=1:ncpus=4:mpiprocs=4:mem=22000MB:tmpspace=400000MB
#PBS -j oe
#PBS -q pqmaterials
#PBS -m n

/home/rf614/MolPro/bin/molpro -n ${core} -d $TMPDIR -W $TMPDIR -v $PBS_O_WORKDIR/${in} > $TMPDIR/$cust_error_file"

Note that changing either ncpus or mpiprocs to 1 leads to the serial code being used.
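(For completeness: ${core}, ${in} and ${cust_error_file} are set further up in the same submission script. The exact lines don't matter, but they look roughly like this, with illustrative values:

 core=4                                              # matched to mpiprocs above
 in=pentacene_GP_PBE_TDDFT_ccpvdz_molpro_4core.com   # input file in $PBS_O_WORKDIR
 cust_error_file=molpro_${core}core.out              # illustrative name

where the input file name is the one that appears in the verbose output below.)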

The verbose option in Molpro leads to the following information being printed (for the 4-core job):
 # PARALLEL mode, HOST=cx1-130-5-20
 nodelist=4
 first   =4
 second  =
 third   =
 HOSTFILE_FORMAT: $hostname

cx1-130-5-20.cx1.hpc.ic.ac.uk
cx1-130-5-20.cx1.hpc.ic.ac.uk
cx1-130-5-20.cx1.hpc.ic.ac.uk
cx1-130-5-20.cx1.hpc.ic.ac.uk

        LD_LIBRARY_PATH=''
 export AIXTHREAD_SCOPE='s'
 export MOLPRO_PREFIX='/export131/home/rf614/MolPro/install'
 export MP_NODES='0'
 export MP_PROCS='4'
        MP_TASKS_PER_NODE=''
 export MOLPRO_NOARG='1'
 export MOLPRO_OPTIONS=' -d /var/tmp/pbs.165647.cx1 -W /var/tmp/pbs.165647.cx1 -v /work/rf614/Post_Doc/molpro_tests/cpu_time_tests/SPE_Only/pentacene_GP_PBE_TDDFT_ccpvdz_molpro_4core.com -t 4'
        MOLPRO_OPTIONS_FILE=''
        MPI_MAX_CLUSTER_SIZE=''
        MV2_ENABLE_AFFINITY=''
 export RT_GRQ='ON'
 export TMPDIR='/var/tmp/pbs.165647.cx1'
 export XLSMPOPTS='parthds=4'
/export131/home/rf614/MolPro/install/bin/mpiexec.hydra -machinefile /var/tmp/pbs.165647.cx1/procgrp.25439 -np 4 /export131/home/rf614/MolPro/install/bin/molpro.exe  -d /var/tmp/pbs.165647.cx1 -W /var/tmp/pbs.165647.cx1 -v /work/rf614/Post_Doc/molpro_tests/cpu_time_tests/SPE_Only/pentacene_GP_PBE_TDDFT_ccpvdz_molpro_4core.com -t 4
 token read from /export131/home/rf614/MolPro/install/lib//.token
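(If it helps with diagnosis, I can log in to the compute node during a run and check which cores the four molpro.exe processes actually sit on, e.g. with something like

 ps -C molpro.exe -o pid,psr,pcpu,comm

in case they are all being pinned to the same core.)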

I'd appreciate any help/suggestions to solve this.
Thanks in advance
Richard


