[molpro-user] parallel task control

Mintz, Benjamin J. bm0037 at gmail.com
Mon Jan 24 17:18:22 GMT 2011


Andy,

Thank you for your response. However, I have already managed to get this working. Here is what I did.

I am using Open MPI, SGE, and InfiniBand. Open MPI has tight integration with SGE, so you don't have to specify a hostfile, but you do need to pass the -npernode flag to control the number of tasks per node. I changed LAUNCHER from

>> LAUNCHER=mpirun -np %n %x

to 

>> LAUNCHER=mpirun -np %n -npernode %p %x


I also modified the bin/molpro script to include the second line below, which replaces the %p in LAUNCHER with the value of MPI_MAX_CLUSTER_SIZE.

 LAUNCHER="`echo $LAUNCHER | sed -e 's/%n/$MP_PROCS/g'`"
 LAUNCHER="`echo $LAUNCHER | sed -e 's/%p/$MPI_MAX_CLUSTER_SIZE/g'`"

This allows me to do the following on a system with 8-core machines.

molpro -n 8/4:1 -v file.com

where the 8 tasks are split across two nodes, 4 on each.
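
With the modified LAUNCHER, the command that finally gets run for the 8/4:1 example should look roughly like this (the executable is only a placeholder here):

 mpirun -np 8 -npernode 4 <molpro executable> -v file.com

A quick way to sanity-check the placement without running Molpro at all is to launch hostname through mpirun with the same flags and count how often each node shows up:

 mpirun -np 8 -npernode 4 hostname | sort | uniq -c

Each of the two allocated nodes should appear four times.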

Benjamin J. Mintz
Research Associate, Postdoctoral
Oak Ridge Leadership Computing Facility (OLCF)
Oak Ridge National Laboratory
Phone: (865) 574-6639
Email: mintzbj at ornl.gov
Gmail: bm0037 at gmail.com


On Jan 24, 2011, at 10:55 AM, Andy May wrote:

> Benjamin,
> 
> I don't think 18/4:1 is enough information; this just lets Molpro know
> how many processes to start, but you need to let mpirun know where the
> processes should actually run. Normally this is done through a hostfile
> of some kind.
> 
> Best wishes,
> 
> Andy
> 
> On 21/01/11 17:22, Mintz, Benjamin J. wrote:
>> To Users,
>> 
>> I am trying to run a parallel (mppx) job, and I want to specify the number of tasks per node.  The manual says to use -n tasks/tasks_per_node:smp_threads, but this doesn't seem to work.  I am using openmpi.  When I do molprox -v -n 18/4:1 test.com, I get the following in my stdout.
>> 
>> # PARALLEL mode
>> nodelist=18/4:1
>> first   =18 
>> second  =4 
>> third   =1
>> .
>> .
>> export MP_NODES='0'
>> export MP_PROCS='18'
>>        MP_TASKS_PER_NODE=''
>> export MOLPRO_NOARG='1'
>> export MOLPRO_OPTIONS=' -v c5h5-caspt3-2-2a2.com'
>> export MOLPRO_OPTIONS_FILE='/scratch/bmintz/114934/molpro_options.5703'
>> export MPI_MAX_CLUSTER_SIZE='4'
>>        PROCGRP=''
>> export RT_GRQ='ON'
>> export TMPDIR='/scratch/bmintz/114934'
>> export XLSMPOPTS='parthds=1'
>> /cerebro/home/software/openmpi/intel/bin/mpirun -np 18 /cerebro/home/software/molpro-mppx/ga/bin/molprox_2010_1_Linux_x86_64_i8.exe  -v c5h5-caspt3-2-2a2.com
>> 
>> 
>> It looks like the molpro script picks up all of my arguments (i.e. 18/4:1), and sets MP_PROCS to 18 but MP_TASKS_PER_NODE is not set.  Instead, the MPI_MAX_CLUSTER_SIZE is set to my value for tasks_per_node.  Finally, the job is executed with mpirun -np 18.  I looked in the mpirun manual, and it says that I have to specify -npernode to set the number of tasks per node.  I know that I can modify LAUNCHER in my CONFIG file.  Currently it is
>> 
>> LAUNCHER=/cerebro/home/software/openmpi/intel/bin/mpirun -np %n %x
>> 
>> I could add -npernode, but I am not sure what to add after -npernode.  Is there a % option to include similar to %n for the number of tasks?  I included my CONFIG and molpro script files for reference.  Any advice would be appreciated.
>> 
>> Thanks,
>> Benjamin J. Mintz
>> Research Associate, Postdoctoral
>> Oak Ridge Leadership Computing Facility (OLCF)
>> Oak Ridge National Laboratory
>> Phone: (865) 574-6639
>> Email: mintzbj at ornl.gov
>> Gmail: bm0037 at gmail.com
