[molpro-user] infiniband latency

Martin Diefenbach diefenbach at chemie.uni-frankfurt.de
Thu Jan 6 17:11:58 GMT 2011

Dear Benjamin,

Just to give you a number for comparison:
On our AMD Opteron (Istanbul) cluster, tuning.rc yields parameters 
comparable to yours:

--tuning-mpplat 000028
--tuning-mppspeed 000368

These were produced in a run across two nodes over InfiniBand 
(Molpro 2010.1p5, built with PGI/ACML/MVAPICH2).

Purely for comparison: run on a single node, the result is (naturally) 
quite different:

--tuning-mpplat 000009
--tuning-mppspeed 000945
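To get a rough feel for what these two numbers imply, one can plug them into the usual linear cost model t(n) = latency + n/bandwidth. This is only a sketch, not what mpptune itself computes, and it assumes the interpretation given below (mpplat in microseconds, mppspeed in MB/sec):

```python
# Sketch only, not mpptune's actual model: linear point-to-point cost
# t(n) = latency + n / bandwidth, assuming --tuning-mpplat is in
# microseconds and --tuning-mppspeed in MB/sec.

def transfer_time_us(nbytes: int, lat_us: float, speed_mb_s: float) -> float:
    """Estimated time in microseconds to move nbytes.

    1 MB/sec is 1 byte per microsecond, so nbytes / speed_mb_s is
    already in microseconds.
    """
    return lat_us + nbytes / speed_mb_s

# The numbers above: inter-node InfiniBand vs. single node.
msg = 8 * 1024  # an 8 KiB message
ib = transfer_time_us(msg, 28, 368)     # ~50 us across two nodes
local = transfer_time_us(msg, 9, 945)   # ~18 us within one node
```

For small messages the latency term dominates (a zero-byte message already costs 28 us over the interconnect versus 9 us locally), which is part of why the single-node figures look so different.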

Dr. Martin Diefenbach
Institut für Anorganische Chemie der
Johann Wolfgang Goethe-Universität
Campus Riedberg, Frankfurt am Main

On Tue, 4 Jan 2011, Mintz, Benjamin J. wrote:

> Dear users,
> I recently built an MPP version of Molpro 2010.1, and I have a question 
> about the result from mpptune.com.  Our cluster nodes have dual 
> quad-core Intel Nehalem processors, and we have an InfiniBand 
> interconnect using Open MPI, OpenIB, and SGE.  I ran mpptune.com, 
> which put the following lines in lib/tuning.rc:
> --tuning-mpplat 000035
> --tuning-mppspeed 000249
> I believe this means that Molpro determined that our network has a 
> latency of 35 microseconds and a broadcast speed of 249 MB/sec.  My 
> question is whether or not these are good values for an InfiniBand 
> interconnect running Molpro.  I thought that the latency of an 
> InfiniBand interconnect was closer to 1-3 microseconds, with bandwidth 
> in the GB/sec range.  I am wondering whether there is a problem with 
> our network settings, whether there might be an issue with the Molpro 
> build, or whether this is simply as good as it gets.  I attached my 
> CONFIG file for reference.  I would appreciate any information.
> Benjamin J. Mintz
> Research Associate, Postdoctoral
> Oak Ridge Leadership Computing Facility (OLCF)
> Oak Ridge National Laboratory
> Phone: (865) 574-6639
> Email: mintzbj at ornl.gov
> Gmail: bm0037 at gmail.com
