[molpro-user] parallel running molpro 2009.1

Manhui Wang wangm9 at cardiff.ac.uk
Wed Feb 24 15:52:26 GMT 2010


 Hi John,
    The -slater option doesn't affect the speed; it is only needed if
someone wants to use Slater-type functions. The -i8 option is
unnecessary, since configure adds it by default when you run it on a
64-bit machine.
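
As a minimal sketch (the MPICH2 path below is a placeholder, not taken
from your setup), a plain 64-bit build therefore needs neither option;
configure picks up -i8 on its own:

./configure -batch -icc -ifort -mpp -mppbase /path/to/mpich2/include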

Best wishes,
Manhui

John Travers wrote:
> Hi Manhui, 
>    Thank you for the quick response. I will try to compile it again.
> Will the slater option affect the speed? We do have access to a cluster
> with Infiniband. There is no -i8 option in your configuration; does
> that matter?
> 
> best wishes
> 
> J.T.
> 
> ------------------------------------------------------------------------
> *From:* Manhui Wang <wangm9 at cardiff.ac.uk>
> *To:* John Travers <jtravers70 at yahoo.com>
> *Cc:* molpro-user at molpro.net
> *Sent:* Wed, February 24, 2010 7:28:28 AM
> *Subject:* Re: [molpro-user] parallel running molpro 2009.1
> 
> Hi John,
>     I have just looked at your CONFIG file. Some of the arguments seem
> unnecessary. Do you really use Slater-type orbitals? If not, -slater is
> unnecessary. In addition, Molpro's configure can pick up the right BLAS
> library arguments automatically. The option
> -var LIBS="-L/usr/lib64 -libverbs -lm" is for a high-speed network
> library (such as Infiniband); if you plan to run Molpro on just one box
> without such a high-speed network, this option is also unnecessary.
> 
> The configuration may be something like:
> ./configure -batch -icc -ifort -mpp -mppbase
> /n/home07/jtravers/apps/MYMPI/mpich2-1.3a1/include -blaspath
> "/n/sw/intel/mkl/10.1.1.019/lib/em64t" -var
> INSTBIN=/n/home07/jtravers/apps/MOLPRO/molpro2009.1.mpich2
> -var INSTLIB=/n/home07/jtravers/apps/MOLPRO/molpro2009.1.mpich2/lib
> 
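> Once the build finishes, a quick way to exercise the parallel binary (a
> sketch only; test.com stands in for any small input file) is:
>
> molpro -n 4 test.com
>
> where -n sets the number of parallel processes.
>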
> The configuration and run command above are for the MPI-2 version of
> Molpro. Alternatively, you can also try the Global Arrays version of
> Molpro, for which GA must be built first.
> 
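> As a rough sketch of the GA route (assuming a GA 4.x make-based build;
> the GA path and the TARGET value are placeholders, not taken from your
> setup):
>
> cd /path/to/ga
> make TARGET=LINUX64
> ./configure -batch -icc -ifort -mpp -mppbase /path/to/ga
>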
> Please refer to the benchmark results for both versions at
> http://www.molpro.net/info/bench.php?portal=visitor&choice=Benchmarks
> 
> Best wishes,
> Manhui
> 
> Manhui Wang wrote:
>> Hi John,
>>    Parallel performance depends on many factors, such as
>> communication speed, I/O, memory, and the code itself. When running
>> jobs with heavy I/O and/or large memory demands, the bottleneck may be
>> the file system and/or system memory. In addition, some parts of
>> Molpro are only partly parallelized. Please refer to the comment at
>> http://www.molpro.net/pipermail/molpro-user/2006-July/001867.html
>>
>> Best wishes,
>> Manhui
>>
>> John Travers wrote:
>>> Dear All,
>>>
>>>        We ran into some strange behaviour when running Molpro (2009.1 patch
>>> 26) on a Linux box with two quad-core Xeon E5410s. Molpro was compiled with
>>> Intel Fortran 11.1, mpich2-1.3a1 (we also tried other versions but got
>>> the same results) and MKL-10.1.1.019 (the configuration file is
>>> attached). The compilation was successful. Running a simple test
>>> calculation (closed-shell CCSD(T) with a large basis set), we found that
>>> parallel scaling stalled (especially in the time for the electron
>>> integrals) when requesting more than 4 CPUs. mpich2 was compiled with
>>> icc/ifort, and the communication device is ch3:nemesis. Any suggestions?
>>>
>>> Thank you!
>>>
>>> Best wishes
>>>
>>> J.T.
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>> _______________________________________________
>>> Molpro-user mailing list
>>> Molpro-user at molpro.net
>>> http://www.molpro.net/mailman/listinfo/molpro-user
>>
> 
> -- 
> -----------
> Manhui  Wang
> School of Chemistry, Cardiff University,
> Main Building, Park Place,
> Cardiff CF10 3AT, UK
> Telephone: +44 (0)29208 76637
> 

-- 
-----------
Manhui  Wang
School of Chemistry, Cardiff University,
Main Building, Park Place,
Cardiff CF10 3AT, UK
Telephone: +44 (0)29208 76637


