[molpro-user] technical molpro questions

Sven N goretoffel at hotmail.de
Wed Oct 30 15:30:26 GMT 2013

Dear Molpro users and developers,
I have some technical questions concerning the usage of Molpro 2012.

1) Not technical, yet - are there patch notes anywhere? I searched the manual and the source files, but didn't find any.

2) The TMP files used by Molpro can get quite large, which in general can't be avoided, but when performing a scan these files are apparently neither deleted nor overwritten between the separate steps. For a scan including an rs2 block, the file df_T01000<PID>.TMP (where <PID> is the process ID plus some trailing zeros) grows with every step until the disk is full. One can of course run the scan manually, or via an external script, but I suspect that not all of this data can possibly be needed for the other calculations...
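The external-script workaround mentioned above could be sketched like this. The template file name and the __R__ placeholder are hypothetical, and the actual molpro call is left commented out; the point is that each scan point runs as its own Molpro process, so its temporary files are released when that process ends:

```shell
# Hypothetical one-parameter scan driver. Each scan point is run as a
# separate Molpro job instead of a single scan input, so TMP files are
# cleaned up between points instead of accumulating.
printf 'r=__R__\n' > scan_template.inp   # stand-in for a real Molpro input
for r in 1.0 1.1 1.2; do
    sed "s/__R__/$r/" scan_template.inp > "scan_r${r}.inp"
    # molpro -d /scratch/$USER "scan_r${r}.inp"   # run this point as its own job
done
```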

3) When Molpro is compiled with Global Arrays (for me it's -auto-ga-mvapich2), one can use the option -S ga to let the MPI processes communicate via Global Arrays rather than via a shared file on the hard disk, and -G <mem> to set the amount of memory used by GA. I actually have two questions here:
a) Does -G work? One page of the manual, http://www.molpro.net/info/2012.1/doc/manual/node7.html, doesn't say otherwise; another, http://www.molpro.net/info/2012.1/doc/manual/node9.html, says it is not 'activated'. Yet Molpro at least writes to the output that it set the size according to the option.
b) When running bigger jobs with MPI parallelization, disk I/O often becomes the bottleneck for the execution time. I thought that using GA should improve this quite a bit, but in some benchmarks I did (mainly with multi and rs2c) it had no impact, or sometimes even made the calculation slower. Checking the output files, the shared file ('sf') is indeed gone and GA is used, but the normal disk usage is suddenly much higher: for a small test case, CAS(2,2)/VTZ on ethylene, I save 37 MB of 'sf', but the 'disk usage' jumps from 430 MB to 790 MB. Why is that, and are there cases where it is reasonable to use GA? Or do I have to compile differently?
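For reference, the kind of invocation I mean. The -S and -G flags are as described on the manual pages cited above; the process count and memory value are placeholders, not recommendations, so the command line is only a sketch:

```shell
# Launch sketch (values are placeholders):
#   -n 8      : 8 MPI processes
#   -S ga     : communicate via Global Arrays instead of a shared file
#   -G 1000m  : memory reserved for GA
# molpro -n 8 -S ga -G 1000m input.inp
```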

4) Another way I tried to accelerate Molpro is to use the parallel MKL as BLAS and LAPACK library. From my, probably naive, point of view this should be ideal: all of the time-consuming calculations done by Molpro should be some kind of matrix operations, which are ultimately done by BLAS and LAPACK; the parallelization is OpenMP-based, so I don't need n times the memory when using several threads; and especially on Intel CPUs the parallelization should be quite good.
Unfortunately I got rather mixed results. Sometimes parts of Molpro run faster, even those where the mpich parallelization doesn't help at all (e.g. the rs2c iterations), but often the gain is much smaller than with mpich, or hardly any at all (at least no negative results here), although the CPU usage sitting at n*100% would suggest much better results (or, as it stands, a lot of overhead). It is generally hard to compare from the CPU times given in the output. Looking at the real times, a combination of mpich and the parallel MKL gave the fastest runs, by a very small margin, but as I wrote, this depends very much on the type of calculation performed.
I'm very interested in why it does not work as I expected, or whether someone has made similar attempts with more success and could tell me exactly what they did, so I could compare to my setup. My configure options were (for the linker flags I used Intel's MKL link line advisor):
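One thing I can imagine, though this is only an assumption on my part, is thread oversubscription: if every MPI process spawns its own MKL/OpenMP threads, the cores are oversubscribed, which would produce exactly this pattern of n*100% CPU usage with little wall-time gain. A quick way to check is to pin the per-process thread counts (values here are examples only):

```shell
# Cap the MKL/OpenMP threads spawned by each MPI process so that
# (MPI processes) x (threads per process) does not exceed the core count.
export MKL_NUM_THREADS=4
export OMP_NUM_THREADS=4
# molpro -n 2 input.inp   # 2 processes x 4 threads = 8 cores in total
echo "MKL_NUM_THREADS=$MKL_NUM_THREADS"   # prints MKL_NUM_THREADS=4
```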
./configure -prefix $insdirpath -blas -lapack -icc -ifort -mpp -auto-ga-mvapich2 \
    -var 'BLASLIB=-Wl,--start-group /path/intel/mkl111/lib/intel64/libmkl_intel_ilp64.a /path/intel/mkl111/lib/intel64/libmkl_core.a /path/intel/mkl111/lib/intel64/libmkl_intel_thread.a -Wl,--end-group -lpthread -lm' \
    -var 'LAPACKLIB=-Wl,--start-group /path/intel/mkl111/lib/intel64/libmkl_intel_ilp64.a /path/intel/mkl111/lib/intel64/libmkl_core.a /path/intel/mkl111/lib/intel64/libmkl_intel_thread.a -Wl,--end-group -lpthread -lm' \
    -var 'CFLAGS=-DMKL_ILP64 -openmp -I/path/intel/mkl111/include_i8' \
    -var 'FFLAGS=-i8 -openmp -I/path/intel/mkl111/include_i8'

I'd be happy about any explanations and tips,

Sven

