[molpro-user] RS2 not scaling with # of processors

Benj FitzPatrick benjfitz at uchicago.edu
Fri Jun 8 16:05:56 BST 2007


Hello,
I noticed that RS2 calculations (single state, SS-SR, and MS-MR) do not get much
faster when going from 1 to 2 processors, and they actually get slower (than with
2 processors) with 3 or 4 processors on my machines (four-core Opteron 285s with
8 GB of RAM).  I hadn't seen anything in the manual or on the mailing list
regarding this, so I thought I would ask (also, MCSCF, MRCI, and CCSD(T) appear
to scale quite nicely with the number of processors).  For all of these
calculations, 100% of the specified number of processors was in use during the
RS2 portion of the calculation (as checked several times with top).

Looking at the MS-MR times, memory appears to be a factor, but free -m showed
that the calculation was never actually saturating the memory.  Also, going from
2 to 3 processors always slowed the calculation (except for the MS-MR 19/10
case), even though each process then had more memory than in the 2/200 runs.

I tried two different basis sets (cc-pVDZ and cc-pVTZ) and two different active
spaces (19 occupied / 10 closed and 18 occupied / 11 closed).  All of the numbers
below are for cc-pVTZ and 19/10, except for the first set of MS-MR-CASPT2
results.  Is it just that these calculations don't scale well on the Opteron?

I have attached several of my input files in case I mis-configured the input.
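For orientation, the inputs follow roughly the pattern sketched below (this is
only an illustrative skeleton, not one of the attached files; the geometry,
wavefunction, and state-selection cards are placeholders, and the memory card is
what is varied in the tables that follow):

  ***,RS2 scaling test (sketch only)
  memory,200,m            ! memory per process, in megawords (MW)
  geometry={
   ...                    ! geometry omitted
  }
  basis=cc-pVTZ
  {hf}
  {multi
   occ,19                 ! occupied orbitals, as described above
   closed,10              ! closed-shell orbitals
   wf,...                 ! electron/symmetry/spin card omitted
   state,2}
  {rs2
   ...}                   ! single-state / SS-SR / MS-MR options differ here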

2) I would also like to follow up on a problem I noticed when using more than
one processor for SS-SR geometry optimizations (I posted this to the list
earlier and never heard back).  Namely, multiple copies of the output text are
written to the log file, and with 3+ processors the lines appear to overwrite
each other.  This causes problems at some of the optimization steps because not
all of the parameters are displayed, though it is really only an issue if it
happens at the final step.

Thanks,
Benj FitzPatrick
University of Chicago

single state RS2
=====================
# proc / mem per proc (MW)   time (s)
--------------------------------------
1/800                   7841.11
2/400                   7093.20
2/200                   7442.25
4/200                   7432.27

2 state SS-SR
=====================
# proc / mem per proc (MW)   state 1 (s)   state 2 (s)
-------------------------------------------------------
1/800                   7978.62       8085.27
2/400                   7507.82       7337.20
2/200                   7312.80       7365.21
3/267                   7980.84       7483.92
4/200                   7825.61       7986.17

2 state MS-MR (18/11)
=====================
# proc / mem per proc (MW)   time (s)
--------------------------------------
1/800                   1666.58
2/400                   1458.70
2/200                   1538.70
3/267                   1661.01
4/200                   1827.48

2 state MS-MR (19/10)
=====================
# proc / mem per proc (MW)   time (s)
--------------------------------------
1/800                   28711.99
2/400                   27261.73
2/200                   28785.40
3/266                   28089.32
4/200                   28987.37

For comparison, I have included CCSD(T)/cc-pCVTZ calculations:

CCSD(T)
=====================
# proc / mem per proc (MW)   no core e- correlated (s)   1s core e- correlated (s)
-----------------------------------------------------------------------------------
1/700                  41177.39         18119.21
2/350                  22875.49         10258.34
3/233                  16952.68          7462.23
4/175                  14316.32          6107.07
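
To put rough numbers on the scaling: for the single-state RS2 job the 1 -> 2
processor speedup is 7841.11/7093.20 = ~1.11 (about 55% parallel efficiency),
and 4 processors (7432.27 s) are only about 5% faster than 1, whereas the
CCSD(T) job with no core electrons correlated speeds up by 41177.39/22875.49 =
~1.80 on 2 processors and 41177.39/14316.32 = ~2.88 on 4 (roughly 72%
efficiency).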
[Attached input files: mem-vs-proc-800-1_sp-mrs2-vtz-19_10-a.inp and
mem-vs-proc-800-1_sp-srs2-vtz-19_10-a.inp]

