[molpro-user] FCI problem (molpro 2012.1)

Jussi Eloranta jmeloranta at gmail.com
Wed Oct 30 20:33:28 GMT 2013


Hi,

I am trying to run a fairly large full CI calculation and keep getting 
segmentation faults (hostname replaced with XXX below):

0:Segmentation Violation error, status=: 11
(rank:0 hostname:XXX pid:61421):ARMCI DASSERT fail. 
src/common/signaltrap.c:SigSegvHandler():310 cond:0
3:Segmentation Violation error, status=: 11
(rank:3 hostname:XXX pid:61424):ARMCI DASSERT fail. 
src/common/signaltrap.c:SigSegvHandler():310 cond:0
Last System Error Message from Task 0:: Inappropriate ioctl for device
application called MPI_Abort(comm=0x84000007, 11) - process 0
Last System Error Message from Task 3:: Inappropriate ioctl for device
application called MPI_Abort(comm=0x84000004, 11) - process 3
1:Segmentation Violation error, status=: 11
(rank:1 hostname:XXX pid:61422):ARMCI DASSERT fail. 
src/common/signaltrap.c:SigSegvHandler():310 cond:0
Last System Error Message from Task 1:: Inappropriate ioctl for device
application called MPI_Abort(comm=0x84000004, 11) - process 1

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================


---

My input file is as follows:

***,He_2^{*-}
memory,4000,m
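! 4000 megawords of working memory per process (8-byte words, so roughly 32 GB)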

r = [1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5 6.0 7.0 8.0 9.0 10.0 12.0 14.0 16.0 18.0 20.0 25.0 30.0 40.0 50.0 100.0 200.0]

sexp=[720.415859, 108.159574, 24.672815, 7.008060, 2.283030, 0.809432, 0.296071, 0.088547, 0.033435, 0.009982]
pexp=[6.681348, 1.975533, 0.670324, 0.174403, 0.049119, 0.014593, 0.003627]
dexp=[7.336700, 2.521632, 0.903599, 0.194153, 0.011633, 0.043237]

basis={
s,He,sexp(1),sexp(2),sexp(3),sexp(4),sexp(5),sexp(6),sexp(7),sexp(8),sexp(9),sexp(10)
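! the c,n.n,1.0 cards leave each primitive uncontracted (one function per exponent); same pattern for the p and d shells below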
c,1.1,1.0
c,2.2,1.0
c,3.3,1.0
c,4.4,1.0
c,5.5,1.0
c,6.6,1.0
c,7.7,1.0
c,8.8,1.0
c,9.9,1.0
c,10.10,1.0
p,He,pexp(1),pexp(2),pexp(3),pexp(4),pexp(5),pexp(6),pexp(7)
c,1.1,1.0
c,2.2,1.0
c,3.3,1.0
c,4.4,1.0
c,5.5,1.0
c,6.6,1.0
c,7.7,1.0
d,He,dexp(1),dexp(2),dexp(3),dexp(4),dexp(5),dexp(6)
c,1.1,1.0
c,2.2,1.0
c,3.3,1.0
c,4.4,1.0
c,5.5,1.0
c,6.6,1.0
}

geometry={
He1
He2, He1, r(i)
}
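! the geometry is re-evaluated for each value of i set by the loop below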

do i = 1,#r

{hf; wf,5,5,3,-1; closed,1;open,1.5,2.1,3.1}
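! full CI on the HF orbitals; the empty core card freezes no orbitals, and occ lists the active orbitals in each of the eight D2h irreps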
{fci;core,;occ,24,12,12,5,24,12,12,5;wf,5,5,3,-1;state,1}
e1(i)=energy(1)

enddo

table,r,e1

---

In the above example, I reduced the active space, but it still core 
dumps. If I reduce it further, the calculation runs OK.
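For scale, reducing along these lines does run to completion (the 
numbers here are only an illustration, not the exact input I used):

! illustrative only: an occ pattern roughly half the size per irrep
{fci;core,;occ,12,6,6,3,12,6,6,3;wf,5,5,3,-1;state,1}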
Thus, I suspect that this has something to do with memory allocation. 
Here are a couple of observations that may help:

1) If I request too much memory, I see the out-of-memory termination 
message in the system log (dmesg). With the above input, however, there 
is no such message, so the system itself is not running out of memory.

2) If I request only about 800 MW or so, Molpro complains that not 
enough memory is allocated; with the 4000 MW requested above there is 
no such complaint, so Molpro apparently considers the requested amount 
sufficient.

3) I get the segmentation fault even if I run with just one processor.

Any clues what might be happening?

Thanks,

Jussi Eloranta