CCSD(T) single point fails after magic iteration number 6

ag0020 at unt.edu
Sat Jan 18 18:01:44 GMT 2003


Dear Molpro users,


I have had the same problem (with CCSD(T) jobs), and I don't know how Molpro 
calculates its disk space usage. I checked the disk usage throughout the 
calculation, and it seems the job never used the disk; it mostly used memory 
and swap space. This problem occurs mainly with Molpro 2002.3. Reading the 
output shows that just for sorting the integrals, Molpro 2002.3 needs a huge 
amount of space! I ran CCSD(T)/AVQZ on a machine with 8 GB of RAM and it still 
crashed, with an out-of-memory error.

My questions: how much memory do we need to run jobs at this level of theory? 
Is there a way to make Molpro use disk space instead of all the available 
memory, or at least switch over to disk once memory usage hits a certain 
limit?

I have two machines: one has a single Athlon XP with 1.5 GB of RAM, the other 
is a dual PIII with 760 MB of RAM. On the first I can't even run a 
CCSD(T)/AVTZ geometry optimization (for molecules with two heavy atoms), yet 
the second handles it, although it has less RAM and less disk space! Could the 
second CPU be the difference? I think the manual should say a bit more about 
how Molpro uses RAM and disk space.

Thanks; any suggestion is more than appreciated.
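
P.S. For what it's worth, the only memory control I know of is the memory 
card at the top of the input file; the value below is only an illustration, 
not a tested recommendation:

    memory,100,m        ! request 100 megawords (about 800 MB) of core memory

and the scratch location is picked on the command line, e.g.

    molpro -d /scratch1 job.inp

As far as I understand it, Molpro keeps roughly the requested amount in RAM 
for its work arrays and pages the rest (integrals, amplitudes) through the 
scratch files, so lowering the value should in principle shift more of the 
load onto disk. Whether that avoids these crashes, I cannot say.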



>Dear Molpro users,

>Perhaps I put this too far down in the original message:

>> >>> So I thought maybe this is due to disk space.
>> >>> But when I run this on a machine with huge disks I get the same error and
>> >>> "DISK USED * 7.00 GB", even though the smallest scratch directory is 15 GB.

>If this is not convincing or informative, more details follow: Red Hat 7.3 
>with kernel 2.4.18, Molpro version 2002.3.
>If I track the disk space used every minute during this failing job, around 
>the 6th iteration I see:

>from "df -l" command:

>Filesystem 1k-blocks Used Available Use% Mounted on
>/dev/hda1 2015984 1375832 537744 72% /
>none 1030532 0 1030532 0% /dev/shm
>/dev/sda1 17639220 1051104 15692096 7% /scratch1
>/dev/hda3 56084596 4638416 48597220 9% /scratch2

>then just before it dies:
>Filesystem 1k-blocks Used Available Use% Mounted on
>/dev/hda1 2015984 1375844 537732 72% /
>none 1030532 0 1030532 0% /dev/shm
>/dev/sda1 17639220 1051104 15692096 7% /scratch1
>/dev/hda3 56084596 3801492 49434144 8% /scratch2

>and then finally when it is dead:
>Filesystem 1k-blocks Used Available Use% Mounted on
>/dev/hda1 2015984 1375812 537764 72% /
>none 1030532 0 1030532 0% /dev/shm
>/dev/sda1 17639220 20 16743180 1% /scratch1
>/dev/hda3 56084596 1476 53234160 1% /scratch2

>So I wonder whether this still leaves any chance that it is "just a full 
>disk". Perhaps "df" is not a good way to monitor the disk space in this case.
>If so, would you let me know a more definitive way to establish that the 
>disk limit is far from being reached?
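
A thought from me on this point: rather than trusting the filesystem totals, 
one could watch the scratch files themselves. A minimal sketch, using the 
/scratch1 and /scratch2 paths from the df output above (disk.log is just a 
name I made up):

    while true; do
        ls -l /scratch1 /scratch2   # per-file sizes of the Molpro scratch files
        df -l                       # filesystem totals, for comparison
        sleep 60
    done >> disk.log

If no individual scratch file ever approaches the reported 7 GB either, that 
would argue more strongly against a full disk.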

>Tentatively it looks very much like a bug, maybe in Molpro or maybe in the 
>Linux kernel.
>I noticed that when this job runs, the system load is substantial and you 
>will occasionally see "kswapd" and "kupdated" taking over a whole CPU.
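
On the kswapd observation: if it is paging that kills the job, a simple check 
(just a suggestion on my part) is to run

    vmstat 5

alongside the calculation and watch the si/so (swap-in/swap-out) columns; if 
they climb steadily before the crash, that would point at running out of 
memory rather than running out of disk.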

>Anyway, I am very much interested in a solution to this problem. Has anybody 
>gone over ~7 GB with the 2.4.18 kernel and Molpro 2002.3 running UCCSD(T)? 
>If so, could you help me?

>Best regards,
>Ilja