[molpro-user] Re: ... i/o bottleneck?

Sigismondo Boschi s.boschi at cineca.it
Tue Mar 8 07:55:21 GMT 2005


subject was:
Re: [molpro-user] Re: performance on Sun ultraSparc, solaris 9: disk i/o 
bottleneck?

> 
> Now, it would seem that the devices named ssd0 and ssd3 (and also ssd2) are quite bottlenecked in terms of their service wait time and also the % time busy.  Unfortunately, I don't know how to verify that these correspond to the scratch filesystems (I suspect they do), because I can't seem to find the map that connects device id's in iostat to device files in /etc/mnttab.  Using this data, can someone on the list confirm that my suspicion that there is a disk bottleneck on these systems, and if possible, recommend to me how I can find out which iostat device id's correspond to device files on my system?  I would be very grateful for any suggestions.

Hi,

I see the same behavior with Molpro on an IBM SP p690 system with GPFS.

The underlying reason is that the I/O is very fragmented (the blocks are 
too small). What happened is that the Molpro developers wanted to forget 
about I/O: on old systems you had no memory to spare, and on more recent 
systems caching was doing the job quite well.

I am not sure this is the only reason, but it is certainly the main one.

What happens today is that computations are larger, memory is larger, and 
those block sizes are simply too small for today's systems. This is 
especially true for GPFS (I do not know about Sun), but local filesystems 
also give quite nasty caching problems...

The point is that years ago the system-call overhead involved in I/O was 
negligible. Today it is the dominant part.
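
Just to make the point concrete, here is a tiny C test, nothing to do 
with Molpro itself (file names and record sizes are made up), that writes 
the same 256 MB once with 4 KB records and once with 4 MB records. The 
data moved is identical; the difference you measure is essentially the 
65536 write() calls instead of 64:

/* Toy comparison of small vs. large write records: the total amount of
 * data is the same, only the number of system calls changes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

static double write_file(const char *path, size_t chunk, size_t total)
{
    char *buf = malloc(chunk);
    memset(buf, 0, chunk);
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (size_t done = 0; done < total; done += chunk)
        if (write(fd, buf, chunk) < 0)      /* one system call per record */
            break;
    close(fd);
    gettimeofday(&t1, NULL);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + 1e-6 * (t1.tv_usec - t0.tv_usec);
}

int main(void)
{
    size_t total = 256UL * 1024 * 1024;     /* 256 MB written in both cases */
    printf("4 KB records: %.2f s\n", write_file("small.dat", 4096, total));
    printf("4 MB records: %.2f s\n", write_file("large.dat", 4UL << 20, total));
    return 0;
}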

However, this is not my main concern about Molpro. Read on below.

I also gave the parallel version of Molpro a try, and I found that it is 
not possible to avoid storing the integrals on disk (CCSD): even though I 
have about 2 GB of memory per processor, I was able to use just 200 MB of it.

This is the main limitation of parallel Molpro.

The point of parallel computing is that you have memory, which is MUCH 
faster than disk (and, unlike GPFS, is not shared). And it is large: on a 
typical run with 64 CPUs you have 128 GB of distributed memory at your 
disposal... I have rarely seen work files larger than that.

What I have to say here is simply that I did not find a way to tell 
Molpro "don't use disk, use memory; if that is not enough, then use disk". 
If that were possible, the I/O problem would simply not be a problem!
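
To show what I mean, here is a very rough C sketch of a "work file" that 
stays in memory up to a configurable limit and spills only the overflow 
to a scratch file. All the names (workfile, wf_open, wf_write, the limit) 
are invented for illustration; this is not how Molpro is organized 
internally:

/* Hypothetical memory-first work file: records live in a memory buffer
 * up to mem_limit bytes, and only the part beyond that goes to disk. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char   *mem;        /* in-memory part of the work file             */
    size_t  mem_limit;  /* how much we are allowed to keep in memory   */
    size_t  mem_used;
    FILE   *spill;      /* scratch file, touched only beyond the limit */
} workfile;

static workfile *wf_open(size_t mem_limit, const char *spill_path)
{
    workfile *wf = calloc(1, sizeof *wf);
    wf->mem       = malloc(mem_limit);
    wf->mem_limit = mem_limit;
    wf->spill     = fopen(spill_path, "w+b");
    return wf;
}

static void wf_write(workfile *wf, const void *data, size_t n)
{
    size_t to_mem = 0;
    if (wf->mem_used < wf->mem_limit) {         /* fill memory first */
        to_mem = wf->mem_limit - wf->mem_used;
        if (to_mem > n)
            to_mem = n;
        memcpy(wf->mem + wf->mem_used, data, to_mem);
        wf->mem_used += to_mem;
    }
    if (n > to_mem)                 /* only the overflow hits the disk */
        fwrite((const char *)data + to_mem, 1, n - to_mem, wf->spill);
}

With something like this the limit could be taken from the input, and the 
disk would only see the part of the work file that really does not fit in 
memory.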

Today's computers have a much higher memory/disk ratio than they once 
did, and Molpro - as well as all computational chemistry codes - would 
benefit a lot from it, if it were properly used.

Comments?

Regards,

    Sigismondo Boschi

-- 
Sigismondo Boschi, Ph.D.               tel: +39 051 6171559
CINECA (High Performance Systems)      fax: +39 051 6137273 - 6132198
via Magnanelli, 6/3                    http://instm.cineca.it
40033 Casalecchio di Reno (BO)-ITALY   http://www.cineca.it


