[molpro-user] File I/O on molpro2008 parallel version

Neeraj Rai neerajrai at gmail.com
Fri Jan 15 14:03:22 GMT 2010


Thanks Manhui. That helps. In my case the SF files are very small (less than 1
MB), while the disk used is nearly 200 GB for an 8-process job, so the total
usage essentially depends on the number of processes. Hence a 4-process job
run with 2 nodes and 2 processes/node will use roughly the same total disk
space as one run with 1 node and 4 processes/node.
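
As a quick sanity check on that reasoning, here is a minimal sketch (in Python)
of the scratch-space estimate it implies; the per-process EAF size below is an
illustrative assumption, not a number taken from any Molpro output:

    # Rough total scratch estimate for a Molpro parallel job, assuming each
    # process writes its own EAF files of roughly equal size while the SF
    # files are shared by all processes.
    def estimate_scratch_gb(n_processes, eaf_per_process_gb, sf_total_gb):
        return n_processes * eaf_per_process_gb + sf_total_gb

    # Assumed ~25 GB of EAF per process plus <1 GB of SF gives ~200 GB for
    # 8 processes; the total is the same however they are spread over nodes.
    print(estimate_scratch_gb(8, 25.0, 0.001))   # ~200 GB
    print(estimate_scratch_gb(4, 25.0, 0.001))   # ~100 GB, whether 2x2 or 1x4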

On Fri, Jan 15, 2010 at 7:12 AM, Manhui Wang <wangm9 at cardiff.ac.uk> wrote:

> Hi Neeraj,
>     There are at least two kinds of files in the scratch directory: one is EAF
> (Exclusive Access Files), which can only be accessed by a single
> process, and the other is SF (Shared Files), to which multiple processes
> can read and write independently.  For a particular job, each process
> needs a certain amount of space for its EAF files. This means the total
> space increases linearly with the number of processes, while the space
> used by each process stays roughly the same.
>    For many applications, the EAF files may be very large. The Molpro output
> also summarizes the total space used. Here is an example, where DISK
> USED indicates the total space used for EAF and related files, and SF USED
> indicates the total space used for SF:
> (1) job run with 2 processes on 1 node
>  DISK USED  *         9.96 MB
>  SF USED    *         3.05 MB
>  GA USED    *         0.29 MB       (max)       0.27 MB       (current)
>
> (2) job run with 4 processes on 1 node
>  DISK USED  *        19.68 MB
>  SF USED    *         3.05 MB
>  GA USED    *         0.29 MB       (max)       0.27 MB       (current)
>
> In addition, molpro2009.1 should clean up the scratch files after every
> normal termination. For your case, it depends on what type (EAF or SF?)
> the large files are, and how you run Molpro on multiple nodes (how
> many processes on each node?).
>
> Best wishes,
> Manhui
>
>
>
> Neeraj Rai wrote:
> > Hello all,
> >
> >      So far I have run a Molpro calculation on a single node with many
> > cores and a lot of memory, and it wrote nearly 200 GB of temporary
> > files to the scratch space on that particular node. If I were to run the
> > same job on multiple nodes, each with much less memory (distributed-memory
> > paradigm), will each node write these large files to its own scratch
> > space, or is the read/write done on one node and then distributed across
> > nodes, so that my disk requirements will not balloon with the number of
> > nodes I request? Or, if the code writes on individual nodes, will the
> > files be smaller?
> >
> >      Thanks.
> >
> > --
> > Regards,
> > Neeraj.
> >
>
> --
> -----------
> Manhui  Wang
> School of Chemistry, Cardiff University,
> Main Building, Park Place,
> Cardiff CF10 3AT, UK
> Telephone: +44 (0)29208 76637
>
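
Based on the DISK USED / SF USED / GA USED summary lines shown in the quoted
output above, here is a minimal sketch of how one might pull those lines out
of a run's output file to monitor scratch usage; the file name "molpro.out"
is an assumption, so point it at whatever your output file is actually called:

    # Print the disk-usage summary lines from a Molpro output file.
    labels = ("DISK USED", "SF USED", "GA USED")
    with open("molpro.out") as f:          # assumed output file name
        for line in f:
            if any(label in line for label in labels):
                print(line.rstrip())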



-- 
Regards,
Neeraj.