[molpro-user] molpro with gm-2.0.14

Tatiana Korona tania at tiger.chem.uw.edu.pl
Sat Mar 5 12:25:45 GMT 2005


Dear Kirk,

Which version of Molpro are you using? I had the same messages on one
computer up to version 2002.8, and they disappeared after the patch
corlsi.def was applied. This rare problem appeared because the pointers
produced by icorr/i came out negative on that box.
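In case it helps to see the mechanism: below is a minimal C sketch (not
Molpro's actual code) of how a bound check of this kind can misfire when
offsets into the work array are held in 32-bit integers. The names lt
and memstack only mirror the fields printed in the warning; their exact
meaning inside Molpro is an assumption on my part. (The ******** shown
for LT/IBASE is just Fortran printing asterisks when a number does not
fit its output field.)

  #include <stdio.h>
  #include <stdint.h>

  int main(void) {
      /* Suppose the true offset returned by an icorr-like allocator
         is valid but exceeds 2^31-1, e.g. because the work array
         sits high in the address space. */
      int64_t true_offset = 3500000000LL;   /* needs more than 31 bits */

      /* Stored in a 32-bit integer it wraps negative on the usual
         two's-complement machines. */
      int32_t lt = (int32_t)true_offset;
      int32_t memstack = 2000000000;        /* allocated top, as an offset */

      /* An illustrative sanity check in the spirit of CORLSI. */
      if (lt < 0 || lt > memstack)
          printf("?WARNING: TOO HIGH ADDRESS IN CORLSI: LT=%ld"
                 "  MEMSTACK=%ld\n", (long)lt, (long)memstack);
      return 0;
  }

Compiled and run, this prints a warning with LT=-794967296, much like
the negative LTOP/MEMSTACK values in Kirk's output, even though the
underlying allocation is fine - which matches his observation that the
results are not affected.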

Best wishes,

Tatiana

Dr. Tatiana Korona http://tiger.chem.uw.edu.pl/kwanty/staff/tania/tania_en.html
Quantum Chemistry Laboratory
University of Warsaw
Pasteura 1, PL-02-093 Warsaw, POLAND

"The man who makes no mistakes does not usually make anything."
                                       Edward John Phelps (1822-1900)

On Fri, 4 Mar 2005, Kirk Peterson wrote:

> Hi all,
>
> I wonder if anyone has experience with parallel Molpro using the
> latest Myrinet GM software, 2.0.14, and its associated MPICH.  We've
> transitioned our Myrinet cluster to the Rocks OS, which came with the
> newest GM release.  Everything seems fine, but when we run parallel
> Molpro, we get annoying messages such as:
>
>   ?WARNING: TOO HIGH ADDRESS IN CORLSI: LT=********  IBASE=********
> LTOP=-210605060  MEMSTACK=-180512421
>
>   ?WARNING: TOO HIGH ADDRESS IN CORLSI: LT=********  IBASE=********
> LTOP=-210605057  MEMSTACK=-180512421
>
>
> We always get a couple at the start of every job, when the initial
> memory is allocated, and the MRCI code gives several of these per CI
> iteration (hence the annoyance).  I should stress that the results do
> not seem to be affected.  I'm guessing it might be a Global Arrays
> issue, but I'm still waiting for an answer from that camp.  Note that
> one benefit of the newer GM is that the cards are now configured for
> 16 ports rather than the old limit of 8.  Since Molpro needs 3 ports
> per job, one sometimes ran into trouble if a node happened to be
> overallocated (see the port arithmetic sketched below the quote).
>
> thanks in advance,
>
> -Kirk
>
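On the port budget: a back-of-the-envelope check, using only the
figures from Kirk's message (3 GM ports per Molpro job, 8 ports on the
old cards, 16 on the new) and ignoring any ports GM may reserve for
its own use:

  #include <stdio.h>

  int main(void) {
      const int ports_per_job = 3;           /* from Kirk's message */
      const int old_ports = 8, new_ports = 16;

      /* Integer division: whole jobs that fit on one node. */
      printf("old GM (%2d ports): %d jobs per node\n",
             old_ports, old_ports / ports_per_job);
      printf("new GM (%2d ports): %d jobs per node\n",
             new_ports, new_ports / ports_per_job);
      return 0;
  }

This gives 2 jobs per node before and 5 after, so placing a third job
on an 8-port node would have run out of ports.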



