Hi Manhui,

The latest version with patches does solve one of my problems, the xml to .out file problem. Thanks a lot. But I still have two problems.

*One* is setting the TMPDIR path. During installation I did not set TMPDIR, and no error was reported. If I only use one node, nothing is wrong, but when I use two different nodes it seems that I cannot set TMPDIR to a shared path. For example:

My working dir is a dir shared via SNFS, so ./tmp is shared by each node, but node1 and node2 have their own separate /tmp dirs.

This command is OK; molpro will use the default TMPDIR, /tmp:

mpirun -np 2 -machinefile ./hosts molprop.exe ./h2f_merge.com
$ cat hosts
node1
node1

This command goes wrong; molpro will use ./tmp as TMPDIR:

mpirun -np 2 -machinefile ./hosts molprop.exe -d ./tmp ./h2f_merge.com
$ cat hosts
node1
node2

From the .out file, the error looks like this:
--------------------------------------------------------------------
 Recomputing integrals since basis changed


 Using spherical harmonics

 Library entry F      S cc-pVDZ              selected for orbital group  1
 Library entry F      P cc-pVDZ              selected for orbital group  1
 Library entry F      D cc-pVDZ              selected for orbital group  1


 ERROR OPENING FILE 4
NAME=/home_soft/home/scheping/tmp/molpro2009.1_examples/examples/my_tests/./tmp/sf_T0400027332.TMP
IMPLEMENTATION=sf    STATUS=scratch   IERR=    35
 ? Error
 ? I/O error
 ? The problem occurs in openw
--------------------------------------------------------------------
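
(As a quick sanity check on the shared path itself, something like the following can be run; the sh -c wrapper and the test file name are only illustrative and are not part of molpro:

mpirun -np 2 -machinefile ./hosts sh -c 'touch ./tmp/write_test_$(hostname)'

This just asks each node listed in ./hosts to create a file in the shared ./tmp, which helps separate a plain permission or mount problem from the way molpro opens its scratch files there.)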

*The other one* is that the script molprop_2009_1_Linux_x86_64_i8 cannot launch a parallel run *across different* nodes. When I use

molprop_2009_1_Linux_x86_64_i8 *-N node1:4* test.com,

it is OK; but when I use

molprop_2009_1_Linux_x86_64_i8 *-N node1:4, node2:4* test.com,

I find that only one process is running, and from the "-v" option I can see a message like

mpirun -machinefile some.hosts.file *-np 1* test.com.

The some.hosts.file is correct, but the number of processes is always ONE. So far I have to bypass this script and directly use

mpirun -np N -machinefile hosts molprop_2009_1_Linux_x86_64_i8.exe test.com.
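
For completeness, this is roughly what that manual workaround looks like for the node1:4, node2:4 case (the process count of 8 and the hosts contents are just my example, one line per slot as in the hosts files above):

$ cat hosts
node1
node1
node1
node1
node2
node2
node2
node2
$ mpirun -np 8 -machinefile hosts molprop_2009_1_Linux_x86_64_i8.exe test.com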

Can you give some suggestions, especially for the first question? On our
system, writing to /tmp is always forbidden.
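
(What I would ideally do is point molpro at a node-local scratch directory other than /tmp, using the same -d option as above; /scratch/scheping below is only a made-up example of a local path that would exist on each node:

mpirun -np 2 -machinefile ./hosts molprop.exe -d /scratch/scheping ./h2f_merge.com)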

Thanks.


On Sat, Oct 10, 2009 at 2:34 PM, He Ping <heping at sccas.cn> wrote:

> Dear Manhui,
>
> Thanks for your patience and explicit answer. Let me do some tests and give
> you feedback.
>
>
> On Fri, Oct 9, 2009 at 10:56 PM, Manhui Wang <wangm9 at cardiff.ac.uk> wrote:
>
>> Dear He Ping,
>>      GA may take full advantage of shared memory on an MPP node, but
>> MPI-2 doesn't. On the other hand, MPI-2 may take advantage of the
>> built-in MPI-2 library with a fast connection. The performance depends
>> on many factors, including the MPI library, machine, network, etc. It is
>> better to build both versions of Molpro, and then choose the better one
>> on that machine.
>>      It doesn't seem to be hard to make GA4-2 work on a machine like
>> yours. Details were shown in my previous email.
>>
>> Best wishes,
>> Manhui
>>
>> He Ping wrote:
>> > Hi Manhui,
>> >
>> > Thanks. I will try these patches first.
>> > But would you like to tell me something about GA's effect on molpro? So
>> > far I have not been able to get a GA version of molpro, so I do not know
>> > the performance of the GA version. If the GA version is not much better
>> > than the non-GA version, I will not spend much time building it.
>> > Thanks a lot
>> >
>> > On Fri, Oct 9, 2009 at 7:25 PM, Manhui Wang <wangm9 at cardiff.ac.uk> wrote:
>> >
>> >     Dear He Ping,
>> >         Recent patches include some bugfixes for intel compiler 11,
>> >     OpenMPI, and running molpro across nodes with InfiniBand. If you have
>> >     not updated them, please do it now. It may resolve your existing
>> >     problems.
>> >
>> >     He Ping wrote:
>> >     > Dear Manhui,
>> >     >
>> >     > Thanks a lot for your detailed reply, that's very helpful. Very
>> >     > sorry to answer late, as I had to do a lot of tests. So far, one
>> >     > version of molpro2009.1 is basically ok, but I still have some
>> >     > questions.
>> >     >
>> >     >    1. Compile Part.
>> >     >       Openmpi 1.3.3 passes compile and link both with GA 4.2 and
>> >     >       without GA. I do not use my own blas, so I use the default;
>> >     >       this is my configure step:
>> >     >
>> >     >       ./configure -batch -ifort -icc -mppbase $MPI_HOME/include64 \
>> >     >         -var LIBS="-L/usr/lib64 -libverbs -lm" -mpp
>> >     >       (in your letter, I guess you forgot this necessary option.)
>> >     >
>> >     >       But *intelmpi failed for both*; I show the err messages
>> >     >       separately below.
>> >     >
>> >     >       *Intelmpi w/o GA:  *
>> >     >
>> >     >       make[1]: Nothing to be done for `default'.
>> >     >       make[1]: Leaving directory
>> >     >       `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/utilities'
>> >     >       make[1]: Entering directory
>> >     >       `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'
>> >     >       Preprocessing include files
>> >     >       make[1]: *** [common.log] Error 1
>> >     >       make[1]: *** Deleting file `common.log'
>> >     >       make[1]: Leaving directory
>> >     >       `/datastore/workspace/scheping/molpro2009.1_intelmpi/molpro2009.1/src'
>> >     >       make: *** [src] Error 2
>> >     >
>> >     >       *Intelmpi w GA:*
>> >     >
>> >     >       compiling molpro_cvb.f
>> >     >       failed
>> >     >       molpro_cvb.f(1360): error #5102: Cannot open include file
>> >     >       'common/ploc'
>> >     >             include "common/ploc"
>> >     >       --------------^
>> >     >       compilation aborted for molpro_cvb.f (code 1)
>> >     >       make[3]: *** [molpro_cvb.o] Error 1
>> >     >       preprocessing perfloc.f
>> >     >       compiling perfloc.f
>> >     >       failed
>> >     >       perfloc.f(14): error #5102: Cannot open include file
>> >     >       'common/ploc'
>> >     >             include "common/ploc"
>> >     >       --------------^
>> >     >       perfloc.f(42): error #6385: The highest data type rank
>> >     >       permitted is INTEGER(KIND=8).   [VARIAT]
>> >     >             if(.not.variat)then
>> >     >       --------------^
>> >     >       perfloc.f(42): error #6385: The highest data type rank
>> >     >       permitted is INTEGER(KIND=8).
>> >
>> >     Which version of intel compilers are you using? Has your GA worked fine?
>> >     We have tested Molpro2009.1 with
>> >     (1) intel/compilers/10.1.015 (11.0.074), GA 4-2 hosted by intel/mpi/3.1 (3.2)
>> >     (2) without GA, intel/compilers/10.1.015 (11.0.074), intel/mpi/3.1 (3.2)
>> >
>> >     all work fine. CONFIG files will be helpful to see the problems.
>> >     >
>> >     >    2. No .out file when I use more than about 12 processes, but I
>> >     >       can get the .xml file. It's very strange: everything is ok
>> >     >       when the process number is less than 12, but once it exceeds
>> >     >       this number, such as 16 cpus, molpro always gets this err
>> >     >       message,
>> >     >
>> >     >       forrtl: severe (174): SIGSEGV, segmentation fault occurred
>> >     >       Image              PC                Routine  Line     Source
>> >     >       libopen-pal.so.0   00002AAAAB4805C6  Unknown  Unknown  Unknown
>> >     >       libopen-pal.so.0   00002AAAAB482152  Unknown  Unknown  Unknown
>> >     >       libc.so.6          000000310FC5F07A  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  00000000005A4C36  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  00000000005A4B84  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  000000000053E57B  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  0000000000540A8C  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  000000000053C5E5  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  00000000004BCA5C  Unknown  Unknown  Unknown
>> >     >       libc.so.6          000000310FC1D8A4  Unknown  Unknown  Unknown
>> >     >       molprop_2009_1_Li  00000000004BC969  Unknown  Unknown  Unknown
>> >     >
>> >     >       Can I ignore this message?
>> >     Have you seen this on one or multiple nodes? If on multiple nodes, the
>> >     problem has been fixed by recent patches. By default, both the *.out
>> >     and *.xml files can be obtained, but you can use the option
>> >     --no-xml-output to disable the *.xml.
>> >     In addition, OpenMPI seems to be unstable sometimes. When lots of jobs
>> >     are run with OpenMPI, some jobs hang up unexpectedly. This behavior is
>> >     not seen for Intel MPI.
>> >
>> >     >
>> >     >    3. Script Err. For the molpro openmpi version, the script
>> >     >       molpro_openmpi1.3.3/bin/molprop_2009_1_Linux_x86_64_i8 seems
>> >     >       not to work.
>> >     >       When I call this script, only one process is started, even if I
>> >     >       use -np 8. So I have to run it manually, such as
>> >     >       mpirun -np 8 -machinefile ./hosts
>> >     >       molprop_2009_1_Linux_x86_64_i8.exe test.com
>> >     Has your ./bin/molpro worked? For me, it works fine. In ./bin/molpro,
>> >     some environment settings are included. In the case that ./bin/molpro
>> >     doesn't work properly, you might want to directly use
>> >     molprop_2009_1_Linux_x86_64_i8.exe; then it is your responsibility to
>> >     set up these environment variables.
>> >     >    4. Molpro w GA can not cross over nodes. One node is ok, but if I
>> >     >       cross over nodes, I will get a "molpro ARMCI DASSERT fail" err,
>> >     >       and molpro can not be terminated normally. Do you know the
>> >     >       difference between w GA and w/o GA? If GA is not better than
>> >     >       w/o GA, I will skip this GA version.
>> >     I think this problem has been fixed by recent patches.
>> >     As for the difference between molpro w GA and w/o GA, it is hard to
>> >     make a simple conclusion. For calculations with a small number of
>> >     processes (e.g. < 8), molpro w GA might be somewhat faster, but molpro
>> >     without GA is quite competitive in performance when it is run with a
>> >     large number of processes. Please refer to the benchmark results
>> >     (http://www.molpro.net/info/bench.php).
>> >     >
>> >     >
>> >     >
>> >     >       Sorry for packing so many questions together; an answer to any
>> >     >       one question will help me a lot. And I think questions 1 and 2
>> >     >       are the more important ones for me. Thanks.
>> >     >
>> >     >
>> >     >
>> >
>> >     Best wishes,
>> >     Manhui
>> >
>> >
>> >
>> >     > On Thu, Sep 24, 2009 at 5:33 PM, Manhui Wang <wangm9 at cardiff.ac.uk> wrote:
>> >     >
>> >     >     Hi He Ping,
>> >     >       Yes, you can build parallel Molpro without GA for 2009.1.
>> >     >     Please see the manual, section A.3.3 Configuration.
>> >     >
>> >     >     For the case of using the MPI-2 library, one example can be
>> >     >
>> >     >     ./configure -mpp -mppbase /usr/local/mpich2-install/include
>> >     >
>> >     >     and the -mppbase directory should contain the file mpi.h. Please
>> >     >     ensure the built-in or freshly built MPI-2 library fully supports
>> >     >     the MPI-2 standard and works properly.
>> >     >
>> >     >
>> >     >     Actually we have tested molpro2009.1 on almost the same system
>> >     >     as the one you mentioned (EM64T, Red Hat Enterprise Linux Server
>> >     >     release 5.3 (Tikanga), Intel MPI, ifort, icc, Infiniband). For
>> >     >     both the GA and MPI-2 builds, all work fine. The configurations
>> >     >     are shown as follows (beware of line wrapping):
>> >     >     (1) For Molpro2009.1 built with MPI-2:
>> >     >     ./configure -batch -ifort -icc \
>> >     >       -blaspath /software/intel/mkl/10.0.1.014/lib/em64t \
>> >     >       -mppbase $MPI_HOME/include64 \
>> >     >       -var LIBS="-L/usr/lib64 -libverbs -lm"
>> >     >
>> >     >     (2) For Molpro built with GA 4-2:
>> >     >      Build GA4-2:
>> >     >           make TARGET=LINUX64 USE_MPI=y CC=icc FC=ifort COPT='-O3' \
>> >     >           FOPT='-O3' \
>> >     >           MPI_INCLUDE=$MPI_HOME/include64 MPI_LIB=$MPI_HOME/lib64 \
>> >     >           ARMCI_NETWORK=OPENIB MA_USE_ARMCI_MEM=y \
>> >     >           IB_INCLUDE=/usr/include/infiniband IB_LIB=/usr/lib64
>> >     >
>> >     >           mpirun ./global/testing/test.x
>> >     >      Build Molpro:
>> >     >     ./configure -batch -ifort -icc \
>> >     >       -blaspath /software/intel/mkl/10.0.1.014/lib/em64t \
>> >     >       -mppbase /GA4-2path \
>> >     >       -var LIBS="-L/usr/lib64 -libverbs -lm"
>> >     >
>> >     >     (LIBS="-L/usr/lib64 -libverbs -lm" will make molpro link with the
>> >     >     Infiniband library)
>> >     >
>> >     >     (some notes about MOLPRO built with the MPI-2 library can also be
>> >     >     found in manual section 2.2.1 Specifying parallel execution)
>> >     >     Note: for MOLPRO built with the MPI-2 library, when n processes
>> >     >     are specified, n-1 processes are used to compute and one process
>> >     >     is used to act as shared counter server (in the case of n=1, one
>> >     >     process is used to compute and no shared counter server is
>> >     >     needed). Even so, it is quite competitive in performance when it
>> >     >     is run with a large number of processes.
>> >     >     If you have built both versions, you can also compare the
>> >     >     performance yourself.
>> >     >
>> >     >
>> >     >     Best wishes,
>> >     >     Manhui
>> >     >
>> >     >     He Ping wrote:
>> >     >     > Hello,
>> >     >     >
>> >     >     > I want to run the molpro2009.1 parallel version on an
>> >     >     > infiniband network. I met some problems when using GA; in the
>> >     >     > manual, section 3.2, there is one line that says:
>> >     >     >
>> >     >     > If the program is to be built for parallel execution then the
>> >     >     > Global Arrays toolkit *or* the MPI-2 library is needed.
>> >     >     >
>> >     >     > Does that mean I can build the molpro parallel version without
>> >     >     > GA? If so, can someone tell me some more about how to configure?
>> >     >     > My system is EM64T, Red Hat Enterprise Linux Server release 5.1
>> >     >     > (Tikanga), intel mpi, intel ifort and icc.
>> >     >     >
>> >     >     > Thanks a lot.
>> >     >     >
>> >     >     > --
>> >     >     >
>> >     >     > He Ping
>> >     >     >
>> >     >     >
>> >     >     >
>> >     >
>> >
>> >     >
>> >     >     --
>> >     >     -----------
>> >     >     Manhui  Wang
>> >     >     School of Chemistry, Cardiff University,
>> >     >     Main Building, Park Place,
>> >     >     Cardiff CF10 3AT, UK
>> >     >     Telephone: +44 (0)29208 76637
>> >     >
>> >     >
>> >     >
>> >     >
>> >     > --
>> >     >
>> >     > He Ping
>> >     > [O] 010-58813311
>> >
>> >     --
>> >     -----------
>> >     Manhui  Wang
>> >     School of Chemistry, Cardiff University,
>> >     Main Building, Park Place,
>> >     Cardiff CF10 3AT, UK
>> >     Telephone: +44 (0)29208 76637
>> >
>> >
>> >
>> >
>> > --
>> >
>> > He Ping
>> > [O] 010-58813311
>>
>> --
>> -----------
>> Manhui  Wang
>> School of Chemistry, Cardiff University,
>> Main Building, Park Place,
>> Cardiff CF10 3AT, UK
>> Telephone: +44 (0)29208 76637
>>
>>
>
>
> --
>
> He Ping
> [O] 010-58813311
>



-- 

He Ping
[O] 010-58813311
