[molpro-user] Compiling Molpro with OpenMPI

Andy May MayAJ1 at cardiff.ac.uk
Mon Dec 17 11:53:49 GMT 2012


Lasse,

Yes, I see this problem with Molpro 2012.1.0. If you use the 2012.1 
'nightly' tarball instead, this particular problem should be fixed. 
However, we have not yet finished the Intel 13 port, so I would not 
recommend using that compiler for now, since there are a number of known 
problems. The Intel 12 compilers are working fine. The systems we test 
regularly are listed here:

http://www.molpro.net/supported/

Regarding your bonus question, I would recommend using the same 
compiler for both OpenMPI and Molpro where possible, even though it 
should not be required. If you are building parallel Molpro on a 
standalone machine, you can ask Molpro configure to build OpenMPI 
with the same compilers automatically, for example:

./configure -batch -mpp -auto-openmpi
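
If you instead build OpenMPI yourself with the Intel compilers and then 
point Molpro at that installation, the usual pattern is roughly the 
following (a sketch only; the install prefix and versions are examples, 
so adjust the paths to your own system):

cd openmpi-1.6.2
./configure CC=icc CXX=icpc FC=ifort F77=ifort \
    --prefix=/opt/openmpi-1.6.2-intel
make -j4 && make install

and afterwards configure Molpro against that installation, e.g.

./configure -icc -i8 -mpp -openmpi \
    -mppbase /opt/openmpi-1.6.2-intel/include -batch

Keeping the Fortran runtime consistent between the MPI library and 
Molpro is typically why matching compilers is recommended.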

Best wishes,

Andy

On 16/12/12 11:35, Lasse Nielsen wrote:
> Hi
>
> I have tried to compile Molpro 2012.1 using Intel's
> composer_xe_2013.1.117 (C compiler, Fortran compiler and MKL library)
> and OpenMPI, both versions 1.6.2 and 1.4.3. However, every time I run
> the test jobs I get the same error message, see below.
> Both Molpro and OpenMPI were compiled on the frontend.
>
> I configured Molpro this way:
> ./configure -icc -i8 -mpp -openmpi -mppbase
> /software/kemi/openmpi-1.4.3/include/ -prefix /kemi/obessal/MOLPRO -batch
> and then built with GNU Make 3.81,
> followed by make test.
>
> How can I solve this problem?
>
>
> Running job aims.test
> CMA: no RDMA devices found
>
>   GLOBAL ERROR fehler on processor   0
>      0: fehler 0 (0).
>      0: In mpi_utils.c [MPIGA_Error]: now exiting...
> --------------------------------------------------------------------------
> [[44089,1],0]: A high-performance Open MPI point-to-point messaging module
> was unable to find any relevant network interfaces:
>
> Module: OpenFabrics (openib)
>    Host: fend03.dcsc.ku.dk
>
> Another transport will be used instead, although this may result in
> lower performance.
> --------------------------------------------------------------------------
> Received signal 11 Segmentation violation
>      0: fehler 0 (0).
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 0.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 11981 on
> node fend03.dcsc.ku.dk exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> **** PROBLEMS WITH JOB aims.test
> aims.test: ERRORS DETECTED: non-zero return code ... inspect output
> **** For further information, look in the output file
> **** /users/kemi/obessal/MOLPRO/Molpro/testjobs/aims.errout
>
>
> Bonus question:
> Also, is it important to compile Molpro and OpenMPI with the same compiler?
>
>
> Best Regards
> - Lasse


