[molpro-user] version 2012 with intel 13.1.1 and openmpi 1.6.4
jhammond at alcf.anl.gov
Fri May 17 15:49:17 BST 2013
It is an incorrect inference that just because a problem occurs only
when nproc>1, the problem must be in MPI. There are an uncountable
number of ways for an application to fail in parallel that have
nothing to do with MPI. For example, if domain decomposition leaves
one process with no data but the application does not work around the
fact that a 0-byte malloc can return NULL, the application will
segfault when it tries to use the empty data. I'm not suggesting that
Molpro is doing this, but it is a common mistake I observe.
On the other hand, if you discover that one implementation of MPI
fails and another does not, e.g. OpenMPI 1.6.4 crashes but MPICH 3.0.5
does not, then it is merely plausible that MPI is to blame. However,
even then it is possible for applications to use MPI incorrectly but
such that one implementation will tolerate it and another will not.
For example, you can use MPI incorrectly by calling it simultaneously
from multiple threads when you have requested only MPI_THREAD_FUNNELED;
OpenMPI will be fine but MPICH will crash. This is because OpenMPI
rounds the thread-safety support up to MPI_THREAD_MULTIPLE in this
case while MPICH does not (this was true the last time I checked; if
OpenMPI has changed since then, I apologize).
I apologize for not solving your problem and instead offering this
tangential information about the use of MPI in parallel computing.
On Fri, May 17, 2013 at 4:29 AM, Shachar Klaiman
<shachar.klaiman at pci.uni-heidelberg.de> wrote:
> Dear Developers and Users,
> Has anyone encountered problems installing Molpro v2012 using the Intel
> compilers and MKL version 13.1.1 and OpenMPI version 1.6.4?
> The compilation proceeded without any problems, but when running the tests,
> although the output reports no errors (the calculation itself finished), it
> doesn't proceed, and an errout file is created saying that a segmentation
> fault occurred. I also found when running make tuning that the program
> suddenly hangs at the end of the calculation rather than proceeding. I guess
> this is a problem with the MPI, since it did not occur when I ran with 1
> process. Any ideas?
> Thanks in advance.
> Molpro-user mailing list
> Molpro-user at molpro.net
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
ALCF docs: http://www.alcf.anl.gov/user-guides