====== Running Molpro on parallel computers ======

There are different GA implementation options (runtimes), and there are advantages and disadvantages for using one or the other implementation (see [[GA Installation]]).

**Since …**
The behavior of previous versions can be recovered by the ''…''.
However, ''…''.
Preallocating GA is not required …
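
For orientation, the following is a minimal sketch of a parallel invocation; it is not taken from this page, and it assumes only the usual ''-n'' (number of parallel processes) and ''-m'' (working memory per process) command-line options. The input file name is a placeholder.

<code bash>
# Minimal sketch: run Molpro with 8 parallel processes and 500 megawords
# of working memory per process; h2o_ccsd.inp is a placeholder input file.
molpro -n 8 -m 500m h2o_ccsd.inp
</code>

Because every process claims its own working memory, the per-process value should be chosen with the total memory of the node (and any Global Arrays space) in mind.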

===== GA Installation notes =====

  * **''…''** …
  * **''…''** …
  * **''…''** …
  * **''…''** …

Since version 2021.1, Molpro can use MPI files instead of GlobalArrays to store large global data. This option can be enabled globally by setting the environment variable ''…''.
Since version 2021.2, the disk option is the default in single-node calculations.
Some programs in Molpro, including DF-HF, DF-KS, (DF-)MULTI, DF-TDDFT, and PNO-LCCSD, also support an input option ''…''.
The file system for these MPI files must be accessible by all processors.
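
As a sketch of how such a global switch is typically applied: the actual variable name is cut off on this page, so both the name ''MOLPRO_GA_IMPL'' and the value ''disk'' below are hypothetical placeholders and must be replaced by the documented ones.

<code bash>
# Sketch only: MOLPRO_GA_IMPL and the value "disk" are placeholders,
# not confirmed Molpro names; substitute the names given in the manual.
export MOLPRO_GA_IMPL=disk
molpro -n 16 dft_large.inp   # dft_large.inp is a placeholder input file
</code>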

The numerical computation of gradients or Hessians, or the automatic generation of potential energy surfaces, requires many similar calculations at different (displaced) geometries. An automatic parallel computation of the energy and/or gradients at different geometries is implemented for the gradient, hessian, and surf programs. In this so-called mppx-mode, each processing core runs an independent calculation in serial mode. This happens automatically using the ''…''.
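
As an illustration of the kind of job that benefits: ignoring symmetry, a numerical Hessian obtained by central differences of analytic gradients for an N-atom molecule needs about 2 x 3N = 6N displaced gradient evaluations, so for water roughly 18 geometries could be distributed over the available cores. The launch line below is a generic sketch; the input file name is a placeholder, and the option that selects mppx mode is cut off on this page, so it is omitted here.

<code bash>
# Sketch: 16 cores available for the displaced-geometry calculations;
# h2o_freq.inp is a placeholder input requesting numerical Hessians.
molpro -n 16 h2o_freq.inp
</code>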

===== Options for developers =====

==== Debugging options ====

  * **''…''** …
  * **''…''** …

==== Options for pure MPI-based PPIDD build ====

This section is **not** applicable if the Molpro binary release is used, or when Molpro is built using the GlobalArrays toolkit (which we recommend).