The following additional options for the ''molpro'' command may be used to specify and control parallel execution. In addition, appropriate memory specifications (''%%-m, -M, -G%%'') are important, see section [[running Molpro on parallel computers#memory specifications|memory specifications]]. An illustrative example command line is given after the following list.
  
  * **''-n'' $|$ ''%%--tasks%%'' //tasks//** specifies the number of parallel processes to be set up, and defaults to 1.
  * **''-N'' $|$ ''%%--task-specification%%'' //node1:tasks1,node2:tasks2$\dots$//** //node1, node2// etc. specify the host names of the nodes on which to run. //tasks1, tasks2// etc. specify the number of tasks on each node.
  * **''%%--ga-impl%%'' //method//** specifies the method by which large data structures are held in parallel. Available options are ''GA'' (GlobalArrays, default) or ''disk'' (MPI Files, see [[#disk option]]). This option is most relevant for the more recent programs such as Hartree-Fock, DFT, MCSCF/CASSCF, and the PNO programs.
  * **''-D'' $|$ ''%%--global-scratch%%'' //directory//** specifies the scratch directory for the program which is accessible by all processors in multi-node calculations. This only affects parallel calculations with the [[running Molpro on parallel computers#disk option|disk option]].
  * **''%%--all-outputs%%''** produces an output file for each process when running in parallel.
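
For example, the following command lines sketch how these options might be combined. The input file name, node names, and scratch path are placeholders and should be adapted to the local setup; consult your batch system documentation for how node lists are provided in queued jobs.

<code bash>
# 8 parallel processes on the current machine (input file name is a placeholder)
molpro -n 8 h2o.inp

# 4 tasks on each of two named nodes, holding large data structures as MPI files
# in a scratch directory visible to all nodes (node names and path are placeholders)
molpro -N node1:4,node2:4 --ga-impl disk -D /shared/scratch h2o.inp
</code>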