@file m_mpi_proxy.f90
@brief Contains module m_mpi_proxy
@author S. Bryngelson, K. Schmidmayer, V. Coralic, J. Meng, K. Maeda, T. Colonius
@version 1.0
@date June 06, 2019

The module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to harness basic MPI commands into more complicated procedures so as to accomplish the communication goals of the simulation.
| Type | Visibility | Attributes | Name | Initial |
|---|---|---|---|---|
| real | public | | s_time | |
| real | public | | e_time | |
| real | public | | compress_time | |
| real | public | | mpi_time | |
| real | public | | decompress_time | |
| integer | public | | nCalls_time | 0 |
The subroutine initializes the MPI execution environment and queries both the number of processors available for the job and the local processor rank.
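A minimal sketch of the MPI calls such a pair of environment routines typically wraps; the program and variable names here are illustrative, not the module's own:

```fortran
program mpi_env_demo
    use mpi
    implicit none

    integer :: ierr, num_procs, proc_rank

    ! Initialize the MPI execution environment
    call MPI_INIT(ierr)

    ! Query the number of processors available for the job ...
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
    ! ... and the local processor rank
    call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)

    print '(a,i0,a,i0)', 'rank ', proc_rank, ' of ', num_procs

    ! Terminate the MPI execution environment (see the finalization
    ! routine documented at the end of this module)
    call MPI_FINALIZE(ierr)
end program mpi_env_demo
```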
The subroutine terminates the MPI execution environment.
The subroutine initializes the MPI data structures; a sketch of one possible setup follows the argument table below. @param q_cons_vf Conservative variables
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| type(scalar_field) | intent(in) | | dimension(sys_size) | q_cons_vf |
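One common kind of MPI data structure such an initialization routine may set up is a committed subarray datatype describing each rank's block of the global conservative-variable arrays. The sketch below is an assumption about the setup, not the module's actual implementation, and the extents are illustrative:

```fortran
program subarray_sketch
    use mpi
    implicit none

    integer :: ierr, view_type
    integer :: gsizes(3), lsizes(3), starts(3)

    gsizes = [64, 64, 64]  ! global cell counts (illustrative)
    lsizes = [32, 32, 32]  ! this rank's block
    starts = [0, 0, 0]     ! block offset within the global array

    call MPI_INIT(ierr)

    ! Describe this rank's block of the global array as an MPI datatype
    call MPI_TYPE_CREATE_SUBARRAY(3, gsizes, lsizes, starts, &
                                  MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, &
                                  view_type, ierr)
    call MPI_TYPE_COMMIT(view_type, ierr)

    ! ... the committed type can now describe communication or I/O ...

    call MPI_TYPE_FREE(view_type, ierr)
    call MPI_FINALIZE(ierr)
end program subarray_sketch
```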
Halts all processes until all of them have reached the barrier.
The computation of parameters, the allocation of memory, the association of pointers, and/or the execution of any other procedures that are necessary to set up the module.
Since only the processor with rank 0 reads and verifies the consistency of the user inputs, these are initially unavailable to the other processors. The purpose of this subroutine is therefore to distribute the user inputs to the remaining processors in the communicator.
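A hedged sketch of how such a distribution is commonly done with MPI_BCAST; the two inputs shown are illustrative stand-ins for the module's actual user inputs:

```fortran
program bcast_inputs_sketch
    use mpi
    implicit none

    integer :: ierr, proc_rank
    integer :: num_cells           ! illustrative user input
    real(kind(0d0)) :: cfl_target  ! illustrative user input

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)

    if (proc_rank == 0) then
        ! Only rank 0 reads and verifies the user inputs
        num_cells = 256
        cfl_target = 5d-1
    end if

    ! Distribute each input from rank 0 to the remaining processors
    call MPI_BCAST(num_cells, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(cfl_target, 1, MPI_DOUBLE_PRECISION, 0, &
                   MPI_COMM_WORLD, ierr)

    call MPI_FINALIZE(ierr)
end program bcast_inputs_sketch
```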
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| real(kind=kind(0d0)) | intent(inout) | | dimension(0:num_procs - 1) | proc_time |
| real(kind=kind(0d0)) | intent(inout) | | | time_avg |
The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to assign each processor approximately the same number of cells in each coordinate direction, and then recomputing the affected global parameters.
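For a single coordinate direction, assigning each processor approximately the same number of cells amounts to an even split plus a remainder. A minimal sketch, with all names hypothetical:

```fortran
! Split m_glb cells in one coordinate direction as evenly as possible
! among num_procs_x ranks; the first rem ranks receive one extra cell.
subroutine s_decompose_1d(m_glb, num_procs_x, rank_x, m_loc, offset)
    implicit none

    integer, intent(in) :: m_glb        ! global cell count in this direction
    integer, intent(in) :: num_procs_x  ! ranks along this direction
    integer, intent(in) :: rank_x       ! this rank's coordinate, 0-based
    integer, intent(out) :: m_loc       ! local cell count
    integer, intent(out) :: offset      ! global index of first local cell

    integer :: base, rem

    base = m_glb/num_procs_x
    rem = mod(m_glb, num_procs_x)

    if (rank_x < rem) then
        m_loc = base + 1
        offset = rank_x*(base + 1)
    else
        m_loc = base
        offset = rem*(base + 1) + (rank_x - rem)*base
    end if
end subroutine s_decompose_1d
```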
The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are communicated in this way, since the buffers of the cell-boundary locations may be computed directly from those of the cell-width distributions; a sketch of the exchange follows the argument table below. @param mpi_dir MPI communication coordinate direction @param pbc_loc Processor boundary condition (PBC) location
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| integer | intent(in) | | | mpi_dir |
| integer | intent(in) | | | pbc_loc |
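A hedged sketch of the halo exchange such a routine performs for the cell-width distribution in the x-direction; dx, buff_size, and the neighbor ranks bc_beg/bc_end are assumptions, not the module's actual interface:

```fortran
! Exchange buff_size cell widths with both x-direction neighbors.
! dx(0:m) holds the interior widths, padded by buff_size ghost cells
! on each side; bc_beg/bc_end are the neighbor ranks (assumed names).
subroutine s_exchange_dx_halo(dx, m, buff_size, bc_beg, bc_end)
    use mpi
    implicit none

    integer, intent(in) :: m, buff_size, bc_beg, bc_end
    real(kind(0d0)), intent(inout) :: dx(-buff_size:m + buff_size)
    integer :: ierr

    ! Fill the end-side ghost cells: send the first interior widths to
    ! the beginning-side neighbor while receiving the end-side
    ! neighbor's first interior widths
    call MPI_SENDRECV(dx(0), buff_size, MPI_DOUBLE_PRECISION, bc_beg, 0, &
                      dx(m + 1), buff_size, MPI_DOUBLE_PRECISION, bc_end, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

    ! Fill the beginning-side ghost cells with the opposite exchange
    call MPI_SENDRECV(dx(m - buff_size + 1), buff_size, &
                      MPI_DOUBLE_PRECISION, bc_end, 1, dx(-buff_size), &
                      buff_size, MPI_DOUBLE_PRECISION, bc_beg, 1, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
end subroutine s_exchange_dx_halo
```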
The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion, where each local extremum comes from a single process working within its assigned section of the computational domain. Note that the global extrema are bookkept only on the rank 0 processor; a sketch of the reduction follows the argument table below. @param icfl_max_loc Local maximum ICFL stability criterion @param vcfl_max_loc Local maximum VCFL stability criterion @param ccfl_max_loc Local maximum CCFL stability criterion @param Rc_min_loc Local minimum Rc stability criterion @param icfl_max_glb Global maximum ICFL stability criterion @param vcfl_max_glb Global maximum VCFL stability criterion @param ccfl_max_glb Global maximum CCFL stability criterion @param Rc_min_glb Global minimum Rc stability criterion
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| real(kind=kind(0d0)) | intent(in) | | | icfl_max_loc |
| real(kind=kind(0d0)) | intent(in) | | | vcfl_max_loc |
| real(kind=kind(0d0)) | intent(in) | | | ccfl_max_loc |
| real(kind=kind(0d0)) | intent(in) | | | Rc_min_loc |
| real(kind=kind(0d0)) | intent(out) | | | icfl_max_glb |
| real(kind=kind(0d0)) | intent(out) | | | vcfl_max_glb |
| real(kind=kind(0d0)) | intent(out) | | | ccfl_max_glb |
| real(kind=kind(0d0)) | intent(out) | | | Rc_min_glb |
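Since the global extrema are bookkept on rank 0 only, MPI_REDUCE with MPI_MAX or MPI_MIN is the natural fit. A sketch for two of the criteria, with the remaining pairs handled identically; the subroutine name is illustrative:

```fortran
! Reduce each rank's local stability extrema onto rank 0; the other
! criteria follow the same pattern with MPI_MAX or MPI_MIN.
subroutine s_reduce_extrema_sketch(icfl_max_loc, Rc_min_loc, &
                                   icfl_max_glb, Rc_min_glb)
    use mpi
    implicit none

    real(kind(0d0)), intent(in) :: icfl_max_loc, Rc_min_loc
    real(kind(0d0)), intent(out) :: icfl_max_glb, Rc_min_glb
    integer :: ierr

    ! Global maximum of the ICFL criterion, bookkept on rank 0 only
    call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MAX, 0, MPI_COMM_WORLD, ierr)

    ! Global minimum of the Rc criterion, likewise on rank 0 only
    call MPI_REDUCE(Rc_min_loc, Rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MIN, 0, MPI_COMM_WORLD, ierr)
end subroutine s_reduce_extrema_sketch
```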
The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced value is returned to each processor via the global variable; a sketch follows the argument table below. @param var_loc Some variable containing the local value to be reduced amongst all the processors in the communicator @param var_glb The globally reduced value
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| real(kind=kind(0d0)) | intent(in) | | | var_loc |
| real(kind=kind(0d0)) | intent(out) | | | var_glb |
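A minimal sketch of such a reduction with MPI_ALLREDUCE, which returns the result on every processor; swapping MPI_SUM for MPI_MIN or MPI_MAX yields the minimum- and maximum-reduction variants documented next. The subroutine name is illustrative:

```fortran
! Sum var_loc over all ranks, returning the result in var_glb on
! every processor in the communicator.
subroutine s_mpi_allreduce_sum_sketch(var_loc, var_glb)
    use mpi
    implicit none

    real(kind(0d0)), intent(in) :: var_loc
    real(kind(0d0)), intent(out) :: var_glb
    integer :: ierr

    call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
end subroutine s_mpi_allreduce_sum_sketch
```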
The following subroutine takes the input local variable from all processors and reduces it to the minimum of all values. The reduced value is returned to each processor via the global variable. @param var_loc Some variable containing the local value to be reduced amongst all the processors in the communicator @param var_glb The globally reduced value
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| real(kind=kind(0d0)) | intent(in) | | | var_loc |
| real(kind=kind(0d0)) | intent(out) | | | var_glb |
The following subroutine takes the input local variable from all processors and reduces it to the maximum of all values. The reduced value is returned to each processor via the global variable. @param var_loc Some variable containing the local value to be reduced amongst all the processors in the communicator @param var_glb The globally reduced value
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| real(kind=kind(0d0)) | intent(in) | | | var_loc |
| real(kind=kind(0d0)) | intent(out) | | | var_glb |
The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors; a sketch of the exchange follows the argument table below. @param q_cons_vf Cell-average conservative variables @param mpi_dir MPI communication coordinate direction @param pbc_loc Processor boundary condition (PBC) location
| Type | Intent | Optional | Attributes | Name |
|---|---|---|---|---|
| type(scalar_field) | intent(inout) | | dimension(sys_size) | q_cons_vf |
| integer | intent(in) | | | mpi_dir |
| integer | intent(in) | | | pbc_loc |
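A hedged sketch of one half of such an exchange in the x-direction, operating on pre-packed slabs rather than on the scalar_field arrays themselves; every name below is illustrative, not the module's actual interface:

```fortran
! Trade pre-packed slabs of the conservative variables with both
! x-direction neighbors. q_send_beg holds the first interior slab of
! every field, q_send_end the last; the caller unpacks the received
! slabs into the ghost cells. n is the slab size in doubles.
subroutine s_exchange_cons_buffers(n, q_send_beg, q_send_end, &
                                   q_recv_beg, q_recv_end, bc_beg, bc_end)
    use mpi
    implicit none

    integer, intent(in) :: n, bc_beg, bc_end
    real(kind(0d0)), intent(in) :: q_send_beg(n), q_send_end(n)
    real(kind(0d0)), intent(out) :: q_recv_beg(n), q_recv_end(n)
    integer :: ierr

    ! Fill the end-side ghost slab with the end-side neighbor's
    ! first interior slab
    call MPI_SENDRECV(q_send_beg, n, MPI_DOUBLE_PRECISION, bc_beg, 0, &
                      q_recv_end, n, MPI_DOUBLE_PRECISION, bc_end, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

    ! Fill the beginning-side ghost slab with the opposite exchange
    call MPI_SENDRECV(q_send_end, n, MPI_DOUBLE_PRECISION, bc_end, 1, &
                      q_recv_beg, n, MPI_DOUBLE_PRECISION, bc_beg, 1, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
end subroutine s_exchange_cons_buffers
```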
Module deallocation and/or disassociation procedures
The subroutine finalizes the MPI execution environment.