m_mpi_proxy Module

@file m_mpi_proxy.f90
@brief Contains module m_mpi_proxy
@author S. Bryngelson, K. Schmidmayer, V. Coralic, J. Meng, K. Maeda, T. Colonius
@version 1.0
@date JUNE 06 2019
@brief This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to harness basic MPI commands into more complicated procedures so as to accomplish the communication goals of the simulation.


Uses

  • m_global_parameters
  • m_derived_types
  • mpi

Used by

  • m_bubbles
  • m_variables_conversion
  • m_derived_variables
  • m_start_up
  • m_qbmm
  • m_time_steppers
  • m_riemann_solvers
  • m_rhs
  • p_main

Contents


Variables

real, public :: s_time
real, public :: e_time
real, public :: compress_time
real, public :: mpi_time
real, public :: decompress_time
integer, public :: nCalls_time = 0

Subroutines

public subroutine s_mpi_initialize()

The subroutine initializes the MPI execution environment and queries both the number of processors that will be available for the job and the local processor rank.

Arguments

None
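
A minimal sketch of the usual pattern behind such a routine (the variable names num_procs and proc_rank are illustrative, not necessarily those used in the module):

    subroutine s_mpi_initialize_sketch()
        use mpi
        integer :: num_procs, proc_rank, ierr
        ! Start the MPI execution environment
        call MPI_INIT(ierr)
        ! Query the number of processors available for the job
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        ! Query the rank of the local processor
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)
    end subroutine s_mpi_initialize_sketch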

public subroutine s_mpi_abort()

The subroutine terminates the MPI execution environment.

Arguments

None
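
A minimal sketch of what such a routine typically reduces to:

    subroutine s_mpi_abort_sketch()
        use mpi
        integer :: ierr
        ! Terminate all processes in the communicator with a nonzero error code
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    end subroutine s_mpi_abort_sketch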

public subroutine s_initialize_mpi_data(q_cons_vf)

This subroutine initializes the MPI data structures. @param q_cons_vf Conservative variables

Arguments

type(scalar_field), intent(in), dimension(sys_size) :: q_cons_vf
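
The exact data structures are specific to the implementation; one common pattern for routines of this kind is to build an MPI subarray type that describes each rank's block of the conservative-variable fields for parallel I/O. A minimal sketch with hypothetical extents (sizes_glb, sizes_loc, and starts are illustrative, not taken from the module):

    subroutine s_initialize_mpi_data_sketch()
        use mpi
        integer, dimension(3) :: sizes_glb, sizes_loc, starts
        integer :: view_type, ierr
        ! Hypothetical extents: global cell counts, this rank's block size,
        ! and this rank's starting offset within the global domain
        sizes_glb = (/256, 256, 256/)
        sizes_loc = (/64, 64, 64/)
        starts    = (/0, 0, 0/)
        ! Derived type describing this rank's block of the global array,
        ! e.g. for setting an MPI-IO file view of a conservative variable
        call MPI_TYPE_CREATE_SUBARRAY(3, sizes_glb, sizes_loc, starts, &
                                      MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, &
                                      view_type, ierr)
        call MPI_TYPE_COMMIT(view_type, ierr)
    end subroutine s_initialize_mpi_data_sketch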

public subroutine s_mpi_barrier()

Halts all processes until every process has reached the barrier.

Arguments

None

public subroutine s_initialize_mpi_proxy_module()

Computation of parameters, allocation of memory, association of pointers, and/or execution of any other procedures necessary to set up the module.

Arguments

None

public subroutine s_mpi_bcast_user_inputs()

Since only the processor with rank 0 reads and verifies the consistency of the user inputs, they are initially unavailable to the other processors. The purpose of this subroutine is therefore to distribute the user inputs to the remaining processors in the communicator.

Arguments

None
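
A minimal sketch of the broadcast pattern, using a few hypothetical user inputs (m, dt, and run_time_info stand in for the module's actual input variables):

    subroutine s_mpi_bcast_user_inputs_sketch()
        use mpi
        integer :: m, ierr
        real(kind=kind(0d0)) :: dt
        logical :: run_time_info
        ! Rank 0 read the inputs; broadcast each one to all other ranks
        call MPI_BCAST(m,  1, MPI_INTEGER,          0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(dt, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(run_time_info, 1, MPI_LOGICAL, 0, MPI_COMM_WORLD, ierr)
    end subroutine s_mpi_bcast_user_inputs_sketch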

public subroutine mpi_bcast_time_step_values(proc_time, time_avg)

Arguments

real(kind=kind(0d0)), intent(inout), dimension(0:num_procs - 1) :: proc_time
real(kind=kind(0d0)), intent(inout) :: time_avg

public subroutine s_mpi_decompose_computational_domain()

The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters.

Arguments

None
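
The decomposition details depend on the grid and the number of dimensions; the core idea of awarding each processor approximately the same number of cells in one direction can be sketched as follows (the subroutine and argument names are illustrative):

    subroutine s_decompose_1d_sketch(n_glb, num_procs, proc_rank, n_loc, offset)
        ! Split n_glb cells as evenly as possible among num_procs ranks;
        ! the first mod(n_glb, num_procs) ranks receive one extra cell.
        integer, intent(in)  :: n_glb, num_procs, proc_rank
        integer, intent(out) :: n_loc, offset
        integer :: rem
        n_loc = n_glb/num_procs
        rem   = mod(n_glb, num_procs)
        if (proc_rank < rem) then
            n_loc  = n_loc + 1
            offset = proc_rank*n_loc
        else
            offset = rem*(n_loc + 1) + (proc_rank - rem)*n_loc
        end if
    end subroutine s_decompose_1d_sketch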

public subroutine s_mpi_sendrecv_grid_variables_buffers(mpi_dir, pbc_loc)

The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions. @param mpi_dir MPI communication coordinate direction @param pbc_loc Processor boundary condition (PBC) location

Arguments

integer, intent(in) :: mpi_dir
integer, intent(in) :: pbc_loc
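
A minimal sketch of one leg of such an exchange for the cell-width distribution in the x-direction, assuming bc_beg and bc_end hold the ranks of the neighbors at the beginning and end of the local domain (the names are illustrative):

    subroutine s_sendrecv_dx_sketch(dx, m, buff_size, bc_beg, bc_end)
        use mpi
        integer, intent(in) :: m, buff_size, bc_beg, bc_end
        real(kind=kind(0d0)), intent(inout) :: dx(-buff_size:m + buff_size)
        integer :: ierr
        ! Send the first interior cell widths to the left neighbor and receive
        ! the right neighbor's cell widths into the right buffer region
        call MPI_SENDRECV(dx(0), buff_size, MPI_DOUBLE_PRECISION, bc_beg, 0, &
                          dx(m + 1), buff_size, MPI_DOUBLE_PRECISION, bc_end, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    end subroutine s_sendrecv_dx_sketch

The actual routine also performs the symmetric exchange, so that the buffer regions on both sides of the local domain are populated.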

public subroutine s_mpi_reduce_stability_criteria_extrema(icfl_max_loc, vcfl_max_loc, ccfl_max_loc, Rc_min_loc, icfl_max_glb, vcfl_max_glb, ccfl_max_glb, Rc_min_glb)

The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion. Note that each of the local extrema is from a single process, within its assigned section of the computational domain. Finally, note that the global extrema values are only bookkept on the rank 0 processor. @param icfl_max_loc Local maximum ICFL stability criterion @param vcfl_max_loc Local maximum VCFL stability criterion @param ccfl_max_loc Local maximum CCFL stability criterion @param Rc_min_loc Local minimum Rc stability criterion @param icfl_max_glb Global maximum ICFL stability criterion @param vcfl_max_glb Global maximum VCFL stability criterion @param ccfl_max_glb Global maximum CCFL stability criterion @param Rc_min_glb Global minimum Rc stability criterion

Arguments

real(kind=kind(0d0)), intent(in) :: icfl_max_loc
real(kind=kind(0d0)), intent(in) :: vcfl_max_loc
real(kind=kind(0d0)), intent(in) :: ccfl_max_loc
real(kind=kind(0d0)), intent(in) :: Rc_min_loc
real(kind=kind(0d0)), intent(out) :: icfl_max_glb
real(kind=kind(0d0)), intent(out) :: vcfl_max_glb
real(kind=kind(0d0)), intent(out) :: ccfl_max_glb
real(kind=kind(0d0)), intent(out) :: Rc_min_glb
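
A minimal sketch of how two of these reductions might look, assuming double-precision criteria and rank 0 as the root (only the ICFL maximum and Rc minimum are shown; the remaining criteria follow the same pattern):

    subroutine s_reduce_extrema_sketch(icfl_max_loc, Rc_min_loc, &
                                       icfl_max_glb, Rc_min_glb)
        use mpi
        real(kind=kind(0d0)), intent(in)  :: icfl_max_loc, Rc_min_loc
        real(kind=kind(0d0)), intent(out) :: icfl_max_glb, Rc_min_glb
        integer :: ierr
        ! Global extrema of the local criteria are gathered on rank 0 only
        call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MAX, 0, MPI_COMM_WORLD, ierr)
        call MPI_REDUCE(Rc_min_loc, Rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MIN, 0, MPI_COMM_WORLD, ierr)
    end subroutine s_reduce_extrema_sketch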

public subroutine s_mpi_allreduce_sum(var_loc, var_glb)

This subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced value is returned in var_glb on every processor in the communicator. @param var_loc Some variable containing the local value to be reduced amongst all the processors in the communicator @param var_glb The globally reduced value

Arguments

real(kind=kind(0d0)), intent(in) :: var_loc
real(kind=kind(0d0)), intent(out) :: var_glb
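
A minimal sketch of the corresponding reduction, assuming a double-precision variable:

    subroutine s_allreduce_sum_sketch(var_loc, var_glb)
        use mpi
        real(kind=kind(0d0)), intent(in)  :: var_loc
        real(kind=kind(0d0)), intent(out) :: var_glb
        integer :: ierr
        ! Every rank contributes var_loc and receives the global sum
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                           MPI_SUM, MPI_COMM_WORLD, ierr)
    end subroutine s_allreduce_sum_sketch

The s_mpi_allreduce_min and s_mpi_allreduce_max subroutines below follow the same pattern with MPI_MIN and MPI_MAX in place of MPI_SUM.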

public subroutine s_mpi_allreduce_min(var_loc, var_glb)

This subroutine takes the input local variable from all processors and reduces it to the minimum of all values. The reduced value is returned in var_glb on every processor in the communicator. @param var_loc Some variable containing the local value to be reduced amongst all the processors in the communicator @param var_glb The globally reduced value

Arguments

real(kind=kind(0d0)), intent(in) :: var_loc
real(kind=kind(0d0)), intent(out) :: var_glb

public subroutine s_mpi_allreduce_max(var_loc, var_glb)

This subroutine takes the input local variable from all processors and reduces it to the maximum of all values. The reduced value is returned in var_glb on every processor in the communicator. @param var_loc Some variable containing the local value to be reduced amongst all the processors in the communicator @param var_glb The globally reduced value

Arguments

real(kind=kind(0d0)), intent(in) :: var_loc
real(kind=kind(0d0)), intent(out) :: var_glb

public subroutine s_mpi_sendrecv_conservative_variables_buffers(q_cons_vf, mpi_dir, pbc_loc)

The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors. @param q_cons_vf Cell-average conservative variables @param mpi_dir MPI communication coordinate direction @param pbc_loc Processor boundary condition (PBC) location

Arguments

type(scalar_field), intent(inout), dimension(sys_size) :: q_cons_vf
integer, intent(in) :: mpi_dir
integer, intent(in) :: pbc_loc
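
The real routine handles all coordinate directions and both boundary sides, and packs and unpacks send/receive buffers around the MPI exchange. A minimal sketch of the packing step for one case, assuming the scalar_field type (from m_derived_types) stores its data in an array component sf and that sys_size comes from m_global_parameters; the loop bounds and buffer layout are illustrative:

    subroutine s_pack_cons_buffer_sketch(q_cons_vf, q_send, buff_size, n)
        use m_derived_types                       ! assumed to provide type(scalar_field)
        use m_global_parameters, only: sys_size
        ! Flatten the first buff_size layers of interior cells of every
        ! conservative field into a contiguous 1-D send buffer (x-direction,
        ! beginning side of a 2-D domain with the z-index fixed at 0)
        integer, intent(in) :: buff_size, n
        type(scalar_field), dimension(sys_size), intent(in) :: q_cons_vf
        real(kind=kind(0d0)), dimension(buff_size*sys_size*(n + 1)), intent(out) :: q_send
        integer :: i, j, k, r
        do k = 0, n
            do j = 0, buff_size - 1
                do i = 1, sys_size
                    r = (i - 1) + sys_size*(j + buff_size*k) + 1
                    q_send(r) = q_cons_vf(i)%sf(j, k, 0)
                end do
            end do
        end do
    end subroutine s_pack_cons_buffer_sketch

The packed buffer would then be exchanged with the neighboring rank (e.g. via MPI_SENDRECV, as in the grid-variable sketch above) and unpacked into the buffer region of q_cons_vf on the receiving side.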

public subroutine s_finalize_mpi_proxy_module()

Module deallocation and/or disassociation procedures

Arguments

None

public subroutine s_mpi_finalize()

The subroutine finalizes the MPI execution environment.

Arguments

None