MPI checks in configure

Hi, the MPI checks in configure are not sufficient for the new exchanger.

I have tested the new exchanger (master branch) as the ICON mo_communication backend, built against OpenMPI, on the Levante GPU partition. Two OpenMPI versions were used, openmpi-4.1.2-hzabdh and openmpi-4.1.4-3qb4sy. Both passed the YAXT MPI checks at configure time, but in ICON the new exchanger worked only with openmpi-4.1.4-3qb4sy. With openmpi-4.1.2-hzabdh the run failed with:

Bus error: nonexistent physical address
==== backtrace (tid: 125677) ====
15:  0 0x0000000000012c20 .annobin_sigaction.c()  sigaction.c:0
15:  1 0x0000000000002c30 memcpy_uncached_load_sse41()  /home/k/k202066/.spack/stage/spack-stage-gdrcopy-2.2-5dxzbgq35iriw3n2zewaxri6q2d65ffl/spack-src/src/memcpy_sse41.c:76

The log file of the openmpi-4.1.2-hzabdh run is /work/k20200/k202149/icon-base-libs-yaxt_new_exchanger/run/LOG.exp.qubicc_r02b07.yaxt_new_exchanger.run.3620955.o
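
To make the gap concrete, below is a minimal sketch (my own code, not part of YAXT) of the kind of additional check configure could run through MPI_LAUNCH: it exchanges a strided MPI derived datatype directly between CUDA device buffers, which is roughly what the ddt_packed exchanger asks the MPI library to do and what the existing host-only checks never exercise. I am assuming the bus error is tied to CUDA-aware handling of device buffers (the gdrcopy frame in the backtrace points that way); the file name is made up, and it would have to be compiled with the same mpicc plus -I${CUDA_ROOT}/include and -lcudart.

/* cuda_mpi_ddt_check.c -- hypothetical configure-time check (sketch only).
 * Exchanges a strided MPI derived datatype between CUDA device buffers,
 * i.e. the pattern the ddt_packed exchanger relies on. An MPI build that
 * passes the current host-only checks but mishandles device buffers should
 * fail here instead of inside ICON. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  enum { n = 1024 };
  double *dev_src, *dev_dst;
  if (cudaMalloc((void **)&dev_src, n * sizeof(double)) != cudaSuccess
      || cudaMalloc((void **)&dev_dst, n * sizeof(double)) != cudaSuccess) {
    fprintf(stderr, "cudaMalloc failed\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
  }
  cudaMemset(dev_src, 0, n * sizeof(double));

  /* strided datatype: every second element, similar to ddt-based packing */
  MPI_Datatype strided;
  MPI_Type_vector(n / 2, 1, 2, MPI_DOUBLE, &strided);
  MPI_Type_commit(&strided);

  int peer = (rank + 1) % size;
  MPI_Request req[2];
  MPI_Irecv(dev_dst, 1, strided, peer, 0, MPI_COMM_WORLD, &req[0]);
  MPI_Isend(dev_src, 1, strided, peer, 0, MPI_COMM_WORLD, &req[1]);
  MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

  MPI_Type_free(&strided);
  cudaFree(dev_src);
  cudaFree(dev_dst);
  if (rank == 0) puts("cuda-aware ddt exchange: OK");
  MPI_Finalize();
  return 0;
}

If a check like this were run by configure (or at least by make check) via MPI_LAUNCH, openmpi-4.1.2-hzabdh would presumably be rejected before a full ICON experiment is attempted.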

Modification in the YAXT source code to use the new exchanger:

diff --git a/src/xt_config.c b/src/xt_config.c
index f352c3e9..b6fdfc3c 100644
--- a/src/xt_config.c
+++ b/src/xt_config.c
@@ -68,7 +68,7 @@
 #include "core/ppm_xfuncs.h"

 struct Xt_config_ xt_default_config = {
-  .exchanger_new = xt_exchanger_mix_isend_irecv_new,
+  .exchanger_new = xt_exchanger_irecv_isend_ddt_packed_new,
   .exchanger_team_share = NULL,
   .idxv_cnv_size = CHEAP_VECTOR_SIZE,
   .flags = 0,

Build script for YAXT:

module --force purge
spack unload -a
module load nvhpc/22.5-gcc-11.2.0 git patch
# spack load openmpi@4.1.2%nvhpc
spack load openmpi@4.1.4%nvhpc/3qb4sy

SW_ROOT='/sw/spack-levante'
CUDA_ROOT="${SW_ROOT}/nvhpc-22.5-v4oky3/Linux_x86_64/22.5/cuda"
# MPI_ROOT="${SW_ROOT}/openmpi-4.1.2-hzabdh"
MPI_ROOT="${SW_ROOT}/openmpi-4.1.4-3qb4sy"

# nordc is needed if OpenACC is used in a shared library
# -lnvToolsExt is for profiling
# build YAXT with CUDA and OpenACC directives for GPU
# the libcuda path and -lcuda should not be passed to ld(1); libcuda is loaded at runtime via dlopen()
${sourcedir}/configure \
  CC="${MPI_ROOT}/bin/mpicc" FC="${MPI_ROOT}/bin/mpif90" \
  CFLAGS="-O2 -g -I${CUDA_ROOT}/include -acc=gpu -gpu=cc80,nordc -Minfo" \
  LDFLAGS="-lnvToolsExt -acc=gpu -gpu=cc80,nordc" \
  MPI_LAUNCH="/usr/bin/srun -p gpu -A k20200 -N 1 " \
  --with-idxtype=long --disable-static

make
make check
make install
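
For completeness, here is a sketch of a standalone smoke test I would run against the installed library to probe the patched default exchanger without a full ICON experiment. It only uses the public YAXT C API (xt_idxvec_new, xt_xmap_all2all_new, xt_redist_p2p_new, xt_redist_s_exchange1). The file name is made up, the library name -lyaxt_c in the build line is my assumption, and I am also assuming the CUDA-enabled build accepts raw device pointers in xt_redist_s_exchange1; with host buffers it would still confirm which exchanger is active, just not the GPU path.

/* yaxt_ddt_smoke.c -- hypothetical standalone smoke test (sketch only).
 * Each rank redistributes a small index block to its neighbour through the
 * default exchanger, using CUDA device buffers.
 * Build roughly as:
 *   mpicc yaxt_ddt_smoke.c -I$PREFIX/include -I$CUDA_ROOT/include \
 *         -L$PREFIX/lib -lyaxt_c -lcudart
 */
#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>
#include <yaxt.h>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  xt_initialize(MPI_COMM_WORLD);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  enum { n = 16 };
  Xt_int src_idx[n], dst_idx[n];
  for (int i = 0; i < n; ++i) {
    src_idx[i] = (Xt_int)(rank * n + i);                 /* indices this rank owns */
    dst_idx[i] = (Xt_int)(((rank + 1) % size) * n + i);  /* indices this rank wants */
  }

  Xt_idxlist src_list = xt_idxvec_new(src_idx, n);
  Xt_idxlist dst_list = xt_idxvec_new(dst_idx, n);
  Xt_xmap xmap = xt_xmap_all2all_new(src_list, dst_list, MPI_COMM_WORLD);
  Xt_redist redist = xt_redist_p2p_new(xmap, MPI_DOUBLE);

  double *dev_src, *dev_dst;
  cudaMalloc((void **)&dev_src, n * sizeof(double));
  cudaMalloc((void **)&dev_dst, n * sizeof(double));
  cudaMemset(dev_src, 0, n * sizeof(double));

  /* exercises whichever exchanger xt_default_config selects */
  xt_redist_s_exchange1(redist, dev_src, dev_dst);

  xt_redist_delete(redist);
  xt_xmap_delete(xmap);
  xt_idxlist_delete(dst_list);
  xt_idxlist_delete(src_list);
  cudaFree(dev_src);
  cudaFree(dev_dst);

  if (rank == 0) puts("redist exchange completed");
  xt_finalize();
  MPI_Finalize();
  return 0;
}

It could be launched with the same srun command used for MPI_LAUNCH above, plus e.g. -n 2.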

The ICON version I used was yaxt_new_exchanger-levante_phase2_gpu.
