Dear DiffNets developers,
Thanks a lot for developing this amazing tool! I am very interested in applying this framework in my research, and I have installed the package exactly following the steps in the user guide. However, I have a question about running this tool on supercomputing resources.
Specifically, I was trying to import the package on an interactive GPU node requested from PSC Bridges-2, where CUDA was enabled. On the node, I loaded openmpi/3.1.6-gcc10.2.0 so that the job could be parallelized. However, when importing the package (import diffnets) in a Python console, I got the error below:
[r004.ib.bridges2.psc.edu:99803] OPAL ERROR: Not initialized in file pmix3x_client.c at line 112
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:
version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.
Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.
Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
[r004.ib.bridges2.psc.edu:99803] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
Since this error is the same as the one GROMACS returns when commands are executed with gmx_mpi alone instead of mpirun -np xx gmx_mpi, I assumed this had something to do with Bridges-2 trying to run an MPI process but somehow failing to call Open MPI. Looking at the source code, I was not entirely sure which part of the code was related to this issue. I know this issue might be more related to the configuration of Bridges-2 than to the package itself, but the PSC support team does not seem able to solve it. Therefore, I'm wondering if you could provide some insight into this issue, or suggest how we should start troubleshooting. Thanks a lot!
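To narrow things down, here is a minimal check I would try, assuming the MPI dependency enters through mpi4py (which enspara imports); this is just a diagnostic sketch, not something from the DiffNets docs:

```python
# Minimal reproduction: import mpi4py directly instead of diffnets.
# If this alone aborts with the same PMI error, the problem is in the
# mpi4py / Open MPI stack on the node rather than in diffnets itself.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()}")
```

Running the same script through the launcher, e.g. mpirun -np 1 python test_mpi.py, would then show whether the failure is specific to Python processes started outside srun/mpirun.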
Update: I later looked into the module as best I could. My understanding is that diffnets (or enspara) was not able to initialize the MPI library because the Python process was not launched with srun or mpirun. It seems that importing either package triggers MPI_Init, which then fails for some reason ...
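If that is the case, one possible workaround (a sketch only; I have not verified it on Bridges-2) would be to disable mpi4py's automatic MPI_Init before importing diffnets, using the rc switch that mpi4py provides:

```python
# Sketch of a possible workaround: suppress the automatic MPI_Init that
# mpi4py performs at import time, so that "import diffnets" can run in an
# interactive console outside srun/mpirun. Unverified on Bridges-2.
import mpi4py
mpi4py.rc.initialize = False  # must be set before "from mpi4py import MPI"
mpi4py.rc.finalize = False    # likewise skip MPI_Finalize at interpreter exit

import diffnets  # should no longer trigger MPI_Init on import
```

Of course, any code path in diffnets/enspara that actually performs MPI communication would presumably still need to be launched through mpirun or a PMI-aware srun.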