SimGrid  3.15
Versatile Simulation of Distributed Systems
SMPI: Simulate real MPI applications

Programming environment for the simulation of MPI applications.

This programming environment enables the study of MPI applications by emulating them on top of the SimGrid simulator. This is particularly useful for studying existing MPI applications within the comfort of the simulator. The motivation for this work is detailed in the reference article (available at http://hal.inria.fr/inria-00527150).

Our goal is to enable the study of unmodified MPI applications, and even if some constructs and features are still missing, we consider SMPI to be stable and usable in production. For further scalability, you may modify your code to speed up your studies or save memory space. Improved simulation accuracy requires some specific care from you.

Using SMPI

If you're absolutely new to MPI, you should first take our online SMPI CourseWare, and/or take an MPI course at your favorite university. If you already know MPI, SMPI should sound very familiar to you: use smpicc instead of mpicc, and smpirun instead of mpirun, and you're almost set. Once you get a virtual platform description (see Describing the virtual platform), you're good to go.

Compiling your code

For that, simply use smpicc as a compiler just like you use mpicc with other MPI implementations. This script still calls your default compiler (gcc, clang, ...) and adds the right compilation flags along the way.
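For instance, the following minimal C program (a hypothetical hello.c, using nothing beyond basic MPI calls) compiles with smpicc exactly as it would with mpicc:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int rank, size;
  MPI_Init(&argc, &argv);                 /* SMPI provides the classical MPI entry points */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("Hello from rank %d out of %d\n", rank, size);
  MPI_Finalize();
  return 0;
}

You would build it with smpicc hello.c -o hello, just as with any other MPI compiler wrapper, and execute it with smpirun as described below.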

Alas, some build infrastructures cannot cope with that, and your ./configure may fail, reporting that the compiler is not functional. If this happens, define the SMPI_PRETEND_CC environment variable before running the configuration. Do not leave it defined when actually using SMPI!

SMPI_PRETEND_CC=1 ./configure # here come the configure parameters
make
Warning
Again, make sure that SMPI_PRETEND_CC is not set when you actually compile your application. It is just a work-around for some configure-scripts and replaces some internals by "return 0;". Your simulation will not work with this variable set!

Executing your code on the simulator

Use the smpirun script as follows:

smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -blah

smpirun accepts other parameters, such as -np if you don't want to use all the hosts defined in the hostfile, -map to display on which host each rank gets mapped, or -trace to activate tracing during the simulation. You can get the full list by running

smpirun -help

Simulating collective operations

MPI collective operations are crucial to the performance of MPI applications and must be carefully optimized according to many parameters. Every existing implementation provides several algorithms for each collective operation, and selects by default the best suited one, depending on the sizes sent, the number of nodes, the communicator, or the communication library being used. These decisions are based on empirical results and theoretical complexity estimation, and are very different between MPI implementations. In most cases, the users can also manually tune the algorithm used for each collective operation.

SMPI can simulate the behavior of several MPI implementations: OpenMPI, MPICH, STAR-MPI, and MVAPICH2. For that, it provides 115 collective algorithms and several selector algorithms, which were collected directly from the source code of the targeted MPI implementations.

You can switch the automatic selector through the smpi/coll_selector configuration item; its possible values correspond to the selectors of the targeted MPI implementations.

Available algorithms

You can also pick the algorithm used for each collective with the corresponding configuration item. For example, to use the pairwise alltoall algorithm, add --cfg=smpi/alltoall:pair to the smpirun command line. This overrides the selector (if any) for this collective, so the algorithm you name is always the one used.

Warning: Some collectives may require specific conditions to be executed correctly (for instance a communicator with a power-of-two number of nodes only), which are currently not enforced by SimGrid. Crashes can be expected when trying these algorithms with unusual sizes/parameters.
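As an illustration of what such a setting applies to, here is a hedged sketch of a plain alltoall exchange (a hypothetical alltoall.c); running it under smpirun with --cfg=smpi/alltoall:pair would force the pair algorithm for this call without any change to the code:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Each rank sends one int to every other rank. Which algorithm implements
   this MPI_Alltoall call is decided by the selector, or forced with
   --cfg=smpi/alltoall:pair on the smpirun command line. */
int main(int argc, char *argv[])
{
  int rank, size;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int *sendbuf = malloc(size * sizeof(int));
  int *recvbuf = malloc(size * sizeof(int));
  for (int i = 0; i < size; i++)
    sendbuf[i] = rank;             /* everybody sends its own rank number */

  MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
  printf("Rank %d received %d ints\n", rank, size);

  free(sendbuf);
  free(recvbuf);
  MPI_Finalize();
  return 0;
}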

Algorithms are listed per collective operation: MPI_Alltoall (most of these are best described in STAR-MPI), MPI_Allreduce, MPI_Reduce_scatter, MPI_Allgather, MPI_Allgatherv, and MPI_Bcast.

Automatic evaluation

(Warning: This is experimental and may be removed or crash easily)

An automatic version is available for each collective (and even as a selector). This version loops over all the other implemented algorithms for that particular collective, applying each of them while benchmarking the time taken by each process. It then reports the quickest algorithm for each process, as well as the globally quickest one. This is still unstable, and a few algorithms that require a specific number of nodes may crash.

Adding an algorithm

To add a new algorithm, one should check in the src/smpi/colls folder how the other algorithms are coded. Plain MPI code cannot be used inside SimGrid, so algorithms have to be changed to use the SMPI version of the calls instead (MPI_Send becomes smpi_mpi_send). Some functions may have different signatures than their MPI counterparts; please check the other algorithms, or contact us through the SimGrid developers' mailing list.

Example: adding a "pair" version of the Alltoall collective.
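Since the actual listing is best read in src/smpi/colls, here is only a hedged sketch of the pairwise idea, written with standard MPI calls for readability: at step i, rank r exchanges its block with rank r XOR i, which requires a power-of-two number of ranks (see the warning above). In the real SimGrid sources, the calls are the internal SMPI variants (e.g. smpi_mpi_send instead of MPI_Send), whose exact signatures should be copied from the neighboring algorithm files.

#include <mpi.h>

/* Sketch of a "pair" alltoall: at step i, rank r exchanges its block with
   rank (r XOR i). Assumes a power-of-two number of ranks. Written with
   plain MPI calls for readability only; an actual SMPI algorithm in
   src/smpi/colls must use the internal SMPI equivalents instead. */
int alltoall_pair(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                  void *recvbuf, int recvcount, MPI_Datatype recvtype,
                  MPI_Comm comm)
{
  int rank, size;
  MPI_Aint lb, sendext, recvext;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);
  MPI_Type_get_extent(sendtype, &lb, &sendext);
  MPI_Type_get_extent(recvtype, &lb, &recvext);

  for (int i = 0; i < size; i++) {
    int peer = rank ^ i;   /* exchange partner at this step */
    MPI_Sendrecv((char *)sendbuf + peer * sendcount * sendext, sendcount, sendtype, peer, 0,
                 (char *)recvbuf + peer * recvcount * recvext, recvcount, recvtype, peer, 0,
                 comm, MPI_STATUS_IGNORE);
  }
  return MPI_SUCCESS;
}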

Tracing of internal communications

By default, the collective operations are traced as a unique operation because tracing all point-to-point communications composing them could result in overloaded, hard to interpret traces. If you want to debug and compare collective algorithms, you should set the tracing/smpi/internals configuration item to 1 instead of 0.

(Illustration: traces of two alltoall collective runs on 16 nodes, the first with a ring algorithm, the second with a pairwise one.)


What can run within SMPI?

You can run unmodified MPI applications (both C and Fortran) within SMPI, provided that you only use MPI calls that we implemented. Global variables should be handled correctly on Linux systems.

MPI coverage of SMPI

Our coverage of the interface is decent, but still incomplete; given the size of the MPI standard, we may well never manage to implement absolutely all existing primitives. Currently, we have very sparse support for one-sided communications, and almost none for I/O primitives. But our coverage is still quite extensive: we pass a very large part of the MPICH coverage tests.

The full list of not yet implemented functions is documented in the file include/smpi/smpi.h, between two lines containing the FIXME marker. If you really need a missing feature, please get in touch with us: we can guide you through the SimGrid code to help you implement it, and we'd be glad to integrate it into the main project afterward if you contribute it back.

Global variables

Concerning the globals, the problem comes from the fact that MPI processes usually run as separate UNIX processes, while in SMPI they are all folded into threads of a single system process. Global variables, which are private to each MPI process in a regular run, therefore become shared between all ranks in SMPI. This is rather problematic and, without the privatization mechanism described below, would force you to modify your application to privatize its global variables.
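As a tiny (hypothetical) illustration of the issue:

#include <mpi.h>
#include <stdio.h>

/* In a regular MPI run, each rank (a separate UNIX process) has its own
   copy of 'iterations'. When the ranks are folded into threads of a single
   process, this global is shared by all ranks unless it is privatized. */
static int iterations = 0;

int main(int argc, char *argv[])
{
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  for (int i = 0; i < 10; i++)
    iterations++;   /* without privatization, every rank updates the same counter */
  printf("Rank %d did %d iterations\n", rank, iterations);
  MPI_Finalize();
  return 0;
}

In a regular MPI run, each rank prints 10; with all ranks sharing the same counter, the printed values would instead depend on the rank scheduling rather than on each rank's own work.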

We tried several techniques to work around this issue. We used to have a script that automatically privatized the globals through static analysis of the source code, but it was not robust enough to be used in production. This issue, as well as several potential solutions, is discussed in the article "Automatic Handling of Global Variables for Multi-threaded MPI Programs", available at http://charm.cs.illinois.edu/newPapers/11-23/paper.pdf (note that this article does not deal with SMPI but with a competing solution called AMPI that suffers from the same issue).

SimGrid can duplicate and dynamically switch the .data and .bss segments of the ELF process when switching between MPI ranks, allowing each rank to have its own copy of the global variables. This feature is expected to work correctly on Linux and BSD, so smpirun activates it by default. As no copy is involved, performance should not be altered (but memory occupation will be higher).

If you want to turn it off, pass -no-privatize to smpirun. This may be necessary if your application uses dynamic libraries as the global variables of these libraries will not be privatized. You can fix this by linking statically with these libraries (but NOT with libsimgrid, as we need SimGrid's own global variables).

Adapting your MPI code for further scalability

As detailed in the reference article (available at http://hal.inria.fr/inria-00527150), you may want to adapt your code to improve the simulation performance. But these tricks may seriously hinder the quality of your results (or even prevent the app from running) if used wrongly. We assume that if you want to simulate an HPC application, you know what you are doing. Don't prove us wrong!

Reducing your memory footprint

If you get short on memory (the whole app is executed on a single node when simulated), you should have a look at the SMPI_SHARED_MALLOC and SMPI_SHARED_FREE macros. They allow you to share memory areas between processes: the same malloc line, executed by each process, will point to the exact same memory area. So if you have a malloc of 2MB and 16 processes, these macros will reduce your memory consumption from 2MB*16 to 2MB: only one block for all processes.

If your program is OK with a block containing garbage values, because all processes write to and read from the same place without any kind of coordination, then this macro can dramatically shrink your memory consumption. For example, this is very beneficial to a matrix multiplication code, as all blocks will be stored in the same area. Of course, the resulting computations will be useless, but you can still study the application behavior this way.
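Here is a minimal sketch of how these macros are meant to be used, assuming that SMPI_SHARED_MALLOC takes the size in bytes and SMPI_SHARED_FREE the returned pointer, and that the macros come from the include/smpi/smpi.h header mentioned above; double-check against examples/smpi/NAS/dt.c.

#include <mpi.h>
#include <smpi/smpi.h>   /* assumed to provide SMPI_SHARED_MALLOC / SMPI_SHARED_FREE */

/* A large work buffer that every rank fills and reads without caring about
   its actual content. Allocated through SMPI_SHARED_MALLOC, the same source
   line yields one single block shared by all simulated ranks, so 16 ranks
   consume 2MB instead of 16 x 2MB. */
#define BLOCK_SIZE (2 * 1024 * 1024)

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);

  char *block = SMPI_SHARED_MALLOC(BLOCK_SIZE);
  /* ... computations reading and writing 'block', whose content is garbage ... */
  SMPI_SHARED_FREE(block);

  MPI_Finalize();
  return 0;
}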

Naturally, this won't work if your code is data-dependent. For example, a Jacobi iterative computation depends on the result computed by the code to detect convergence conditions, so turning them into garbage by sharing the same memory area between processes does not seem very wise. You cannot use the SMPI_SHARED_MALLOC macro in this case, sorry.

This feature is demoed by the example file examples/smpi/NAS/dt.c

Toward faster simulations

If your application is too slow to simulate, try using SMPI_SAMPLE_LOCAL, SMPI_SAMPLE_GLOBAL and friends to indicate which computation loops can be sampled. Some of the loop iterations will be executed to measure their duration, and this duration will then be reused for the subsequent iterations. These samples are done per processor with SMPI_SAMPLE_LOCAL, and shared between all processors with SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution time of your loop iterations is not stable.
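A hedged sketch of how such sampling is typically written follows; the two macro arguments are assumed here to be the number of iterations actually benchmarked and a precision threshold, so double-check them against include/smpi/smpi.h and examples/smpi/NAS/ep.c before relying on this.

#include <mpi.h>
#include <smpi/smpi.h>   /* assumed to provide the SMPI_SAMPLE_* macros */
#include <stdio.h>

/* Stand-in for a regular, data-independent computation kernel. */
static double kernel(int n)
{
  double acc = 0.0;
  for (int i = 0; i < n; i++)
    acc += (double)i * 0.5;
  return acc;
}

int main(int argc, char *argv[])
{
  MPI_Init(&argc, &argv);

  double result = 0.0;
  for (int iter = 0; iter < 1000; iter++) {
    /* Assumed semantics: benchmark about 25 iterations, then reuse their
       average duration for the remaining ones instead of executing them. */
    SMPI_SAMPLE_LOCAL(25, 0.01) {
      result += kernel(100000);
    }
  }
  printf("result: %g\n", result);

  MPI_Finalize();
  return 0;
}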

This feature is demoed by the example file examples/smpi/NAS/ep.c

Ensuring accurate simulations

Out of the box, SimGrid may give you fairly accurate results, but there are plenty of factors that could go wrong and make your results inaccurate or even plainly wrong. Actually, you can only get accurate results out of a carefully built model, covering both the system hardware and your application. Such models are hard to carry over and reuse in other settings, because elements that are irrelevant to one application (say, the latency of point-to-point communications, collective operation implementation details or CPU-network interaction) may be crucial to another. The dream of the perfect model, encompassing every aspect, is only a chimera, as the only perfect model of reality is reality itself. If you go for simulation, then you have to ignore some irrelevant aspects of reality, but which aspects are irrelevant is actually application-dependent...

The only way to assess whether your settings provide accurate results is to double-check these results. If possible, you should first run the same experiment in simulation and in real life, gathering as much information as you can. Try to understand the discrepancies that you observe between both settings (visualization can be precious for that). Then, try to modify your model (of the platform, of the collective operations) to reduce the most prominent differences.

If the discrepancies come from the computing time, try adapting the smpi/host-speed configuration item: reduce it if your simulation runs faster than the real execution. If the error comes from the communications, then you need to fiddle with your platform file.

Be inventive in your modeling. Don't be afraid if the names given by SimGrid do not match the real names: we got very good results by modeling multicore/GPU machines as a set of separate hosts interconnected by very fast networks (but don't trust your model just because it has the right names in the right places, either).

Finally, you may want to check this article on the classical pitfalls in modeling distributed systems.