Intel® MPI Library Developer Reference for Linux* OS
Intel® MPI Library provides loadable shared modules that add native support for the following file systems:
Panasas* ActiveScale* File System (PanFS)
Lustre* File System
IBM* General Parallel File System* (GPFS*)
Set the I_MPI_EXTRA_FILESYSTEM environment variable to on to enable parallel file system support. Set the I_MPI_EXTRA_FILESYSTEM_LIST environment variable to request native support for the specific file system. For example, to request native support for Panasas* ActiveScale* File System, do the following:
$ mpirun -env I_MPI_EXTRA_FILESYSTEM=on -env I_MPI_EXTRA_FILESYSTEM_LIST=panfs -n 2 ./test
Turn on/off native parallel file system support.
Syntax
I_MPI_EXTRA_FILESYSTEM=<arg>
Arguments
<arg>                     Binary indicator
enable | yes | on | 1     Enable native support for parallel file systems.
disable | no | off | 0    Disable native support for parallel file systems. This is the default value.
Description
Set this environment variable to enable parallel file system support. The I_MPI_EXTRA_FILESYSTEM_LIST environment variable must be set to request native support for the specified file system.
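As an alternative to passing `-env` options on the `mpirun` command line, the variables can be exported in the shell before launching the application. A minimal sketch (the application name `./test` is illustrative, and `lustre` stands in for whichever file system you target):

```shell
# Enable native parallel file system support for all subsequent MPI runs.
# I_MPI_EXTRA_FILESYSTEM_LIST must also be set; see its description below.
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre
echo "extra filesystem: $I_MPI_EXTRA_FILESYSTEM ($I_MPI_EXTRA_FILESYSTEM_LIST)"
# mpirun -n 2 ./test    # launch the MPI application with these settings
```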
Select support for specific file systems.
Syntax
I_MPI_EXTRA_FILESYSTEM_LIST=<fs>[,<fs>,...,<fs>]
Arguments
<fs>      Define a target file system
panfs     Panasas* ActiveScale* File System
lustre    Lustre* File System
gpfs      IBM* General Parallel File System* (GPFS*)
Description
Set this environment variable to request support for the specified parallel file systems. This environment variable is handled only if I_MPI_EXTRA_FILESYSTEM is enabled. Intel® MPI Library will try to load shared modules to support the file systems specified by I_MPI_EXTRA_FILESYSTEM_LIST.
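Several file systems can be requested at once by listing them comma-separated, as the syntax above shows. A short sketch (the application name `./my_app` is a placeholder):

```shell
# Request native support for multiple parallel file systems in one job.
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=panfs,lustre,gpfs
echo "$I_MPI_EXTRA_FILESYSTEM_LIST"
# mpirun -n 4 ./my_app    # placeholder for your MPI application
```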
Enable/disable an alternative algorithm for MPI-IO collective read operation on the Lustre* file system.
Syntax
I_MPI_LUSTRE_STRIPE_AWARE=<arg>
Arguments
<arg> |
Binary indicator |
enable | yes | on | 1 |
Enable the stripe-aware algorithm. |
disable | no | off | 0 |
Use the default algorithm. This is the default value. |
Description
By default, when ROMIO* collective buffering is enabled, Intel® MPI Library uses the following algorithm for MPI-IO collective reads on the Lustre* file system: a single rank is selected on each compute node to read data from the file system and then redistribute it among its local peers. However, this may lead to I/O contention if several I/O processes access the same object storage target (OST) at the same time. The new algorithm limits the number of I/O processes to the number of OSTs that contain the data, when this information is available. The algorithm also ensures that each I/O rank communicates with at most one OST.
The default algorithm may still be more efficient in some cases, for example when a single I/O process cannot saturate the bandwidth of a single OST.
Using this algorithm requires that you also do the following:
Set the ROMIO settings striping_unit and striping_factor according to the layout of the file being read.
Set the environment variables: I_MPI_EXTRA_FILESYSTEM=on, I_MPI_EXTRA_FILESYSTEM_LIST=lustre.
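The steps above can be sketched as a job setup script. The hint values and the application name are illustrative and must match your file's actual Lustre layout; the sketch assumes the ROMIO_HINTS mechanism (an external hints file) for passing striping_unit and striping_factor, though the same hints can also be set in the application via MPI_Info:

```shell
# Illustrative ROMIO hints file; values must match the Lustre layout of the file.
cat > romio_hints.txt <<'EOF'
striping_unit 1048576
striping_factor 8
EOF
export ROMIO_HINTS=$PWD/romio_hints.txt

# Enable native Lustre support and the stripe-aware collective read algorithm.
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre
export I_MPI_LUSTRE_STRIPE_AWARE=1
# mpirun -n 16 ./collective_read_app    # placeholder for your MPI application
```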