File System Support

Intel® MPI Library provides loadable shared modules that implement native support for the following file systems:

- Panasas* ActiveScale* File System (panfs)
- Lustre* File System (lustre)
- IBM* General Parallel File System* (GPFS*) (gpfs)

Set the I_MPI_EXTRA_FILESYSTEM environment variable to on to enable parallel file system support. Set the I_MPI_EXTRA_FILESYSTEM_LIST environment variable to request native support for specific file systems. For example, to request native support for Panasas* ActiveScale* File System, do the following:

$ mpirun -env I_MPI_EXTRA_FILESYSTEM=on -env I_MPI_EXTRA_FILESYSTEM_LIST=panfs -n 2 ./test
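The same settings can also be exported in the job environment instead of being passed through -env. A minimal sketch (the mpirun launch line is shown as a comment, and ./test stands in for your application binary):

```shell
# Enable native parallel file system support for all subsequently launched ranks.
export I_MPI_EXTRA_FILESYSTEM=on
# Request native support for the Lustre* file system (use panfs or gpfs as needed).
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre

# Launch as usual; every rank inherits the exported environment:
# mpirun -n 2 ./test
echo "$I_MPI_EXTRA_FILESYSTEM $I_MPI_EXTRA_FILESYSTEM_LIST"
```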

I_MPI_EXTRA_FILESYSTEM

Turn on/off native support for parallel file systems.

Syntax

I_MPI_EXTRA_FILESYSTEM=<arg>

Arguments

<arg>                     Binary indicator

enable | yes | on | 1     Enable native support for parallel file systems.

disable | no | off | 0    Disable native support for parallel file systems. This is the default value.

Description

Set this environment variable to enable parallel file system support. You must also set the I_MPI_EXTRA_FILESYSTEM_LIST environment variable to specify the file systems that require native support.

I_MPI_EXTRA_FILESYSTEM_LIST

Select support for specific file systems.

Syntax

I_MPI_EXTRA_FILESYSTEM_LIST=<fs>[,<fs>,...,<fs>]

Arguments

<fs>      Define a target file system

panfs     Panasas* ActiveScale* File System

lustre    Lustre* File System

gpfs      IBM* General Parallel File System* (GPFS*)

Description

Set this environment variable to request support for the specified parallel file systems. This environment variable is handled only if I_MPI_EXTRA_FILESYSTEM is enabled. Intel® MPI Library tries to load shared modules to support the file systems specified by I_MPI_EXTRA_FILESYSTEM_LIST.
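For example, to request native support for both Lustre* and GPFS* in the same run, pass a comma-separated list (a sketch; the launch command is shown as a comment and ./test is a placeholder):

```shell
export I_MPI_EXTRA_FILESYSTEM=on
# Comma-separated list of target file systems, no spaces:
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre,gpfs

# mpirun -n 4 ./test
echo "$I_MPI_EXTRA_FILESYSTEM_LIST"
```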

I_MPI_LUSTRE_STRIPE_AWARE

Enable/disable an alternative algorithm for MPI-IO collective read operation on the Lustre* file system.

Syntax

I_MPI_LUSTRE_STRIPE_AWARE=<arg>

Arguments

<arg>                     Binary indicator

enable | yes | on | 1     Enable the stripe-aware algorithm.

disable | no | off | 0    Use the default algorithm. This is the default value.

Description

By default, when ROMIO* collective buffering is enabled, Intel® MPI Library uses the following algorithm for MPI-IO collective reads on the Lustre* file system: a single rank on each computation node reads data from the file system and then redistributes it among its local peers. However, this may lead to I/O contention if several I/O processes access the same object storage target (OST) at the same time. The alternative algorithm limits the number of I/O processes to the number of OSTs that contain the data, when this information is available, and ensures that each I/O rank communicates with at most one OST.
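Enabling the alternative algorithm might look like this in a job script (a sketch; the mpirun line is shown as a comment, and how you enable ROMIO* collective buffering, for example via the romio_cb_read hint, depends on your setup):

```shell
# Request native Lustre* support, which the stripe-aware algorithm builds on.
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=lustre
# Switch MPI-IO collective reads to the stripe-aware algorithm.
export I_MPI_LUSTRE_STRIPE_AWARE=1

# mpirun -n 8 ./test
echo "$I_MPI_LUSTRE_STRIPE_AWARE"
```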

Note

The default algorithm may still be more efficient in some cases, for example, when a single I/O process cannot saturate the bandwidth of a single OST.

Using this algorithm requires that you also do the following: