
simpleMPI-with-zerocopy

Commits (branch master)

  • 94f4f78f0df046fe0e407bd2d887c2d9a3d7b687: Added mpi rank information (hhaanjack, 3 years ago)
  • 462c57e15355f7cf069e5377eb687d737ae2bf7c: Updated README.md (hhaanjack, 3 years ago)
  • 333ad1c59c75052f5cdb184eb8397891d72b63e9: Tested zero copy operation (hhaanjack, 3 years ago)
  • 253b767455e48bf6d45aa703e6b8f555be57b6d3: first commit (hhaanjack, 3 years ago)

README

simpleMPI with zero-copy

Description

Simple example demonstrating how to use MPI in combination with CUDA.

This code is modified from NVIDIA's simpleMPI example to evaluate zero-copy on my Jetson cluster. However, its operation is not limited to Jetson, so I decided to keep most of NVIDIA's original code.

In general, MPI can be made aware of NVIDIA GPUs (CUDA-aware MPI). When the hardware supports GPUDirect, GPUDirect RDMA provides efficient data transfer between GPUs. However, Jetson's networking does not provide such an environment. Instead, we can utilize the integrated memory architecture to minimize data transfer between the CPU and the GPU.
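On integrated-memory devices such as Jetson, zero-copy is typically expressed with mapped host memory, so the GPU works directly on the host allocation instead of a copied device buffer. The sketch below is illustrative only, not the repository's actual code; the kernel, buffer size, and names are placeholders.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel: doubles each element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int N = 1 << 20;

    // Allow host allocations to be mapped into the device address space.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Allocate mapped (zero-copy) host memory; on an integrated-memory
    // device this avoids a physical host<->device copy entirely.
    float *h_buf;
    cudaHostAlloc(&h_buf, N * sizeof(float), cudaHostAllocMapped);

    // Obtain the device-side pointer aliasing the same physical memory.
    float *d_buf;
    cudaHostGetDevicePointer(&d_buf, h_buf, 0);

    for (int i = 0; i < N; ++i) h_buf[i] = 1.0f;

    scale<<<(N + 255) / 256, 256>>>(d_buf, N);
    cudaDeviceSynchronize();  // kernel writes become visible to the host

    printf("h_buf[0] = %f\n", h_buf[0]);  // read result without cudaMemcpy
    cudaFreeHost(h_buf);
    return 0;
}
```

Note that on discrete GPUs mapped memory is read over PCIe on every access, so this pattern pays off mainly on integrated-memory systems like Jetson.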

Key Concepts

CUDA Systems Integration, MPI, Multithreading, Zero-copy

Supported SM Architectures

SM 3.5 SM 3.7 SM 5.0 SM 5.2 SM 6.0 SM 6.1 SM 7.0 SM 7.2 SM 7.5 SM 8.0 SM 8.6

If you are using a Jetson device (as in my case)

SM 7.2: Jetson Xavier NX, Jetson AGX Xavier
SM 6.2: Jetson TX2
SM 5.2: Jetson Nano, Jetson TX1

Supported OSes

Linux

Supported CPU Architecture

x86_64, ppc64le, armv7l

CUDA APIs involved

cudaMalloc, cudaFree, cudaMemcpy
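For contrast with the zero-copy path, the conventional flow these APIs imply looks roughly like the following. This is a hedged sketch with a placeholder kernel and sizes, not the repository's code.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel: increments each element.
__global__ void add_one(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

int main() {
    const int n = 1024;
    float h_data[n];
    for (int i = 0; i < n; ++i) h_data[i] = 0.0f;

    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));                 // device allocation
    cudaMemcpy(d_data, h_data, n * sizeof(float),
               cudaMemcpyHostToDevice);                     // explicit copy in
    add_one<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaMemcpy(h_data, d_data, n * sizeof(float),
               cudaMemcpyDeviceToHost);                     // explicit copy out
    cudaFree(d_data);

    printf("h_data[0] = %f\n", h_data[0]);
    return 0;
}
```

The two cudaMemcpy calls are exactly the traffic that zero-copy removes on an integrated-memory device.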

Dependencies needed to build/run

MPI

Prerequisites

Download and install the CUDA Toolkit 11.5 for your corresponding platform. Make sure the dependencies mentioned in the Dependencies section above are installed.

Build and Run

Linux

The Linux samples are built using makefiles. To use the makefiles, change the current directory to the sample directory you wish to build, and run make:

$ cd <sample_dir>
$ make

The samples makefiles can take advantage of certain options:

  • TARGET_ARCH= - cross-compile targeting a specific architecture. Allowed architectures are x86_64, ppc64le, and armv7l. By default, TARGET_ARCH is set to HOST_ARCH. On an x86_64 machine, not setting TARGET_ARCH is the equivalent of setting TARGET_ARCH=x86_64.
    $ make TARGET_ARCH=x86_64
    $ make TARGET_ARCH=ppc64le
    $ make TARGET_ARCH=armv7l
    See here for more details.
  • dbg=1 - build with debug symbols
    $ make dbg=1
    
  • SMS="A B ..." - override the SM architectures for which the sample will be built, where "A B ..." is a space-delimited list of SM architectures. For example, to generate SASS for SM 50 and SM 60, use SMS="50 60".
    $ make SMS="50 60"
    
  • If your target is a Jetson device, you should provide the SM architectures as follows:
    $ make SMS="72 62 52"
    
    This is because the current JetPack SDK 4.6 provides CUDA 10.2, which does not support SM 8.0+ CUDA architectures.
  • HOST_COMPILER=<host_compiler> - override the default g++ host compiler. See the Linux Installation Guide for a list of supported host compilers.
    $ make HOST_COMPILER=g++
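Once built, an MPI+CUDA sample is typically launched through mpirun. The commands below are a sketch: the binary name simpleMPI is assumed from the upstream NVIDIA sample, and hosts.txt is a hypothetical hostfile, neither confirmed from this repository.

```shell
# Launch 4 MPI ranks on the local node (binary name assumed)
mpirun -np 4 ./simpleMPI

# Spread ranks across Jetson nodes listed in a hypothetical hostfile
mpirun -np 4 --hostfile hosts.txt ./simpleMPI
```

With Open MPI, each rank is a separate process, so on a multi-node Jetson cluster the binary and any input data must be present on every node at the same path.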

References (for more details)