underlying iSCSI-like mechanism. This does not preclude emulating a Fibre Channel connection, while also enabling a much more functional mode in which applications can have visibility into mixed SAN fabric environments without having to understand and account for the differences between those fabrics.
3.2.4 User-level Direct Access Programming Layer
The Direct Access Programming Layer (DAPL) protocols for user-space applications (uDAPL) and for kernel-mode use (kDAPL) are relatively new industry standards for high-efficiency, low-latency server-to-server communications. The User-level Direct Access Programming Layer (uDAPL) is one of the lowest-latency, highest-bandwidth, and highest-efficiency standard protocols available on InfiniBand. MPI is currently slightly better than uDAPL, but additional performance tuning can put uDAPL in first place. uDAPL is the preferred interface for new development in low-latency commercial applications. Database vendors (such as Oracle® with RAC 10i) will be using uDAPL for scale-out database clustering.
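To give a feel for the programming model, the following is a minimal sketch of how a user-space application opens an interface adapter through the DAT/uDAPL API. It is an illustration only: the adapter name "ib0" and the event-queue depth are assumptions, and real deployments normally take the adapter name from the provider's dat.conf registry.

/* Minimal uDAPL sketch: open an interface adapter and create a
 * protection zone.  Assumes a DAT/uDAPL installation providing
 * <dat/udat.h>; the adapter name "ib0" is an assumption -- real
 * deployments take the name from the provider's dat.conf registry. */
#include <stdio.h>
#include <dat/udat.h>

int main(void)
{
    DAT_IA_HANDLE  ia  = DAT_HANDLE_NULL;
    DAT_EVD_HANDLE evd = DAT_HANDLE_NULL;   /* async event dispatcher */
    DAT_PZ_HANDLE  pz  = DAT_HANDLE_NULL;   /* protection zone */
    DAT_RETURN     rc;

    /* Open the interface adapter; uDAPL creates the async EVD for us. */
    rc = dat_ia_open("ib0", 8, &evd, &ia);
    if (rc != DAT_SUCCESS) {
        fprintf(stderr, "dat_ia_open failed: 0x%x\n", rc);
        return 1;
    }

    /* A protection zone scopes memory registrations and endpoints. */
    rc = dat_pz_create(ia, &pz);
    if (rc != DAT_SUCCESS) {
        fprintf(stderr, "dat_pz_create failed: 0x%x\n", rc);
        dat_ia_close(ia, DAT_CLOSE_ABRUPT_FLAG);
        return 1;
    }

    /* A real application would now create endpoints, register memory,
     * connect, and post RDMA or send/receive work requests. */

    dat_pz_free(pz);
    dat_ia_close(ia, DAT_CLOSE_GRACEFUL_FLAG);
    return 0;
}

A real application would continue with calls such as dat_ep_create and dat_ep_connect before registering memory and posting work requests; the key point is that it programs against the fabric-neutral DAT registry rather than against InfiniBand verbs directly.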
uDAPL has also been used as the underlying support for other protocol implementations. Specifically, Scali has delivered MPI support for InfiniBand by layering its MPI stack on top of uDAPL. Scali is counting on uDAPL being a small, simple, common layer on top of which MPI can still provide significant performance and efficiency benefits.
kDAPL is a similar interface for use from kernel-mode applications or from the operating system itself. Its primary use is aimed at the file system interface; NFS/RDMA, iSER, and other storage interfaces could go directly through kDAPL.
IT API is an industry-standard initiative (also involving IBM) that is working to standardize a common API for commercial applications. At this time, uDAPL provides this function. The effort might, at some point, produce another interface; however, with the momentum behind uDAPL, any new interface is likely to be based on what exists now.
3.2.5 Message Passing Interface
Message Passing Interface (MPI) is the standard protocol for scientific and technical high-performance computing (HPC) clusters. MPI can be used over many different cluster fabrics,
including Ethernet, Myrinet, Quadrics, and InfiniBand. MPI provides the lowest latency,
highest bandwidth, and highest efficiency of all of the standard protocols available on
InfiniBand.
Most HPC Linux environments use MPI protocols. Much of this market is led by work done at national labs and research universities around the world. The MPI stack developed at Ohio State University (MVAPICH, a derivative of MPICH adapted to InfiniBand) is the most commonly recommended MPI implementation for the Topspin InfiniBand solution.
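For illustration (this example is not taken from the Topspin documentation), the short MPI program below passes a message from rank 0 to rank 1. It compiles against any MPI implementation, and the interconnect underneath, whether Ethernet, Myrinet, or InfiniBand, is selected by the MPI library rather than by the application code.

/* Minimal MPI example: rank 0 sends a message to rank 1.
 * Works with any MPI implementation (MPICH, MVAPICH, and so on);
 * the interconnect underneath is chosen by the MPI library. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        strcpy(buf, "hello over the cluster fabric");
        MPI_Send(buf, (int)strlen(buf) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}

Such a program is normally built with the mpicc compiler wrapper and launched with the implementation's job launcher (mpirun, or mpirun_rsh in MVAPICH environments); no InfiniBand-specific code appears in the application itself.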