Intersect360 Survey Shows Continued InfiniBand Dominance
There were few surprises in Intersect360 Research’s just-released report on interconnect use in advanced scale computing. InfiniBand and Ethernet remain the dominant protocols across all segments (system, storage, LAN), and Mellanox and Cisco lead the supplier pack. The big question is when, or if, Intel’s Omni-Path fabric will break through. Fewer than one percent of the sites surveyed (system and storage interconnect) reported using Omni-Path.
“Although this share trails well behind Mellanox, Intel will move toward the integration of its interconnect technologies into its processor roadmap with the introduction of Omni-Path. This potentially changes the dynamics of the system fabric business considerably, as Intel may begin to market the network as a feature extension of the Intel processing environment,” according to the Intersect360 Research report.
“For its part, Mellanox has preemptively responded both strategically and technologically, surrounding itself with powerful partners in the OpenPOWER Foundation and coming to market with features such as multi-host technologies, which argue for keeping host-bus adapter technology off-chip.”
Indeed, Mellanox and Intel have waged a war of words over the past two years surrounding the direction of network technology and the merits of offloading networking instructions from the host CPU and distributing more processing power throughout the network. Of course, these are still early days for Omni-Path. Battling benchmarks aside, InfiniBand remains firmly entrenched at the high end, although 100 Gigabit Ethernet is also gaining traction.
It is important to note the Intersect360 data is from its 2016 HPC site survey. This most recent survey, the ninth consecutive one Intersect360 has conducted, ran in the second and third quarters of 2016 and received responses from 240 sites. Combined with entries from the prior two surveys, 487 HPC sites are represented in Site Census reports in 2016. In total, 474 sites reported interconnect and network characteristics for 723 HPC systems, 633 storage systems, and 638 LANs. The next survey should provide stronger directional guidance for Omni-Path.
Among key highlights from the report are:
- Over 30% of system interconnect and LAN installations reported using 1 Gigabit Ethernet. “We believe that these slower technologies are often used as secondary administrative connections on clusters, and as primary interconnect for small throughput-oriented clusters. In the LAN realm, we see Gigabit Ethernet as still in use for smaller organizations and/or for subnetworks supporting departments/workgroups within larger organizations. Still, the tenacity of this technology surprises us.”
- About 72% of Gigabit Ethernet mentions were as a secondary interconnect, not primary. Gigabit Ethernet comes on many systems as a standard cluster interconnect, contributing to its high use in distributed memory systems.
- InfiniBand continues to be the preferred high-performance system interconnect. “If we exclude Ethernet 1G (dubbed high-performance interconnect), installations of InfiniBand are about two times the combined installations of Ethernet (10G, 40G, and 100G). Within InfiniBand installations, InfiniBand 40G continues to be the most installed. However, for systems acquired since 2014, InfiniBand 56G is the most popular choice.”
- Ten Gigabit Ethernet is used more for storage and LAN installations than any other protocol. Installations of 10 Gigabit Ethernet account for 35% of all storage networks reported and 35% of all LANs reported. InfiniBand has been gradually increasing its share of storage networks, rising to 34% from 31%, with almost all of this coming from InfiniBand 56G.
Two main drivers of the overall market, reports Intersect360, are 1) the growth in data volume and the stress it puts on interconnects and 2) a persistent “if it’s not broke, don’t fix it” attitude with regard to switching to new technologies. Ethernet is benefiting from the latter.
Parallelization of code is another major influence. “Architecting interconnects for parallel applications performance has long been a major concern for MPP systems which are built around proprietary interconnects, and supercomputer-class clusters which tend to use the fastest general-purpose network technology. We believe that the trend towards greater application parallelization at all levels will drive requirements for network performance down market into high-end and midrange computing configurations,” according to Intersect360.
The report is best read in full; here’s a brief excerpt from its conclusion:
“The transition to the latest or fastest interconnect appears to be occurring at about the same rate as the life cycle of servers – every two to three years. With each system refresh, the latest or best price/performance interconnect is chosen. Ultimately, though, application needs drive what system performance requirements are needed. The cost of components limits the rate of adoption. Our data suggests most academic and government sites, along with some of the commercial sites, particularly energy, large manufacturing, and bio-science sites, value the performance of InfiniBand for system interconnects. Many of the applications in these areas support, and users leverage, multi-processing, GPUs, and multi-core architectures.”
Perhaps not surprisingly, Mellanox was the top supplier for system interconnects (42% of mentions) and storage networks (35% of mentions) – in fact, Intersect360 reports Mellanox gained market share in all segments over its 2015 showing. Cisco continues to be the leading supplier for the LAN market, with 46% of the mentions, according to Intersect360.
Link to report summary: http://www.intersect360.com/industry/reports.php?id=149