
Mellanox Unveils ConnectX-4 VPI Adapter 

Mellanox Technologies, Ltd. today announced the ConnectX-4 single/dual-port 100Gb/s Virtual Protocol Interconnect (VPI) adapter, the final piece of the industry's first complete end-to-end 100Gb/s InfiniBand interconnect solution. Doubling the throughput of the previous generation, the ConnectX-4 adapter delivers the consistent high performance and low latency required for high-performance computing (HPC), cloud, Web 2.0 and enterprise applications to process and fulfill requests in real time.

Mellanox's ConnectX-4 VPI adapter delivers 10, 20, 25, 40, 50, 56 and 100Gb/s throughput, supports both the InfiniBand and Ethernet standard protocols, and offers the flexibility to connect to any compute architecture, including x86, GPU, POWER, ARM, FPGA and more. With world-class performance of 150 million messages per second, latency of 0.7 microseconds, and smart acceleration engines such as RDMA, GPUDirect and SR-IOV, ConnectX-4 will enable the most efficient compute and storage platforms.
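RDMA functionality of this kind is typically exposed to applications through the standard verbs interface (libibverbs) rather than an adapter-specific API. As a rough, generic sketch only (not Mellanox sample code; it assumes libibverbs is installed and links with -libverbs), the following C program enumerates RDMA-capable adapters and prints each one's port state, active link width and speed, which is how an application or benchmark would confirm it is running over a 100Gb/s-class port.

    /* Generic libibverbs sketch: list RDMA-capable devices and print
     * port 1 link attributes. Build with: cc probe.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            /* Query port 1; width and speed together determine the link rate. */
            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: state=%d active_width=%u active_speed=%u\n",
                       ibv_get_device_name(devs[i]), (int)port.state,
                       (unsigned)port.active_width, (unsigned)port.active_speed);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }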

“Large-scale clusters have incredibly high demands and require extremely low latency and high bandwidth,” said Jorge Vinals, director at Minnesota Supercomputing Institute of the University of Minnesota. “Mellanox’s ConnectX-4 will provide us with the node-to-node communication and real-time data retrieval capabilities we needed to make our EDR InfiniBand cluster the first of its kind in the U.S. With 100Gb/s capabilities, the EDR InfiniBand large-scale cluster will become a critical contribution to research at the University of Minnesota.”

“IDC expects the use of 100Gb/s interconnects to begin ramping up in 2015,” said Steve Conway, research vice president for high-performance computing at IDC. “Most HPC data centers need high bandwidth, low latency and strong overall interconnect performance to remain competitive in today’s increasingly data-driven world. The introduction of 100Gb/s interconnects will help organizations keep up with the escalating demands for data retrieval and processing, and will enable unprecedented performance on mission-critical applications.”

“Cloud infrastructures are becoming a more mainstream way of building compute and storage networks. More corporations and applications target the vast technological and financial improvements that utilization of the cloud offers,” said Eyal Waldman, president and CEO of Mellanox. “With the exponential growth of data, the need for increased bandwidth and lower latency becomes a necessity to stay competitive. The same applies to the high-performance computing and the Web 2.0 markets. We have experienced the pull for 100Gb/s interconnects for over a year, and now with ConnectX-4, we will have a full end-to-end 100Gb/s interconnect that will provide the lowest latency, highest bandwidth and return-on-investment in the market.”

ConnectX-4 adapters provide enterprises with a scalable, efficient and high-performance solution for cloud, Web 2.0, HPC and storage applications. The new adapter supports the new RoCE v2 (RDMA over Converged Ethernet) specification; the full range of overlay network technologies, including NVGRE (Network Virtualization using GRE), VXLAN (Virtual Extensible LAN), GENEVE (Generic Network Virtualization Encapsulation) and MPLS (Multi-Protocol Label Switching); and storage offloads such as T10-DIF and RAID offload, among other features.
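RoCE v2 carries the same RDMA verbs semantics over routable UDP/IP networks, so the host-side programming model is the same as for InfiniBand. As a rough illustration of that model (plain libibverbs calls, not a ConnectX-4-specific or Mellanox-provided example), the sketch below opens the first adapter found and registers a buffer so that remote peers could read and write it directly, with the data movement handled by the adapter rather than the host CPU.

    /* Generic verbs sketch: register a buffer for RDMA access.
     * A remote peer would need mr->rkey plus the buffer address
     * to target it with RDMA reads/writes. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0)
            return 1;

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx)
            return 1;

        /* A protection domain scopes which queue pairs may use which memory. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 1 << 20;                /* 1 MiB example buffer */
        void *buf = malloc(len);

        /* Registration pins the pages and hands the adapter DMA access. */
        struct ibv_mr *mr = NULL;
        if (pd && buf)
            mr = ibv_reg_mr(pd, buf, len,
                            IBV_ACCESS_LOCAL_WRITE |
                            IBV_ACCESS_REMOTE_READ |
                            IBV_ACCESS_REMOTE_WRITE);
        if (mr)
            printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
                   len, mr->lkey, mr->rkey);

        if (mr) ibv_dereg_mr(mr);
        if (pd) ibv_dealloc_pd(pd);
        free(buf);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }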

ConnectX-4 adapters will begin sampling with select customers in Q1 2015. With ConnectX-4, Mellanox will offer a complete end-to-end 100Gb/s InfiniBand solution, including the EDR 100Gb/s Switch-IB InfiniBand switch and LinkX 100Gb/s copper and fiber cables. For Ethernet-based data centers, ConnectX-4 provides the complete set of link speed options of 10, 25, 40, 50 and 100Gb/s, and Mellanox offers a complete range of copper and fiber cables to support these speeds. By leveraging Mellanox network adapters, cables and switches, users can ensure world-leading reliability, application performance and the highest return on investment.

“For HPC and for Big Data, latency is a key contributor to application efficiency, and users pay more and more attention to time-to-solution,” said Pascal Barbolosi, VP Extreme Computing at Bull. “Bull is ready to incorporate Mellanox’s new end-to-end 100Gb/s interconnect solution in the bullx server ranges, to deliver ever more performance and continue to reduce latency.”

“Supermicro’s end-to-end Green Computing server, storage and networking solutions provide highly scalable, high performance, energy efficient server building blocks to support the most compute and data intensive supercomputing applications,” said Tau Leng, Vice President of HPC at Supermicro. “Our collaborative efforts with Mellanox to integrate ConnectX-4 100Gb/s adapters across our extensive range of solutions advances HPC to the next level for Scientific and Research communities with increased flexibility, scalability, lower latency and higher bandwidth interconnectivity.”
