Inside Advanced Scale Challenges | Tuesday, October 23, 2018

GIGABYTE Releases Next Generation of RACKLUTION-OP OCP Server Products 

TAIPEI, Taiwan, Oct. 4, 2018 -- GIGABYTE is proud to officially announce the next generation of RACKLUTION-OP, our product line based on the Open Compute Project (OCP) Open Rack Standards. This new generation of products features Compute and GPU nodes equipped with the latest Intel Xeon Scalable ("Purley" generation) processors. GIGABYTE can combine and integrate these nodes into complete OCP-compliant racks for fast and easy deployment into your data center.

OCP Server Overview & Benefits

What are the Open Rack Standards? They are a set of open source hardware design guidelines initially developed by Facebook and opened up to the community in 2011 as part of the Open Compute Project. Being open source means that anyone can contribute, and a wide variety of organizations involved in building data centers have contributed their expertise and experience to designing hardware that is faster and easier to deploy, less expensive, and equipped with just the features needed for scale and efficiency, with a design geared primarily toward space and power savings.

The Open Rack Standards achieve this in several ways. First, the rack width is 21” compared with a traditional 19” rack, and the server unit height is 1OU (1.89”, versus the 1.75” of a traditional 1U), allowing more horizontal and vertical space in each tray for greater compute, networking and storage density, or for better airflow and cabling space. Second, the power supply is removed from each individual server rackmount and consolidated into a separate, central unit. This not only frees up space for other components but also makes cooling and maintaining the consolidated power supply unit more efficient. Power is instead delivered to compute, storage and GPU nodes directly through a “bus-bar” system running along the rear of the rack.
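The extra tray space those dimensions buy can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the dimensions come from the article, while the variable names and percentage framing are ours.

```python
# Back-of-the-envelope comparison of Open Rack tray dimensions vs. a
# traditional EIA rack, using the figures quoted in the article.
TRAD_WIDTH_IN, TRAD_UNIT_IN = 19.0, 1.75   # traditional 19" rack, 1U height
OCP_WIDTH_IN, OCP_UNIT_IN = 21.0, 1.89     # Open Rack, 1OU height

width_gain = (OCP_WIDTH_IN / TRAD_WIDTH_IN - 1) * 100
height_gain = (OCP_UNIT_IN / TRAD_UNIT_IN - 1) * 100
area_gain = ((OCP_WIDTH_IN * OCP_UNIT_IN)
             / (TRAD_WIDTH_IN * TRAD_UNIT_IN) - 1) * 100

print(f"{width_gain:.1f}% wider")                      # 10.5% wider
print(f"{height_gain:.1f}% taller per unit")           # 8.0% taller per unit
print(f"{area_gain:.1f}% more frontal area per unit")  # 19.4% more frontal area per unit
```

Roughly 19% more frontal area per unit is what leaves room for the denser component layouts and airflow described above.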

In addition, the server nodes are designed like Lego bricks: small enough to be easily handled by a single person. This also adds to the design’s ease of scalability, since each node is available individually and can be ordered later to add capacity to existing infrastructure. The Open Compute Project states that, for the first adopters of the Open Rack Standards, these design features proved 38% more energy efficient to build* and 24% less expensive to run* than traditional 19” rack infrastructure.

https://www.opencompute.org/about

RACKLUTION-OP

GIGABYTE’s RACKLUTION-OP OCP rack solutions product line-up features two different power supply designs, based on two different versions of the Open Rack Standards. In OCP Version 1.0, power is delivered via three vertical 12V bus-bars; in OCP Version 2.0, power is delivered via a single vertical 12V bus-bar. Each bus-bar connector can directly supply up to 960W (80A x 12V). An OCP Version 1.0 rack (three bus-bars) can therefore supply up to 2,880W per shelf (960W x 3 bus-bar connectors), making it suitable for a rack of GPU nodes with heavy power consumption requirements, while an OCP Version 2.0 rack (single bus-bar) can supply up to 480W per node, making it a more cost-efficient choice for compute and storage nodes whose power requirements are lighter. Both versions co-exist, and customers can choose their design based on their specific system requirements.
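The power budgets above follow directly from P = V x I per bus-bar connector. A minimal sketch of that arithmetic, using only the figures quoted in the article (the function names are ours):

```python
# Per-shelf power budgets for the two Open Rack bus-bar designs described
# above. Voltage and current figures come from the article.
BUS_BAR_VOLTAGE_V = 12   # 12V vertical bus-bar
BUS_BAR_CURRENT_A = 80   # up to 80A per bus-bar connector

def connector_power_w() -> int:
    """Maximum power one bus-bar connector can deliver (P = V * I)."""
    return BUS_BAR_VOLTAGE_V * BUS_BAR_CURRENT_A

def shelf_power_w(bus_bars: int) -> int:
    """Maximum power per shelf, given the number of vertical bus-bars."""
    return connector_power_w() * bus_bars

print(connector_power_w())  # 960  -> 960W per connector
print(shelf_power_w(3))     # 2880 -> OCP v1.0 shelf, three bus-bars
print(shelf_power_w(1))     # 960  -> OCP v2.0 shelf, single bus-bar
```

Note that the article quotes the v2.0 figure as 480W per node rather than per shelf; that is consistent with the single 960W bus-bar being shared, e.g. between two nodes on a shelf.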

GIGABYTE’s next generation of RACKLUTION-OP products comprise the following:

TO22-C20 / TO22-C21 / TO22-C22: Intel Xeon Scalable Compute Nodes

These 2OU height nodes feature dual Intel Xeon Scalable processors, with 6 channels of memory and 8 x DIMM slots per socket (16 x DIMM slots per node), and are future-ready for Intel’s upcoming Optane DC Persistent Memory (Apache Pass). Each node features dual 10GbE Base-T ports (Intel X550), as well as a dedicated MLAN port for remote management. For expansion, all nodes feature two low-profile half-length PCIe slots (one PCIe x16 and one PCIe x8) as well as one OCP mezzanine slot (PCIe x16).

The difference between these three nodes is in storage capacity and configuration. All three nodes feature 4 x 2.5” front hot-swappable SSD/HDD drive bays, available in the following configurations:

TO22-C20 & TO22-C21: A maximum of 2 x NVMe drives + 2 x SATA drives (or up to 4 x SATA drives instead of NVMe)

TO22-C22: A maximum of 4 x NVMe drives (or up to 4 x SATA drives instead of NVMe)

TO22-C20 and TO22-C22 also add further internal capacity with another 4 x NVMe internal 2.5” drive bays (for a total capacity of 8 x 2.5” NVMe drives for TO22-C22 or 6 x 2.5” NVMe drives for TO22-C20). 

These nodes are compatible with both the OCP Version 1.0 design (fitting into a TO20-BT1 node tray, compatible with our 41OU DO20-ST0 or DO20-ST1 racks or our 12OU DO60-MR0 mini-rack) and the OCP Version 2.0 design (fitting into a TO21-BT0 node tray, compatible with our 41OU DO21-ST0 or DO21-ST1 racks).

T181-G20 / T181-G23 / T181-G24: Intel Xeon Scalable GPU Server Nodes

These 1OU height full-width GPU server nodes feature dual Intel Xeon Scalable processors, with 6 channels of memory and 12 x DIMM slots per socket (24 x DIMM slots per system). Each server tray features dual 1GbE ports (Intel i350) as standard, and a dedicated MLAN port for remote management. All models also feature 4 x front 2.5” hot-swappable SATA SSD/HDD drive bays for storage, as well as two low-profile half-length PCIe x16 expansion slots.

GPU-wise, the T181-G20 has capacity for 4 x NVIDIA SXM2 modules (such as the V100) with ultra-fast GPU-to-GPU NVLink interconnect, while the T181-G23 and T181-G24 each have capacity for 4 x PCIe-type GPGPU cards. In the T181-G23, the GPUs are connected in a single-root architecture (one CPU controls all of the GPUs) via a PCIe switch, minimizing GPU-to-GPU latency and maximizing performance; this suits a smaller deployment of nodes for DNN training, where GPU-to-GPU communication is frequent. In the T181-G24, the GPUs are connected in a dual-root architecture (each CPU controls half of the GPUs) for a balance of performance and cost efficiency; this suits a larger deployment of nodes for DNN training (where frameworks can minimize the need for GPU-to-GPU communication) or for HPC applications (where GPU-to-CPU latency also matters).
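The practical difference between the two topologies is whether peer-to-peer GPU traffic stays under one PCIe switch or must cross the CPU-to-CPU interconnect. The toy model below illustrates that distinction; the GPU numbering and function names are our own illustrative assumptions, not GIGABYTE's.

```python
# Toy model of the single-root vs. dual-root PCIe topologies described
# above (T181-G23 vs. T181-G24 style). Purely illustrative.
def same_root(gpu_a: int, gpu_b: int, topology: str, num_gpus: int = 4) -> bool:
    """True if both GPUs hang off the same CPU root complex."""
    if topology == "single-root":      # one CPU owns all GPUs behind a PCIe switch
        return True
    half = num_gpus // 2               # dual-root: each CPU owns half the GPUs
    return (gpu_a < half) == (gpu_b < half)

def p2p_path(gpu_a: int, gpu_b: int, topology: str) -> str:
    """Path GPU-to-GPU traffic takes under the given topology."""
    if same_root(gpu_a, gpu_b, topology):
        return "via PCIe switch"       # stays under one root complex: low latency
    return "across CPU interconnect"   # must traverse the CPU-to-CPU link

print(p2p_path(0, 3, "single-root"))   # via PCIe switch
print(p2p_path(0, 3, "dual-root"))     # across CPU interconnect
print(p2p_path(0, 1, "dual-root"))     # via PCIe switch
```

In the dual-root case, only GPU pairs that cross the halfway split pay the extra hop, which is why frameworks that keep communication within a root complex (or tolerate the extra latency) make the T181-G24 attractive at scale.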

Due to the higher power requirements of GPUs, these server nodes are suitable only for an OCP Version 1.0 system (compatible with our 41OU DO20-ST0 or DO20-ST1 racks or our 12OU DO60-MR0 mini-rack).


Source: GIGABYTE
