
Cray Melds AI App Development with HPC 


Cray Inc. is coming to the aid of harried application developers with a new workflow software suite designed to accelerate AI implementation by combining analytics and artificial intelligence capabilities with existing HPC and emerging AI environments.

The supercomputer vendor (Nasdaq: CRAY) unveiled its Urika-CS AI and analytics software suite during this week’s International Supercomputing Conference in Frankfurt, Germany. The suite provides access to AI and analytics tools and frameworks on its CPU- and GPU-based Cray CS series cluster supercomputers, including Apache Spark, BigDL, Dask, Jupyter Notebook, TensorBoard and TensorFlow.

Cray said it is targeting increased demand for efficient AI workflows that account for infrastructure complexity and evolving DevOps technologies. Hence, Cray’s accompanying AI reference configurations are intended to expand the ability of IT administrators to meet the requirements of AI application developers.

“Today’s AI solutions are either too narrowly focused on deep learning or too ad hoc in their design approaches to meet the needs of AI and IT teams developing and delivering AI applications,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer.

Cray said Monday (June 25) it is addressing those hurdles with expanded access to AI workflow tools. Specifically, the Urika-CS suite includes access to the Apache Spark and Dask data analytics platforms; the Anaconda distribution of data science libraries for the Python programming language; the BigDL and TensorFlow machine and deep learning frameworks; and data visualization tools such as Jupyter Notebook and TensorBoard.
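Cray did not show how these pieces fit together in practice; as one hedged illustration of the analytics side of such a workflow, the short Python sketch below uses Dask to parallelize a simple data-preparation step before training. The file pattern and column names are hypothetical placeholders, not examples from Cray's announcement.

    # Illustrative Dask data-preparation step; the CSV pattern and column
    # names are hypothetical, not part of Cray's announcement.
    import dask.dataframe as dd

    # Lazily read a collection of CSV files too large for one node's memory.
    df = dd.read_csv("sensor_logs_*.csv")

    # Simple cleaning and feature derivation, executed in parallel.
    df = df.dropna(subset=["reading"])
    df["reading_norm"] = (df["reading"] - df["reading"].mean()) / df["reading"].std()

    # Trigger computation and hand the prepared features to a training stage.
    features = df[["reading_norm"]].compute()
    print(features.describe())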

The suite also includes a distributed model training framework for TensorFlow, intended to accelerate the training of deep learning models on Cray’s supercomputing platforms. It also integrates a set of tools designed to reduce the time required to download, install, test and debug AI tools and frameworks.
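Cray did not publish API details for this training framework in the announcement. As a rough sketch of the data-parallel pattern such frameworks typically build on, the example below uses TensorFlow's own tf.distribute.MirroredStrategy to replicate a small Keras model across the GPUs in a node; the model, dataset and hyperparameters are illustrative placeholders rather than Cray examples.

    # Minimal sketch of data-parallel training in TensorFlow, the general
    # pattern distributed-training frameworks wrap. Model, data and
    # hyperparameters are illustrative placeholders, not Cray examples.
    import tensorflow as tf

    # Replicate the model across all visible GPUs on the node;
    # gradients are averaged across replicas on each step.
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

    # Placeholder dataset; scale the global batch size with the replica count.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    batch_size = 64 * strategy.num_replicas_in_sync
    model.fit(x_train, y_train, epochs=2, batch_size=batch_size)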

Meanwhile, AI reference configurations designed to ease deployment of computing infrastructure for AI workflows include prototyping and production versions. A single-rack version for smaller AI teams combines Cray’s CS500 CPU nodes and CS-Storm GPU-based nodes to handle data preparation and analytics as well as machine and deep learning model training. The Urika-CS suite is meant to optimize the use of this heterogeneous computing and hybrid storage configuration, the company said.

In March, Cray announced updates to its line of CS-Storm GPU-based servers along with the Accel line of AI configurations. The upgrades included a new four-GPU version that combines Nvidia’s Volta GPUs with a pair of Intel Xeon CPUs. The combination is geared to AI models and HPC applications requiring “lower GPU-to-CPU ratios for optimal performance,” the company said.

The multiple-rack production configuration includes separate racks for CPU- and GPU-based processing along with sufficient storage to allow scaling as AI development expands.

Cray said its Urika-CS AI and analytics suite along with updated Accel AI reference configurations would be available during the third quarter of 2018. Seeking to “take the pain and guesswork out of AI deployments,” the company added it also would provide support and product updates.

About the author: George Leopold

George Leopold has written about science and technology for more than 30 years, focusing on electronics and aerospace technology. He previously served as executive editor of Electronic Engineering Times. Leopold is the author of "Calculated Risk: The Supersonic Life and Times of Gus Grissom" (Purdue University Press, 2016).
