Power Couple: IBM Joins PowerAI Hardware with Watson; Launches Multi-Hybrid AI Services
As IBM kicked off its annual Think conference, a high-production extravaganza this week in San Francisco, the company this morning made a series of AI-related announcements, including the merging of its PowerAI integrated CPU-GPU server line with its cognitive computing flagship IBM Watson to create what it calls the Watson Machine Learning Accelerator (WML Accelerator), which Big Blue claims speeds machine learning training by 46x.
The objective of the Watson-Power marriage, IBM said, is to create a converged, integrated AI solution “that brings together the best software for AI, Watson, with the best hardware for AI, IBM Power Systems” – the ultimate goal being to ease enterprise AI adoption. As part of that strategy, the company this week will demonstrate new benchmarks for SnapML, IBM’s ML framework for simplified model selection and hyper-parameter tuning, tasks that usually require the specialized skills of data scientists.
“By scaling out across a cluster, as well as scaling up across many-core CPUs and powerful modern GPUs, SnapML is capable of identifying a highly accurate model and its hyper-parameter configuration extremely quickly,” said Sumit Gupta, VP of AI and HPC, IBM Cognitive Systems, in a blog.
Like many other AI-related vendor announcements over the past year, IBM’s PowerAI-Watson convergence is aimed at easing the AI-related skills shortage, cited by 54 percent of respondents as a barrier to AI adoption in a recent Gartner CIO survey; a scarcity of skills for integrating AI into existing infrastructures was cited by another 27 percent of those surveyed. WML Accelerator is designed to help enterprises train and deploy ML models built in IBM Watson Studio and monitored with Watson OpenScale.
At IBM Think last year, the company demonstrated the SnapML library running on Power Systems servers and reported that it beat Google Cloud running ML on an advertising-focused dataset by 46x. Since then, the company has integrated new automation features that scale out across a cluster and scale up across many-core CPUs and GPUs, which IBM credits for SnapML’s ability to identify an accurate model and its hyper-parameter configuration quickly.
“Many users don’t realize how vast the open source machine learning catalogue is, and it can be quite challenging to identify the right tool for your particular data or desired outcome,” said Simon Thompson, Research Computing Infrastructure Architect at the University of Birmingham. “The automated model and library selection capabilities of SnapML greatly reduce the time required to parse through all of these tools, allowing users to begin ML training much more quickly.”
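To make the idea concrete: the automated selection that SnapML performs amounts to searching across candidate model types and their hyper-parameter grids and keeping the best-scoring configuration. Below is a minimal, generic Python sketch of that pattern; the model names, parameters, and scoring stub are illustrative assumptions, not SnapML’s actual API.

```python
# Generic sketch of automated model and hyper-parameter selection,
# in the spirit of what SnapML automates. All names here (models,
# parameters, scores) are hypothetical placeholders, not SnapML's API.
from itertools import product

def evaluate(model_name, params, data):
    # Hypothetical scoring stub: a real system would train model_name
    # on `data` with `params` and return a validation metric.
    if model_name == "logistic_regression":
        return 0.80 + 0.01 * params["C"]
    return 0.75 + 0.02 * params["max_depth"]

# Candidate models, each with its own hyper-parameter grid.
search_space = {
    "logistic_regression": {"C": [0.1, 1.0, 10.0]},
    "decision_tree": {"max_depth": [3, 5, 7]},
}

def select_best(data):
    """Exhaustively score every (model, hyper-parameter) combination
    and return the highest-scoring one."""
    best = (None, None, float("-inf"))
    for model_name, grid in search_space.items():
        keys = list(grid)
        for values in product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            score = evaluate(model_name, params, data)
            if score > best[2]:
                best = (model_name, params, score)
    return best

model, params, score = select_best(data=None)
```

Each inner-loop evaluation is independent, which is what lets a framework like SnapML scale the search out across a cluster and up across many-core CPUs and GPUs, rather than running the grid serially as this toy sketch does.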
In addition, IBM announced a pan-environment AI capability with IBM Cloud Private for Data (ICP for Data) on IBM Power Systems with IBM Storage, enabling Watson services, including Watson Assistant (for building conversational interfaces into applications and devices) and Watson OpenScale (an open AI platform for managing multiple instances of AI regardless of where they were developed), to run on premises or in any private, public or hybrid multi-cloud, according to the company. IBM said ICP for Data “enabl(es) businesses to apply AI to data wherever it is hosted. Businesses will be able to infuse AI into their apps, regardless of where they reside. The flexibility this affords can remove one of the major obstacles to scaling AI, since businesses can now leave data in secure or preferred environments and take Watson to that data.”
“Businesses have largely been limited to experimenting with AI in siloes due to the limitations caused by cloud provider lock-in of their data,” said Rob Thomas, GM, IBM Data and AI. “With most large organizations storing data across hybrid cloud environments, they need the freedom and choice to apply AI to their data wherever it is stored. By breaking open that siloed infrastructure we can help businesses accelerate their transformation through AI.”