Power Couple: IBM Joins PowerAI Hardware with Watson; Launches Multi-Hybrid AI Services
As IBM kicked off Think, its annual high-production extravaganza, this week in San Francisco, the company this morning made a series of AI-related announcements, including the merging of its PowerAI integrated CPU-GPU server line with its cognitive computing flagship, IBM Watson, to create what it calls the Watson Machine Learning Accelerator (WML Accelerator), which Big Blue claims accelerates machine learning training by 46x.
The objective of the Watson-Power marriage, IBM said, is to create a converged, integrated AI solution “that brings together the best software for AI, Watson, with the best hardware for AI, IBM Power Systems” – the ultimate goal being to ease enterprise AI adoption. As part of that strategy, the company this week will demonstrate new benchmarks for SnapML, IBM’s ML framework for simplified model selection and hyper-parameter tuning, which usually requires the specialized skills of data scientists.
“By scaling out across a cluster, as well as scaling up across many-core CPUs and powerful modern GPUs, SnapML is capable of identifying a highly accurate model and its hyper-parameter configuration extremely quickly,” said Sumit Gupta, VP of AI and HPC, IBM Cognitive Systems, in a blog.
Like many other AI-related vendor announcements over the past year, IBM’s PowerAI-Watson convergence is aimed at easing the AI-related skills shortage, cited by 54 percent of respondents as a barrier to AI adoption in a recent Gartner CIO survey, along with a skills scarcity related to integrating AI into existing infrastructures, cited by 27 percent of respondents in the same survey. WML Accelerator is designed to help enterprises train and deploy ML models built in IBM Watson Studio and monitored with Watson OpenScale.
At IBM Think last year, the company demonstrated the SnapML library running on Power Systems servers and reported that it beat Google Cloud running ML on an advertising-focused dataset by 46x. Since then, the company has added automation features that scale out across a cluster and scale up across many-core CPUs and GPUs, which, per Gupta, is what enables SnapML to identify an accurate model and hyper-parameter configuration so quickly.
“Many users don’t realize how vast the open source machine learning catalogue is, and it can be quite challenging to identify the right tool for your particular data or desired outcome,” said Simon Thompson, Research Computing Infrastructure Architect at the University of Birmingham. “The automated model and library selection capabilities of SnapML greatly reduce the time required to parse through all of these tools, allowing users to begin ML training much more quickly.”
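The article does not detail SnapML’s actual API, but the automated model and hyper-parameter selection it describes follows a familiar pattern: sweep a grid of candidate configurations under cross-validation and keep the best. A minimal sketch of that pattern, using scikit-learn’s `GridSearchCV` purely as an illustrative stand-in (not SnapML’s interface), might look like this:

```python
# Illustrative sketch of automated hyper-parameter selection, the kind of
# search SnapML is said to automate. Uses scikit-learn as a stand-in;
# SnapML's own API is not described in this article.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A candidate pipeline; the grid below replaces hand-tuning of C.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# n_jobs=-1 "scales up" across local CPU cores; frameworks like SnapML
# additionally claim to scale out across cluster nodes and GPUs.
grid = GridSearchCV(pipe,
                    param_grid={"clf__C": [0.01, 0.1, 1.0, 10.0]},
                    cv=5, n_jobs=-1)
grid.fit(X_train, y_train)

best_C = grid.best_params_["clf__C"]
score = grid.best_estimator_.score(X_test, y_test)
```

The point of the sketch is the workflow, not the specific estimator: the user declares a search space once, and the framework handles model fitting, scoring, and selection in parallel.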
You may read the rest of this story at EnterpriseTech.