Snap ML Bests TensorFlow in Benchmark, IBM Says
There are already many machine learning libraries available – some say too many. But there might be room for one more: IBM's Snap Machine Learning (Snap ML), which the company says ran 46 times faster than TensorFlow in a recent logistic regression test.
IBM unveiled the benchmark, which shows Snap ML – a library IBM Research in Zurich has been developing for the past two years – trouncing TensorFlow, at its inaugural THINK conference, taking place in Las Vegas this week.
The Criteo benchmark cited by IBM measures how long it takes a machine learning library to train a logistic regression model on an advertising dataset with more than 4 billion training samples. IBM says Snap ML trained the model in 91.5 seconds, compared with the 70 minutes Google previously reported as its best TensorFlow result on the same model.
On the hardware front, Snap ML ran on four Power9 servers with 16 Nvidia Tesla V100 GPUs. The TensorFlow system ran on 89 machines on the Google Cloud Platform.
IBM is obviously bullish on its new Power9 hardware, which it hopes will make a big splash for AI workloads. IBM says that when datasets grow into billions of training examples or features, the training of “even relatively simple models becomes prohibitively time consuming.”
IBM says three characteristics separate Snap ML from the rest of the pack: distributed training, GPU acceleration, and support for sparse data structures. While the software excels at training models on large datasets, IBM says it can be useful wherever training time becomes a bottleneck.
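To see why sparse-data support matters for this kind of workload, consider that click-through datasets like Criteo's are mostly one-hot-encoded categorical features, so almost every matrix entry is zero. The sketch below is not Snap ML's API – it is a toy logistic regression in plain NumPy/SciPy, with an invented synthetic dataset and hyperparameters, meant only to illustrate how sparse matrix operations keep each training step proportional to the number of nonzero entries rather than the full matrix size.

```python
# Toy logistic regression on sparse data, illustrating the workload class the
# Criteo benchmark measures. NOT Snap ML code; all sizes and settings are
# illustrative assumptions.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Synthetic sparse feature matrix: 1,000 samples, 10,000 features, ~0.1%
# nonzero, mimicking one-hot-encoded click-through data.
n_samples, n_features = 1000, 10_000
X = sparse.random(n_samples, n_features, density=0.001,
                  random_state=0, format="csr")
true_w = rng.normal(size=n_features)
y = (X @ true_w > 0).astype(np.float64)  # synthetic binary labels

# Batch gradient descent on the logistic loss. Because X is CSR, each
# matvec touches only the ~10,000 stored nonzeros, not all 10 million cells.
w = np.zeros(n_features)
lr = 1.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    grad = X.T @ (p - y) / n_samples     # gradient of the logistic loss
    w -= lr * grad

train_accuracy = np.mean(((X @ w) > 0) == (y == 1))
```

At billions of samples the same loss and gradient are partitioned across nodes and GPUs, which is where Snap ML's distributed training and GPU acceleration come in; the arithmetic per step, however, is exactly what this sketch shows.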
You can read more about IBM Snap ML’s benchmark test results on the IBM Research blog.