Tag: TPU

Google’s New Switch Transformer Model Achieves 1.6 Trillion Parameters, Efficiency Gains

Last year, OpenAI wowed the world with its eerily human language generator, GPT-3. The autoregressive model stood at a then-staggering 175 billion parameters, ten times higher than its predecessors. Now, Google is upping Read more…

Machine Learning Hits a Scaling Bump

Based on our reporting over the last year, you might conclude that machine learning technology has entered the mainstream with countless workloads in production. While developers are indeed moving up the machine-learning Read more…

In Search of a Common Deep Learning Stack

Web serving had the LAMP stack, and big data had its SMACK stack. But when it comes to deep learning, the technology gods have yet to give us a standard suite of tools and technologies that are universally accepted. Read more…

Training Time Slashed for Deep Learning

Fast.ai, an organization offering free courses on deep learning, claimed a new speed record for training a popular image database using Nvidia GPUs running on public cloud infrastructure. A pair of researchers trained Read more…

Google Unleashes TPUs on Cloud ML Engine

As the amount of machine learning training data soars, so too does demand for new tools that will accelerate the process. With that in mind, Google Cloud announced the beta release of a new feature that allows users to Read more…

H2O Ups AI Ante Via Nvidia GPU Integration

H2O.ai, the machine-learning specialist that unveiled its "Driverless AI" platform this past summer, is upgrading the system via integration with GPU specialist Nvidia's AI development system. Read more…

Nvidia’s Huang Sees AI ‘Cambrian Explosion’

Growing processing power and cloud access to the developer tools used to train machine-learning models are making artificial intelligence ubiquitous across computing platforms and data frameworks, insists Nvidia CEO Jensen Huang. Read more…

Datanami