August 22, 2013

This is Your Brain on GPUs

Alex Woodie

By now, we were told, we’d each have an intelligent robot assistant to perform all the boring and repetitive tasks for us, freeing us to live a life of leisure. While that Jetsons-esque future never quite materialized, recent breakthroughs in machine learning and GPU performance are enhancing our lives in other ways.

The fields of machine learning and artificial intelligence have been fraught with false starts and failed projects. After all, researchers don’t yet fully understand how the human brain works, so how could they be expected to replicate its processes in a machine?

While biological scientists create increasingly refined and accurate models of the human brain, computer scientists have been hard at work developing new approaches to machine learning, the field of artificial intelligence in which machines essentially program themselves based on the data they collect.
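To make that definition concrete, here is a minimal sketch of a machine “programming itself” from examples. It uses scikit-learn, an assumption on our part (the article names no particular library), and the tiny labeled dataset is invented purely for illustration:

    # Instead of hand-coding rules, the classifier derives its own decision
    # boundary from labeled examples: the "self-programming" step.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical labeled data: two features per example, two classes
    X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
    y = [0, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X, y)                       # learn parameters from the data
    print(model.predict([[0.85, 0.75]]))  # -> [1]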

According to a recent blog post on Nvidia.com, these approaches are already paying dividends in the real world. “Thanks to a combination of recent algorithmic breakthroughs and the high performance of GPUs,” writes Nvidia product management intern Yanning Li, “researchers have seen dramatic improvements in accuracy for machine learning problems for services that are more than just lab experiments.”

Li presents several examples of how these “artificial brains” powered by GPUs are helping humankind. Google is at the forefront of GPU adoption, and the company reportedly uses GPU-equipped servers to deliver a variety of services, including its Web search, Google Maps Street View, and Android’s voice-recognition app.

Earlier this year, Baidu, considered the “Chinese Google,” rolled out a new visual search service that enables users to search the Web using images alone. The service uses GPUs to train its neural networks, though it serves the actual visual searches on traditional CPUs.
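That split, training on GPUs and serving on CPUs, is a common pattern. Below is a minimal sketch of the idea, written with PyTorch as an assumption (the article does not say what software Baidu uses), using a toy model and random stand-in data:

    import torch
    import torch.nn as nn

    # Train on the GPU when one is available
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(100):                       # toy training loop, random data
        x = torch.randn(16, 64, device=device)
        y = torch.randint(0, 10, (16,), device=device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    # Serve on the CPU: move the trained weights off the GPU for inference
    model = model.to("cpu").eval()
    with torch.no_grad():
        scores = model(torch.randn(1, 64))     # runs entirely on the CPU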

Nuance, the company that has been at the forefront of voice recognition, is also using GPUs to power its service. In June, the company announced that it’s working with researchers at Stanford University to build the world’s largest neural network to model how the human brain learns.

Nuance says it trains its neural network models to understand users’ speech by using terabytes of audio data. “Once the models are trained, they can then recognize the pattern of spoken words by relating them to the patterns that the model learned earlier,” the company says in a June blog post.

Microsoft also used a combination of GPUs and advanced algorithms to develop the body-detection capabilities of Kinect, the controller-free interface that lets users interact with the Xbox 360 video game console with a wave of a hand.

“Kinect takes a stream of images coming off the camera and quickly works out where the joints in your body are in 3-D. It can use that to animate characters and to manipulate objects on the screen,” said Jamie Shotton of Microsoft Research in a 2011 blog post.
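As a rough illustration of that pipeline, the sketch below labels each depth pixel as a body part and takes the 3-D centroid of a part’s pixels as a joint estimate. The per-pixel classifier is left as a placeholder; Microsoft’s actual system trains randomized decision forests on large sets of depth images, which this sketch does not reproduce:

    import numpy as np

    def estimate_joint(depth_image, classify_pixel, part_id):
        """Return a crude 3-D joint estimate: the centroid of the pixels
        that the per-pixel classifier assigns to body part `part_id`.
        `classify_pixel` is a hypothetical stand-in for the trained model."""
        points = []
        height, width = depth_image.shape
        for v in range(height):
            for u in range(width):
                if classify_pixel(depth_image, u, v) == part_id:
                    z = depth_image[v, u]
                    points.append((u * z, v * z, z))  # simplified back-projection
        return np.mean(points, axis=0) if points else None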

Related items:

GPUs Push Big Data’s Need for Speed 

Fuzzy Thinking about GPUs for Big Data 

The GPU “Sweet Spot” for Big Data 
