June 8, 2017

5 Things AI Is Better At Than You


Your mother was right: you are special. While each of us is a perfect little snowflake in our own right, that doesn’t necessarily mean we possess world-shaking skills. But back in the lab, data scientists are cranking out algorithms that exceed human capability on a regular basis.

About a year ago, Facebook CEO Mark Zuckerberg predicted that artificial intelligence (AI) would generally surpass humans in core sensory capabilities (like seeing and hearing) in about five to 10 years. AI still can’t “actually look at the photo and deeply understand what’s in it or look at the videos and understand what’s in it,” he said at the time.

Zuck is right: deep learning approaches still can’t match humans step for step across the board. But in a few targeted areas, AI has already left humans in the dust. Here are five of them:

Predicting Heart Attacks

According to a May story in IEEE Spectrum, AI can be better at predicting heart attacks than doctors' standard methods.

Researchers at the University of Nottingham in the UK found that their machine learning models were more accurate than the standard method at predicting which patients would have a heart attack within the next 10 years.

Doctors may soon rely on AI to help detect conditions (Guschenkova/Shutterstock)

A neural network was trained on actual patient healthcare records, which contained data on individuals' medical conditions, prescription drugs, lab results, hospital visits, and demographics. Working with a data set of records for nearly 400,000 individuals, the researchers used 75% of the data to train the model and reserved the remaining 25% for testing its accuracy.

About 7,400 patients in the test dataset had heart attacks. Out of those patients, the model accurately predicted heart attacks for about 5,000 of them. The standard method relied upon by doctors predicted 355 fewer heart attacks, which means the AI was roughly 5 percentage points more accurate than the conventional approach.
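The story doesn't spell out the model internals, but the basic workflow it describes — split the records 75/25, fit a model, then count how many of the actual heart attacks in the held-out set the model flags — looks roughly like the sketch below. The data, features, and classifier here are placeholders for illustration, not details from the Nottingham study.

```python
# Minimal sketch of the 75/25 split-and-evaluate workflow described above.
# The data, feature columns, and classifier are stand-ins, not details
# from the Nottingham study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 10_000  # stand-in for the ~400,000 real patient records
X = rng.normal(size=(n, 8))  # e.g., age, blood pressure, lab results...
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 2).astype(int)  # 1 = heart attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)  # 75% train, 25% held out for testing

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=42)
model.fit(X_train, y_train)

# Of the patients in the test set who actually had heart attacks, how many
# did the model flag? This is the sensitivity figure the article compares:
# roughly 5,000 of 7,400 for the AI.
pred = model.predict(X_test)
true_positives = ((pred == 1) & (y_test == 1)).sum()
print(f"caught {true_positives} of {(y_test == 1).sum()} events")
```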

Playing Poker

Poker may seem like a decidedly human game, one that hinges on the believability of one’s “poker face” and the ability to see through a bluff. Surely, no faceless algorithm could win when some of the cards are face down on the table?

Is AI poised to win all the chips? (Fer Gregory/Shutterstock)

Actually, machine learning excels in this vein, too. According to an April Bloomberg story, an AI system powered by a supercomputer beat some of the best human poker players in a five-day tournament of no-limit Texas Hold ‘Em in a Chinese casino, claiming $290,000 in prize money.

The AI, dubbed Lengpudashi (or “cold poker master” – whoever said computer scientists lack a sense of humor?), completely annihilated its challengers over 36,000 hands of poker. It wasn’t even close.

“People think that bluffing is very human — it turns out that’s not true,” said Carnegie Mellon computer science PhD student Noam Brown, who developed Lengpudashi with computer science professor Tuomas Sandholm. “A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money.”
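Brown's point can be made with a back-of-the-envelope expected-value calculation. The numbers below are invented for illustration and have nothing to do with Lengpudashi's actual strategy; they simply show that if opponents fold often enough, betting a weak hand earns more on average than checking it.

```python
# Toy expected-value comparison: checking vs. bluffing with a weak hand.
# All numbers are illustrative assumptions, not Lengpudashi's strategy.
pot = 100             # chips already in the pot
bluff_bet = 75        # size of our bluff
fold_prob = 0.6       # assumed chance the opponent folds to the bet
win_if_called = 0.05  # our weak hand rarely wins a showdown

ev_check = win_if_called * pot  # just show down the weak hand
ev_bluff = (fold_prob * pot     # opponent folds: we take the pot
            + (1 - fold_prob) * (win_if_called * (pot + bluff_bet)
                                 - (1 - win_if_called) * bluff_bet))

print(f"EV of checking: {ev_check:.1f} chips")   # 5.0 with these numbers
print(f"EV of bluffing: {ev_bluff:.1f} chips")   # 35.0 with these numbers
```

With these made-up inputs, the bluff is worth several times more than the check, which is exactly the kind of pattern a machine can discover from experience without any notion of a poker face.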

Lengpudashi was an upgraded version of Libratus, another tournament-winning poker AI developed at Carnegie Mellon. The prize money will go to Strategic Machine, a firm founded by Sandholm and Brown.

Detecting Musical Genres

Machine learning algorithms are wonderful at detecting very subtle patterns buried in data that may correlate with real-world phenomena that impact humans. When enough of these correlations pile up, we start believing in their predictive power. This cold, scientific approach works well in all kinds of data-rich environments, but surely it doesn’t translate into the world of art, right?

Deep learning exceeds classical methods of detecting musical genres (image courtesy Cambridge Consultants)

But it turns out the algorithms may have a finer eye or ear than we sometimes give them credit for. Recently the machine learning outfit Cambridge Consultants developed a deep learning model that it says can differentiate among musical genres more accurately than a human.

The music-loving neural network created by Cambridge Consultants was better at sorting the music a live pianist was playing into baroque, classical, ragtime, and jazz categories. The company says it “overwhelmingly outperformed conventional hand-coded software, painstakingly written by humans.”
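The article doesn't describe Cambridge Consultants' architecture, but a common way to attack this kind of problem is to turn short audio clips into spectrograms and feed them to a small convolutional network that scores each genre. The sketch below, using TensorFlow/Keras and random stand-in data, illustrates that generic recipe rather than the company's actual model.

```python
# Generic sketch of spectrogram-based genre classification with a small CNN.
# This illustrates the general approach, not Cambridge Consultants' model.
import numpy as np
import tensorflow as tf

NUM_GENRES = 4  # baroque, classical, ragtime, jazz

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),  # mel-spectrogram "image" of a clip
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_GENRES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data; in practice the spectrograms would come from labeled recordings.
X = np.random.rand(64, 128, 128, 1).astype("float32")
y = np.random.randint(0, NUM_GENRES, size=64)
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:1]).round(2))  # probability for each of the four genres
```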

“I think the deep learning system performs better because it’s had a dispassionate look at quite a lot of audio material,” Monty Barlow, director of Machine Learning at Cambridge Consultants, says in a video posted on the company’s website. “It’s found the best ways of detecting one genre from another without any prejudice or bias. It’s strangely more human-like in its capabilities than our human engineers are with their classical engineering approach.”

Reading Lips

Deep learning is powering huge gains in the field of computer vision, driven largely by research into autonomous vehicles that can “see” the world around them. We’re seeing progress in facial recognition, too.

LipNet showed the promise of automated lipreading (image courtesy Google, CIFAR, and the University of Oxford)

While computers don’t exceed their human counterparts in street-sign reading or face-matching capability, they are outpacing us in one related area: lipreading.

The truth is that humans aren’t particularly good at lipreading out of the box. The average person can correctly identify words only about 20% of the time. People with hearing impairment, however, are much better lip readers, with an error rate of around 50% – that is, they get roughly half of the words right.

Could a deep learning approach improve on that? The answer may be yes.

Researchers from Google, CIFAR, and the University of Oxford recently published a paper on LipNet, a deep neural network designed to identify the words spoken by people based on an analysis of a video of the person speaking.

According to the researchers, LipNet “greatly outperforms” the human lipreading baseline, delivering an error rate of only about 5% – a nearly 3X advantage over the previous top word-level lipreading system. Next, the researchers plan to extend LipNet’s visual approach by combining it with audio-based speech recognition to create a jointly trained audio-visual system.
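The LipNet paper describes a network that runs spatiotemporal (3D) convolutions over the video frames and feeds the result to recurrent layers, trained with a CTC loss so it can emit whole sentences. A heavily simplified skeleton of that idea – with the CTC training machinery and most of the paper's details omitted, and the layer sizes here chosen for illustration – might look like this:

```python
# Simplified skeleton of a LipNet-style lipreading network: 3D convolutions
# over mouth-region video frames, then a recurrent layer emitting a character
# distribution per timestep. The real LipNet is trained with a CTC loss,
# which is omitted from this sketch.
import tensorflow as tf

FRAMES, H, W = 75, 50, 100  # a short mouth-region video clip
VOCAB = 28                  # letters + space + blank

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES, H, W, 1)),
    tf.keras.layers.Conv3D(32, (3, 5, 5), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=(1, 2, 2)),
    tf.keras.layers.Conv3D(64, (3, 5, 5), padding="same", activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=(1, 2, 2)),
    # Flatten the spatial dimensions but keep the time axis for the RNN.
    tf.keras.layers.Reshape((FRAMES, -1)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(128, return_sequences=True)),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # per-frame character scores
])
model.summary()
```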

Identifying Tattoos

We all know that neural networks are great at identifying relatively simple objects, like pictures of cats or dogs. The difficult part has been getting computer vision to work in real-world conditions. Tattoo detection, it would seem, would be a perfect testbed for more advanced AI.

The jury is still out on automated tattoo detection (Dean Drobot/Shutterstock)

Driven by the Federal Bureau of Investigation’s need for better tattoo tracking of criminals, suspects, and witnesses, in 2015 the National Institute of Standards and Technology (NIST) initiated a test to develop better tattoo-detection algorithms.

Under the effort, dubbed the Tattoo Recognition Technology Program, NIST has held a series of tattoo-detection tests using a database of tattoo pictures taken of prisoners. Early results show some promise: the algorithms generally scored about 90% accuracy on tasks such as detecting whether a given image contained a tattoo and identifying the same tattoo on the same person over a span of time.
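NIST evaluates algorithms submitted by outside researchers and vendors, so there is no single canonical method behind those numbers. Still, the simplest task in the series – deciding whether an image contains a tattoo at all – can be framed as plain binary image classification, sketched generically below with a convolutional backbone and stand-in data; this is illustrative only, not an algorithm NIST actually tested.

```python
# Generic sketch of the "does this image contain a tattoo?" task as binary
# image classification. Illustrative only; not a NIST-evaluated algorithm.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False,
    weights=None)  # in practice, weights="imagenet" for transfer learning
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(image contains a tattoo)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in batch; real training would use labeled images with and without tattoos.
X = np.random.rand(8, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=8)
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:2]))
```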

Eventually, the FBI could have an automated tattoo identification and tracking system, which could conceivably improve its law enforcement mission; the civil rights impact of such a system is already being debated.

There’s still work to be done with the tats, however. The algorithms generally fared poorly at matching similar tattoos on different people and matching tattoos across different media. It would appear that humans may not be replaced by the machines so soon, after all.

Related Items:

Deep Learning Is About to Revolutionize Sports Analytics. Here’s How

Machine Learning, Deep Learning, and AI: What’s the Difference?

AI to Surpass Human Perception in 5 to 10 Years, Zuckerberg Says
