September 2, 2021

Unlocking the True Potential of ML: How Self-Supervised Learning in Language Can Beat Human Performance

Pieter Buteneers


A core goal for many organizations using artificial intelligence (AI) systems is to have them mirror human language and intelligence. However, mimicking human language and mastering its unique complexities continues to be one of AI’s biggest challenges.

According to IBM’s Global AI Adoption Index, nearly one in three IT professionals say their business is now using AI, with 43% reporting their company has accelerated its rollout of AI due to the pandemic. As more businesses implement AI systems, the technology’s limitations are also becoming clear, including the amount of data required to train machine learning (ML) algorithms and the limited flexibility of those algorithms in understanding human language.

Today, many AI applications in customer service rely on ML algorithms, which have proven essential as consumer behavior continues to shift. These algorithms can process information and automate conversations, letting businesses talk with their customers anytime, anywhere. As businesses move away from high-frequency, one-way communications and toward two-way conversations, ML algorithms will play an important role in the customer journey. A deeper understanding of human language, however, will be essential as organizations look to improve their interactions with customers.

It’s my belief that if AI systems can reach a deeper understanding of language than traditional data analysis allows, they will exceed human performance in language tasks. That would bring AI one step closer to human-level intelligence and transform how we engage with brands, businesses and organizations on a global scale. Thanks to self-supervised learning, ML techniques now have the power to make this happen.

Self-supervised machine learning reduces the need for high-quality, labeled data (Swill Klitch/Shutterstock)

What Is Self-Supervised Learning?

As babies, we learn about the world mainly through observation and trial and error. This paves the way for us to develop common sense and the ability to learn complex tasks such as driving a car. But how is it that humans can learn from just observing a few examples of a given task, and machine learning algorithms can’t? This is where self-supervised learning can help.

The technique typically involves taking an input dataset and concealing part of it. The self-supervised learning algorithm then analyzes the visible data and learns to predict the hidden remainder. In the process, it creates its own labels, which is what allows the system to learn. This opens up a huge opportunity to make better use of unlabeled data and helps organizations streamline data processes. With self-supervised learning, there is no need for a person to manually sift through enormous amounts of data and label it. The result is a data-efficient AI system that can analyze and process data without human intervention, eliminating the need for full “supervision.”
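To make that concrete, here is a minimal sketch in plain Python (my illustration, not from the article; the make_training_pair helper is hypothetical): hiding one token turns a completely unlabeled sentence into an (input, label) training pair, with the hidden token itself serving as the label.

    import random

    MASK = "[MASK]"

    def make_training_pair(sentence, rng):
        """Conceal one token; the concealed token becomes the label."""
        tokens = sentence.split()
        position = rng.randrange(len(tokens))  # pick a token to hide
        label = tokens[position]               # hidden data -> training target
        tokens[position] = MASK                # visible data -> model input
        return " ".join(tokens), label

    rng = random.Random(0)
    masked, label = make_training_pair("the cat sat on the mat", rng)
    print(masked, "->", label)  # e.g. "the cat sat on the [MASK] -> mat"

A real self-supervised model repeats this at scale over billions of sentences, but the key point is the same: the labels come for free from the data itself.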

Our brains, and certainly the brains of young children, are constantly trying to make sense of the world by predicting what will happen next. If the prediction doesn’t match reality, we are surprised and we learn. In a similar fashion, ML algorithms trained with self-supervised learning learn by filling in the gaps. These algorithms seem to pick up on common human cues and are able to beat human performance in language tasks.

Breakthroughs in Self-Supervised Learning: How Will This Revolutionize Deep Learning?


Self-supervised learning approaches have allowed for major advancements in natural language processing (NLP), which gives computers the ability to understand, write and speak languages the way humans do. The real breakthrough in NLP came when Google introduced the BERT model in 2018. Engineers recycled an architecture typically used for machine translation and trained it to learn the meaning of a word from its context in a sentence.
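As an illustration of that pre-training task in action (my assumption; the article names no toolkit): with the open-source Hugging Face transformers library, the pre-trained bert-base-uncased model will fill in a masked word from its sentence context, which is exactly the objective described above.

    # Assumes transformers and a PyTorch backend are installed.
    from transformers import pipeline

    # BERT's pre-training task: predict a hidden token from its context.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for candidate in unmasker("The bank raised interest [MASK] this quarter."):
        print(candidate["token_str"], round(candidate["score"], 3))

Notice that the model must use the whole sentence, not just neighboring words, to rank plausible completions such as "rates."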

NLP continues to break record after record in understanding human language: in the last two years, there have been more breakthroughs in NLP than in the previous four decades. These AI algorithms now beat human performance at identifying the topic of a text and finding the answer to an arbitrary question, doing so in more than 100 languages at once. Today, many chatbots utilize NLP technology to better meet customers’ needs. Due to the increase in mobile messaging, more companies are turning to chatbots and virtual assistants to answer customer questions in real time and increase engagement.
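A quick sketch of what that question-answering capability looks like in practice, again assuming the Hugging Face transformers library rather than anything the article specifies: an extractive QA pipeline pulls the answer span directly out of a passage.

    from transformers import pipeline

    # Downloads a default pre-trained extractive QA model on first use.
    qa = pipeline("question-answering")

    context = ("Self-supervised learning conceals part of an input dataset "
               "and trains a model to predict the hidden part from what "
               "remains visible.")
    print(qa(question="What does the model predict?", context=context))

The returned dictionary includes the answer text along with a confidence score and the character offsets of the span within the passage.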

Deep learning algorithms, a subset of ML, have evolved to recognize faces about as accurately as humans, and in some cases better. Even so, it took until 2015 to build an algorithm that could match human accuracy: Facebook’s DeepFace is 97.4% accurate, just shy of the 97.5% human benchmark. The FBI’s facial recognition algorithm, by contrast, reaches only 85% accuracy, meaning it is still wrong in more than one out of every seven cases.

While deep learning is a critical aspect of AI systems and has made significant strides in recent years, it requires large amounts of data to produce useful outputs. Self-supervised learning will play a critical role as we look to further reduce AI’s data dependency and move beyond the limitations of deep learning. More importantly, it will give AI systems the ability to act more human-like and understand language without the need for intervention. Achieving this milestone will unlock infinite possibilities in the world of ML; it’s just a matter of time.

About the author: Pieter Buteneers is an industrial and ICT-electronics engineer. He started his career in academia, first as a PhD student and later as a postdoc, doing research on Machine Learning, Deep Learning, Brain Computer Interfaces and Epilepsy. Together with a team of machine learners from Ghent University, he won first prize in the biggest Deep Learning competition of 2015, the National Data Science Bowl, hosted on kaggle.com. In the same year he gave a TEDx talk on Brain Computer Interfaces. In 2019 he became the CTO of Chatlayer.ai, a platform to build multilingual chatbots ‘who’ communicate on a human level. In 2020 Chatlayer.ai was acquired by Sinch, and Pieter now leads all Machine Learning efforts at Sinch as Director of Engineering in ML & AI.

Related Items:

Experts Disagree on the Utility of Large Language Models

One Model to Rule Them All: Transformer Networks Usher in AI 2.0, Forrester Says

Three Tricks to Amplify Small Data for Deep Learning
