April 18, 2019

Will Neural Nets Replace Science Writers?

(Vasilyev Alexandr/Shutterstock)

“Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.”

The sentence above was written by a natural language processing technique known as a rotational unit of memory, or RUM.

Welcome to the New New Journalism.

As AI algorithms are put to the test, most are found wanting. This is especially true in domains like natural language processing, where the output tends to be repetitive, brittle and a long way from the mellifluous tones of HAL, the malevolent exascale computer of 2001: A Space Odyssey infamy (voiced by Canadian actor Douglas Rain).

The aforementioned research team toils at the Massachusetts Institute of Technology (which should have been mentioned in RUM’s summary!). Among them is Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

While developing neural networks to assist with physics problems, the investigators realized the approach they had built for the physical world might also be useful for time-consuming natural language tasks like scanning and summarizing scientific papers. What they came up with, and what they describe in the journal Transactions of the Association for Computational Linguistics, is RUM.

“We would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics,” MIT physics professor Marin Soljačić told human science writer David L. Chandler. “We noticed that, hey, if we use that, it could actually help with this or that particular AI algorithm.”

The traditional neural networking approach to NLP usually involves techniques like gated recurrent units (GRU) and long short-term memory networks (LSTM). While those techniques have been tweaked over the years, such networks’ mimicry of human memory continues to fall short for applications like natural language processing.

Enter RUM. While GRU and LSTM rely on the multiplication of matrices to mimic the way humans learn, the MIT researchers point out that such neural networks continue to struggle to correlate information in large data sets.
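For readers who want to see what that matrix-multiplication recurrence looks like in practice, here is a minimal sketch. It uses a plain recurrent cell rather than a full GRU or LSTM, and every name, shape and value below is an illustrative assumption, not code from the MIT work.

```python
# A minimal sketch of the matrix-multiplication recurrence described above.
# This is a plain recurrent cell, not a full GRU or LSTM, and all names,
# shapes and values here are illustrative assumptions, not the MIT code.
import numpy as np

rng = np.random.default_rng(0)
dim = 8                                        # toy embedding size
W_h = rng.normal(scale=0.4, size=(dim, dim))   # recurrent weights
W_x = rng.normal(scale=0.4, size=(dim, dim))   # input weights

def rnn_step(h_prev, x_t):
    """One step: the memory is pushed through matrix multiplies and a
    squashing nonlinearity, so its signal can shrink or distort over
    long sequences."""
    return np.tanh(W_h @ h_prev + W_x @ x_t)

h = np.zeros(dim)                              # running memory
for x_t in (rng.normal(size=dim) for _ in range(50)):  # stand-in word vectors
    h = rnn_step(h, x_t)
print(np.linalg.norm(h))                       # memory norm after 50 multiplications
```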

In their physics research, they sought to overcome this neural net weakness by instead using a system based on what they described as “rotating vectors in multidimensional space.” In an NLP application, that means representing each word of text as a vector; each successive word in a string then swings the vector in a different direction in a theoretical space with many dimensions.

The final set of vectors can then be translated into a corresponding string of words. “RUM helps neural networks to do two things very well,” the researchers found. “It helps them to remember better, and it enables them to recall information more accurately.”
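Here is a correspondingly minimal sketch of the “rotating vectors” idea in the same toy setting. It is not the published RUM cell: the rotation construction, the fixed reference direction and the function names (rotation_matrix, encode_sentence) are assumptions chosen purely to illustrate the concept. One practical payoff is visible immediately: rotations preserve a vector’s length, so the running memory neither blows up nor fades away.

```python
# A minimal sketch of the "rotating vectors" idea. This is NOT the authors'
# published RUM cell: the rotation construction, dimensions and names are
# assumptions chosen only to illustrate the concept.
import numpy as np

def rotation_matrix(a, b):
    """Orthogonal matrix that rotates direction a onto direction b, acting
    only in the 2-D plane spanned by a and b and leaving the rest fixed."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    u = b - np.dot(a, b) * a                  # component of b orthogonal to a
    if np.linalg.norm(u) < 1e-12:             # degenerate case: no well-defined plane
        return np.eye(len(a))
    u = u / np.linalg.norm(u)
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    R = np.eye(len(a))
    R += np.sin(theta) * (np.outer(u, a) - np.outer(a, u))
    R += (np.cos(theta) - 1.0) * (np.outer(a, a) + np.outer(u, u))
    return R

def encode_sentence(word_vectors, dim=8):
    """Swing a running memory vector through one rotation per word.
    Rotations preserve length, so the memory neither explodes nor fades,
    and because rotations do not commute, word order changes the result."""
    reference = np.zeros(dim)
    reference[0] = 1.0                        # fixed reference direction
    memory = np.zeros(dim)
    memory[-1] = 1.0                          # arbitrary starting memory
    for w in word_vectors:
        R = rotation_matrix(reference, w)     # rotation determined by this word
        memory = R @ memory                   # apply it to the running memory
    return memory

rng = np.random.default_rng(0)
sentence = [rng.normal(size=8) for _ in range(5)]  # stand-in word embeddings
encoded = encode_sentence(sentence)
print(encoded, np.linalg.norm(encoded))            # norm stays at 1.0
```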

The output from RUM is the lead paragraph of this story.

We accept the challenge. Decide for yourself whether you prefer RUM’s summary or ours, presented in the true spirit of what we journalists call “burying the lead,” to wit:

MIT physics researchers have come across a way to improve AI algorithms through a variation on recurrent neural networks that promises to improve natural language processing for applications like scanning and summarizing scientific papers.

Recent items:

Deep Learning is Great, But Use Cases Remain Narrow

Mining New Opportunities in Text Analytics
