September 4, 2019

Google Adds ‘Structured Signals’ to Model Training

An effort to bring structure and meaning to huge volumes of varied data is being used to improve training of neural networks.

The technique, dubbed Neural Structured Learning (NSL), attempts to leverage what developers call “structured signals.” In model training, those signals represent the connections or similarities among labeled and unlabeled data samples. Capturing those signals during neural network training is said to boost model accuracy, especially when labeled data is scarce.

NSL developers at Google (NASDAQ: GOOGL) reported this week that their framework can be used to build more accurate models for machine vision, language translation and predictive analytics.

Structured signals used in NSL can take the form of explicit graphs, or they can be implicit, as in adversarial machine learning, a technique used to train neural networks to spot intentionally deceptive data or behaviors. The latter approach, also known as adversarial perturbation, is known to strengthen models against malicious inputs designed to mislead model predictions.

Along with adversarial learning, NSL developers said their approach also uses neural graph learning. The framework, which runs on TensorFlow, includes APIs and other tools intended to train models with structured signals. The toolkit includes Keras APIs that enable both explicit and implicit training, along with TensorFlow functions and tools used to build graphs for training.
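
As an illustration of the implicit path, here is a minimal sketch that wraps an ordinary Keras model with NSL’s adversarial regularization. It assumes the public neural_structured_learning Python package; the architecture, the feature and label names (‘feature’, ‘label’) and the hyperparameter values are illustrative choices, not details from the article.

    import tensorflow as tf
    import neural_structured_learning as nsl

    # An ordinary Keras model; the input name must match the feature key below.
    inputs = tf.keras.Input(shape=(28, 28), name='feature')
    x = tf.keras.layers.Flatten()(inputs)
    x = tf.keras.layers.Dense(128, activation='relu')(x)
    outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
    base_model = tf.keras.Model(inputs, outputs)

    # Configure the implicit structured signal: adversarial perturbations.
    adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)

    # Wrap the base model; training also penalizes errors on perturbed inputs.
    adv_model = nsl.keras.AdversarialRegularization(
        base_model, label_keys=['label'], adv_config=adv_config)
    adv_model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])

    # The wrapper consumes features and labels packed into one dictionary.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    adv_model.fit({'feature': x_train.astype('float32') / 255.0, 'label': y_train},
                  batch_size=32, epochs=2)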

Since structured signals are incorporated only during training, the basic inference workflow remains unchanged.
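
Continuing the hypothetical sketch above, the wrapper is simply dropped at prediction time and the plain Keras model is used as before:

    # Inference is unchanged: predictions come from the unwrapped base model.
    predictions = base_model.predict(x_test.astype('float32') / 255.0)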

NSL has been widely used at Google to improve model performance, including for learning image semantic embeddings. That approach involves using graphs to learn from granular semantic labels applied to data.

The graphs used in NSL to train neural networks come in various forms, ranging from knowledge graphs to genomic data and medical records. “NSL allows TensorFlow users to easily incorporate various structured signals for training neural networks, and works for different learning scenarios: supervised, semi-supervised and unsupervised (representation) settings,” Google researchers noted in a blog post.
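
The explicit, graph-based path looks much the same from the Keras side. The sketch below is again an assumption-laden illustration rather than the article’s own example: it wraps a Keras model (such as the base_model defined earlier) with a graph regularization configuration, and the training batches must also carry neighbor features, produced for instance with the graph tools sketched after the next paragraph.

    import neural_structured_learning as nsl

    # Configure the explicit structured signal: a graph over training examples.
    graph_reg_config = nsl.configs.make_graph_reg_config(
        max_neighbors=2,   # neighbors consulted per example
        multiplier=0.1,    # weight of the graph regularization loss term
        distance_type=nsl.configs.DistanceType.L2)

    # Wrap any Keras model (e.g. base_model above); training now also penalizes
    # divergence between an example's output and its graph neighbors' outputs.
    graph_model = nsl.keras.GraphRegularization(base_model, graph_reg_config)
    graph_model.compile(optimizer='adam',
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])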

In some cases, the developers noted, no explicit structure such as a graph will be available for model training. In that case, NSL tools allow developers to construct graphs from unstructured data. The framework also includes APIs to “induce” adversarial examples as implicit structured signals.
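
A minimal sketch of that graph-building workflow, assuming TFRecord files of tf.train.Example protos carrying ‘id’ and ‘embedding’ features; the file paths, similarity threshold and neighbor count below are placeholders:

    import neural_structured_learning as nsl

    # Build a similarity graph from precomputed embeddings; examples whose
    # embeddings are similar enough become neighbors in the output edge list.
    nsl.tools.build_graph(['/tmp/embeddings.tfr'],
                          '/tmp/graph.tsv',
                          similarity_threshold=0.8)

    # Augment each labeled example with features from its graph neighbors so
    # the GraphRegularization wrapper can consume them during training.
    nsl.tools.pack_nbrs('/tmp/labeled.tfr',
                        '/tmp/unlabeled.tfr',
                        '/tmp/graph.tsv',
                        '/tmp/train_with_nbrs.tfr',
                        add_undirected_edges=True,
                        max_nbrs=2)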

Intentionally confusing the model during training is said to build up defenses that make the model more resistant to malicious inputs. The developers also assert that this implicit form of training with adversarial examples can improve model accuracy when subtle but malicious perturbations are introduced.

Details of the NSL framework are available through TensorFlow, the library used to develop and train machine learning models. Code for NSL on TensorFlow is here.

Recent items:

Scrutinizing the Inscrutability of Deep Learning

Facing Up to Image Fakery
