September 6, 2017

Giving Machine Learning Freer Rein to Design Next-Generation Communications Protocols

James Kobielus


Protocol engineering is an abstruse but essential art. So much depends on a well-tuned protocol. If engineers build inefficiencies into the conversational framework that computers use to interoperate, those issues afflict every application that relies on it.

Communication protocols can facilitate any level of computer-to-computer communication. When designed for the application layer, protocols tend to assume the shape and substance of whatever real-world outcome humans are trying to achieve, whether it be to send and receive email, navigate a distributed hypermedia store, or upload and download files. For protocols to become ubiquitous in these applications, the marketplace needs to have settled on a common core of functionality that every deployment implements and for which a consensus interoperability framework can be identified.

As people begin to adopt virtual digital assistants, also known as chatbots, it would not be surprising to see standard communication protocols emerge in this area as well. Chatbots have already become pervasive in such disparate application domains as messaging, mobility, customer service, call centers, and help desks. Of course, it may not be easy to distill these myriad use cases down to a consensus interoperability framework that does functional justice to all.

Nevertheless, there is a strong need for standard design practices applicable to chatbot development, especially as users start to balk at the awkwardness of disparate conversational interface styles. Likewise, the more popular platforms may foster a lingua-franca RESTful API for developers to hook their applications into conversational user interfaces. And there may even be a practical need for a standard chatbot-to-chatbot protocol for those applications in which the bots need to communicate with each other in order to better serve their human users.
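
To make the idea concrete, here is a minimal sketch of what a single message in such a hypothetical chatbot-to-chatbot negotiation protocol might look like. The schema, field names, and intent vocabulary below are illustrative assumptions, not drawn from any published standard.

    import json
    from dataclasses import dataclass, asdict

    # Hypothetical message schema for a chatbot-to-chatbot negotiation exchange.
    # The field names and intent vocabulary are illustrative, not a real standard.
    @dataclass
    class NegotiationMessage:
        sender: str        # identifier of the originating bot
        recipient: str     # identifier of the receiving bot
        intent: str        # e.g., "propose", "counter", "accept", "reject"
        item: str          # what is being negotiated
        offer: float       # proposed price or quantity
        currency: str = "USD"

        def to_json(self) -> str:
            """Serialize for transmission over a REST-style endpoint."""
            return json.dumps(asdict(self))

    # A buyer bot proposes a price; a seller bot would parse the payload and
    # reply with a "counter" message in the same schema.
    msg = NegotiationMessage(sender="buyer-bot", recipient="seller-bot",
                             intent="propose", item="used-laptop", offer=250.0)
    print(msg.to_json())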

Facebook’s chatbots improvised on English in a research setting, but they didn’t create their own language (denvitruk/Shutterstock)

With that in mind, we should expect to see chatbot protocol standards emerge in e-commerce applications that involve multi-party transactions. It’s not inconceivable that chatbots will become primary agents for enabling buyers, sellers, and brokers to automatically negotiate the best deals for themselves, their employers, and their clients.

That’s why it was no surprise to learn that Facebook researchers were experimenting with application-layer chatbot negotiation protocols. Considering Facebook’s innovative R&D in machine learning (ML), it’s fascinating to study how they developed an ML-driven prototype protocol to help chatbots dynamically learn negotiating tactics that achieve optimal outcomes for their users. One noteworthy finding from the research was that, under some circumstances, chatbots may even dynamically improvise their own English-based dialect to facilitate negotiation.
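
As a rough illustration of the kind of outcome-focused learning involved, consider a toy reward computation for a negotiating bot in a multi-item split game: the bot is scored only on the value of what it secures, not on how it phrases its utterances. The game, the item values, and the function below are illustrative assumptions on my part, not Facebook’s implementation.

    # Illustrative only: an outcome-focused reward for a negotiating bot in a
    # toy multi-item split game. The bot holds private per-item values, and its
    # reinforcement-learning reward is simply the total value of whatever it
    # ends up with; the wording of its utterances earns nothing by itself.
    def negotiation_reward(agreed_split, private_values):
        """Return the value this bot assigns to its share of the agreed deal.

        agreed_split   -- dict mapping item name -> count allocated to this bot,
                          or None if the dialogue ended without a deal
        private_values -- dict mapping item name -> this bot's value per unit
        """
        if agreed_split is None:
            return 0.0
        return sum(count * private_values.get(item, 0.0)
                   for item, count in agreed_split.items())

    # Example: the bot values books highly and hats not at all.
    values = {"book": 3.0, "hat": 0.0, "ball": 1.0}
    deal = {"book": 2, "ball": 1}            # what the bot secured in the dialogue
    print(negotiation_reward(deal, values))  # 7.0

Optimizing a policy against a purely outcome-based reward like this leaves nothing to anchor the bots to standard English, which is one plausible route to the improvised dialect the researchers observed.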

What was unsettling was how the world reacted to this innovation. Some observers called Facebook’s prototype “creepy,” some cited the science-fiction specter of “machines [that] may rise up and turn against humans,” and a few overstated it as “robots creating their own language.” In many circles, Facebook’s cancellation of the R&D project was regarded as a tacit admission that, Frankenstein-style, they’d spawned an evil race of bots conspiring amongst themselves to enslave their human creators.

All of that is patently ridiculous. Much of the overreaction came from the same circles that stigmatize AI in general, while some expressed the same anti-Zuckerberg snark that surrounded 2014’s “Facebook mood manipulation” controversy.

Be that as it may, Facebook’s project cancellation had nothing to do with some unhinged fear of “robots running amok.” Instead, it stemmed from the fact that the chatbots’ ML-improvised dialect had become opaque to its developers. As the ML-driven evolution of the chatbot dialect drifted into gibberish, it became useless for its core function of representing users’ interests in negotiating contexts. There’s no point in calling something a “personal assistant” if we can’t decipher what it’s ostensibly doing on our behalf.

Actually, Facebook’s project had a lot of merit, perhaps as a harbinger of a new era in ML-assisted protocol engineering. Had they kept the project alive, they could have leveraged its findings in any of the following follow-on initiatives:

“Hal-oo, pleased ter meet you.” Cockney slang’s emergence provides a model for bots’ tendency to experiment with language (Anneka/Shutterstock)

  • Tweak the chatbot-language-generation ML, perhaps by revising the underlying reward function to keep bots steered away from exploiting non-transparent dialectal options in pursuit of intended outcomes (a sketch of one such reward tweak follows this list)
  • Allow chatbots to improvise their own language, albeit with an added layer of ML-driven logic designed to maintain full-fidelity translation of bots’ made-up dialect into plain human-readable language
  • Reframe chatbot dialectal opacity as a steganographic confidentiality feature to be exploited by teams of partnering bots in the service of a common purpose within a public environment (i.e., similar to how Cockney rhyming slang got its start long ago in London’s East End)
  • Explore how ML-driven approaches might serve as the foundation for adaptive protocols that trade off conversational transparency and a priori structure in pursuit of some outcome-related reward function
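
On the first of those bullets, here is a minimal sketch of one way the reward function might be revised to discourage non-transparent dialect: blend the task reward with the average log-likelihood of the bot’s utterances under a fixed English language model, so that gibberish drags the score down. The weighting term and the language-model helper are hypothetical placeholders, not any published system’s API.

    # Illustrative sketch of the first bullet above: anchor the bot's utterances
    # to human-readable English by mixing the task reward with the average
    # log-likelihood of its tokens under a fixed English language model.
    # `lang_weight` and `english_lm_logprob` are hypothetical placeholders.
    def shaped_reward(task_reward, utterance_tokens, english_lm_logprob,
                      lang_weight=0.5):
        """Blend negotiation outcome with linguistic plausibility.

        task_reward        -- scalar outcome reward (e.g., value of the deal)
        utterance_tokens   -- tokens the bot emitted during the dialogue
        english_lm_logprob -- callable returning log P(token) under a fixed
                              English language model
        lang_weight        -- how strongly to penalize dialect drift
        """
        if not utterance_tokens:
            return task_reward
        # Gibberish dialects score very low under the English LM, dragging the
        # shaped reward down even when the raw task reward is high.
        avg_ll = sum(english_lm_logprob(t) for t in utterance_tokens) / len(utterance_tokens)
        return task_reward + lang_weight * avg_ll

    # Toy usage with a stand-in "language model" that just penalizes token length.
    fake_lm = lambda tok: -0.1 * len(tok)
    print(shaped_reward(7.0, ["i", "want", "two", "books"], fake_lm))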

It would be interesting to see what sort of protocols ML-driven chatbots would develop on their own for the lower layers of the communications stack. There’s the off-chance that, if trained with an outcome-focused reward function (e.g., finding the lowest-latency path between any two intelligent edge devices under various end-to-end network-loading scenarios while maintaining payload transparency and message traceability), the bots would improvise solutions (e.g., adaptive syntaxes, ad-hoc message flows, etc.) that might never have occurred to human protocol engineers.
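
To ground that speculation a little, here is a sketch of the kind of outcome-focused reward function just described, applied to a lower-layer routing scenario: reward low end-to-end latency, and penalize undelivered or untraceable messages. The function and its penalty constants are illustrative assumptions, not a real protocol specification.

    # Illustrative sketch of an outcome-focused reward for a routing agent at a
    # lower layer of the stack: prefer low end-to-end latency, but penalize
    # paths that drop the payload or break message traceability. The constants
    # are arbitrary placeholders.
    def routing_reward(path_latency_ms, delivered, trace_complete,
                       undelivered_penalty=100.0, trace_penalty=25.0):
        """Higher is better: negative latency, minus penalties for failures."""
        if not delivered:
            return -undelivered_penalty
        reward = -path_latency_ms
        if not trace_complete:
            reward -= trace_penalty
        return reward

    # An agent comparing candidate paths under the current network load would
    # favor the one with the higher reward.
    print(routing_reward(12.5, delivered=True, trace_complete=True))   # -12.5
    print(routing_reward(8.0, delivered=True, trace_complete=False))   # -33.0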

Just as we’re going to entrust ML-driven bots to drive our cars, there may come a time when the human race will give bots freer rein to steer the protocols that drive our clouds.

About the author: James Kobielus is SiliconANGLE Wikibon’s lead analyst for Data Science, Deep Learning, and Application Development.

