December 10, 2018

The Smart Object Ecosystem Is The New AI Workbench

James Kobielus

AI is becoming so interwoven into the physical world that it’s less and less of a surprise when some inanimate object spontaneously interacts with us as if it had a mind of its own.

Autonomous operation is AI-driven practical magic of the highest order. Of course, we’re seeing more of it in the real world in conjunction with self-driving vehicles, smart drones, android-like robots, and intelligent consumer goods in the Internet of Things (IoT) that embed Alexa, Bixby, Siri, and other AI-powered digital assistants.

As this trend intensifies, more data scientists will begin to litter their physical workspace with a menagerie of AI-infused devices for demonstration, prototyping, and even production development purposes. We’re moving toward a world in which IoT edge devices become the predominant workbench for advanced AI applications that can operate autonomously.

Going forward, the AI DevOps ecosystem will evolve to accelerate the DevOps workflow that graduates smart objects into production deployments. AI practitioners will shift toward new machine learning-oriented workbenches that execute all or most DevOps pipeline functions, including distributed training, in the smart objects themselves.

Driving this trend are the following characteristics of the cloud-to-edge ecosystem in which the IoT has taken root:

  • Constrained network capacity: Many IoT devices remain limited by slow, bandwidth-poor wireless connections, and so will need to rely heavily on their local AI resources for autonomous and other AI-facilitated operations.


  • Distributed edge capacity: More consumer devices have sufficient embedded processing, storage, and memory resources for many AI workloads, and building, training, and deploying AI models locally can greatly reduce the amount of data that must be sent to the cloud or to other servers and edge devices.
  • Improving edge-processing price-performance: It’s often faster, better, and cheaper for embedded AI frameworks to process terabytes of streaming video and other sensed data locally on the device than to send it up to the cloud.

These thoughts went through my head at Amazon Web Services’ recent re:Invent 2018 conference. What I found most noteworthy in the blizzard of product announcements there was AWS’ deepening investment in the tooling, platforms, libraries, and even physical devices needed to develop autonomous AI devices of all types.


At the heart of many of AWS’ AI announcements was the theme of reinforcement learning (RL). The term refers to the methodology, algorithms, and workflows that have historically been applied to robotics, gaming, and other development initiatives in which an AI agent is built and trained in a simulator, learning by trial and error from a reward signal. Beyond those use cases, RL is increasingly being used to supplement supervised and unsupervised learning in many deep learning initiatives.
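
To make the mechanics concrete, here is a minimal, generic illustration of the RL loop: an agent repeatedly acts in an environment, observes a reward, and updates a table of action values. This is a toy tabular Q-learning sketch written for this article, not AWS code; the corridor environment and hyperparameters are placeholders chosen purely for brevity.

    # Minimal tabular Q-learning sketch (illustrative only, not AWS code).
    # Toy "corridor" environment: the agent starts at cell 0 and earns a
    # reward of +1 when it reaches the rightmost cell.
    import random

    N_STATES = 5          # cells 0..4; cell 4 is the goal
    ACTIONS = [-1, +1]    # move left or right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

    # Q-table: one value estimate per (state, action) pair
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Apply an action and return (next_state, reward, done)."""
        nxt = min(max(state + action, 0), N_STATES - 1)
        return (nxt, 1.0, True) if nxt == N_STATES - 1 else (nxt, 0.0, False)

    for episode in range(500):
        state, done = 0, False
        while not done:
            # Epsilon-greedy selection: explore occasionally, otherwise exploit.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt

    # After training, the learned policy should move right in every cell.
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})

The same pattern, scaled up to deep neural networks and physics simulators, is what the services described below manage on the developer’s behalf.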

To see the emerging shape of the RL-driven AI development ecosystem, let’s consider the relevant solution announcements that AWS made at its most recent re:Invent:

  • RL in edge-AI modeling and training: To support developers who may never have applied RL to an AI project, AWS announced general availability of SageMaker RL, a new module for its managed data-science toolchain. AWS’ launch of SageMaker RL shows that the mainstreaming of RL is picking up speed: this new fully managed service is the cloud’s first managed RL offering for AI development and training pipelines. It enables any SageMaker user to build, train, and deploy robotics and other AI models through any of several built-in RL frameworks, including Intel Coach and Ray RLlib, and it can leverage any of several simulation environments, including Simulink and MATLAB. (A brief usage sketch appears after this list.)
  • RL in edge-AI simulation: SageMaker RL integrates with the newly announced AWS RoboMaker managed service, which provides a simulation platform for RL in intelligent robotics projects. RoboMaker provides an AWS Cloud9-based robotics integrated development environment for modeling and large-scale parallel simulation, and it extends the open-source Robot Operating System (ROS) with connectivity to such AWS services as machine learning, monitoring, and analytics, enabling robots to stream data, navigate, communicate, comprehend, and learn. It works with the OpenAI Gym RL environment as well as with the Amazon Sumerian mixed-reality solution. (A simulation-job sketch appears after this list.)

  • RL in AI DevOps: With RoboMaker, AI robotics developers can start application development with a single click in the AWS Management Console, and the service automatically provisions trained models into production in the target robotics environment in edge or IoT infrastructure. AWS RoboMaker supports over-the-air robotics fleet application deployment, update, and management in integration with AWS Greengrass. AWS RoboMaker cloud extensions for ROS include Amazon Kinesis Video Streams ingestion, Amazon Rekognition image and video analysis, Amazon Lex speech recognition, Amazon Polly speech generation, and Amazon CloudWatch logging and monitoring. The new AWS IoT SiteWise, available in preview, is a managed service that collects data from distributed devices, structures and labels the data, and generates real-time key performance indicators and metrics to drive better decisions at the edge.
  • RL in edge-AI device-level prototyping and benchmarking: Whereas at last year’s re:Invent AWS released a smart camera, DeepLens, for AI developer prototyping, this year it announced a tiny but highly functional AI-driven autonomous vehicle called DeepRacer. Now in limited preview and available for pre-order, AWS DeepRacer is a fully autonomous toy race car. It comes equipped with all-wheel drive, monster-truck tires, a high-definition video camera, and on-board compute. The AI model that powers DeepRacer’s autonomous operation is programmed, built, and trained in SageMaker RL, with the developer supplying the reward function that shapes its driving behavior (a reward-function sketch appears after this list). AWS also launched what it calls “the world’s first global autonomous racing league” so that DeepRacer developers can benchmark their RL-powered prototypes against each other.
  • RL in edge-AI cross-edge application composition: The new AWS IoT Things Graph, available in preview, enables developers to build IoT applications by representing devices and cloud services, such as training workflows in SageMaker RL, as reusable models that can be combined through a visual drag-and-drop interface instead of by writing low-level code. IoT Things Graph provides a visual way to represent complex real-world systems. It deploys IoT applications to the edge on devices running AWS Greengrass so that applications can respond more quickly, even when not connected to the Internet.
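
As promised above, here is a rough sketch of what kicking off a SageMaker RL training job can look like from the SageMaker Python SDK. It is a hedged illustration: the entry-point script, IAM role, S3 bucket, and version strings are placeholders, and exact parameter names may vary across SDK releases.

    # Hedged sketch: launching a managed RL training job with the SageMaker Python SDK.
    # The entry-point script, role ARN, bucket, and versions below are placeholders.
    from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

    estimator = RLEstimator(
        entry_point="train_racer.py",        # hypothetical user-supplied training script
        source_dir="src",                    # local directory containing that script
        toolkit=RLToolkit.COACH,             # Intel Coach as the built-in RL toolkit
        toolkit_version="0.11.0",            # placeholder version string
        framework=RLFramework.TENSORFLOW,    # underlying deep learning framework
        role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder IAM role
        instance_type="ml.c5.2xlarge",
        instance_count=1,
        output_path="s3://my-bucket/rl-output",                # placeholder bucket
        hyperparameters={"rl.training.max_episodes": 1000},    # illustrative only
    )

    estimator.fit()   # provisions managed training infrastructure and runs the job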
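
The RoboMaker side can likewise be driven programmatically rather than through the console. The boto3 call below sketches how a simulation job might be submitted; the application ARN, IAM role, ROS package, and S3 location are all placeholder values, and only a small subset of the available parameters is shown.

    # Hedged sketch: submitting an AWS RoboMaker simulation job via boto3.
    # All ARNs, bucket names, and launch-file names below are placeholders.
    import boto3

    robomaker = boto3.client("robomaker")

    response = robomaker.create_simulation_job(
        maxJobDurationInSeconds=3600,
        iamRole="arn:aws:iam::123456789012:role/RoboMakerRole",   # placeholder role
        outputLocation={
            "s3Bucket": "my-robomaker-bucket",                    # placeholder bucket
            "s3Prefix": "sim-output",
        },
        simulationApplications=[
            {
                "application": "arn:aws:robomaker:us-east-1:123456789012:simulation-application/my-sim/1",
                "launchConfig": {
                    "packageName": "my_robot_simulation",         # hypothetical ROS package
                    "launchFile": "racetrack.launch",             # hypothetical launch file
                },
            }
        ],
    )

    print(response["arn"])   # ARN of the newly created simulation job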
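
Finally, the part of DeepRacer that developers actually write is a reward function that scores each step of the simulated car’s behavior; SageMaker RL then optimizes the driving model against that score. The sketch below follows the commonly documented convention of a single function that receives a dictionary of telemetry; the keys shown (track width, distance from the center line, wheels on track) are standard inputs, but treat the exact names as assumptions to verify in the console.

    # Hedged sketch of a DeepRacer-style reward function: reward the car for
    # staying near the center line, penalize it for leaving the track.
    # Verify exact parameter key names against the DeepRacer console before use.
    def reward_function(params):
        all_wheels_on_track = params["all_wheels_on_track"]    # bool
        distance_from_center = params["distance_from_center"]  # meters from center line
        track_width = params["track_width"]                    # meters

        if not all_wheels_on_track:
            return 1e-3                      # near-zero reward for going off track

        # Reward falls off linearly as the car drifts away from the center line.
        half_width = track_width / 2.0
        centering = 1.0 - min(distance_from_center / half_width, 1.0)
        return float(max(centering, 1e-3))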

Increasingly, data scientists and other developers are being called on to pour data-driven algorithmic intelligence into a wide range of interconnected smart objects. Distributed graph models will become an essential development canvas for developing the AI that animates complex, multi-device edge and robotics deployments.

Without graph technology, developers will find it difficult to compose and monitor the distributed RL training workflows needed to yoke fleets of smart objects into coordinated collectives.

About the author: James Kobielus is SiliconANGLE Wikibon’s lead analyst for Data Science, Deep Learning, and Application Development.

Related Items:

AWS Bolsters Machine Learning Services at Re:Invent

Kubernetes Is a Prime Catalyst in AI and Big Data’s Evolution
