March 28, 2017

Meet Ray, the Real-Time Machine-Learning Replacement for Spark


Researchers at UC Berkeley’s RISELab have developed a new distributed framework designed to enable Python-based machine learning and deep learning workloads to execute in real time with MPI-like power and granularity. Called Ray, the framework is ostensibly a replacement for Spark, which is seen as too slow for some real-world AI applications, and should be ready for production use in less than a year.

Ray is one of the first technologies to emerge from RISELab, the research group at Berkeley that succeeded the highly successful AMPLab, which generated a host of compelling distributed technologies that have impacted high performance and enterprise computing alike, including Spark, Mesos, Tachyon, and others.

One of the advisors for the old AMPLab and the current RISELab, Computer Science Professor Michael Jordan, discussed the core principles and drivers behind Ray during the recent Strata + Hadoop World conference in San Jose, California.

“Spark was developed because my students were complaining about Hadoop,” Jordan said during a keynote address on March 16. “They complained about the long delay in latency that was needed every time they did an iteration in something like a logistic regression.

“Matei Zaharia was one of the students in the lab and heard them complaining, and said, ‘I’ll help you with that. I’ll build some kind of caching system that means you don’t have to keep going out to disk all the time,’” Jordan continued. “Such was the genesis of Spark.”

His students noticed how “important and famous” Zaharia suddenly became for creating Spark; Zaharia has since gone on to co-found Databricks and accept an assistant professorship at rival Stanford University.

“So the next generation,” Jordan said, “they said, ‘we’re not just going to give a project to the systems people. We’re going to do it ourselves.’ So this next project was done by machine learning students, but wearing a systems hat, and developing an attempt to replace Spark.”

Ray was created by two Ph.D. students in the RISELab, Philipp Moritz and Robert Nishihara. The researchers sought to build a framework that would combine the various elements needed to actually run machine learning or deep learning-based applications in the real world. They broke such applications down into their constituent parts to figure out how to build an end-to-end system that allows real-time decision making, according to Jordan.

“You need flexibility. You need to…put together not just things like neural nets but planning and search and simulation,” Jordan said. “This creates all kinds of complex task dependencies. It’s not very easy simply to write a MapReduce kind of paradigm. You can write it but it’s not going to execute very effectively when you have very heterogeneous workloads and tasks. It needs to be adapted to the performance of the algorithms, as the system is learning, to change its flow, its tasks.”

Jordan was clearly grouping Spark with the MapReduce style of programming. While Spark is monumentally faster than MapReduce, it still retains some core elements of MapReduce’s batch-oriented workflow paradigm. He said Ray avoids the “bulk synchronous” paradigm that Spark uses in favor of something faster.

“There was a block step and there was a wait step where everything synchronizes, and that was effective for a certain kind of task where you as a designer can design it such that each task took the same amount of time and needed about the same amount of resources, but it’s just not true for many emerging machine learning and AI paradigms,” he said. “You need something much more like a just-in-time, data-flow type architecture, where a task goes when all the tasks it depends on are ready and finished.

“So building a system that supports that, and retains all the desirable features of Hadoop and Spark, is the goal of our project called Ray,” he said.
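
To make that concrete, here is a minimal sketch of such a dynamic task graph written against Ray’s remote-function API (the calls below reflect the project’s public Python interface; the pre-alpha code described in this article may differ in detail):

    import ray

    ray.init()  # start Ray on the local machine

    # A @ray.remote function becomes a task that the scheduler can run
    # anywhere in the cluster as soon as its inputs are ready.
    @ray.remote
    def simulate(params):
        return sum(params)  # stand-in for an expensive simulation step

    @ray.remote
    def combine(a, b):
        # Scheduled only once both simulate() tasks have finished: the
        # just-in-time, data-flow behavior Jordan describes.
        return a + b

    # .remote() returns a future immediately; nothing blocks here.
    x = simulate.remote([1, 2, 3])
    y = simulate.remote([4, 5, 6])
    z = combine.remote(x, y)

    print(ray.get(z))  # blocks only for the final value; prints 21

Because combine() is not scheduled until both of its inputs resolve, tasks of very different durations can overlap freely rather than waiting at a global barrier.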

Ray is fast, with microsecond latencies for individual tasks, according to Jordan. It can also handle heterogeneous hardware, where some application workloads execute on CPUs while others run on GPUs. Ray has a number of schedulers that can bring all of these together.
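
In Ray’s public Python interface, such resource requirements are attached to individual tasks; a minimal sketch, assuming the num_gpus decorator argument documented by the project:

    import ray

    # Declaring one GPU to ray.init() lets the sketch run even on a
    # machine without GPU hardware; Ray does not verify the claim.
    ray.init(num_gpus=1)

    # The scheduler will place this task only where a GPU is free.
    @ray.remote(num_gpus=1)
    def train_step(batch):
        return [w * 2 for w in batch]  # stand-in for GPU-bound work

    # No special requirements: this task can run on any CPU core.
    @ray.remote
    def preprocess(raw):
        return [x + 1 for x in raw]

    batch = preprocess.remote([1, 2, 3])
    print(ray.get(train_step.remote(batch)))  # prints [4, 6, 8]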

It will also borrow task-dependency attributes from MPI, the low-level distributed programming environment that our friends in the high performance computing (HPC) world use to build modeling and simulation workloads that run very fast. “We really want to reach down to that level as well,” Jordan said. “We’re not trying to replace MPI. But at some point we want to get as good performance as MPI with much more simplicity and robustness and a kind of friendliness to distributed platforms.”

Jordan demonstrated how Ray can help a digital robot learn how to run during his Strata demo.

Ray will maintain the state of computation across the various nodes in the cluster, but there will be as little state as possible, which will maximize robustness, Jordan said. “But there’s going to be attention paid to stateful computation that can be shared among the tasks,” he added. “Then we obviously want fault tolerance and we’re going to also run serialized so that we can share data readily.”
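
Ray has since exposed this kind of shared, stateful computation through an actor API; a brief, illustrative sketch of the pattern (the actor interface postdates the pre-alpha Jordan describes, so treat the details as an assumption):

    import ray

    ray.init()

    # An actor is a stateful worker: its fields persist across method
    # calls, so many tasks can share and update the same state.
    @ray.remote
    class Counter:
        def __init__(self):
            self.count = 0

        def increment(self):
            self.count += 1
            return self.count

    counter = Counter.remote()
    futures = [counter.increment.remote() for _ in range(3)]
    print(ray.get(futures))  # prints [1, 2, 3]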

The Ray framework is currently working, although it’s not yet a finished project. Interested parties can check out the pre-alpha code at the project’s home on GitHub.

Ray will be useful for building an array of applications that require fast decision-making on real-world data, such as what’s required for autonomous driving or some emerging forms of AI-assisted medicine. Jordan – who’s been cheekily called “The Michael Jordan of machine learning” for his contributions to the space – sees Ray having its biggest impact in the field of reinforcement learning, as opposed to the supervised learning systems that have become popular with the resurgence of deep learning and neural networks for solving computer vision and classification problems.

“When you start to get closer to actual decisions, you’re not simply trying to mimic a human but you’re trying to find the best decisions,” he said. “That’s the reinforcement learning paradigm. There really is not good systems level support for reinforcement learning.”

Ray was written in C++ and is aimed chiefly at accelerating the execution of machine learning algorithms developed in Python.

While Ray is still in its early phase and is not yet ready for production, it should be ready within a year, Jordan said. “We’re really going to make this as industrial strength as we can, but stay close to the boundary of what academics are trying to do in terms of exciting ML directions,” he said.

Related Items:

Big Data’s Relentless Pace Exposes Old Tensions and New Risks in the Enterprise

What’s In the Pipeline for Apache Spark?

RISELab Takes Flight at UC Berkeley