March 14, 2019

Rethinking Architecture at Massive Scale


Scale is in the eye of the beholder. You might think your cluster is a powerful beast, but it likely pales in comparison to the system built by the folks at The Trade Desk, which spends $90 million a year on hardware and broke Rackspace’s cloud. Twice.

Dave Pickles and Jeff Green co-founded The Trade Desk in 2009 to build a real-time bidding (RTB) platform for advertising buyers to bid on auctions. Ad buyers need a way to quickly determine how much money to bid on available ad impressions, and the best way to do that is to crunch a lot of data in a short window of time.

The RTB system has to juggle a bunch of variables – What is the ad? Who is the viewer? Where are they located? What time is it? What device are they on? And have they seen the ad before? – and spit out an answer in a matter of milliseconds. If buyers bid too high, they waste money. If they bid too low, they lose the auction.

It’s a classic optimization problem, and many companies in the ad tech space have tried to solve it with machine learning. A decision tree is one way to get an optimal answer given a bunch of variables. It’s an established approach that many ad tech firms built their systems upon, and such trees are still in widespread use today.

Taking the same approach as everybody else didn’t appeal to Green and Pickles. “If you’re running an auction, you don’t want to have the same algorithm that everybody else has,” Pickles says. “That’s a pretty good way to fail.”

The advantage that decision trees bring is precision. Over time, as the AI system sees more data, it learns what combinations of factors lead to the optimal bid. But there was a big downside to the use of decision trees: speed.

“For a year we pushed really hard,” Pickles tells Datanami. “We threw everything away at one point because I was just obsessed with precision. We finally found a combination that worked.”

An Un-AI

The Trade Desk’s solution to the RTB optimization problem is deceptively straightforward: simple math, albeit at massive scale.

“I finally just said ‘Let’s pretend there’s no covariance, just for a second,'” Pickles says. “I know that’s wrong. Of course it’s wrong. There is dependence between variables. Let’s just break it apart and let’s value what is the effect of what site you’re on to the value of this impression, what’s the effect of the audience, what’s the effect of the frequency. Let’s just multiply and see what happens.”

What happened, it turned out, was surprisingly good. Taking a math-based approach allowed The Trade Desk to optimize its clients’ ad spending without the complexity and latency involved with running a decision tree. The precision with this “bid factoring” approach was not quite as good as what a decision tree could deliver, but the advantages gained in speed and increased diversity of bids more than offset the loss in precision, according to Pickles.

“What we did was basically an approximation of that [decision tree] that’s much faster,” Pickles says. “But instead of having a very good answer a very small amount of the time, we have a decent answer all of the time. Decent has become much, much better over time, but decent was all we needed at the beginning to be able to create better performance.”
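The independent-factor idea Pickles describes can be sketched in a few lines. This is a minimal illustration, not The Trade Desk’s actual model: the base bid, the signal names, and the multiplier values below are all made up for the example.

```python
# Sketch of "bid factoring": treat each signal (site, audience, frequency,
# ...) as an independent multiplier on a base bid, deliberately ignoring
# covariance between signals. All names and numbers are hypothetical.

BASE_BID = 2.00  # base CPM bid in dollars (illustrative)

# Learned per-signal multipliers (illustrative values).
FACTORS = {
    "site":      {"news.example.com": 1.3, "games.example.com": 0.8},
    "audience":  {"in_market_auto": 1.5, "unknown": 0.9},
    "frequency": {1: 1.2, 2: 1.0, 3: 0.7},  # times the user saw the ad
}

def factored_bid(impression: dict) -> float:
    """Multiply the base bid by one factor per signal.

    Signals missing from the tables fall back to a neutral 1.0, so an
    unfamiliar impression still gets a "decent answer" instantly.
    """
    bid = BASE_BID
    for signal, table in FACTORS.items():
        bid *= table.get(impression.get(signal), 1.0)
    return round(bid, 4)

bid = factored_bid({"site": "news.example.com",
                    "audience": "in_market_auto",
                    "frequency": 1})
print(bid)  # 2.00 * 1.3 * 1.5 * 1.2 = 4.68
```

Because the evaluation is a handful of table lookups and multiplications rather than a tree traversal, it stays cheap even at millions of auctions per second, which is the speed-for-precision trade the article describes.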

The Implementation

When The Trade Desk first implemented bid factoring in the 2011 timeframe, the RTB universe consisted of 40,000 transactions per second, which was the available stream of ad impressions going out to bid. Decisions had to be made in 300 milliseconds. The pace of RTB is exponentially faster now and the window is smaller, but even back then, getting such a system up and running posed a major technical challenge.

“Our approach scared everybody else because they were really worried about the complexities in real time of running that math,” said Pickles, who architected the AdECN ad exchange, which is still running at Microsoft. “But I made a bet that it’s easier to scale CPU than it is to scale RAM, because all of those other systems relied heavily on having tons of RAM on all those servers, which gets expensive fast.”

Pickles led the development of the bid factoring application, which was developed in C#. “We’re probably the only C# bidder in the world,” he admits. “It’s not everybody’s first choice. It’s been a great choice for us. .NET is a high-productivity, low-defect environment.”

The database layer is another critical aspect of The Trade Desk’s scalability advantage. Pickles looked at various NoSQL databases and eventually selected Aerospike’s key-value store as the back end for the bid factoring system.

“Aerospike is by far the best for what we do,” Pickles says. “We love the ACID component of Aerospike. It makes a real big difference for our ability to execute when we’re trying to follow a customer along the customer journey and they’re transitioning between states. If you’re an extra 50 milliseconds behind, it can cost you putting the right message in front of them.”
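The “customer journey” tracking in the quote boils down to atomically advancing a per-user record through funnel states. The toy in-memory store below is a stand-in, not the Aerospike client API: it mimics Aerospike’s per-record generation counter with a lock and a compare-and-set check, and the stage names are invented for illustration.

```python
import threading

class JourneyStore:
    """Toy key-value store with generation-checked (CAS) updates,
    loosely mimicking a per-record generation counter. Not a real
    Aerospike client; purely illustrative."""

    def __init__(self):
        self._lock = threading.Lock()
        self._records = {}  # user_id -> (generation, stage)

    def get(self, user_id):
        with self._lock:
            return self._records.get(user_id, (0, None))

    def transition(self, user_id, expected_gen, new_stage):
        """Advance the user's stage only if nobody else updated the
        record since we read it (compare-and-set on the generation)."""
        with self._lock:
            gen, _ = self._records.get(user_id, (0, None))
            if gen != expected_gen:
                return False  # lost the race; caller re-reads and retries
            self._records[user_id] = (gen + 1, new_stage)
            return True

store = JourneyStore()
gen, stage = store.get("user-123")           # (0, None): never seen
store.transition("user-123", gen, "saw_ad")  # True: state advances
gen, stage = store.get("user-123")           # (1, "saw_ad")
store.transition("user-123", 0, "clicked")   # False: stale generation
```

The point of the atomicity is the one Pickles makes: two bidders racing on the same user must not both act on a stale state, or the “right message” lands at the wrong step of the journey.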

Pickles is a big believer that startups should consider outsourcing tasks that are outside of their core competency. To that end, the company initially implemented its trading system in Microsoft data centers and eventually moved to Amazon. But soon it discovered the cloud couldn’t scale to meet its demands.

“We just couldn’t get any kind of network out of it, so we moved out into Rackspace, which we broke a couple times as well,” he said. “We took down their whole cloud in a blink.”

The Trade Desk was forced to move out of the cloud to get the performance it required. The company built its own custom data center and moved all of its storage to SSDs, which was not commonly done in 2011. “It was all fairly cutting edge to be able to do that processing,” Pickles says.

Breaking Things

Since creating market differentiation with its bid factoring breakthrough, The Trade Desk has scaled up in a big way. Today the company’s system assesses 9 million ad impression transactions per second, touching billions of users around the globe for tens of thousands of ad buyers. Latency is down to 140 milliseconds; subtract the time needed for data transit over the network, and The Trade Desk’s system has 40 milliseconds to make a decision.

Dave Pickles, CTO and co-founder of The Trade Desk

In addition to processing quadrillions of pieces of data per day, The Trade Desk provides its customers with lots of analytical services. Not too long ago, the company ran a large Hadoop cluster with about 7PB of data and relied on Pig and MapReduce to process the data. But those days are over.

“We just turned off that stack after so many years of pain! It’s very operationally difficult to deal with,” Pickles says. “It was a necessary thing for us to go through. We’re on our way to NewSQL. In the end you’ll have a database, and you’ll be able to run SQL and R on it and all will be well. And you won’t have an army of super-specialized people maintaining it.”

Some of the Hadoop workload has moved to Apache Spark, whose use is growing at The Trade Desk. Many of the heavy SQL workloads have moved to Vertica, now owned by Micro Focus. The Trade Desk was the primary beta test site for Vertica’s new Eon Mode, which separates compute and storage.

“Vertica said ‘Are you guys willing to break it for us?’ And we’re like, ‘No problem! Can do!'” the Ventura, California resident says. “We broke 25 versions of it. But they kept dialing it in and it got real good by the end of it, so we’re really happy how that turned out.”

The company is in the process of migrating its custom ETL environment to Apache Kafka, which Pickles is convinced is ready for prime time. “I talked to the team at Netflix to make sure, is this real? We tend to break tools. We’re really good at breaking tools,” Pickles says. “I don’t want to invest in this thing if it’s going to fall over. I’ve been assured, and our tests have shown, that [Kafka] is a capable piece of equipment.”

The new Kafka cluster is likely to be “big,” Pickles says. Then again, The Trade Desk doesn’t seem to do anything small. “Every time I bring up a new tool set, that means I have to put a couple of petabytes of data into it and it gets kind of expensive to deal with the big data sets,” he says.

Cheaper by the Dozen

The company, which went public in 2016, has invested hundreds of millions of dollars in hardware and software. Most of that gear resides in custom-leased data centers. But it’s starting to move some of its applications back into the Amazon cloud, which only recently became fast enough, responsive enough, and cost-effective enough to justify the investment, according to Pickles. In China, The Trade Desk runs on Alibaba, which “has a really nice cloud,” he says.

Just the same, the bulk of the company’s IT spending goes into maintaining its advantage in the scalability of its on-premises systems. It has such a big lead, thanks to its bid factoring system, that it seems unlikely anybody will be able to catch it, Pickles says. Instead of competing directly with the company, most would-be competitors are content to consume its APIs and try to add value elsewhere, he says.

“We spend $90 million a year on hardware,” Pickles says. “How are you going to do that as a startup? The only [way] is if you’re part of a big company and have deep pockets. Even there, that’s a tough sell to senior management. ‘Hey I need to carve out $250 million so I can go try to compete with The Trade Desk.'”

Don’t hold your breath waiting for that to happen.

Related Items:

Getting Ready for Real-Time Decisioning

Real-Time Data: The Importance of Immediacy in Today’s Digital Economy

The Real-Time Future of ETL

 
