Matillion Raises $150M in Quest to Turn Raw Data Ore Into Informational Steel
ETL vendor Matillion is riding high today after unveiling another nine-digit round of funding, this one a $150 million Series E round to go along with the $100 million Series D round earlier this year. Flush with capital, Matillion is intent on building an informational refinery befitting its Northern England roots as the birthplace of the Industrial Revolution.
“Data is changing for the better every aspect of how we work, live, and play. That’s happening now everywhere, and really quickly,” says Matthew Scullion, Matillion’s CEO and co-founder. “But the speed at which organizations can make data useful – bring data together, synchronize, transform, and embellish it, and make it ready as the fuel for data analytics, AI, and machine learning projects – the speed at which the world can do that is highly constrained.”
“And what Matillion is doing is widening that pipe,” Scullion continues. “We’re making the world able to make data useful faster.”
With $250 million raised since the start of the year and a valuation of $1.5 billion, Matillion is well-positioned to continue building the data integration solutions that companies use to feed their analytics, AI, and machine learning use cases.
The company, which is dual-headquartered in Denver, Colorado, and Manchester, UK, boasts more than 1,000 customers across 40 countries for its extract, transform, and load (ETL) solutions, which are primarily used for moving data from source systems into cloud-based data lakes, data warehouses, and data lakehouses.
Two-thirds of the company’s revenues come from large enterprises, such as Sony, Fox, Accenture, and Siemens (all Matillion customers), and Scullion plans to continue targeting large accounts with the most complex data integration challenges.
“What we are continuing to build out is a data operating system for the enterprise,” Scullion says, “where large businesses can use the Matillion platform for all aspects of how they want to load, move, transform, synchronize, and orchestrate data, all in the quest of fueling analytics, AI and ML use cases.”
The 200 largest Matillion customers are working with an average of 1,000 sources of data, according to a company survey earlier this year. However, the process of transforming that raw data into something that can be used is a bottleneck.
“This is a huge problem,” Scullion says. “The real key to doing that right now is to provide the depth and sophistication that enterprises need and expect, but to do it in a way that’s not so complicated that only a small number of people can do it.”
Matillion is attacking the problem in several ways, with a handful of products, all of which run in the cloud. For starters, customers can get going with a free tool for extracting and landing data, but without the transformation. Customers who move up to the full-featured Matillion ETL product can enjoy access to more than 100 native connectors for the sources and destinations for the data pipelines that Matillion’s software instantiates.
Matillion also takes pains in the ETL tool to ensure that data transformations are conducted in a way that matches the data format the target data warehouse expects. It can also support transformations conducted in the target database itself, the ELT method of data integration. Being “ETL Switzerland” is especially important as companies try out different cloud data warehouses, such as Amazon Redshift and Google BigQuery, or even adopt headless query engines that expect to work on S3 or a similar BLOB store, such as Presto, Trino, and Dremio.
The company also leverages its close partnership with Snowflake, with whom it has over 400 joint customers (Matillion was Snowflake’s partner of the year), to ensure that data landed in Snowflake adheres to Snowflake’s unique data format, which is something other cloud ETL vendors sometimes struggle to do, Scullion told Datanami earlier this year.
At the end of the day, it’s all about embracing a wider audience of users to work in the data integration process, which should ultimately break that data bottleneck and unleash the true power of big data, Scullion says.
“The real trick as you build out the platform is to build the depth and sophistication, but make it easy enough to use through our low-code, no-code, or code-optional user experience that our wider audience of users can participate,” the CEO says. “Those people are closer to the business problem, and there’s more of them, and therefore you can go faster.”
Scullion sees a historical parallel between what happened over a century ago in his hometown of Manchester, when it was the center of the steel-making world, and what is happening today in the big data world.
Before the Industrial Revolution, the world knew how to turn iron ore into steel, but it could only be done in small batches, and only a small number of very skilled people could do it, Scullion says.
“That meant we didn’t make many things out of steel,” he says. “In the Industrial Revolution, we figured out how to widen that aperture, how to turn iron ore into steel much faster. And then we started building bridges and ships and buildings out of the stuff, and powering the innovation of the world.”
That analogy plays perfectly to data, he says. “We’ve got a huge pile of ore. There’s no shortage of data,” he says. “There is no shortage of demand to use data to effectively improve every aspect of how we work, live, and play. But the way the world has been, only a small number of highly skilled artisans can do it in small batches. And what Matillion is doing is allowing the world to make data useful at post-Industrial Revolution levels of iron refinery.”