January 22, 2014

How Data Analytics Can Help Frackers Find Oil

Alex Woodie

The dirty little secret about fracking, besides the environmental concerns, is how inefficient it often is. According to industry estimates, only 20 percent of the oil is recovered from the average well drilled with hydraulic fracturing techniques. With an oil and gas boom underway in the United States, drilling and exploration outfits are increasingly turning to data analytics technologies to help them recover more of the black gold.

Computer technology is nothing new in the $6 trillion global oil and gas business. For decades, energy giants like Exxon, Chevron, and Royal Dutch Shell have been running seismic records through supercomputers to figure out where to drill. Billions have been spent on high performance computing (HPC) resources to bolster oil exploration and extraction.

In the new world of unconventional oil and gas, companies are finding new ways to explore what lies beneath the ground. The seismic readings that traditionally informed conventional projects still have a place in unconventional drilling and fracking. But to ramp up the efficiency of fracking, drillers are discovering that bringing new technology to bear can lead to better results.

One IT firm that’s developing big data analytics technology for frackers is Ayata. The company was founded in 2003 as the result of a Canadian research project, and set about building “prescriptive” analytics tools that harness hybrid datasets to generate recommendations. A couple of years ago, the company realized that its technology could be applied to the burgeoning world of fracking, according to Ayata CEO Atanu Basu.

The idea is to use Ayata’s prescriptive analytics technology to bolster the efficiency of fracking and the unconventional horizontal drilling methods that go with it. In fracking today, about 80 percent of the oil comes from 20 percent of the frack “stages,” the individual segments along the horizontal well shaft. “There are a lot of fracks right now that aren’t producing,” Basu tells Datanami. “It’s like leaving billions of dollars on the table.”

Under Ayata’s prescriptive approach, fracking efficiency can be boosted through the broad collection of drill-site data and the correct application of big data analytics technology. Here’s how it works:

First, large amounts of semi-structured data are collected from the drill site, including the seismic records that have traditionally guided conventional oil exploration, but also newer sources such as oil and mud logs, sounds from the drill head, videos from underground cameras, notes taken by drillers and pumpers, and data from the artificial lift (extraction) operation. Then all of this data (totaling tens to hundreds of terabytes per well) is timestamped and ingested into Ayata’s hosted Hadoop platform, where it is mapped, reduced, and processed by hundreds of algorithms. The end result is a prescription for where to frack.
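To make that pipeline concrete, here is a deliberately minimal map/reduce sketch in plain Python. The record fields, units, and the idea of ranking stages by average flow are illustrative assumptions, not Ayata’s actual schema; a production system would run logic of this shape across a Hadoop cluster at terabyte scale.

from collections import defaultdict

# Hypothetical, simplified records: each reading from a frack stage carries
# a timestamp, the stage it came from, and a measured value (e.g., flow rate).
readings = [
    {"ts": "2014-01-01T00:00", "stage": 3, "flow_bbl_day": 120.0},
    {"ts": "2014-01-01T00:00", "stage": 7, "flow_bbl_day": 14.5},
    {"ts": "2014-01-02T00:00", "stage": 3, "flow_bbl_day": 110.0},
    {"ts": "2014-01-02T00:00", "stage": 7, "flow_bbl_day": 12.0},
]

def map_phase(records):
    """Map: key each timestamped reading by its frack stage."""
    for r in records:
        yield r["stage"], r["flow_bbl_day"]

def reduce_phase(pairs):
    """Reduce: aggregate the readings for each stage into an average flow."""
    totals, counts = defaultdict(float), defaultdict(int)
    for stage, flow in pairs:
        totals[stage] += flow
        counts[stage] += 1
    return {s: totals[s] / counts[s] for s in totals}

# Rank stages so the weakest producers surface first: the candidates
# for re-work that the 80/20 pattern predicts.
per_stage = reduce_phase(map_phase(readings))
for stage, avg in sorted(per_stage.items(), key=lambda kv: kv[1]):
    print(f"stage {stage}: avg {avg:.1f} bbl/day")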

“We’re combining different types of data sources to know the type of the rock and the type of ‘completion,’ or what they’re injecting into the rock,” Basu says. “Knowing the rock and completion, we can say, frack a few meters this way or that way, because the rock is softer and the same amount of water and sand would release more oil than in the place where you put it. We can tell them how much water, how much chemicals and sand to inject per frack. We can prescribe that, and we can also extrapolate and tell them how much oil and gas they can expect to come out, and at what time.”
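Ayata has not published its models, but the shape of the prescription Basu describes can be sketched as a toy rule in Python. Every field name, threshold, and multiplier below is a made-up placeholder for illustration, not anything drawn from the company.

def prescribe(rock_hardness: float, baseline_oil_bbl: float) -> dict:
    """Return a hypothetical per-stage recommendation.

    rock_hardness: normalized 0 (soft) to 1 (hard), assumed to be inferred
    upstream from seismic, mud-log, and acoustic data.
    """
    # Toy assumption: if the rock here is hard, softer rock sits a few meters over.
    softer_nearby = rock_hardness > 0.6
    return {
        "shift_stage_m": 3.0 if softer_nearby else 0.0,  # where to frack
        "water_bbl": 8_000 * (1 + rock_hardness),        # harder rock, more fluid
        "sand_lbs": 250_000 * (1 + rock_hardness),
        "expected_oil_bbl": baseline_oil_bbl * (1.2 if softer_nearby else 1.0),
    }

print(prescribe(rock_hardness=0.7, baseline_oil_bbl=40_000))

The point is the output type rather than the numbers: a shift in position, injection volumes, and a production forecast, which are the three things Basu says the software prescribes.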

Obviously, this is easier said than done. And in fact, it hasn’t been done yet; several drilling and exploration outfits are currently testing Ayata’s approach, Basu says. But the future looks promising for applying the power of algorithms, including machine learning, natural language processing, signal processing, pattern recognition, image processing, and speech recognition, to turn semi-structured data from the drill site into actionable information.

The algorithms really run the show, Basu says. “We have hundreds of algorithms, and depending on the incoming data and the rules, the algorithms pick algorithms,” he says. “We don’t like human decision making, so we tried to automate as much as we possibly can to pick up nuances in the changes in the data that may be of significance, but that established business and domain rules may not catch.”
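As a rough illustration of that “algorithms pick algorithms” idea, the sketch below scores two toy forecasting models on a held-out reading and keeps whichever errs less; the models, data, and error metric are invented for the example.

import statistics

def mean_model(history):
    """Predict the next reading as the running average."""
    return statistics.mean(history)

def last_value_model(history):
    """Predict the next reading as the most recent value."""
    return history[-1]

CANDIDATES = [mean_model, last_value_model]

def pick_model(history, actual_next):
    """Score every candidate against a held-out reading and return the best:
    the selection step, reduced to its simplest possible form."""
    def error(model):
        return abs(model(history) - actual_next)
    return min(CANDIDATES, key=error)

# Flow readings drifting steadily downward: the last-value model should win.
past, next_reading = [100.0, 90.0, 80.0], 72.0
best = pick_model(past, next_reading)
print(best.__name__, "predicts", best(past + [next_reading]))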

The lesson here is that data is not simply additive: collecting more of the wrong type of data won’t bring you closer to your goal. Finding the right balance among the various data points can be like finding a needle in a haystack. As the state of the art in data science advances, prescriptive approaches like Ayata’s will undoubtedly become more common.

“That’s what our software does–it brings it all together. That’s our claim to fame,” Basu says. “We are combining different data types and scientific disciplines that were not and are not designed to work together. This kind of problem has not been tackled before.”
