August 14, 2019

Big Data Shines Way Forward for Big Quake Prediction


The magnitude 6.4 earthquake that struck near Ridgecrest, California, on July 4 was an eye opener for many in the region. But when a much larger magnitude 7.1 quake hit the following day, it became clear the first temblor was merely the opening act. It was also a dramatic display of the predictive potential of foreshocks, which earth scientists are now beginning to unravel.

In April, three months before the Ridgecrest quakes, researchers from Los Alamos National Laboratory, California Institute of Technology, and Scripps Institution of Oceanography published a paper in the journal Science that showcased how analysis of small, previously undiscovered quakes could help unravel the mysteries of what triggers the bigger temblors.

Research has shown that fewer than half of mainshock earthquakes have a foreshock associated with them, but that figure could theoretically be much higher. Perhaps many smaller quakes — foreshocks to the main events — were occurring but evading detection by traditional approaches. That was the working hypothesis, and to test it, the researchers developed a technique to detect earthquake signals buried in seismic noise.

The data mining operation started with a comprehensive catalog of earthquakes in Southern California, courtesy of the Southern California Seismic Network, which maintains a network of 550 seismic monitoring stations and uses other traditional methods to record earth movements. While the SCSN is considered one of the world’s top seismic systems, scientists wondered if it was accurately detecting all of the earthquakes that were actually occurring.

The researchers used a Los Alamos supercomputer equipped with 200 GPUs to analyze 100TB of data about earthquakes cataloged by the SCSN from 2008 to 2017. They analyzed the waveforms generated by seismic equipment during larger quakes, and devised an algorithm to detect similar waveforms that were recorded in the data but did not have earthquakes associated with them.
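
The core idea of matching known earthquake waveforms against continuous recordings can be sketched with a sliding normalized cross-correlation. This is a minimal illustration of the general technique, not the researchers' actual GPU pipeline; the function name, window logic, and detection threshold here are all illustrative assumptions.

```python
import numpy as np

def match_template(continuous, template, threshold=0.8):
    """Slide a template waveform along a continuous seismic trace and
    return (offset, correlation) pairs where the normalized
    cross-correlation exceeds the detection threshold."""
    n = len(template)
    # Pre-normalize the template: zero mean, scaled so the final
    # correlation coefficient falls in [-1, 1].
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(continuous) - n + 1):
        window = continuous[i:i + n]
        std = window.std()
        if std == 0:
            continue  # flat segment: correlation undefined
        cc = np.sum(t * (window - window.mean()) / std)
        if cc >= threshold:
            hits.append((i, cc))
    return hits
```

A small event buried in noise that the network's standard triggers would miss can still produce a near-perfect correlation with a template from a known quake on the same fault patch, which is how the hidden-foreshock catalog grows. A production system would vectorize this loop (or run it on GPUs, as the Los Alamos team did) rather than scan sample by sample.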

This template matching technique resulted in a Quake Template Matching (QTM) catalog composed of 1.81 million quakes. That is 10x more earthquakes than had previously been identified, the researchers said in an April 18 press release. What's more, the newly detected quakes brought the share of mainshocks with associated foreshocks up to 72%. (That was before the ground started shaking on the Fourth of July.)

“It’s very difficult to unpack what triggers larger earthquakes because they are infrequent, but with this new information about a huge number of small earthquakes, we can see how stress evolves in fault systems,” Daniel Trugman, a post-doctoral fellow at Los Alamos National Laboratory and co-author of two papers on the research, stated in the press release. “This new information about triggering mechanisms and hidden foreshocks gives us a much better platform for explaining how big quakes get started.”

The M7.1 earthquake that struck near Ridgecrest was the largest in the state in 20 years (Nick-Sklias/Shutterstock)

The computational research work identified "initiation sequences" behind larger earthquakes. It also revealed three-dimensional geometry and fault structures, as well as physical and geographic details that could help scientists predict bigger quakes in the future. The findings were also published in Geophysical Research Letters in July.

“In the laboratory, we see small events as precursors to big slip events, but we don’t see this consistently in the real world. This big data template-matching analysis bridges the gap,” Trugman stated in the press release. “And now we’ve discovered quakes previously discounted as noise and learned more about their behavior. If we can identify these sequences as foreshocks in real time, we can predict the big one.”

Related Items:

Earthquake Science Makes Headway in Big Data Era

Hadoop Speeds Seismic Event Processing
