Blog

The Robots Have Come to Seismic Interpretation

Posted by Peter Wang on Jan 27, 2017

I use Google Photos to automatically back up my phone and tablet pictures. I also upload photos from my non-Wi-Fi digital camera to Google Photos. I’m sure many of you do this as well.

I recently found an incredible feature in Google Photos. Type any word into the search box, and it will find the photos that contain it. I searched my photos for “food”, and got the images below. As you can tell, we eat well in our family, and we like to take pictures of our food!

[Image: Food.jpg]

I searched on “president”, and got a picture of a University of Houston dean, standing at a podium giving a speech. Well – OK, that’s pretty presidential, right? That’s my daughter standing behind him.

[Image: UH President.jpg]

I searched on “rock” and got these two photos: some rocks, and a picture of my daughter at the Dome of the Rock (قبة الصخرة in Arabic, כיפת הסלע in Hebrew), the shrine in the Old City of Jerusalem. Yep, the search works when I drop in the Arabic or Hebrew names, too. Try searching your own photos, then tell me - are you amazed, frightened, or both?

[Image: Ro.jpg]

Clearly, “machine learning”, “robotics”, or “artificial intelligence”, whatever you want to call it, is causing a revolution in our lives – right here, right now. You are watching it. You are living in it.

I caught a glimpse of how machine learning is changing seismic interpretation this week, when I fired up the Beta test version of Paradigm’s new Rock Type Classification. It had been discussed previously at SEG meetings and in the SEG Interpretation journal, but it had not appeared in a commercial software product until 2017. Within a few minutes, using facies logs at well locations, it had classified a 3D prestack seismic volume and returned a 3D volume of rock classes: siliciclastics, bioherm (tight, wet, or oil), limestone, shaly limestone, interbedded, and biostrome (tight, wet, or oil). It enabled the user to sidestep a traditional QSI (AVO & Inversion) project, which would have taken longer and required a more expert user. Rock Type Classification was very easy to use.
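To make that kind of workflow concrete, here is a minimal Python sketch of the general idea of supervised rock-type classification: train a small neural network on gather-derived features where facies logs at wells supply the labels, then let it classify the rest of the volume. This is not Paradigm’s algorithm; every array name, shape, and feature choice below is a placeholder I invented for illustration.

```python
# A minimal sketch of well-driven rock-type classification, NOT Paradigm's
# actual algorithm. It assumes prestack amplitudes have already been extracted
# into one feature vector per sample, and that facies logs at wells provide
# the training labels. All names and shapes here are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one row per well-tied sample, columns are
# amplitudes picked from the gather at increasing offsets/angles.
n_train, n_offsets = 500, 24
X_wells = rng.normal(size=(n_train, n_offsets))   # gather amplitudes at wells
y_wells = rng.integers(0, 6, size=n_train)        # facies codes from well logs

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_wells, y_wells)

# Hypothetical full survey: every (inline, xline, time) sample gets the same
# offset-amplitude feature vector, and the trained net assigns a rock class.
n_survey = 10_000
X_survey = rng.normal(size=(n_survey, n_offsets))
rock_class_volume = clf.predict(X_survey)         # one class label per sample
print(rock_class_volume[:10])
```

The real product obviously does far more than this toy, but the train-at-the-wells, predict-everywhere pattern is the general shape of this kind of workflow.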

[Image: Rock Type Classification.jpg]

You traditional QSI users are probably wondering, “Why is having machine learning crawl over processed 3D gathers, without the benefit of physics (Aki & Richards, Shuey, or Zoeppritz), a good thing?” Here’s my thinking on the matter, and it is controversial.

With the benefit of hindsight, we all know that seismic stacking destroys valuable information contained in the gather. We destroy AVO, we destroy AVAZ (azimuthal amplitude variations), and we destroy VVAZ (residual moveout due to azimuthal velocity variations). As another example, old-timers in the 1980s used to scoff at us youngsters by calling onlaps, downlaps, and toplaps “noise”. But we now know that these carry valuable seismic stratigraphic information.
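Here is a toy numerical example of the stacking point, using the two-term Shuey approximation R(θ) ≈ A + B sin²θ. The two interfaces below are invented so that their stacked amplitudes come out identical even though their gathers behave very differently with angle.

```python
# A toy illustration (not field data) of why stacking hides AVO: two interfaces
# with different two-term Shuey responses R(theta) = A + B*sin^2(theta) are
# constructed so that their stacked (angle-averaged) amplitudes are identical.
import numpy as np

theta = np.radians(np.arange(0, 31, 3))   # 0-30 degree angle range
s2 = np.sin(theta) ** 2

A1, B1 = 0.05, 0.20                       # interface 1: positive gradient
gather1 = A1 + B1 * s2
stack1 = gather1.mean()

B2 = -0.20                                # interface 2: negative gradient
A2 = stack1 - B2 * s2.mean()              # intercept chosen so the stacks match
gather2 = A2 + B2 * s2

print("stack 1:", round(stack1, 4), " stack 2:", round(gather2.mean(), 4))
print("max gather difference:", round(float(np.abs(gather1 - gather2).max()), 4))
```

Run it and the two stacks print the same number, while the gathers themselves diverge steadily with angle; that divergence is exactly the information the stack throws away.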

One of my mentors at the University of Houston, Dr. John A. McDonald, once said, “Noise is only signals you don’t understand”.

But now we don’t call AVO, AVAZ, and VVAZ noise. Surely we’re smarter now; we use AVO and Prestack Inversion, which preserve gather information. That’s good enough, right?

I’m sorry, but we may be committing the same mistake. We do destroy some of the gather information by computing AVO attributes or running Inversion. AVO is a curve-fit. If we forward-model from AVO parameters and generate a synthetic gather, we don’t get the same gather back. Prestack Inversion summarizes the gather by estimating Vp, Vs, and Density; it’s a least-squares solution. If we forward-model from Vp, Vs, and Density and generate a synthetic gather, we don’t get the same gather back.
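And here is the curve-fit point in the same toy form: fit the two-term Shuey intercept and gradient to a synthetic “observed” gather by least squares, forward-model from the fit, and look at what is left over. The third-term and noise amplitudes below are numbers I made up; the only point is that the residual is not zero.

```python
# A hedged sketch of the curve-fit argument: fit A + B*sin^2(theta) by least
# squares, forward-model from the fitted A and B, and inspect the residual.
import numpy as np

theta = np.radians(np.arange(0, 41, 2))
s2 = np.sin(theta) ** 2

# "Observed" gather: two-term response plus a far-angle third term
# (C*sin^2*tan^2) the two-term model cannot represent, plus a little noise.
rng = np.random.default_rng(1)
observed = (0.08 - 0.15 * s2
            + 0.10 * s2 * np.tan(theta) ** 2
            + rng.normal(0, 0.005, theta.size))

# Least-squares fit for intercept A and gradient B.
G = np.column_stack([np.ones_like(s2), s2])
(A_fit, B_fit), *_ = np.linalg.lstsq(G, observed, rcond=None)

synthetic = A_fit + B_fit * s2            # forward model from fitted A, B
residual = observed - synthetic           # what the curve fit "threw away"

print("fitted A, B:", round(float(A_fit), 4), round(float(B_fit), 4))
print("RMS residual:", round(float(np.sqrt(np.mean(residual**2))), 4))
```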

So let me ask you, and I will be brutally honest – if we take the difference between a real gather and a synthetic gather, do we really think that it’s all truly noise? Or is it also a signal that we don’t understand? Is it like subtle onlaps, downlaps, and toplaps? Too subtle for us to be bothered with? 

In fact, it may be geology whispering to us.

[Image: Pete and Paranthropus boisei.Updated.jpg]

The real problem is that humans have limitations. We can easily visualize only seven dimensions in an interpretation workstation. We have three spatial dimensions. We can display data in RGB or HSV rendering, for a total of six. If we have time-lapse seismic, then seven. That’s about it. Beyond that, we have to start playing games like combining cubes with cube math, but that just merges cubes into fewer new cubes, still limiting us to seven in the display.

That’s why we stack, or make AVO attributes, or Prestack Invert to Vp, Vs, and Density - because we have to reduce the dimensionality of seismic data to something our ape brains can use. Otherwise, we can’t handle the truth! Prestack seismic data is too big for us to work with, so we have to simplify it.

Our ape brains have dimensional limitations. But machine learning has no such limitations. Why not unleash neural networks on a prestack gather, and let them try to learn from all of the richness in that dataset, as well as from well control, in order to try to classify a volume of data which would otherwise be untouchable without simplification?

Why don’t we let machine learning classify our prestack seismic, just as it classifies my Google Photos?

Join me for a Lunch & Learn session or Webinar to learn more.

Register Today!

For folks in Perth, Australia, you can meet one of the inventors of Rock Type Classification, my colleague, Kamal Hami-Eddine, who will be presenting the same topic in your city on February 7.  Click here to learn more.

Tags: Oil and Gas Software, artificial intelligence, rock type classification, Machine Learning, Robotic Seismic Interpretation, prestack seismic