By Bruno de Ribet, Technical Global Director
New classification procedures can significantly expand geoscientists’ traditional use of pre-stack data in reservoir characterization workflows, enabling them to predict lithofacies away from the wellbore and assess uncertainty for each rock type. This innovation highlights reservoir heterogeneities and details that standard reservoir characterization workflows normally fail to capture. Geoscientists can also build or update a 3D geological model with quantified uncertainties.
In a recent Paradigm Virtual Lecture Series webcast, I describe how to build a 3D reservoir property model defined by the principal subsurface rock types and their accurate spatial distribution. I also introduce a proposed new methodology based on Democratic Neural Network Association.
Why consider a new approach to lithology prediction? First, facies distribution remains a problematic challenge for the oil and gas industry when characterizing a reservoir. Successful hydrocarbon exploration and production requires predicting the lithofacies distribution throughout the reservoir with high precision. Within reservoir modeling studies, facies determination is also a major source of uncertainty, yet it is highly important because it affects the distribution of reservoir properties.
The goal, therefore, is to capture lateral and vertical heterogeneities in the facies distribution, and to ensure that models account for the facies distribution away from the wells.
The Neural Network Approach
When we talk about seismic facies, we talk about seismic data and unsupervised or supervised processes. Take, for example, a waveform classification in the Eagle Ford (see the image to the right). The neurons at the top represent the synthetic model applied to the seismic data, and the distribution or variation in the facies classification is also shown. This process was unsupervised, using the waveform and the most common algorithm applied in such technology, the neural network approach. The underlying technique, the self-organizing map (SOM), was originally developed by Kohonen in 1982.
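To make the SOM idea concrete, here is a minimal numpy sketch of Kohonen-style unsupervised waveform classification. The synthetic traces, the number of neurons, and the training schedule are all illustrative assumptions, not the parameters of any commercial implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "seismic traces": 200 windowed waveforms of 16 samples each,
# drawn from three underlying wavelet shapes plus noise (illustrative data).
t = np.linspace(0, 1, 16)
shapes = [np.sin(2 * np.pi * f * t) for f in (1, 2, 3)]
X = np.array([shapes[rng.integers(3)] + 0.2 * rng.normal(size=16)
              for _ in range(200)])

# 1D SOM with 6 neurons (prototype waveforms), trained with the classic
# Kohonen update: move the best-matching unit and its neighbors toward x.
n_neurons = 6
W = rng.normal(size=(n_neurons, X.shape[1]))

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)               # decaying learning rate
    radius = max(1.0, 3 * (1 - epoch / 50))   # shrinking neighborhood
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
        dist = np.abs(np.arange(n_neurons) - bmu)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))    # neighborhood kernel
        W += lr * h[:, None] * (x - W)

# Each trace is classified by its best-matching neuron: an unsupervised
# seismic facies label, with no well information involved.
labels = np.argmin(np.linalg.norm(X[:, None, :] - W[None], axis=2), axis=1)
```

Neighboring neurons end up holding similar prototype waveforms, which is why adjacent SOM classes map to gradual facies variations in the seismic volume.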
The next image highlights a seismic facies volume of the Barnett Shale. From this interpretation, we can detect the extent of the sweet spot (red/orange) from the different seismic attributes. This is a multi-attribute, quantitative approach associated with a specific facies, using an algorithm such as partitional or hierarchical clustering.
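A partitional clustering of multi-attribute vectors can be sketched in a few lines. The attribute choices (amplitude, frequency, impedance) and the two synthetic facies populations below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic multi-attribute vectors (amplitude, frequency, impedance) per
# voxel, drawn from two hypothetical facies populations.
facies_a = rng.normal([1.0, 30.0, 5.0], 0.3, size=(150, 3))
facies_b = rng.normal([3.0, 45.0, 8.0], 0.3, size=(150, 3))
X = np.vstack([facies_a, facies_b])

def kmeans(X, k, n_iter=20, seed=0):
    """Plain partitional (k-means) clustering with numpy."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Assign each voxel to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centers[j] = X[labels == j].mean(0)
    return labels, centers

labels, centers = kmeans(X, k=2)
# Voxels sharing a label form one candidate seismic facies.
```

A hierarchical alternative would merge the closest clusters bottom-up instead of iterating assignments; either way, the classes are driven purely by attribute similarity, not by well control.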
When we apply such a workflow, it’s also important to consider several basic facts, including:
The organization of the seismic data in the post-stack domain. When we use clustering techniques such as neural networks, post-stack seismic data classification is based mostly on data density and, eventually, on data characteristics.
The non-linear relation between what we observe at the well location (the well facies) and the seismic attributes.
The availability of data from different origins and resolutions, including logs, core, and pre- and post-stack seismic.
In the new method, our main goal is to combine geological (lithology) and geophysical data. By using such a rich seismic data set, geoscientists can account for all pre-stack information. The methodology also infers qualitative information, such as the lithology and facies interpreted at wells, from 3D seismic attributes. Geoscientists can find patterns in the seismic attributes that predict what is seen at the well, using a probabilistic “lithology type” approach that yields both a distribution and an uncertainty.
To accomplish this, we propose a methodology based on supervised learning using democratic associative neural networks. Supervised learning is the key ingredient: both the input variables and the “theoretical” responses are known from the data. The process determines a relationship between the input seismic data and the corresponding lithofacies index, defined using a set of neuron vectors and a given algorithm.
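The supervised, neuron-vector flavor of this idea can be illustrated with a minimal learning vector quantization (LVQ) sketch, where labeled prototype vectors play the role of the neuron vectors. The synthetic attribute/facies pairs below stand in for well-calibrated data; this is not the Paradigm implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Supervised pairs: seismic attribute vectors (inputs) and the lithofacies
# index logged at the well (known "theoretical" response). Synthetic here.
n, d, k = 240, 4, 3
X = rng.normal(size=(n, d))
y = np.argmax(X @ rng.normal(size=(d, k)), axis=1)

# LVQ1: a few prototype ("neuron") vectors per facies, pulled toward samples
# of their own class and pushed away from samples of other classes.
protos_per_class = 2
P = np.vstack([X[y == c][rng.choice((y == c).sum(), protos_per_class)]
               for c in range(k)])
labels_p = np.repeat(np.arange(k), protos_per_class)

for epoch in range(30):
    lr = 0.3 * (1 - epoch / 30)                # decaying learning rate
    for i in rng.permutation(n):
        j = np.argmin(np.linalg.norm(P - X[i], axis=1))   # winning neuron
        sign = 1.0 if labels_p[j] == y[i] else -1.0       # attract or repel
        P[j] += sign * lr * (X[i] - P[j])

# Prediction: each sample takes the facies of its nearest neuron vector.
pred = labels_p[np.argmin(np.linalg.norm(X[:, None] - P[None], axis=2), axis=1)]
acc = (pred == y).mean()
```

Unlike the unsupervised SOM above, the facies labels drive the training, so the learned neuron vectors encode the seismic-to-lithofacies relationship directly.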
What is the main difference between this method and classical lithofacies classification? The new method uses several independent networks in parallel. Multiple activation functions are used, which establishes different learning rules: some neural networks learn from the density of the information, for example, while others learn from the frontier between two main facies clusters. Parallel training provides a suite of independent predicted responses that can be compared with well measurements, so geoscientists can formulate multiple strategies. And because the different neural networks use different activation functions, the resolution issues are solved without an a priori definition of the transfer function between seismic attributes and well facies.
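The democratic principle can be sketched as an ensemble of small networks that share the same training pairs but use different activation functions, with a majority vote over their predictions. The network sizes, activations, and voting rule below are illustrative assumptions; the actual Democratic Neural Network Association method is not described at this level of detail in the webcast:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic attribute/facies training data (stand-in for well-calibrated data).
n, d, k = 300, 4, 3
X = rng.normal(size=(n, d))
y = np.argmax(X @ rng.normal(size=(d, k)), axis=1)
Y = np.eye(k)[y]  # one-hot facies indices

def train_net(X, Y, act, dact, seed, steps=800, lr=0.5, hidden=12):
    """One small network; `act` sets its activation and hence learning rule."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    for _ in range(steps):
        A = X @ W1 + b1; H = act(A)                      # hidden layer
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)                     # softmax output
        G = (P - Y) / len(X)                             # cross-entropy grad
        GH = (G @ W2.T) * dact(A, H)                     # backprop to hidden
        W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)
    return lambda Xq: (act(Xq @ W1 + b1) @ W2 + b2).argmax(1)

# Three independent networks with different activation functions.
members = [
    train_net(X, Y, np.tanh, lambda a, h: 1 - h ** 2, seed=10),
    train_net(X, Y, lambda a: np.maximum(a, 0),
              lambda a, h: (a > 0).astype(float), seed=11),
    train_net(X, Y, lambda a: 1 / (1 + np.exp(-a)),
              lambda a, h: h * (1 - h), seed=12),
]

votes = np.stack([m(X) for m in members])   # (3, n) independent predictions
# Democratic vote: most frequent facies wins; vote spread hints at uncertainty.
pred = np.apply_along_axis(lambda v: np.bincount(v, minlength=k).argmax(),
                           0, votes)
agreement = (votes == pred).mean(0)         # fraction of members agreeing
```

The per-sample agreement score is one simple way the ensemble yields an uncertainty estimate alongside the facies prediction.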
In summary, this workflow is innovative for two main reasons:
The ability to combine geological information with pre-stack data to fully reconcile the geophysical information.
The use of a specific set of neural networks in parallel, offering the ability to learn from the data and then establish different strategies.
It is also a geologically coherent workflow, because of what is learned from the wells and inferred from the seismic attributes. The resulting facies distribution helps geoscientists understand what happens between wells, where the uncertainty in facies distribution is greatest. With a workflow-based implementation, the user can run scenarios and establish which one is best, or most suitable for the data. Overall, this is a complementary approach to any seismic-driven reservoir characterization workflow, especially in heterogeneous geological environments.
For more information about the methodology proposed, download “Integrate Well and Pre-Stack Data for Lithology Prediction,” a 2014 Virtual Lecture Series webcast.