A statistical screening methodology is presented to address uncertainty in the main geological assumptions of green-field modeling. The goals are to identify the full range of uncertainty on production, to determine which uncertain geological inputs are most influential, and to understand the relationships between geological scenarios and classes of dynamic behavior.
The paper presents the methodology and an example application to a green field case study. The method is applied to an ensemble of reservoir models created by combining geological parameters across their ranges of uncertainty. The ensemble of models is then simulated with a selected development strategy, and the dynamic responses are grouped into classes of outcome through clustering algorithms. Ensemble responses are visualized on a multidimensional stacking plot as a function of the geological inputs, and the most influential parameters are identified by sorting the axes of the plot. Geological scenarios are then classified on dynamic responses through classification tree algorithms. Finally, a representative set of models is selected from the geological scenarios.
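The shape of such a screening pipeline can be sketched schematically. The snippet below uses synthetic data and scikit-learn stand-ins (KMeans for the clustering step, a decision tree for the scenario classification step); all sizes, inputs, and responses are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Sketch: cluster ensemble dynamic responses, then classify geological
# inputs against the response classes, then pick representatives.
# Everything here is synthetic and illustrative of the workflow shape.

rng = np.random.default_rng(1)
n_models = 200

# Geological inputs per ensemble member (e.g. fault orientation, channel fraction).
geo_inputs = rng.uniform(0.0, 1.0, size=(n_models, 2))

# Synthetic dynamic responses (e.g. recovery factor, breakthrough time),
# made to depend on the inputs so the classifier has something to find.
responses = np.column_stack([
    0.1 + 0.4 * geo_inputs[:, 1] + 0.05 * rng.standard_normal(n_models),
    5.0 - 3.0 * geo_inputs[:, 0] + 0.3 * rng.standard_normal(n_models),
])

# 1) Group dynamic responses into classes of outcome.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(responses)

# 2) Classify geological scenarios on the response classes.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(geo_inputs, labels)

# 3) Pick one representative model per response class (closest to centroid).
for k in range(4):
    members = np.where(labels == k)[0]
    centroid = responses[members].mean(axis=0)
    rep = members[np.argmin(np.linalg.norm(responses[members] - centroid, axis=1))]
    print(f"class {k}: representative model index {rep}")
```

The classifier's feature importances then play the role of the screening step, ranking which geological inputs drive the response classes.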
The example application shows a final oil recovery uncertainty range of 4:1, which is reasonable for a green field with little data. Such a wide range of uncertainty could hardly be found by common risk assessments based on fixed geological assumptions, which often tend to underestimate uncertainty on forecasts. Ensemble outcomes are grouped into four classes by oil recovery, plateau strength, produced water, and breakthrough time. The adoption of these clustering features gives a broad understanding of the reservoir dynamic response. The most influential geological inputs among the examined structural and sedimentological parameters in the example application are found to be the fault orientation and the channel fraction. This screening result highlights the main drivers of geological uncertainty and is useful for the subsequent scenario classification phase. Classification of the geological scenarios leads to five classes of geological parameter sets, each linked to a main class of dynamic behavior, and finally to five representative models. These five models constitute an effective sampling of the geological uncertainty space which also captures the different types of dynamic response.
This paper will contribute to widening the engineering experience on the use of machine learning for risk analysis by presenting an application to a real field case study that explores the relationship between geological uncertainty and reservoir dynamic behavior.
While image processing is still an active area of research, standard workflows have emerged and are routinely used by oil and gas companies.
However, while hardware capabilities have increased accordingly, allowing large samples to be scanned with high fidelity, permeability simulations are still limited to small samples unless high-performance computing (HPC) resources are available. Direct simulations are known to be more flexible in terms of rock type but limited in sample size, whereas pore-network-model-based approaches allow much larger sample sizes but fewer rock types.
In this study, we will focus on the pore space analysis of a Middle Eastern carbonate sample. The rock sample is 7.5 cm tall and has a diameter of 3.8 cm. It has been acquired at three different resolutions: a microCT scan at 16 μm, a microCT scan of a 10 mm diameter subsample at 5 μm, and an SEM section of 10 mm diameter at 2 μm.
This study will propose a methodology to mix the different scales in order to get an accurate pore space analysis of the largest possible sample size.
As microporous regions are visible at every scale, introducing uncertainty into the segmentation step, the first part of our analysis will consist of determining the most accurate pore space at the three different resolutions. We will rely on image registration (2D-to-3D and 3D-to-3D) and image-based upscaling methods, further validated by simulation results.
Given the large numerical size of the samples, specific workflows involving large data 3D visualization and processing will be presented.
Then, different measures will be conducted: porosity and connected porosity, absolute permeability with three different methods (Lattice Boltzmann, Finite Volume, Pore Network Modeling), and relative permeability curves using a Pore Network Model simulator. A new pore network model generation method applicable to highly concave pore spaces, such as those of carbonates, will also be introduced.
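As a minimal sketch of the simplest of these measures, the snippet below computes total and connected porosity on a segmented binary volume. The random volume, its size, and the z-spanning connectivity criterion are illustrative assumptions, not the actual sample data.

```python
import numpy as np
from scipy import ndimage

# Sketch: total and connected porosity of a segmented pore-space volume.
# 1 = pore voxel, 0 = grain. The random volume is a synthetic stand-in
# for a segmented microCT image.

rng = np.random.default_rng(42)
volume = (rng.random((64, 64, 64)) < 0.4).astype(np.uint8)

# Total porosity: fraction of pore voxels.
porosity = volume.mean()

# Connected porosity: keep only pore clusters touching both z-faces,
# i.e. the pore space that percolates in the z direction.
labeled, _ = ndimage.label(volume)  # 6-connectivity by default
top = set(np.unique(labeled[:, :, 0])) - {0}
bottom = set(np.unique(labeled[:, :, -1])) - {0}
spanning = top & bottom
connected = np.isin(labeled, list(spanning))
connected_porosity = connected.mean()

print(f"porosity: {porosity:.3f}, connected (z-spanning): {connected_porosity:.3f}")
```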
A scalable method using automation will be presented, so that the simulations can easily be repeated on samples of different origins and sizes.
We will expose the results and limits of every method and determine which sample size brings convergence of the results. We will especially look at the convergence of direct simulations and pore-network-model-based ones, so that expanding the sample size prior to pore network model generation can be done reliably.
In addition to the benchmark of the different simulation methods and their associated limits, the results will help us determine the representative elementary volume at different resolutions and the associated uncertainty, depending on whether sub-resolution acquisitions are available or not.
Figure: microCT and SEM images of the carbonate rock sample.
Zafar is a strategy consultant with Accenture and is based out of Mumbai. Before Accenture, he worked for 5 years at Halliburton designing drill bits for oil and gas companies in South Asia. He has been a volunteer with TWA since 2013 supporting multiple sections prior to transitioning to a leadership role in 2018. He is a keen technophile, an avid debater, and a passionate Toastmaster. He has participated in and won several public speaking and debate competitions. His hobbies include running, collecting key-rings, building robots, and keeping abreast of global geopolitics. Kristin Cook is the Advisor to TWA. She is an MS candidate in Energy and Earth Resources at the University of Texas at Austin. Her interests include energy policy, oil and gas project development, and energy security and geopolitics. Prior to starting graduate school, Cook worked for 5 years as a production engineer in the San Juan Basin, a natural gas field in northwestern New Mexico.
Dynamic data is information that changes asynchronously as updates arrive. Unlike static data, which is infrequently accessed and unlikely to be modified, or streaming data, which has a constant flow of information, dynamic data involves updates that may come at any time, with sporadic periods of inactivity in between. In the context of reservoir engineering, dynamic data is used during the creation of a reservoir model in conjunction with historical static data. When modeled accurately, any sampling from the conditional distribution would produce accurate static and dynamic characteristics. When a permanence of ratios hypothesis is employed, the conditional probability P(A|B,C) can be expressed in terms of P(A), P(A|B), and P(A|C).
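The permanence-of-ratios combination mentioned above (Journel's formulation, in which each ratio of "distances to certainty" contributed by one data source is assumed unchanged by knowledge of the other) can be written down directly. The snippet below is a minimal sketch with illustrative probability values.

```python
# Sketch of the permanence-of-ratios combination (after Journel, 2002).
# Given P(A), P(A|B), P(A|C), estimate P(A|B,C).

def odds_against(p):
    """Distance to certainty: (1 - p) / p."""
    return (1.0 - p) / p

def permanence_of_ratios(p_a, p_ab, p_ac):
    """Combine P(A|B) and P(A|C) into P(A|B,C)."""
    a = odds_against(p_a)
    b = odds_against(p_ab)
    c = odds_against(p_ac)
    x = b * c / a          # permanence of ratios: x / b = c / a
    return 1.0 / (1.0 + x)

# Two sources that each raise P(A) from 0.5 to 0.8 reinforce each other:
p = permanence_of_ratios(0.5, 0.8, 0.8)
print(round(p, 4))  # 0.9412
```

Note that an uninformative source (P(A|C) = P(A)) leaves the combined probability equal to P(A|B), as expected.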
Integration of time-lapse seismic data into dynamic reservoir models is an efficient way to calibrate reservoir parameter updates. The choice of the metric that measures the misfit between observed data and the simulated model has a considerable effect on the history matching process, and therefore on the optimal ensemble of models obtained. History matching using 4D seismic and production data simultaneously is still a challenge because of the different nature of the two data types (time series versus maps or volumes).
Conventionally, the misfit is formulated as a least-squares difference, which is widely used for production data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proven reliable. This study explores the history matching process by introducing a merged objective function combining the production and 4D seismic data. The proposed approach makes these two types of data (well and seismic) comparable within a single objective function to be optimised, thereby avoiding the question of weights. An adaptive evolutionary optimisation algorithm is used for the history matching loop. Local and global reservoir parameters are perturbed in this process, including porosity, permeability, net-to-gross, and fault transmissibility.
This production and seismic history matching has been applied to a UKCS field; it shows that an acceptable production data match is achieved while honouring saturation information obtained from 4D seismic surveys.
The work discussed and presented in this paper focuses on the history matching of reservoirs by integrating 4D seismic data into the inversion process using machine learning techniques. A new integrated scheme for the reconstruction of petrophysical properties with a modified Ensemble Smoother with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed. The permeability field inside the reservoir is parametrised with an unsupervised learning approach, namely K-means with Singular Value Decomposition (K-SVD). This is combined with the Orthogonal Matching Pursuit (OMP) technique, which is commonly used in sparsity-promoting regularisation schemes. Moreover, seismic attributes, in particular acoustic impedance, are parametrised with the Discrete Cosine Transform (DCT). This novel combination of techniques from machine learning, sparsity regularisation, seismic imaging and history matching aims to address the ill-posedness of the inversion of historical production data efficiently using ES-MDA. In the numerical experiments provided, I demonstrate that these sparse representations of the petrophysical properties and the seismic attributes enable better matches to the true production data and better quantification of the propagating waterfront, compared to more traditional methods that do not use comparable parametrisation techniques.
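As an illustration of DCT-based parametrisation in general (a sketch with a synthetic smooth field, not this paper's implementation), a 2D property field can be reduced to a compact parameter set by retaining only its leading low-frequency DCT coefficients; the grid size and coefficient count below are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Sketch: low-dimensional DCT parametrisation of a 2D property field.
# Keeping only the leading (low-frequency) coefficients gives a compact
# set of parameters to update during data assimilation.

nx, ny, keep = 32, 32, 8

# Smooth synthetic "acoustic impedance" field (stand-in for real data).
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
field = np.sin(2 * np.pi * x) * np.cos(np.pi * y)

coeffs = dctn(field, norm="ortho")            # forward DCT
reduced = np.zeros_like(coeffs)
reduced[:keep, :keep] = coeffs[:keep, :keep]  # keep leading coefficients
approx = idctn(reduced, norm="ortho")         # back-transform

rel_err = np.linalg.norm(field - approx) / np.linalg.norm(field)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because smooth fields concentrate their energy in low frequencies, a small block of coefficients reconstructs the field with low error, which is what makes the parametrisation effective for regularising the inversion.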
Multiple point statistical (MPS) simulation is a modern pattern-based geostatistical approach for describing and stochastically simulating geologic formations with complex connectivity patterns. In MPS geostatistical simulation, a template containing data patterns around each simulation cell is used to extract and store the local conditional probabilities from a training image (TI). To generate a simulated sample, a random path is generated to sequentially visit all unsampled grid cells and draw conditional samples from the corresponding stored conditional probabilities. The grid-based implementation of MPS simulation offers several advantages for the integration of hard and soft data. In the Single Normal Equation SIMulation (SNESIM) implementation of MPS for facies simulation, it has been observed that the integration of soft data can result in many facies realizations whose patterns are not consistent with the incorporated probability map. This is partly explained by the Markov property, which only considers probabilities that are co-located with the simulation node and hence ignores spatial information from neighboring cells. In addition to this effect, we show that another important mechanism is at play in the SNESIM algorithm that explains the observed behavior. Specifically, at the early stage of the simulation, when the first few percent of the simulation nodes on the random path are visited, the local conditioning data are limited and the resulting conditional probabilities obtained from the TI are not strictly constrained. Hence the conditional probabilities cover a wide range of values in the interval [0,1]. However, after this initial stage, as the simulated data populate more cells in the model grid, they tend to severely constrain the conditional probabilities to assume extreme values of 0 or 1.
With these extreme values at the later stages of the simulation, the probability values included in the soft data (as a secondary source of information) tend to be disregarded, and the facies types are predominantly determined by the TI. We demonstrate and discuss this behavior of the SNESIM algorithm through several examples and present strategies that can be adopted to compensate for this effect. The presented examples concern indirect integration of flow data by first inferring probabilistic information about facies types and then using the results as soft data for integration into the SNESIM algorithm.
Agent-based models (ABMs) provide a fast alternative to traditional partial differential equation (PDE)-based oil reservoir models by applying localized, inexpensive simulations rather than solving a PDE at every time step. However, while theoretical and numerical results have been obtained with ABMs in social science applications, the accuracy of ABMs has not been analyzed in the context of oil reservoir modeling.
The field of data-driven analytics and machine learning is rapidly evolving today and slowly beginning to reshape the petroleum sector with transformative initiatives.
This work describes a heuristic approach combining mathematical modeling and associated data-driven workflows for estimating reservoir pressure surfaces through space and time using measured data. This procedure has been implemented successfully in a giant offshore field with a complex history of active pressure management by water and gas.
This practical workflow generates present-day pressure maps that can be used directly in reservoir management by locating poorly supported areas and planning mitigation activities. It assists and guides the history matching process by offering a benchmark against which simulated pressures can be compared. Combined with data-based streamline computation, this workflow improves the understanding of fluid flow movements, helps to identify baffles, and assists in field sectorization.
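As a schematic stand-in for such pressure mapping (not the paper's actual algorithm; the wells, pressure values, and the inverse-distance weighting scheme are all illustrative assumptions), measured well pressures can be interpolated onto a map grid as follows:

```python
import numpy as np

# Sketch: inverse-distance-weighted (IDW) interpolation of well pressure
# measurements onto a map grid. A generic stand-in for the space-time
# interpolation workflow; well locations and values are made up.

def idw_pressure_map(wells_xy, pressures, grid_x, grid_y, power=2.0):
    """Interpolate pressures (one value per well) onto a 2D grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    pmap = np.zeros_like(gx, dtype=float)
    wsum = np.zeros_like(gx, dtype=float)
    for (wx, wy), p in zip(wells_xy, pressures):
        d = np.hypot(gx - wx, gy - wy)
        w = 1.0 / np.maximum(d, 1e-6) ** power  # avoid division by zero at wells
        pmap += w * p
        wsum += w
    return pmap / wsum

wells = [(0.2, 0.3), (0.8, 0.7), (0.5, 0.9)]
p_obs = [250.0, 180.0, 210.0]  # bar, illustrative
grid = np.linspace(0.0, 1.0, 50)
pressure_map = idw_pressure_map(wells, p_obs, grid, grid)
print(pressure_map.shape)
```

Because IDW produces a convex combination of the observations, the interpolated map never exceeds the range of measured pressures, which is one reason distance-weighted schemes are a common baseline for this kind of mapping.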
The distinctive feature of this data-driven approach is the unbiased reliance on field-observed data that complements complex modeling and compute-intensive schemes typically found in reservoir simulation. Conventional dynamic simulation and the tracing of streamlines require adequate static (e.g. permeability tensor) and dynamic models (e.g. pressures for each active cell in the system).
Alternatively, data-driven streamlines are readily available and calibrated.
This paper contributes innovative algorithms and workflows to the relatively limited existing body of literature on data-driven methods for pressure mapping.
In this case study, new insights such as inter-reservoir communication are revealed, enabling a better understanding of the gas movement and supporting the change in production strategy.
The paper is organized as follows. After a general overview of the studied field, it describes in detail the workflows used to interpolate pressures in space and time, along with cross-validation results. Various applications of the pressure predictions are presented in the sections thereafter.
Liu, Xige (Northeastern University, Shenyang) | Zhu, Wancheng (Northeastern University, Shenyang) | Yu, Qinglei (Northeastern University, Shenyang) | Liu, Honglei (Northeastern University, Shenyang) | Cheng, Guanwen (Northeastern University, Shenyang)
The existence of many joints brings about the problem of interaction between joints. In this regard, it is very important to explore the interaction between two joints under shear loading conditions. In this study, the shear performance of double joints was studied using laboratory experiments, where the smooth cleavage joints were generated by sawing and the rough joints were generated by the Brazilian split test. The experimental results for double joints of sandstone show that, under lower normal stress, the interlayer rock between the two joints does not fracture, and the peak shear strength of the specimen is determined by the weaker joint. In contrast, under higher normal stress, the peak shear strength is attained when tensile fractures initiate in the interlayer rock, and it also depends on the JRC of the double joints and the interlayer thickness. Additionally, numerical simulations of the double-joint shear tests show that the direction of the cracks tends to be parallel with that of the maximum principal stress, and the stress concentration on the joint surfaces causes penetration between different joints, which leads to a lower strength.
Shear strength of a single joint can be effectively estimated by an empirical formula. However, the structural planes or joints in a real rock mass usually do not exist alone, and the interaction between different joints has an important influence on the overall strength. Yang et al. studied the strength and deformation of rock specimens cut with parallel structures under uniaxial compression conditions and indicated that the failure modes of specimens with multiple structures can be divided into three types. Yoshinaka and Yamabe conducted biaxial compression tests on rock mass with multiple joints, characterized its stress-strain behavior, and established a related constitutive equation. Through true triaxial compression tests on large multi-jointed rock specimens, Reik and Zacas found that the deformation and failure mode of a jointed rock mass are related to both the direction of the joints and the stress state. Kulatilake et al. deemed that the fracture tensor component can be used to establish a nonlinear relationship with the strength of a rock mass with many groups of joints. Jaeger and Bray predicted the strength of rock mass containing one or two joints by the principle of stress superposition, considering that the weakest structure played a decisive role in the strength of the rock mass. Hoek and Brown have also set up a prediction formula for jointed rock mass strength based on uniaxial compression tests. In addition, numerical simulation methods have been used to study the mechanical properties of multi-jointed rock, and the results show that the direction, number, and spacing of the structural planes influence the overall strength. Generally speaking, the current understanding of the interaction between different joints is still insufficient, especially regarding the interpenetration between two or more rough joints.
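For reference, the widely used Barton criterion (presumably the kind of empirical single-joint formula referred to above) estimates peak shear strength from the joint roughness coefficient (JRC), the joint wall compressive strength (JCS), and the residual friction angle. The input values in the sketch below are illustrative.

```python
import math

# Sketch: Barton's empirical criterion for the peak shear strength of a
# single rough joint. All input values are illustrative.

def barton_shear_strength(sigma_n, jrc, jcs, phi_r_deg):
    """Peak shear strength: tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n))."""
    angle_deg = phi_r_deg + jrc * math.log10(jcs / sigma_n)
    return sigma_n * math.tan(math.radians(angle_deg))

# Illustrative values: normal stress 2 MPa, JRC 10, JCS 100 MPa,
# residual friction angle 30 degrees.
tau = barton_shear_strength(sigma_n=2.0, jrc=10.0, jcs=100.0, phi_r_deg=30.0)
print(f"peak shear strength: {tau:.2f} MPa")
```

The roughness term JRC * log10(JCS / sigma_n) shrinks as normal stress rises, which is consistent with the observation above that the governing mechanism changes between low and high normal stress.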