Taha, Taha (Emerson Automation Solutions) | Ward, Paul (Emerson Automation Solutions) | Peacock, Gavin (Emerson Automation Solutions) | Heritage, John (Emerson Automation Solutions) | Bordas, Rafel (Emerson Automation Solutions) | Aslam, Usman (Emerson Automation Solutions) | Walsh, Steve (Emerson Automation Solutions) | Hammersley, Richard (Emerson Automation Solutions) | Gringarten, Emmanuel (Emerson Automation Solutions)
Abstract This paper presents a case study in 4D seismic history matching using an automated, ensemble-based workflow that tightly integrates the static and dynamic domains. Subsurface uncertainties, captured at every stage of the interpretative and modelling process, are used as inputs to a repeatable workflow. By adjusting these inputs, an ensemble of models is created, and their likelihoods are constrained by observations within an iterative loop. The result is multiple realizations of calibrated models that are consistent with the underlying geology, the observed production data, and the seismic signature of the reservoir and its fluids. It is effectively a digital twin of the reservoir with an improved predictive ability that provides a realistic assessment of the uncertainty associated with production forecasts. The example used in this study is a synthetic 3D model mimicking a real North Sea field. Data assimilation is conducted using an Ensemble Smoother with Multiple Data Assimilation (ES-MDA). This paper has a significant focus on seismic data, with the corresponding result vector generated via a petro-elastic model. 4D seismic data proves to be a key additional source of measurement data; its unique volumetric distribution helps create a coherent predictive model. This allows recovery of the underlying geological features and more accurately models the uncertainty in predicted production than was possible by matching production data alone. A significant advantage of this approach is the ability to simultaneously utilize multiple types of measurement data, including production, RFT, PLT, and 4D seismic. Newly acquired observations can be rapidly accommodated, which is often critical as the value of most interventions is reduced by delay.
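The ES-MDA scheme referred to in the abstract can be sketched in a few lines: the smoother update is repeated several times, each time with the observation-error covariance inflated by a coefficient alpha such that the reciprocals of the alphas sum to one. The following is a minimal illustrative sketch, not the paper's implementation; the generic `forward` operator and the toy dimensions are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def esmda_update(M, d_obs, C_d, alphas, forward):
    """ES-MDA: repeat a smoother update len(alphas) times, each time with
    the observation-error covariance inflated by alpha (sum of 1/alpha = 1).
    M: (n_params, n_ens) parameter ensemble; d_obs: observed data vector."""
    n_d = d_obs.size
    for alpha in alphas:
        D = forward(M)  # predicted data for each member, (n_d, n_ens)
        # perturb the observations with inflated noise, one draw per member
        E = rng.multivariate_normal(np.zeros(n_d), alpha * C_d, M.shape[1]).T
        D_obs = d_obs[:, None] + E
        Mm = M - M.mean(axis=1, keepdims=True)
        Dm = D - D.mean(axis=1, keepdims=True)
        C_md = Mm @ Dm.T / (M.shape[1] - 1)  # param-data cross-covariance
        C_dd = Dm @ Dm.T / (M.shape[1] - 1)  # data auto-covariance
        K = C_md @ np.linalg.inv(C_dd + alpha * C_d)
        M = M + K @ (D_obs - D)
    return M
```

On a linear-Gaussian toy problem this drives the ensemble mean toward the truth while shrinking the spread, which is the behavior the calibrated ensemble relies on.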
Thulin, Kristian (International Research Institute of Stavanger and Centre for Integrated Petroleum Research University of Bergen) | Nævdal, Geir (International Research Institute of Stavanger and Centre for Integrated Petroleum Research University of Bergen) | Skaug, Hans Julius (University of Bergen) | Aanonsen, Sigurd Ivar (University of Bergen)
Summary The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir-simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low-rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but, because of the low-rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with a smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a pointwise confidence interval to be constructed for the posterior cumulative distribution function (CDF). We investigate the methodology for finding an optimal combination of ensemble batch size n and number of EnKF runs m while keeping the total number of ensemble members n×m constant. The optimal combination of n and m is found by minimizing the integrated mean-square error (MSE) of the CDFs. We illustrate the approach on two models, first a small linear model and then a synthetic 2D model inspired by petroleum applications. In the latter case, we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The proposed methodology should also be applicable to larger, more realistic models.
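Once m independent posterior batches are available (one per EnKF run), the pointwise confidence band for the CDF can be sketched as below. This is our illustrative construction, assuming Gaussian sampling error across batches and a normal quantile for brevity; the paper's actual interval and MSE machinery may differ.

```python
import numpy as np

def cdf_confidence_band(batches, grid, z=1.96):
    """Pointwise confidence band for a posterior CDF estimated from m
    independent sample batches (e.g., m separate EnKF runs of n members).
    batches: (m, n) array of posterior samples; grid: CDF evaluation points.
    Uses a normal quantile z; a t quantile would be tighter for small m."""
    m = batches.shape[0]
    # empirical CDF of each batch evaluated on the grid -> (m, len(grid))
    F = (batches[:, :, None] <= grid[None, None, :]).mean(axis=1)
    center = F.mean(axis=0)
    se = F.std(axis=0, ddof=1) / np.sqrt(m)
    return center, center - z * se, center + z * se
```

The integrated MSE trade-off studied in the paper then amounts to comparing such bands for different (n, m) splits at fixed n×m.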
In the current business environment, oil and gas operators in unconventional plays are continually pushed to reduce risk and increase profitability. Recent advances in horizontal drilling, hydraulic fracturing, and completion design in shale wells have helped reduce costs. However, accurate reserve estimation and production forecasting remain the greatest unknowns impacting the bottom line. The use of advanced subsurface models is gaining momentum to address this challenge. Typical techniques currently used to build and history match these models are based on simplified assumptions and ignore uncertainty in the subsurface characterization. This paper presents a multi-disciplinary ensemble-based history matching approach for reliable production forecasting from shale reservoirs. It incorporates uncertainty from different modeling domains, leading to the generation of improved predictive shale models.
The proposed approach leverages micro-seismic data to create a more realistic representation of the Stimulated Reservoir Volume (SRV). Micro-seismic event locations, magnitude, and fracture plane characteristics are used to construct a Discrete Fracture Network (DFN) required for petrophysical modeling. A forward model comprising DFN modeling, an application to generate relative permeability curves, and a reservoir simulator is set up using a common platform integrator. These applications are run in tandem to generate an ensemble of history matched shale models that capture the range of uncertainties in fracture attributes, relative permeability, and other important dynamic parameters. Production data is assimilated into the shale model using Bayesian statistics and state-of-the-art supervised machine learning techniques.
Our approach is demonstrated using data acquired from three hydraulically fractured wells drilled in the Eagle Ford shale oil window. The use of Bayesian statistics and machine learning techniques led to the identification of multiple shale models calibrated to the production data. Fracture attributes, saturation endpoints, relative permeability curvature, matrix porosity, and initial water saturation were found to significantly affect the history match. A comparison of prior and posterior ensembles showed a significant reduction in uncertainty in predicted production.
The proposed technique ensures that important, previously neglected subsurface uncertainties and their dependencies are captured and used as input to the simulation model. Improved SRV delineation using micro-seismic data and property modeling using DFN, combined with the ability to capture uncertainty through an integrated multi-disciplinary approach, deliver predictive shale models that enable future decisions to be made with a high degree of confidence.
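The Bayesian assimilation step in a workflow like this can be illustrated with likelihood-weighted resampling of the prior ensemble. This is only a minimal sketch under a Gaussian-likelihood assumption; the function names are ours, and the supervised machine learning component mentioned in the abstract is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def bayesian_weights(mismatch):
    """Turn per-member data mismatches (sum of squared residuals between
    simulated and observed production, normalized by the error variance)
    into normalized posterior weights via a Gaussian likelihood."""
    logw = -0.5 * mismatch
    logw -= logw.max()  # subtract the max for numerical stability
    w = np.exp(logw)
    return w / w.sum()

def resample_posterior(prior_ensemble, mismatch, n_out):
    """Draw an approximate posterior ensemble by weighted resampling of
    the prior ensemble (rows are members, columns are parameters)."""
    w = bayesian_weights(mismatch)
    idx = rng.choice(prior_ensemble.shape[0], size=n_out, p=w)
    return prior_ensemble[idx]
```

Comparing the spread of `prior_ensemble` and the resampled output gives the prior-versus-posterior uncertainty reduction the abstract reports.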
Aslam, Usman (Emerson Automation Solutions) | Martinez Cruz, Dalia (Emerson Automation Solutions) | Perez Cardenas, Luis Hernando (Emerson Automation Solutions) | Leon Garcia, Alfredo (Grupo R Petroleo y Gas) | Ramirez Ramirez, Christian (Grupo R Petroleo y Gas)
Abstract Modern cubic Equations-of-State (EOS) are used to describe reservoir fluid phase behavior and for volumetric prediction under varying pressure, temperature, and fluid composition. These equations require calibration to measured laboratory data for reliable prediction. Typical techniques use linear regression or gradient descent methods to calibrate an EOS to measured data. This results in a single solution, whereas such calibration is an inverse problem with a non-unique solution. In addition, these calibration techniques are limited to cases where the initial fluid composition is known. Bayesian inference accelerated by response surface modeling, also termed proxy modeling, is a technique commonly used to calibrate subsurface models to historical production data. This paper extends the application of a proxy modeling approach to regressing an EOS while simultaneously determining the initial fluid composition of a multi-component hydrocarbon mixture. The proposed technique is demonstrated through its application to a PVT model based on a black-oil fluid sample obtained from an oil field in the Gulf of Mexico. The initial fluid composition of the sample was unknown, but the sample was characterized using two PVT experiments: CCE (Constant Composition Expansion) and DLE (Differential Liberation Experiment). The PVT model was initially parametrized by uncertain input parameters with prior distributions. The fluid composition of a typical black-oil fluid sample was used as an initial guess in the PVT model. An initial proxy model was created using the parametrized PVT model with the objective of reducing the mismatch between simulated and user-selected measured PVT data. The proxy model was continuously improved using a sequential design algorithm involving Latin Hypercube (LHC) sampling and a genetic algorithm, followed by gradient optimization.
This sequential design ensures that multiple calibrated PVT models with an acceptable degree of accuracy are found while exploring the entire solution space of possible PVT models. In addition, the proposed technique helps determine the initial fluid composition, a capability that traditional regression approaches lack. Results show that the mismatch between the simulated and the measured PVT data is significantly smaller than with traditional approaches. Comparison of prior versus posterior ensembles of PVT models generated using the proxy model reveals that the mole fractions of the various components gradually converge to single values and that the uncertainty in the phase envelope is significantly reduced. The proxy model used in our proposed technique provides a robust minimization method that selects and works with the most significant EOS parameters, alleviating the tedious and time-consuming process of regression parameter selection. New regression parameters can be introduced midway through the regression, and the tuned parameters always remain within reasonable physical limits since they are sampled from the user-defined prior distributions. Unlike traditional approaches to PVT regression, the proposed approach does not place a limit on the number of uncertain parameters that can be changed during regression.
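The sequential-design loop (LHC sampling, global search, local refinement) can be sketched as follows. This is a deliberately simplified stand-in: a quadratic response surface without cross terms plays the proxy, a dense candidate cloud replaces the genetic algorithm and gradient step, and a toy quadratic mismatch stands in for the PVT simulator; none of these choices are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def lhc(n, d):
    """Latin Hypercube sample: n points in the d-dimensional unit cube,
    one point per stratum in every dimension."""
    pts = np.empty((n, d))
    for j in range(d):
        pts[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return pts

def fit_proxy(X, y):
    """Least-squares quadratic response surface (no cross terms)."""
    A = np.hstack([np.ones((len(X), 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def proxy_predict(coef, X):
    return np.hstack([np.ones((len(X), 1)), X, X ** 2]) @ coef

def sequential_design(mismatch, d, n_init=20, n_iter=5, n_cand=500):
    """Sequential design: refit the proxy, minimize it over a fresh
    candidate cloud (standing in for the GA + gradient step), then
    evaluate the true mismatch at the proxy optimum and augment the data."""
    X = lhc(n_init, d)
    y = np.array([mismatch(x) for x in X])
    for _ in range(n_iter):
        coef = fit_proxy(X, y)
        cand = lhc(n_cand, d)
        x_new = cand[np.argmin(proxy_predict(coef, cand))]
        X = np.vstack([X, x_new])
        y = np.append(y, mismatch(x_new))
    best = np.argmin(y)
    return X[best], y[best]
```

Because every expensive evaluation is added back into the training set, the proxy sharpens exactly where the search concentrates, which is what lets such schemes find multiple acceptable calibrations cheaply.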
Abstract One of the challenging issues in integrating 4D seismic data into reservoir history matching is comparing the measured data to the model data in a consistent way. It is important to decide which kind of seismic data is best used and at which stage of the history matching process it can be integrated. In this work, we have performed 4D seismic history matching of a sector model based on a North Sea reservoir in the ensemble Kalman filter (EnKF) framework and have investigated the effects of different types of time-difference seismic data on updating reservoir models. The reservoir-seismic model system consists of a commercial reservoir simulator coupled to a rock physics model and a forward seismic modeling tool based on 1D convolution with a weak-contrast reflectivity approximation. The objective of this work is to investigate the sensitivity of different combinations of production and seismic data on EnKF model updating. The uncertain static reservoir parameters considered are porosity and permeability; dynamic variables such as pressure and water saturation are conditioned to both production and seismic data. In particular, we are interested in quantifying well performance, matching the seismic data, and estimating the reservoir parameters. In most of the reservoir characterization cases, time-difference impedance data performed better than time-difference amplitude data and considerably reduced the posterior ensemble spread. The match to seismic data generally improved with the inclusion of time-difference seismic data. In estimating posterior porosity and permeability, seismic difference data provided better estimates than production data alone, especially in the aquifer region and in areas that might be considered for in-fill wells.
Thus, in our realistic synthetic case based on a full-field reservoir model, we found that integrating seismic data in the elastic domain performed better than using amplitude data. In addition, we investigated the effects of the vertical resolution of seismic data on EnKF model updating and showed that the choice of wavelet discretization points can have a significant influence on the quality of the history match.
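A 1D convolution forward model with the weak-contrast reflectivity approximation can be sketched as below: reflectivity is half the difference of log impedance, convolved with a wavelet, and the 4D attribute is the monitor-minus-base trace difference. The Ricker wavelet choice and the trace dimensions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def ricker(freq, dt, n):
    """Zero-phase Ricker wavelet of peak frequency freq (Hz), sampled at
    dt (s) over n points centred on the peak."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * freq * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(impedance, wavelet):
    """Weak-contrast reflectivity r_i = 0.5 * (ln Z_{i+1} - ln Z_i),
    convolved with the wavelet: the 1D convolution forward model."""
    r = 0.5 * np.diff(np.log(impedance))
    return np.convolve(r, wavelet, mode="same")

def time_difference(z_base, z_monitor, wavelet):
    """4D attribute: monitor-minus-base amplitude difference."""
    return synthetic_trace(z_monitor, wavelet) - synthetic_trace(z_base, wavelet)
```

A constant-impedance column yields a zero trace, while an impedance change in a swept interval produces a localized 4D anomaly, which is the signal the EnKF update conditions on.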