Across the global oil and gas industry, projects are sanctioned on the basis of specific production forecasts that commit volumes and ensure continuity in the exploration and production business. The majority of these forecasts fall short of the promises made to the investment community and other stakeholders, and there is evidence that industry expectations routinely exceed historical delivery. The onus lies with the exploration, development, and production groups tasked with providing this information to use every tool and method at their disposal and to make realistic corrections during the forecasting process. It is also important to note that no single model will perfectly represent reality; instead, multiple paths can be relied upon to represent a realistic range of outcomes.
Integrating time-lapse (4D) seismic data into the dynamic reservoir model is an efficient way to calibrate reservoir-parameter updates. The choice of metric used to measure the misfit between observed data and the simulated model has a considerable effect on the history-matching process, and hence on the optimal ensemble of models obtained. History matching with 4D seismic and production data simultaneously remains a challenge because the two data types differ in nature (time series versus maps or volumes).
Conventionally, the misfit is formulated as a least-squares function, which is widely used for production-data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proved reliable. This study explores the history-matching process by introducing a merged objective function that combines the production and 4D seismic data. The approach proposed in this paper makes the two data types (well and seismic) comparable within a single objective function to be optimised, thereby avoiding the question of weights. An adaptive evolutionary optimisation algorithm drives the history-matching loop, perturbing local and global reservoir parameters including porosity, permeability, net-to-gross, and fault transmissibility.
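As a rough illustration only (not the authors' actual formulation, which relies on a distance-based image metric), the sketch below shows one way production and seismic misfits can be placed on a common dimensionless scale so that a single merged objective needs no explicit weights; the names and the noise-based normalisation are assumptions.

```python
import numpy as np

def normalized_misfit(observed, simulated, noise_std):
    """Least-squares misfit scaled by assumed noise level and data count,
    so a well-calibrated model scores near 1 regardless of data type."""
    r = (np.asarray(observed) - np.asarray(simulated)) / noise_std
    return np.sum(r**2) / r.size

def merged_objective(prod_obs, prod_sim, prod_std,
                     seis_obs, seis_sim, seis_std):
    """Single objective combining well (time-series) and 4D-seismic
    (map-based) misfits on a common, dimensionless scale, avoiding
    an explicit weighting between the two data types."""
    f_prod = normalized_misfit(prod_obs, prod_sim, prod_std)
    f_seis = normalized_misfit(np.ravel(seis_obs), np.ravel(seis_sim), seis_std)
    return f_prod + f_seis
```

The design point is that each term is divided by its own noise variance and data count before summation, which is what makes the two data types directly comparable.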
This combined production and seismic history matching has been applied to a UKCS field, where it achieves an acceptable production-data match while honouring the saturation information obtained from the 4D seismic surveys.
Ahmadinia, Masoud (Centre for Fluid and Complex Systems, Coventry University) | Shariatipour, Seyed (Centre for Fluid and Complex Systems, Coventry University) | Andersen, Odd (SINTEF Digital, Mathematics and Cybernetics) | Sadri, Mahdi (Centre for Fluid and Complex Systems, Coventry University)
To improve a reservoir simulation model, uncertain parameters such as the porosity and permeability of the reservoir rock strata need to be adjusted so that simulated production data match the actual production data. This process is known as history matching (HM). In geological CO2 storage, which is being promoted for depleted hydrocarbon reservoirs and saline aquifers, CO2 tends to migrate upwards and accumulate as a separate plume in the zone immediately beneath the reservoir caprock. Caprock morphology is therefore of considerable importance for storage safety and migration prediction in long-term CO2 storage. Moreover, small-scale caprock irregularities that are not captured by seismic surveys can be one source of error when matching observed CO2 plume migration against numerical modelling results (e.g. at Sleipner). Here we study the impact of uncertainties in slope and rugosity (small-scale caprock irregularities not captured by seismic surveys) on plume migration using a history-matching process. We defined 10 cases with different initial guesses to reproduce the caprock properties representing an observed plume shape. The results showed a reasonable match between the horizontal plume shapes of the calibrated and observed models, with an average error of 2.95 percent.
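A minimal sketch of such a calibration loop, assuming a two-parameter caprock description (slope, rugosity amplitude) and a toy forward model standing in for the CO2 flow simulator; everything named here is hypothetical and only illustrates the shape of the history-matching problem.

```python
import numpy as np
from scipy.optimize import minimize

def plume_shape_error(params, observed_outline, forward_sim):
    """Mean-squared mismatch between observed and simulated horizontal
    plume outlines for a caprock described by (slope, rugosity)."""
    simulated = forward_sim(*params)
    return np.mean((observed_outline - simulated) ** 2)

# Toy stand-in for the flow simulator: plume radius versus azimuth as a
# function of caprock slope and rugosity amplitude (purely hypothetical).
theta = np.linspace(0.0, 2.0 * np.pi, 90)
def toy_forward_sim(slope, rugosity):
    return 1.0 + slope * np.cos(theta) + rugosity * np.sin(8 * theta)

observed = toy_forward_sim(0.3, 0.05)   # synthetic "observed" plume shape
# One of several initial guesses (the study used 10) fed to a
# derivative-free search over the two caprock parameters.
result = minimize(plume_shape_error, x0=[1.0, 0.2],
                  args=(observed, toy_forward_sim), method="Nelder-Mead")
print(result.x)                          # recovered (slope, rugosity)
```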
To enhance the applicability of localization to various history-matching problems, the authors adopt an adaptive localization scheme that exploits the correlations between model variables and observations. A challenging problem of automated history-matching work flows is ensuring that, after applying updates to previous models, the resulting history-matched models remain consistent geologically. This paper presents a novel approach to generate approximate conditional realizations using the distributed Gauss-Newton (DGN) method together with a multiple local Gaussian approximation technique. This work presents a systematic and rigorous approach of reservoir decomposition combined with the ensemble Kalman smoother to overcome the complexity and computational burden associated with history matching field-scale reservoirs in the Middle East. This paper presents a comparison of existing work flows and introduces a practically driven approach, referred to as “drill and learn,” using elements and concepts from existing work flows to quantify the value of learning (VOL).
This paper discusses a project with the objective of leveraging prestack and poststack seismic data in order to reconstruct 3D images of thin, discontinuous, oil-filled packstone pay facies of the Upper and Lower Wolfcamp formation. The Oklahoma City independent has a new-look portfolio and new operational and financial priorities. And now it has enlisted an energy research firm to leverage advanced analytics and machine learning to help get the most out of its assets. The objective of this case study is to describe a specific approach to establishing an exploration strategy at the initial stage on the basis of not only uncertainty reduction, but also early business-case development and maximization of future economic value.
Liu, Guoxiang (Baker Hughes a GE Company) | Stephenson, Hayley (Baker Hughes a GE Company) | Shahkarami, Alireza (Baker Hughes a GE Company) | Murrell, Glen (Baker Hughes a GE Company) | Klenner, Robert (Energy & Environmental Research Center, University of North Dakota) | Iyer, Naresh (GE Global Research) | Barr, Brian (GE Global Research) | Virani, Nurali (GE Global Research)
Optimization problems such as optimal well spacing or completion design can be resolved rapidly via surrogate proxy models, which can be built using either data-based or physics-based methods. Each approach has its strengths and weaknesses with respect to uncertainty management, data quality, and validation. This paper explores how data- and physics-based proxy models can be used together in a workflow that combines the strengths of each approach and delivers an improved representation of the overall system, and presents use cases showing reduced simulation costs and/or reduced uncertainty in model outcomes. A Bayesian calibration technique improves predictability by combining numerical simulations with data regressions: discrepancies between observations and surrogate outcomes are used to calibrate the model, improve prediction quality, and further reduce uncertainty. Furthermore, Gaussian process regression is used to locate global minima/maxima with a minimal number of samples.

To demonstrate the methodology, a reservoir model involving two wells in a drill space unit (DSU) in the Bakken Formation was constructed using publicly available data and tuned by history matching the production data for the two wells. A data-based regression model was built with machine learning techniques from the same dataset. The two models were coupled into a hybrid model to test the proposed data-and-physics coupling for completion optimization and uncertainty reduction. Subsequently, the Gaussian process model was used to explore optimization scenarios outside the data region of confidence and to exploit the hybrid model to further reduce uncertainty and improve prediction.

Overall, both the computation time required to identify optimal completion scenarios and the associated uncertainty were reduced. This technique creates a robust framework for improving operational efficiency and driving completion optimization within an optimal timeframe. The hybrid modeling workflow has also been piloted in other applications such as completion design, well placement and optimization, parent-child well interference analysis, and well performance analysis.
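The Gaussian-process step can be pictured with the short sketch below on a toy one-dimensional proxy; the objective, the Matern kernel, and the lower-confidence-bound acquisition rule are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy stand-in for the hybrid (data + physics) proxy being optimised,
# e.g. an objective as a function of one completion parameter.
def proxy_objective(x):
    return np.sin(3 * x) + 0.5 * x

X = np.array([[0.2], [0.9], [1.7], [2.5]])   # a handful of simulated designs
y = proxy_objective(X.ravel())

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# Lower-confidence-bound acquisition: favour low predicted mean and high
# uncertainty, so the global minimum is located with few extra samples.
grid = np.linspace(0.0, 3.0, 301).reshape(-1, 1)
mu, sigma = gp.predict(grid, return_std=True)
next_design = grid[np.argmin(mu - 1.96 * sigma)]
print(next_design)                            # next scenario to simulate
```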
Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. It remains a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with its own strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamlessly integrating a distributed-Gauss-Newton (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
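A minimal sketch of the GMM-proposal acceptance/rejection step on a toy bimodal posterior; the density, the empirical envelope constant, and the way proposal points are gathered are all assumptions standing in for the DGN-derived quantities.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy bimodal posterior standing in for the history-matching posterior
# (two MAP points, as the local-search DGN method would find).
maps = np.array([[-2.0, 0.0], [2.0, 0.0]])
def log_posterior(x):
    d2 = ((x[:, None, :] - maps) ** 2).sum(-1)          # (n, 2)
    return np.log(np.exp(-0.5 * d2).sum(-1) / (2 * 2 * np.pi))

# GMM proposal fit to points scattered around the MAP estimates
# (standing in for simulation results reused from DGN minimisation).
train = np.vstack([m + rng.normal(scale=1.2, size=(500, 2)) for m in maps])
gmm = GaussianMixture(n_components=2, random_state=0).fit(train)

# Acceptance/rejection: accept x with probability p(x) / (M q(x)).
x, _ = gmm.sample(1000)
log_ratio = log_posterior(x) - gmm.score_samples(x)
log_M = log_ratio.max()                   # empirical envelope bound (a hack)
keep = np.log(rng.uniform(size=len(x))) < log_ratio - log_M
ar_gmm_samples = x[keep]                  # i.i.d. posterior samples
```

Because each proposal is evaluated independently, the accept/reject test parallelises trivially across samples, which is the property the paper exploits.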
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested with a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but at a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data, with 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. There are 105 AR-GMM samples accepted from the 1,000 original GMM samples, and they are used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
A well-designed pilot is instrumental in reducing uncertainty for the full-field implementation of improved oil recovery (IOR) operations. Traditional model-based approaches to brown-field pilot analysis can be computationally expensive because they involve probabilistic history matching, first to historical field data and then to probabilistic pilot data. This paper proposes a practical approach that combines reservoir simulations and data analytics to quantify the effectiveness of brown-field pilot projects.
In our approach, an ensemble of simulations is first performed on models drawn from prior distributions of subsurface uncertainties, and the simulated historical data, simulated pilot data, and objective functions are assembled into a database. The distributions of simulated pilot data and objective functions are then conditioned to actual field data using the Data-Space Inversion (DSI) technique, which circumvents the difficulties of traditional history matching. The DSI samples, conditioned to the observed historical data, are next processed with the Ensemble Variance Analysis (EVA) method to quantify the expected uncertainty reduction in the objective functions given the pilot data, providing a metric to objectively measure the effectiveness of a pilot and to compare different pilot measurements and designs. Finally, the conditioned DSI samples can also be used with the classification and regression tree (CART) method to construct signpost trees, which provide an intuitive interpretation of pilot data in terms of implications for the objective functions.
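The signpost-tree step can be pictured with the short sketch below, in which a shallow CART is fit to hypothetical DSI-conditioned samples; the feature names and synthetic data are placeholders, not the field case.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)

# Hypothetical DSI-conditioned samples: each row pairs simulated pilot
# measurements (e.g. injector pressure, watercut response) with the
# corresponding objective (e.g. incremental recovery).
pilot_data = rng.normal(size=(200, 2))
objective = (0.6 * pilot_data[:, 0] - 0.3 * pilot_data[:, 1] ** 2
             + rng.normal(scale=0.1, size=200))

# A shallow CART yields interpretable "signpost" thresholds on the pilot
# observables that discriminate good outcomes from poor ones.
tree = DecisionTreeRegressor(max_depth=2).fit(pilot_data, objective)
print(export_text(tree, feature_names=["pilot_pressure", "pilot_watercut"]))
```

Keeping the tree shallow is deliberate: the split thresholds themselves are the signposts an engineer would read off when the real pilot data arrive.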
We demonstrate the practical usefulness of the proposed approach through an application to a brown-field naturally fractured reservoir (NFR), quantifying the expected uncertainty reduction and Value of Information (VOI) of a waterflood pilot following more than 10 years of primary depletion. NFRs are notoriously hard to history match because of their extreme heterogeneity and difficult parameterization; the additional need for pilot analysis in this case further compounds the problem. Using the proposed approach, the effectiveness of a pilot can be evaluated, and signposts can be constructed, without explicitly history matching the simulation model. This allows objective and efficient comparison of different pilot design alternatives and intuitive interpretation of pilot outcomes. We stress that the only input to the workflow is a reasonably sized ensemble of prior simulation runs (about 200 in this case); the difficult and tedious task of creating history-matched models is avoided. Once the simulation database is assembled, the data analytics workflow, which entails DSI, EVA, and CART, can be completed within minutes.
To the best of our knowledge, this is the first time the DSI-EVA-CART workflow has been proposed and applied to a field case. It is one of the few pilot-evaluation methods that is computationally efficient for practical cases. We expect it to be useful for engineers designing IOR pilots for brown fields with complex reservoir models.
Calibrating production and economic forecasts (objective functions) to observed data is a key component of oil and gas reservoir management. Traditional model-based data assimilation (history matching) entails first calibrating models to the data and then using the calibrated models for probabilistic forecasting, a process that is often ill-posed and time-consuming. In this study, we present an efficient regression-based approach that directly predicts the objectives conditioned to observed data without model calibration.
In the proposed workflow, a set of samples is drawn from the prior distribution of the uncertainty parameter space, and simulations are performed on these samples. The simulated data and the values of the objective functions are then assembled into a database, and a functional relationship between the perturbed simulated data (simulated data plus error) and the objective function is established through nonlinear regression methods such as nonlinear partial least squares (NPLS) with automatic parameter selection. The prediction from this regression model provides estimates for the mean of the posterior distribution, and the posterior variance is estimated by a localization technique.
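As a hedged illustration of this regression step, the sketch below uses quadratic features followed by linear PLS as a simple stand-in for NPLS; the synthetic ensemble, noise levels, and component count are assumptions, and the paper's automatic parameter selection is not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Hypothetical prior ensemble: 200 simulation runs, each yielding 20
# simulated data points and one objective (e.g. cumulative recovery).
d_sim = rng.normal(size=(200, 20))
objective = (d_sim[:, :4] ** 2).sum(axis=1) + rng.normal(scale=0.1, size=200)

# Perturb simulated data with the assumed measurement error before
# regression, so the learned mapping accounts for data noise.
d_pert = d_sim + rng.normal(scale=0.3, size=d_sim.shape)

# Quadratic feature expansion + linear PLS as a stand-in for NPLS.
npls = make_pipeline(PolynomialFeatures(degree=2),
                     PLSRegression(n_components=5))
npls.fit(d_pert, objective)

d_obs = rng.normal(size=(1, 20))        # stand-in for the field observations
posterior_mean = npls.predict(d_obs)    # estimate of E[objective | d_obs]
print(posterior_mean)
```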
The proposed methodology is applied to a data assimilation problem on a field-scale reservoir model. The posterior distributions from our approach are validated against a reference solution from rejection sampling and then compared with a recently proposed method called ensemble variance analysis (EVA). It is shown that EVA, which is based on a linear-Gaussian assumption, is equivalent to simulation regression with a linear regression function. It is also shown that the use of NPLS regression and localization in our proposed workflow eliminates the numerical artifacts of the linear-Gaussian assumption and provides substantially better predictions when strong nonlinearity exists. Systematic sensitivity studies have shown that the improvement is most dramatic when the number of training samples is large and the data errors are small.
The proposed nonlinear simulation-regression procedure naturally incorporates data error and enables estimation of the posterior variance of objective quantities through an intuitive localization approach. The method provides an efficient alternative to the traditional two-step approach (probabilistic history matching followed by forecasting) and offers improved performance over other existing methods. In addition, the sensitivity studies on the number of training runs and measurement errors provide insight into when nonlinear treatments are necessary for estimating the posterior distribution of objective quantities.