Decline curve analysis is widely used in industry to forecast production and to estimate reserves volumes. A useful technique for verifying the validity of a decline model is to estimate the Arps decline parameters, the loss ratio and the b-factor, as functions of time. These estimates are used to check the model fit and to determine the flow regimes under which the reservoir produces. Existing methods for estimating the b-factor are heavily impacted by noise in production data. In this work, we introduce a new method to estimate the Arps decline parameters.
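For reference, the two quantities tracked over time are conventionally defined from the rate $q(t)$ as

\[
\frac{1}{D(t)} = -\frac{q(t)}{\mathrm{d}q/\mathrm{d}t}, \qquad b(t) = \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{D(t)}\right),
\]

so a constant $b$ recovers the familiar Arps family ($b = 0$ exponential, $0 < b < 1$ hyperbolic, $b = 1$ harmonic).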
We treat the loss ratio and the b-factor over time as parameters to be estimated in a Bayesian framework, and we include prior information on the parameters in the model. This regularizes the solution and prevents noise in the data from being amplified. We then calibrate the parameters to the data using Markov chain Monte Carlo methods to obtain probability distributions of the parameters. These distributions characterize the uncertainty in the estimated parameters. We then compare our method with existing methods using simulated and field data.
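As a concrete illustration of the calibration step, the following is a minimal sketch that fits constant Arps parameters $(q_i, D_i, b)$ to noisy rate data with a random-walk Metropolis sampler in log-space; the Gaussian priors on the log-parameters play the regularizing role described above. It is a simplified stand-in, not the paper's method (which places priors on the time-varying loss ratio and b-factor), and all numerical choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def arps_rate(t, qi, di, b):
    """Hyperbolic Arps rate; tends to exponential decline as b -> 0."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def log_posterior(phi, t, q_obs, sigma):
    """Posterior over phi = log([qi, Di, b]). The Gaussian prior on phi
    is the regularizing prior information (hypothetical center/width)."""
    qi, di, b = np.exp(phi)
    resid = q_obs - arps_rate(t, qi, di, b)
    log_like = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * np.sum((phi - np.log([1000.0, 0.1, 0.8])) ** 2)
    return log_like + log_prior

def metropolis(t, q_obs, sigma, n_steps=20000, step=0.05):
    phi = np.log([1000.0, 0.1, 0.8])          # initial guess
    lp = log_posterior(phi, t, q_obs, sigma)
    samples = np.empty((n_steps, 3))
    for i in range(n_steps):
        prop = phi + step * rng.standard_normal(3)   # symmetric walk in log-space
        lp_prop = log_posterior(prop, t, q_obs, sigma)
        if np.log(rng.uniform()) < lp_prop - lp:
            phi, lp = prop, lp_prop
        samples[i] = np.exp(phi)
    return samples                             # posterior draws of (qi, Di, b)
```

Percentiles of the returned samples then give the uncertainty bands on the decline parameters.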
We show that our method produces smooth loss ratio and b-factor estimates over time. Estimates from the three-point derivative method are not calibrated to the data and result in biased estimates of the Arps parameters. This can lead to misleading fits in decline curve analysis and unreliable estimates of reserves. We show that our technique helps identify the end of linear flow and the start of boundary-dominated flow. We apply our method to simulated data, with and without noise. Finally, we demonstrate the validity of our method on field cases.
Fitting a decline curve using the loss ratio and b-factor plots is a powerful technique that can highlight important features in the data and the possible points of failure of a model. Calculating these plots using the Bourdet three-point derivative introduces bias and magnifies noise. Our analysis makes this estimation robust and repeatable by adding prior information on the parameters to the model and by calibrating the estimates to the data.
The reliability of subsurface assessments for different field development scenarios depends on how effectively the uncertainty in production forecasts is quantified. There is a substantial body of work in the literature on methods to quantify this uncertainty. The objective of this paper is to revisit and compare these probabilistic uncertainty quantification techniques through their application to assisted history matching of a deep-water offshore waterflood field. The paper addresses the benefits, the limitations, and the best criteria for applicability of each technique.
Three probabilistic history matching techniques commonly practiced in the industry are discussed: Design of Experiments (DoE) with rejection sampling from a proxy, Ensemble Smoother (ES), and Genetic Algorithm (GA). The model used for this study is an offshore waterflood field in the Gulf of Mexico. Posterior distributions of global subsurface uncertainties (e.g., regional pore volume and oil-water contact) were estimated with each technique, conditioned to the injection and production data.
The three probabilistic history matching techniques were applied to a deep-water field with 13 years of production history. The first 8 years of production data were used for history matching and for estimating the posterior distribution of uncertainty in the geologic parameters. While the convergence behavior and the shapes of the posterior distributions differed, the Bayesian workflows, DoE and ES, produced consistent posterior means. In contrast, GA showed differences in the posterior distributions of the geological uncertainty parameters, especially those with little sensitivity to the production data. We then conducted production forecasts that included infill wells and evaluated production performance using sample means of the posterior geologic uncertainty parameters. The robustness of the solution was examined by performing the history matching multiple times with different initial sample points (i.e., different random seeds). This confirmed that heuristic optimization techniques such as GA are unstable, since the parameter setup of the optimizer has a large impact on uncertainty characterization and production performance.
This study provides guidelines for obtaining stable solutions from these history matching techniques under different conditions, such as the number of simulation model realizations, the number of uncertainty parameters, and the number of data points (e.g., the maturity of the reservoir development). These guidelines will greatly help the decision-making process in selecting the best development options.
Taha, Taha (Emerson Automation Solutions) | Ward, Paul (Emerson Automation Solutions) | Peacock, Gavin (Emerson Automation Solutions) | Heritage, John (Emerson Automation Solutions) | Bordas, Rafel (Emerson Automation Solutions) | Aslam, Usman (Emerson Automation Solutions) | Walsh, Steve (Emerson Automation Solutions) | Hammersley, Richard (Emerson Automation Solutions) | Gringarten, Emmanuel (Emerson Automation Solutions)
This paper presents a case study in 4D seismic history matching using an automated, ensemble-based workflow that tightly integrates the static and dynamic domains. Subsurface uncertainties, captured at every stage of the interpretative and modelling process, are used as inputs within a repeatable workflow. By adjusting these inputs, an ensemble of models is created, and their likelihoods are constrained by observations within an iterative loop. The result is multiple realizations of calibrated models that are consistent with the underlying geology, the observed production data, and the seismic signature of the reservoir and its fluids. It is effectively a digital twin of the reservoir, with an improved predictive ability that provides a realistic assessment of the uncertainty associated with production forecasts.
The example used in this study is a synthetic 3D model mimicking a real North Sea field. Data assimilation is conducted using an Ensemble Smoother with Multiple Data Assimilation (ES-MDA). This paper places significant focus on seismic data, with the corresponding result vector generated via a petro-elastic model. 4D seismic data prove to be a key additional source of measurement data whose unique volumetric distribution helps create a coherent predictive model. This allows recovery of the underlying geological features and models the uncertainty in predicted production more accurately than was possible by matching production data alone.
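A minimal numpy sketch of the ES-MDA parameter update used in such workflows is given below; production and 4D-seismic mismatches would be stacked into `d_obs`, the inflation coefficients `alpha` satisfy $\sum_k 1/\alpha_k = 1$ over the assimilation steps, and all shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def esmda_step(M, D, d_obs, C_d, alpha):
    """One ES-MDA assimilation step.
    M: (n_params, n_ens) ensemble of model parameters
    D: (n_data, n_ens) simulated data for each member
    d_obs: (n_data,) observations; C_d: (n_data, n_data) error covariance
    alpha: inflation coefficient for this step."""
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)             # cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)             # data auto-covariance
    # Perturb observations with inflated measurement noise
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_d, size=n_ens).T
    return M + C_md @ np.linalg.solve(C_dd + alpha * C_d, d_obs[:, None] + E - D)
```

Re-running the simulator on the updated ensemble and repeating with the remaining coefficients completes the multiple-data-assimilation loop.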
A significant advantage of this approach is the ability to simultaneously utilize multiple types of measurement data, including production, RFT, PLT, and 4D seismic. Newly acquired observations can be rapidly accommodated, which is often critical because the value of most interventions is reduced by delay.
A well-designed pilot is instrumental in reducing uncertainty for the full-field implementation of improved oil recovery (IOR) operations. Traditional model-based approaches for brown-field pilot analysis can be computationally expensive because they involve probabilistic history matching first to historical field data and then to probabilistic pilot data. This paper proposes a practical approach that combines reservoir simulations and data analytics to quantify the effectiveness of brown-field pilot projects.
In our approach, an ensemble of simulations is first performed on models drawn from prior distributions of subsurface uncertainties, and the resulting simulated historical data, simulated pilot data, and objective functions are assembled into a database. The distributions of simulated pilot data and objective functions are then conditioned to actual field data using the Data-Space Inversion (DSI) technique, which circumvents the difficulties of traditional history matching. The DSI samples, conditioned to the observed historical data, are next processed using the Ensemble Variance Analysis (EVA) method to quantify the expected uncertainty reduction of the objective functions given the pilot data, which provides a metric to objectively measure the effectiveness of the pilot and to compare the effectiveness of different pilot measurements and designs. Finally, the conditioned samples from DSI can also be used with the classification and regression tree (CART) method to construct signpost trees, which provide an intuitive interpretation of pilot data in terms of implications for the objective functions.
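For intuition, the following sketch shows the variance-reduction computation at the heart of EVA under a linear-Gaussian assumption, with all covariances estimated from the prior ensemble; it is a schematic of the idea, not the authors' implementation.

```python
import numpy as np

def eva_uncertainty_reduction(h, D, C_e):
    """h: (n_ens,) objective-function values from the prior ensemble
    D: (n_ens, n_data) simulated pilot data for each member
    C_e: (n_data, n_data) measurement-error covariance
    Returns the prior variance and the expected posterior variance of h."""
    n = len(h)
    dh = h - h.mean()
    dD = D - D.mean(axis=0)
    c_hd = dh @ dD / (n - 1)                   # Cov(h, d)
    c_dd = dD.T @ dD / (n - 1) + C_e           # Cov(d, d) plus noise
    var_prior = dh @ dh / (n - 1)
    var_post = var_prior - c_hd @ np.linalg.solve(c_dd, c_hd)
    return var_prior, var_post
```

The gap between the two returned variances is the expected uncertainty reduction offered by the pilot measurement, which is the metric used to compare designs.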
We demonstrate the practical usefulness of the proposed approach through an application to a brown-field naturally fractured reservoir (NFR) to quantify the expected uncertainty reduction and Value of Information (VOI) of a waterflood pilot following more than 10 years of primary depletion. NFRs are notoriously hard to history match because of their extreme heterogeneity and difficult parameterization; the additional need for pilot analysis in this case further compounds the problem. Using the proposed approach, the effectiveness of a pilot can be evaluated, and signposts can be constructed, without explicitly history matching the simulation model. This allows objective and efficient comparison of different pilot design alternatives and intuitive interpretation of pilot outcomes. We stress that the only input to the workflow is a reasonably sized ensemble of prior simulation runs (about 200 in this case); that is, the difficult and tedious task of creating history-matched models is avoided. Once the simulation database is assembled, the data analytics workflow, which entails DSI, EVA, and CART, can be completed within minutes.
To the best of our knowledge, this is the first time the DSI-EVA-CART workflow has been proposed and applied to a field case. It is one of the few pilot-evaluation methods that is computationally efficient for practical cases. We expect it to be useful for engineers designing IOR pilots for brown fields with complex reservoir models.
Calibrating production and economic forecasts (objective functions) to observed data is a key component of oil and gas reservoir management. Traditional model-based data assimilation (history matching) entails first calibrating models to the data and then using the calibrated models for probabilistic forecasting, a process that is often ill-posed and time-consuming. In this study, we present an efficient regression-based approach that directly predicts the objectives conditioned to observed data, without model calibration.
In the proposed workflow, a set of samples is drawn from the prior distribution of the uncertainty parameter space, and simulations are performed on these samples. The simulated data and the values of the objective functions are then assembled into a database, and a functional relationship between the perturbed simulated data (simulated data plus error) and the objective function is established through nonlinear regression methods such as nonlinear partial least squares (NPLS) with automatic parameter selection. The prediction from this regression model provides an estimate of the posterior mean. The posterior variance is estimated by a localization technique.
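As an illustration of the regression step, the sketch below uses scikit-learn's linear PLS as a stand-in for NPLS (the nonlinear variant and the automatic parameter selection are not reproduced, and the function names are illustrative).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

def fit_posterior_mean_regression(D_sim, h, C_e, n_components=3):
    """D_sim: (n_ens, n_data) simulated data; h: (n_ens,) objective values;
    C_e: (n_data, n_data) data-error covariance. Regress the objective on
    the perturbed simulated data (simulated data plus error)."""
    noise = rng.multivariate_normal(np.zeros(D_sim.shape[1]), C_e, size=len(h))
    model = PLSRegression(n_components=n_components)
    model.fit(D_sim + noise, h)
    return model

# Evaluating the fitted model at the actual observations d_obs estimates the
# posterior mean of the objective:
#   h_post_mean = model.predict(d_obs.reshape(1, -1))
```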
The proposed methodology is applied to a data assimilation problem on a field-scale reservoir model. The posterior distributions from our approach are validated against a reference solution from rejection sampling and then compared with a recently proposed method called ensemble variance analysis (EVA). It is shown that EVA, which is based on a linear-Gaussian assumption, is equivalent to simulation regression with a linear regression function. It is also shown that the use of NPLS regression and localization in our proposed workflow eliminates the numerical artifacts of the linear-Gaussian assumption and provides substantially better predictions when strong nonlinearity exists. Systematic sensitivity studies show that the improvement is most dramatic when the number of training samples is large and the data errors are small.
The proposed nonlinear simulation-regression procedure naturally incorporates data error and enables the estimation of the posterior variance of objective quantities through an intuitive localization approach. The method provides an efficient alternative to the traditional two-step approach (probabilistic history matching followed by forecasting) and offers improved performance over other existing methods. In addition, the sensitivity studies on the number of training runs and the measurement errors provide insight into when nonlinear treatments are necessary for estimating the posterior distribution of various objective quantities.
In this study we explore the use of multilevel derivative-free optimization for history matching, with model properties described using PCA-based parameterization techniques. The parameterizations applied in this work are optimization-based PCA (O-PCA) and convolutional neural network-based PCA (CNN-PCA). The latter, which derives from recent developments in deep learning, is able to accurately represent models characterized by multipoint spatial statistics. Mesh adaptive direct search (MADS), a pattern search method that parallelizes naturally, is applied for the optimizations required to generate posterior (history matched) models. The use of PCA-based parameterization considerably reduces the number of variables that must be determined during history matching (since the dimension of the parameterization is much smaller than the number of grid blocks in the model), but the optimization problem can still be computationally demanding. The multilevel strategy introduced here addresses this issue by reducing the number of simulations that must be performed at each MADS iteration. Specifically, the PCA coefficients (which are the optimization variables after parameterization) are determined in groups, at multiple levels, rather than all at once. Numerical results are presented for 2D cases, involving channelized systems (with binary and bimodal permeability distributions) and a deltaic-fan system, using O-PCA and CNN-PCA parameterizations. O-PCA is effective when sufficient conditioning (hard) data are available, but it can lead to geomodels that are inconsistent with the training image when these data are scarce or nonexistent. CNN-PCA, by contrast, can provide accurate geomodels that contain realistic features even in the absence of hard data. History matching results demonstrate that substantial uncertainty reduction is achieved in all cases considered, and that the multilevel strategy is effective in reducing the number of simulations required. It is important to note that the parameterizations discussed here can be used with a wide range of history matching procedures (including ensemble methods), and that other derivative-free optimization methods can be readily applied within the multilevel framework.
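For context, both O-PCA and CNN-PCA build on the basic truncated-PCA parameterization sketched below; the post-processing that distinguishes them (the O-PCA optimization, the CNN transform) is omitted.

```python
import numpy as np

def build_pca_basis(realizations, n_components):
    """realizations: (n_cells, n_real) prior geomodels, e.g. log-permeability.
    Returns the mean model and a truncated basis for m = m_mean + B @ xi."""
    m_mean = realizations.mean(axis=1, keepdims=True)
    Y = (realizations - m_mean) / np.sqrt(realizations.shape[1] - 1)
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    B = U[:, :n_components] * s[:n_components]      # singular-value scaling
    return m_mean.ravel(), B

def pca_model(m_mean, B, xi):
    """xi: low-dimensional coefficients (standard normal a priori); these
    are the variables determined, in groups, by the multilevel MADS search."""
    return m_mean + B @ xi
```

History matching then optimizes the low-dimensional coefficients instead of cell-by-cell properties.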
In this work, we investigate different approaches to history matching of imperfect reservoir models while accounting for model error. The first approach (the base case) relies on direct Bayesian inversion using iterative ensemble smoothing with annealing schedules, without accounting for model error. In the second approach, the residual obtained after calibration is used to iteratively update the covariance matrix of the total error, which is a combination of model error and data error. In the third approach, a PCA-based error model is used to represent the model discrepancy during history matching. However, the prior for the PCA weights is quite subjective and generally hard to define; here, the prior statistics of the model-error parameters are estimated using pairs of accurate and inaccurate models. The fourth approach, inspired by
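As a schematic of the second approach, the sketch below estimates a total-error covariance from the residuals of a calibrated ensemble; the paper's iterative scheme, which alternates this update with re-calibration, is not shown.

```python
import numpy as np

def update_total_error_covariance(d_obs, D_cal, C_d):
    """d_obs: (n_data,) observations; D_cal: (n_data, n_ens) simulated data
    from the calibrated ensemble; C_d: (n_data, n_data) data-error covariance.
    Adds the empirical covariance of the residuals as a model-error term."""
    R = d_obs[:, None] - D_cal            # residuals per ensemble member
    return C_d + np.cov(R)                # total error = data error + model error
```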
Araujo, Mariela (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell International Exploration and Production Inc.) | Jennings, Jim (Shell International Exploration and Production Inc.) | Ramirez, Benjamin (Shell International Exploration and Production Inc.) | Xu, Zhihua (ExxonMobil) | Yeh, Tzu-hao (Shell International Exploration and Production Inc.) | Alpak, Faruk Omer (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell International Exploration and Production Inc.)
Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps in reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for applications to production forecasting, supporting the reservoir-management decision-making process.
Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested sampling method MultiNest, the integrated global-search Distributed Gauss-Newton (DGN) optimizer with Randomized Maximum Likelihood (RML), the integrated local-search DGN optimizer with a Gaussian Mixture Model (GMM), and two advanced Bayesian inference-based methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov chain Monte Carlo (MCMC) method, which is known to produce accurate results although it is too expensive for most practical problems, and a DoE-proxy-based method that is widely used and available in some form in most commercial simulation packages.
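For context, RML draws each approximate posterior sample by minimizing a perturbed objective of the standard form

\[
O(m) = \tfrac{1}{2}(m - m^*_{\mathrm{pr}})^{\mathsf T} C_M^{-1} (m - m^*_{\mathrm{pr}}) + \tfrac{1}{2}\big(g(m) - d^*_{\mathrm{obs}}\big)^{\mathsf T} C_D^{-1} \big(g(m) - d^*_{\mathrm{obs}}\big),
\]

where $g$ is the simulator response and the perturbations $m^*_{\mathrm{pr}} \sim N(m_{\mathrm{pr}}, C_M)$ and $d^*_{\mathrm{obs}} \sim N(d_{\mathrm{obs}}, C_D)$ are redrawn for each sample.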
The methods were tested on three cases of increasing complexity: a simple 1D model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results; the most useful included the total number of simulation runs, the sample size, the objective function distributions, the cumulative oil production forecast distributions, and the marginal posterior parameter distributions.
MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, but it is much more efficient than MCMC and it may be affordable for some practical applications. The proxy-based method is the lowest-cost solution. However, its accuracy is unacceptably poor.
DGN-RML and DGN-GMM offer the best compromise between accuracy and efficiency, with DGN-GMM the better of the two. Both methods may produce some poor-quality samples that should be rejected before the final uncertainty quantification.
The results of the benchmark study are somewhat surprising and raise awareness in the reservoir engineering community of the quality and efficiency of the advanced and traditional methods used for AHM and UQ. We recommend using DGN-GMM instead of the traditional proxy-based methods for most practical problems, and considering the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
This paper considers Bayesian methods to discriminate between models based on posterior model probability. When applying ensemble-based methods for model updating or history matching, the uncertainties in the parameters are typically assumed to be univariate Gaussian random fields. In reality, however, several alternative scenarios may be possible a priori. We take this into account by applying the concepts of model likelihood and model probability, and we suggest a method that uses importance sampling to estimate these quantities from the prior and posterior ensembles. In particular, we focus on the problem of conditioning a dynamic reservoir-simulation model to frequent 4D-seismic data (e.g., permanent-reservoir-monitoring data) by tuning the top reservoir surface, given several alternative prior interpretations with uncertainty. The methodology can, however, easily be applied to similar problems, such as fault location and reservoir compartmentalization. Although the estimated posterior model probabilities are uncertain, the ranking of models according to the estimated probabilities appears to be quite robust.
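A minimal sketch of such an estimator is given below, using the prior ensembles themselves as the importance-sampling proposal (a special case of the approach described); with a common error covariance across scenarios, the Gaussian normalization constant cancels when the probabilities are normalized.

```python
import numpy as np

def log_gauss_like(d_obs, D_sim, C_d):
    """Unnormalized Gaussian log-likelihood for each ensemble member.
    D_sim: (n_ens, n_data) simulated data under one prior scenario."""
    R = D_sim - d_obs
    sol = np.linalg.solve(C_d, R.T).T
    return -0.5 * np.einsum("ij,ij->i", R, sol)

def model_probabilities(d_obs, D_by_model, C_d, priors):
    """Posterior scenario probabilities p(M_k | d). D_by_model is a list of
    (n_ens, n_data) arrays, one per prior scenario; priors holds p(M_k)."""
    log_ev = np.array([
        np.logaddexp.reduce(log_gauss_like(d_obs, D, C_d)) - np.log(len(D))
        for D in D_by_model
    ])                                      # log ensemble-average likelihood
    w = np.exp(log_ev - log_ev.max()) * np.asarray(priors)
    return w / w.sum()
```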
Bayesian inference provides a convenient framework for history matching and prediction. In this framework, prior knowledge, system nonlinearity, and measurement errors can be directly incorporated into the posterior distribution of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior distribution. However, the MCMC method usually requires a large number of forward simulations, so it can be computationally intensive, particularly for large-scale flow and transport models. To address this issue, we construct a surrogate system for the model outputs in the form of polynomials using the stochastic collocation method (SCM). In addition, we use interpolation with nested sparse grids and adaptively account for the differing importance of the parameters in high-dimensional problems. Furthermore, we introduce an additional transform process to improve the accuracy of the surrogate model in the presence of strong nonlinearities, such as a discontinuous or nonsmooth relation between the input parameters and the output responses. Once the surrogate system is built, the likelihood can be evaluated at little computational cost. Numerical results demonstrate that the proposed method can efficiently estimate the posterior statistics of the input parameters and provide accurate history matching and prediction of the observed data with a moderate number of parameters.
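A minimal sketch of the overall idea follows, with a total-degree polynomial fitted by least squares standing in for the sparse-grid stochastic collocation interpolant (the nested grids, adaptivity, and transform process are not reproduced); the surrogate then replaces the forward simulation inside a random-walk Metropolis loop. All settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_poly_surrogate(X, y, degree=3):
    """Least-squares polynomial surrogate for a scalar model output.
    X: (n_samples, n_params) sampled input points; y: (n_samples,) outputs."""
    powers = [p for p in np.ndindex(*([degree + 1] * X.shape[1]))
              if sum(p) <= degree]            # total-degree monomial basis
    A = np.column_stack([np.prod(X ** np.array(p), axis=1) for p in powers])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: sum(c * np.prod(x ** np.array(p))
                         for c, p in zip(coef, powers))

def mcmc_with_surrogate(surrogate, d_obs, sigma, n_steps=50000, step=0.1):
    """Random-walk Metropolis in which the likelihood calls the cheap
    surrogate instead of the full flow simulator (scalar data, 2 params)."""
    x = np.zeros(2)
    lp = -0.5 * ((surrogate(x) - d_obs) / sigma) ** 2 - 0.5 * x @ x
    chain = np.empty((n_steps, len(x)))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(len(x))
        lp_p = -0.5 * ((surrogate(prop) - d_obs) / sigma) ** 2 - 0.5 * prop @ prop
        if np.log(rng.uniform()) < lp_p - lp:
            x, lp = prop, lp_p
        chain[i] = x
    return chain
```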