Araujo, Mariela (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell International Exploration and Production Inc.) | Jennings, Jim (Shell International Exploration and Production Inc.) | Ramirez, Benjamin (Shell International Exploration and Production Inc.) | Xu, Zhihua (ExxonMobil) | Yeh, Tzu-hao (Shell International Exploration and Production Inc.) | Alpak, Faruk Omer (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell International Exploration and Production Inc.)
Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps of reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for applications to production forecasting, supporting the reservoir-management decision-making process.
Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested-sampling method MultiNest, the global-search Distributed Gauss-Newton (DGN) optimizer integrated with Randomized Maximum Likelihood (RML), the local-search DGN optimizer integrated with a Gaussian Mixture Model (GMM), and two advanced Bayesian-inference methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov chain Monte Carlo (MCMC) method, which is known to produce accurate results but is too expensive for most practical problems, and a design-of-experiments (DoE) proxy-based method that is widely used and available in some form in most commercial simulation packages.
The methods were tested on three cases of increasing complexity: a simple 1D model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results; the most useful included the total number of simulation runs, the sample size, the objective-function distributions, the cumulative-oil-production forecast distributions, and the marginal posterior parameter distributions.
MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, but it is much more efficient than MCMC and it may be affordable for some practical applications. The proxy-based method is the lowest-cost solution. However, its accuracy is unacceptably poor.
DGN-RML and DGN-GMM offer the best compromise between accuracy and efficiency, with DGN-GMM the better of the two. Both methods may produce some poor-quality samples that should be rejected before the final uncertainty quantification.
The results of the benchmark study are somewhat surprising and raise awareness in the reservoir engineering community of the quality and efficiency of the advanced and traditional methods used for AHM and UQ. Our recommendation is to use DGN-GMM instead of the traditional proxy-based methods for most practical problems, and to consider the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
Gao, Guohua (Shell Global Solutions, US Inc.) | Vink, Jeroen C. (Shell Global Solutions International B.V.) | Chen, Chaohui (Shell International Exploration & Production Inc.) | Araujo, Mariela (Shell International Exploration & Production Inc.) | Ramirez, Benjamin (Shell International Exploration & Production Inc.) | Jennings, Jim W. (Shell International Exploration & Production Inc.) | Khamra, Yaakoub El (Shell Global Solutions, US Inc.) | Ita, Joel (Shell Global Solutions, US Inc.)
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon field developments. It remains a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with its strengths and weaknesses. In this work, we develop a robust uncertainty quantification workflow by seamlessly integrating a distributed Gauss-Newton (DGN) optimization method with a Gaussian Mixture Model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are located with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability density function by fitting the simulation results generated during the DGN minimization process. The traditional acceptance-rejection (AR) algorithm is parallelized and applied to improve the quality of the GMM samples by rejecting unqualified ones. The resulting AR-GMM samples are independent, identically distributed (i.i.d.) samples that can be used directly for uncertainty quantification of model parameters and production forecasts.
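For illustration only, the acceptance-rejection step with a GMM proposal can be sketched as follows. This is not the authors' implementation; the unnormalized posterior `log_posterior`, the fitted mixture parameters (`weights`, `means`, `covs`), and the envelope constant `c` are all assumed inputs.

```python
import numpy as np
from scipy.stats import multivariate_normal

def ar_gmm_sample(log_posterior, weights, means, covs, n_proposals, c, rng):
    """Acceptance-rejection sampling with a Gaussian-mixture proposal.

    A proposal x ~ q (the GMM) is accepted with probability
    p(x) / (c * q(x)), where p is the unnormalized posterior and c
    must satisfy p(x) <= c * q(x); accepted draws are i.i.d. from p.
    """
    comps = [multivariate_normal(m, S) for m, S in zip(means, covs)]

    def log_q(x):
        return np.log(sum(w * comp.pdf(x) for w, comp in zip(weights, comps)))

    accepted = []
    for _ in range(n_proposals):
        k = rng.choice(len(weights), p=weights)   # pick a mixture component
        x = comps[k].rvs(random_state=rng)        # draw a GMM proposal
        if np.log(rng.uniform()) < log_posterior(x) - np.log(c) - log_q(x):
            accepted.append(x)                    # keep as a posterior sample
    return np.array(accepted)
```

The loop body is embarrassingly parallel, which is what makes the parallelized AR step described above effective: each proposal can be simulated and tested on a separate worker.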
The proposed method is first validated on 1D nonlinear synthetic problems with multiple MAP points; the AR-GMM samples are better than the original GMM samples. It is then tested on a synthetic history-matching problem using the SPE-1 reservoir model with 8 uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, e.g., Markov chain Monte Carlo (MCMC) and the global-search DGN combined with the Randomized Maximum Likelihood (RML) approach, but at a much lower computational cost (by a factor of 5 to 100). Finally, it is applied to a real-field reservoir model with synthetic data and 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. Of the 1000 original GMM samples, 105 AR-GMM samples are accepted and used to quantify the uncertainty of the production forecasts. The method is further validated by the observation that the production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated workflow.
Hydraulic fracturing of horizontal wells is a recently developed technology that enables tight and shale formations to produce economically. Production data analysis of wells in such formations is frequently performed using analytical and semi-analytical methods. However, in the presence of nonlinearities such as multiphase flow and geomechanical effects, numerical simulation becomes necessary for interpretation, and history-matching techniques are required for model calibration.
Reservoir history-matching techniques are usually based on the frequentist approach and provide a single solution that maximizes the likelihood function. Production forecasts from a single calibrated model cannot honor the uncertainty in the model parameters. A Bayesian approach is therefore suggested, in which prior knowledge about the model parameters is combined with the likelihood to update our knowledge in light of the data. The Bayesian approach is implemented by applying a Markov chain Monte Carlo process to update the prior knowledge and approximate the posterior distributions.
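The Bayesian update referred to here is the standard one; in generic notation (our addition, not the paper's), the posterior for parameters $m$ given observed data $d_{\mathrm{obs}}$ is

```latex
p(m \mid d_{\mathrm{obs}})
  = \frac{p(d_{\mathrm{obs}} \mid m)\, p(m)}{p(d_{\mathrm{obs}})}
  \propto p(d_{\mathrm{obs}} \mid m)\, p(m) .
```

MCMC is attractive precisely because it draws samples from this posterior without ever evaluating the normalizing constant $p(d_{\mathrm{obs}})$.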
In this paper, one year of production data from a real gas-condensate well in a Canadian tight formation (lower Montney Formation) is considered. The well is horizontal with eight fracture stages. A representative 2D model is constructed, characterized by 17 parameters that include relative permeability curves, capillary pressure, geomechanical effects, fracture half-length, fracture conductivity, and the permeability and water saturation in the stimulated region and the matrix. Careful analysis of the available data provides acceptable prior ranges for the model parameters, expressed as non-informative uniform distributions. A Markov chain Monte Carlo algorithm is implemented using a Gibbs sampler, and the posterior distributions are obtained. The results provide an acceptable set of models that can represent the production history. Using these distributions, a probabilistic forecast is performed and the P10, P50, and P90 are estimated.
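As a hedged sketch of such a sampler (our illustration, not the paper's code): a simulation-based posterior rarely yields closed-form full conditionals, so each Gibbs-style component update is commonly carried out with a random-walk Metropolis step. `log_posterior` and `step_sizes` are assumed inputs.

```python
import numpy as np

def metropolis_within_gibbs(log_posterior, x0, step_sizes, n_iter, rng):
    """Scan over the parameters, updating one component at a time.

    Each full conditional is sampled approximately with a random-walk
    Metropolis move, the usual fallback when the conditionals of a
    simulation-based posterior are not available in closed form.
    """
    x = np.array(x0, dtype=float)
    logp = log_posterior(x)
    chain = np.empty((n_iter, x.size))
    for it in range(n_iter):
        for j in range(x.size):               # one pass over all parameters
            prop = x.copy()
            prop[j] += step_sizes[j] * rng.standard_normal()
            logp_prop = log_posterior(prop)   # one simulator call per move
            if np.log(rng.uniform()) < logp_prop - logp:
                x, logp = prop, logp_prop     # accept the component move
        chain[it] = x
    return chain
```

With 17 parameters, each sweep costs 17 simulator runs, so chain length rather than dimensionality usually dominates the budget.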
This paper highlights the limitations of current history-matching approaches and provides a novel workflow for quantifying uncertainty in shale and tight formations using numerical simulation, yielding reliable probabilistic forecasts.
Uncertainty quantification is an important task in reservoir simulation studies used for decision making. Many techniques for quantifying uncertainty have been proposed in the SPE literature, such as Markov chain Monte Carlo (MCMC), a statistical method for sampling from an arbitrary probability distribution. The major difficulty in applying MCMC methods is their high computational cost. The purpose of this paper is to demonstrate the performance of a new technique, Multilevel Markov Chain Monte Carlo (MLMCMC), for quantifying uncertainty in reservoir simulation at a lower computational cost than standard MCMC.
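One way to convey the multilevel idea is the closely related delayed-acceptance construction (a sketch under our own assumptions, not necessarily the authors' exact algorithm): a cheap coarse-grid posterior screens proposals so the expensive fine-grid simulator runs only on candidates the coarse model already favors. `log_post_coarse` and `log_post_fine` are assumed callables.

```python
import numpy as np

def two_level_mcmc(log_post_coarse, log_post_fine, x0, step, n_iter, rng):
    """Delayed-acceptance MCMC with a coarse screening stage."""
    x = np.array(x0, dtype=float)
    lc, lf = log_post_coarse(x), log_post_fine(x)
    chain = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(x.size)
        lc_prop = log_post_coarse(prop)           # cheap screening stage
        if np.log(rng.uniform()) < lc_prop - lc:  # stage-1 acceptance
            lf_prop = log_post_fine(prop)         # expensive fine model
            # stage-2 correction keeps the fine-grid posterior exact
            if np.log(rng.uniform()) < (lf_prop - lf) - (lc_prop - lc):
                x, lc, lf = prop, lc_prop, lf_prop
        chain.append(x.copy())
    return np.array(chain)
```

Most rejections happen in stage 1, so the fine simulator is called for only a small fraction of the proposals, which is the source of the cost savings.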
History matching is commonly performed in reservoir simulation to calibrate model parameters and to improve prediction accuracy. History-matching problems often have non-unique solutions, i.e., there exist different combinations of parameter values that all yield simulation results matching the measurements. In such a situation, finding a single solution matching the observations does not guarantee a correct prediction of future production. A more reliable prediction should instead be made with an uncertainty quantification based on all the different possible scenarios of the model parameters. Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior probability density function (PDF). Lacking an analytical expression, the posterior PDF is often represented by a sample of realizations, each representing a possible scenario.

This paper presents a novel sampling algorithm that addresses two commonly encountered difficulties in the sampling process. First, a typical sampling method requires intensive model evaluations and hence may impose an unaffordable computational burden. To alleviate this burden, our algorithm uses a Gaussian process (GP)-based surrogate as an approximation of the computationally expensive reservoir model to speed up the sampling process. The GP surrogate is adaptively refined locally so that the necessary approximation accuracy is achieved at a minimum computational cost. Second, when the relationship between the observation variables and the input parameters is nonlinear, the posterior PDF can take a complex form, such as a multimodal one, which is difficult to sample from. To tackle this difficulty, a Gaussian mixture model (GMM) is used as the proposal PDF to explore the parameter space. The GMM is flexible enough to approximate different distributions and is particularly efficient when the posterior is multimodal.

The developed approach is tested on an illustrative history-matching problem and demonstrates its capability to handle the above-mentioned issues. The multimodal posterior of the test problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the same problem.
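A minimal sketch of the surrogate component (our illustration; the paper's adaptive refinement rule is more sophisticated) uses scikit-learn's GaussianProcessRegressor to emulate an expensive log-likelihood and refines it locally wherever its predictive uncertainty is too large. `simulator`, `X_train`, and `tol` are assumed inputs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def build_surrogate(simulator, X_train):
    """Fit a GP to the expensive log-likelihood at the training points."""
    y_train = np.array([simulator(x) for x in X_train])
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                  normalize_y=True)
    gp.fit(np.atleast_2d(X_train), y_train)
    return gp

def evaluate_with_refinement(gp, simulator, x, X_train, tol):
    """Trust the GP where it is confident; otherwise run the true
    simulator at x, add the point, and refit (local refinement)."""
    mean, std = gp.predict(np.atleast_2d(x), return_std=True)
    if std[0] > tol:                       # surrogate not trusted here
        X_train = np.vstack([X_train, x])  # enrich the training set
        gp = build_surrogate(simulator, X_train)
        mean, _ = gp.predict(np.atleast_2d(x), return_std=True)
    return mean[0], gp, X_train
```

A sampler then calls `evaluate_with_refinement` instead of the simulator, so expensive runs are spent only where the posterior is actually being explored.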
The history-matching procedure can be divided into three parts: definition of the decision variables, formulation of the objective function, and optimization. The most widespread approach to objective-function formulation is the Bayesian framework, which allows the incorporation of prior knowledge into the objective function, where it acts as a regularization term. In this approach, the objective function consists of two terms: the likelihood function and the prior-knowledge function. To maximize the posterior probability, a sum of the prior and likelihood terms is usually minimized, in which the prior and observed-data covariance matrices weight the two terms.
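In the Gaussian setting this two-term objective takes the familiar form (generic notation, our addition):

```latex
O(m) = \tfrac{1}{2}\bigl(g(m) - d_{\mathrm{obs}}\bigr)^{\!\top} C_D^{-1}
       \bigl(g(m) - d_{\mathrm{obs}}\bigr)
     + \tfrac{1}{2}(m - m_{\mathrm{pr}})^{\!\top} C_M^{-1} (m - m_{\mathrm{pr}}),
```

where $g(m)$ are the simulated responses and $C_D$ and $C_M$ are the observed-data and prior covariance matrices. Minimizing $O(m)$ maximizes the posterior, and the relative weighting of the two terms is set entirely by these covariance matrices, which is exactly the sensitivity the next paragraph addresses.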
Inappropriate covariance matrices can lead to an incorrect domination of one of the terms over the other and, accordingly, to a false optimum. In this study, to decrease the chance of convergence to a false optimum caused by inaccurate covariance matrices, an application of multi-objective optimization to history matching is introduced, in which the likelihood and prior functions are the two objectives.
By making use of Pareto (multi-objective) optimization, a set of solutions called the Pareto front is obtained, consisting of nondominated solutions. Hence, an inaccuracy in the covariance matrices cannot cause one objective function to dominate the other. After the solution set is obtained, a number of solutions are usually drawn from it, based on post-optimization trade-offs, for uncertainty-analysis purposes.
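For concreteness, nondominance can be checked directly (a generic sketch, not the paper's genetic-algorithm machinery); here the objective matrix holds one row per candidate model, with columns such as (likelihood misfit, prior misfit), both minimized:

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of nondominated rows of an (n_points, n_objectives)
    array for a minimization problem."""
    front = []
    for i in range(objectives.shape[0]):
        others = np.delete(objectives, i, axis=0)
        # i is dominated if some other point is <= in every objective
        # and strictly < in at least one
        dominated = np.any(np.all(others <= objectives[i], axis=1)
                           & np.any(others < objectives[i], axis=1))
        if not dominated:
            front.append(i)
    return front
```

A multi-objective genetic algorithm such as the one used in the study evolves the population toward this front rather than toward a single weighted optimum.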
For this study, a synthetic case is constructed and history matching is carried out with two different approaches: the conventional approach and the proposed approach. To compare them, it is assumed that the covariance matrix of the observed data is not exactly known. History matching is then carried out using a single-objective genetic algorithm with different covariance matrices, and also using a multi-objective genetic algorithm. A comparison of the outcomes of the conventional and proposed approaches demonstrates that decisions can be made with more confidence using the proposed approach.
This paper presents a history-matching study of a complex reservoir model using a global optimization method, namely Evolutionary Algorithms. The results of the history match are then used to carry out an uncertainty assessment on the variables of interest. The main parameters used in the history match included horizontal permeabilities, porosities, and vertical transmissibilities.
This study also made use of methods for improving the convergence of the optimization cycle, including the use of correlations, the adoption of a Bayesian approach, and exploration of the search space. The results obtained over the optimization cycle were used to identify parameter sensitivities, correlations, and parameter trends in a global search space. In addition, the original manual history match was further improved by adopting a pressure match using an Evolutionary Strategy. The best-matched cases were selected based on the global and partial objective values of each match. Prediction runs were performed to investigate the effect on the cumulative oil produced and the STOIIP.
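The Evolution Strategy itself can be illustrated with a minimal (1+lambda) variant (our sketch, not the study's configuration); `misfit` is an assumed history-match objective to be minimized:

```python
import numpy as np

def evolution_strategy(misfit, x0, sigma, n_offspring, n_gen, rng):
    """Minimal (1 + lambda) evolution strategy minimizing a misfit."""
    parent = np.array(x0, dtype=float)
    parent_fit = misfit(parent)
    for _ in range(n_gen):
        # mutate: lambda Gaussian perturbations of the current parent
        offspring = parent + sigma * rng.standard_normal((n_offspring, parent.size))
        fits = np.array([misfit(c) for c in offspring])
        best = fits.argmin()
        if fits[best] < parent_fit:       # plus-selection keeps the elite
            parent, parent_fit = offspring[best], fits[best]
    return parent, parent_fit
```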
Finally, an uncertainty assessment of the most recent history match was carried out using an experimental-design matrix. The results of the experimental design were used to generate a proxy, which was then used in a Monte Carlo simulation to develop P10/P50/P90 oil forecasts.
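The final step can be sketched as follows (our illustration; `proxy` stands for the fitted response surface and `sample_priors` for a draw from the parameter priors, both placeholders):

```python
import numpy as np

def proxy_monte_carlo(proxy, sample_priors, n_samples, rng):
    """Monte Carlo on a cheap proxy: sample the priors, evaluate the
    proxy forecast, and report P10/P50/P90."""
    forecasts = np.array([proxy(sample_priors(rng)) for _ in range(n_samples)])
    # oilfield convention: P90 is the low case (90% probability of
    # exceedance), so it sits at the 10th percentile of the distribution
    p90, p50, p10 = np.percentile(forecasts, [10, 50, 90])
    return p90, p50, p10
```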