Reliability of subsurface assessment for different field development scenarios depends on how effectively the uncertainty in production forecasts is quantified. There is now a substantial body of literature on methods for quantifying production-forecast uncertainty. The objective of this paper is to revisit and compare these probabilistic uncertainty quantification techniques through their application to assisted history matching of a deep-water offshore waterflood field. The paper addresses the benefits, limitations, and criteria for applicability of each technique.
Three probabilistic history matching techniques commonly practiced in the industry are discussed: Design-of-Experiment (DoE) with rejection sampling from a proxy, the Ensemble Smoother (ES), and the Genetic Algorithm (GA). The model used for this study is an offshore waterflood field in the Gulf of Mexico. Posterior distributions of global subsurface uncertainties (e.g., regional pore volume and oil-water contact) were estimated with each technique, conditioned to the injection and production data.
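The rejection-sampling step of the DoE workflow can be sketched as follows. This is a minimal illustration in which a hypothetical quadratic proxy stands in for the response-surface model of the history-match misfit that would, in practice, be fitted to DoE simulation runs:

```python
import numpy as np

rng = np.random.default_rng(0)

def proxy_misfit(x):
    # Hypothetical quadratic proxy for the history-match misfit;
    # in practice this is a response surface fitted to DoE runs.
    return 0.5 * np.sum((x - 0.3) ** 2) / 0.05 ** 2

def rejection_sample(n_samples, dim=2):
    """Accept uniform prior draws with probability proportional to the
    proxy likelihood exp(-misfit); accepted draws follow the posterior."""
    accepted = []
    while len(accepted) < n_samples:
        x = rng.uniform(0.0, 1.0, size=dim)           # uniform prior on [0, 1]^dim
        if rng.uniform() < np.exp(-proxy_misfit(x)):  # likelihood peaks at 1
            accepted.append(x)
    return np.array(accepted)

posterior = rejection_sample(200)
print(posterior.mean(axis=0))  # concentrates near the proxy minimum at 0.3
```

The acceptance test is exact because the likelihood is bounded by 1; the cost is a low acceptance rate when the posterior is much narrower than the prior, which is why the proxy (rather than the simulator) must supply the misfit.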
The three probabilistic history matching techniques were applied to a deep-water field with 13 years of production history. The first 8 years of production data were used for history matching and estimation of the posterior distributions of the geologic uncertainty parameters. While the convergence behavior and the shapes of the posterior distributions differed, consistent posterior means were obtained from the Bayesian workflows, DoE and ES. In contrast, the GA produced different posterior distributions for the geological uncertainty parameters, especially those with small sensitivity to the production data. We then conducted production forecasts that included infill wells and evaluated production performance using sample means of the posterior geologic uncertainty parameters. The robustness of the solutions was examined by repeating the history matching with different initial sample points (i.e., different random seeds). This confirmed that heuristic optimization techniques such as the GA are unstable, since the optimizer's parameter setup had a large impact on both the uncertainty characterization and the predicted production performance.
This study provides guidelines for obtaining stable solutions from these history matching techniques under different conditions, such as the number of simulation model realizations, the number of uncertainty parameters, and the number of data points (e.g., the maturity of reservoir development). These guidelines will greatly help the decision-making process in the selection of the best development options.
Araujo, Mariela (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell International Exploration and Production Inc.) | Jennings, Jim (Shell International Exploration and Production Inc.) | Ramirez, Benjamin (Shell International Exploration and Production Inc.) | Xu, Zhihua (ExxonMobil) | Yeh, Tzu-hao (Shell International Exploration and Production Inc.) | Alpak, Faruk Omer (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell International Exploration and Production Inc.)
Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps of reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for applications to production forecasting, supporting the reservoir-management decision-making process.
Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested sampling method MultiNest, the integrated global-search Distributed Gauss-Newton (DGN) optimizer with Randomized Maximum Likelihood (RML), the integrated local-search DGN optimizer with a Gaussian Mixture Model (GMM), and two advanced Bayesian inference-based methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov-Chain Monte Carlo method (MCMC), which is known to produce accurate results although it is too expensive for most practical problems, and a DoE-proxy-based method that is widely used and available in some form in most commercial simulation packages.
The methods were tested on three different cases of increasing complexity: a 1D simple model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results, but the most useful included the total number of simulation runs, sample size, objective function distributions, cumulative oil production forecast distributions, and marginal posterior parameter distributions.
MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, but it is much more efficient than MCMC and it may be affordable for some practical applications. The proxy-based method is the lowest-cost solution. However, its accuracy is unacceptably poor.
DGN-RML and DGN-GMM seem to have the best compromise between accuracy and efficiency, and the best of these two is DGN-GMM. These two methods may produce some poor-quality samples that should be rejected for the final uncertainty quantification.
The results from the benchmark study are somewhat surprising and provide awareness to the reservoir engineering community on the quality and efficiency of the advanced and most traditional methods used for AHM and UQ. Our recommendation is to use DGN-GMM instead of the traditional proxy-based methods for most practical problems, and to consider using the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
Bayesian inference provides a convenient framework for history matching and prediction. In this framework, prior knowledge, system nonlinearity, and measurement errors can be directly incorporated into the posterior distribution of the parameters. The Markov-chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior distribution. However, the MCMC method usually requires a large number of forward simulations, so it can be a computationally intensive task, particularly when dealing with large-scale flow and transport models. To address this issue, we construct a surrogate for the model outputs in the form of polynomials using the stochastic collocation method (SCM). In addition, we use interpolation on nested sparse grids and adaptively account for the differing importance of the parameters in high-dimensional problems. Furthermore, we introduce an additional transform to improve the accuracy of the surrogate model in the case of strong nonlinearities, such as a discontinuous or nonsmooth relation between the input parameters and the output responses. Once the surrogate is built, the likelihood can be evaluated at little computational cost. Numerical results demonstrate that the proposed method can efficiently estimate the posterior statistics of the input parameters and provide accurate history matching and prediction of the observed data for a moderate number of parameters.
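A toy version of this surrogate-accelerated Bayesian workflow can be sketched as follows; a simple analytic function stands in for the expensive simulator, and a plain polynomial fit stands in for the sparse-grid collocation surrogate (all function forms and values here are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(m):
    # Stand-in for an expensive forward simulation.
    return np.sin(2.0 * m) + m

# Build a cheap polynomial surrogate from a handful of "simulation" runs.
nodes = np.linspace(-1.0, 1.0, 9)
coeffs = np.polyfit(nodes, forward(nodes), 7)

def surrogate(m):
    return np.polyval(coeffs, m)

d_obs, sigma = forward(0.4), 0.05   # synthetic observation of truth m = 0.4

def log_post(m):
    if not -1.0 <= m <= 1.0:        # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * ((surrogate(m) - d_obs) / sigma) ** 2

# Random-walk Metropolis evaluating only the cheap surrogate likelihood.
m, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    m_prop = m + 0.1 * rng.normal()
    lp_prop = log_post(m_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = m_prop, lp_prop
    chain.append(m)

print(np.mean(chain[1000:]))  # posterior mean, close to the true value 0.4
```

The key point is that the 5000 likelihood evaluations in the chain touch only the surrogate; the "simulator" is called just 9 times to build it.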
Gao, Guohua (Shell Global Solutions, US Inc.) | Vink, Jeroen C. (Shell Global Solutions International B.V.) | Chen, Chaohui (Shell International Exploration & Production Inc.) | Araujo, Mariela (Shell International Exploration & Production Inc.) | Ramirez, Benjamin (Shell International Exploration & Production Inc.) | Jennings, Jim W. (Shell International Exploration & Production Inc.) | Khamra, Yaakoub El (Shell Global Solutions, US Inc.) | Ita, Joel (Shell Global Solutions, US Inc.)
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with their strengths and weaknesses. In this work, we develop a robust uncertainty quantification workflow by seamless integration of a distributed Gauss-Newton (DGN) optimization method with Gaussian Mixture Model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are located with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability density function by fitting the simulation results generated during the DGN minimization process. The traditional acceptance-rejection (AR) algorithm is parallelized and applied to improve the quality of the GMM samples by rejecting unqualified ones. The AR-GMM samples are independent, identically distributed (i.i.d.) samples that can be used directly for uncertainty quantification of model parameters and production forecasts.
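The acceptance-rejection step on GMM proposals can be illustrated in one dimension. Here a hypothetical bimodal unnormalized posterior stands in for exp(-O(m)), and the GMM weights, means, and widths are simply assumed rather than fitted from DGN results as in the paper's workflow:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_target(x):
    # Unnormalized bimodal posterior with two MAP points at x = -2 and x = 2.
    return (np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2)
            + np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2))

# Two-component GMM proposal approximating the target (parameters assumed).
w, mu, sig = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), np.array([0.7, 0.7])
M = 4.0  # envelope constant chosen so that p_target(x) <= M * q_pdf(x)

def q_pdf(x):
    z = (x - mu) / sig
    return np.sum(w * np.exp(-0.5 * z ** 2) / (sig * np.sqrt(2.0 * np.pi)))

def ar_gmm(n_samples):
    """Draw from the GMM and accept with probability p/(M*q);
    accepted draws are i.i.d. samples from the target posterior."""
    accepted = []
    while len(accepted) < n_samples:
        k = rng.choice(2, p=w)         # pick a mixture component
        x = rng.normal(mu[k], sig[k])  # propose from that component
        if rng.uniform() < p_target(x) / (M * q_pdf(x)):
            accepted.append(x)
    return np.array(accepted)

samples = ar_gmm(500)
print(samples.mean())  # near 0 for this symmetric bimodal target
```

Because the GMM tracks the target's modes, the acceptance rate stays high, which is the property that makes the parallelized AR step affordable.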
The proposed method is first validated on 1-D nonlinear synthetic problems with multiple MAP points, where the AR-GMM samples are better than the original GMM samples. It is then tested on a synthetic history-matching problem using the SPE-1 reservoir model with 8 uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, e.g., Markov chain Monte Carlo (MCMC) and the global-search DGN combined with the Randomized Maximum Likelihood (RML) approach, at a much lower computational cost (by a factor of 5 to 100). Finally, it is applied to a real-field reservoir model with synthetic data and 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. Of the 1000 original GMM samples, 105 AR-GMM samples are accepted and used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that the production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated workflow.
In this work, a Bayesian data assimilation methodology for simultaneous estimation of channelized facies and petrophysical properties (e.g., permeability fields) is explored. Based on the work of
Ma, Xiang (ExxonMobil Upstream Research Company) | Hetz, Gill (Texas A&M University) | Wang, Xiaochen (ExxonMobil Upstream Research Company) | Bi, Linfeng (ExxonMobil Upstream Research Company) | Stern, Dave (ExxonMobil Upstream Research Company) | Hoda, Nazish (ExxonMobil Upstream Research Company)
Many recent developments in generating history-matched reservoir models that approximately characterize subsurface uncertainty are associated with the ensemble smoother (ES) method. It is much better suited for practical history matching applications because it does not require updating of the dynamical variables, so the frequent simulation restarts required by the ensemble Kalman filter (EnKF) are avoided. However, the performance of the original single-update scheme of ES is poor for strongly nonlinear problems, and iterations may therefore be needed. Several iterative forms of ES have been proposed in the past few years, most of which combine ideas from randomized maximum likelihood (RML) and ensemble-based techniques. Unlike previous implementations, we pose the history matching problem as a full nonlinear least-squares optimization problem and use the classical Levenberg-Marquardt (LM) algorithm as the optimization solver. By showing that the solution of the linearized least-squares subproblem arising at each iteration has a structure similar to that of the standard ES update equation, we propose to use ES as the linear least-squares solver, avoiding expensive adjoint calculations. In this way, the proposed algorithm can be considered an iterative ES, and the regularization parameter can be updated following the standard LM rule. Furthermore, because the problem is cast as an optimization problem, it is straightforward to extend it to a robust nonlinear least-squares method that automatically estimates the measurement noise level and reduces the effect of outliers in the data, which is essential for field applications. Two synthetic reservoir models are used to showcase the effectiveness and robustness of the newly developed algorithm.
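The standard ES update that plays the role of the linear least-squares solver can be sketched with a toy linear forward model; the operator, ensemble size, and noise level below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])  # toy linear "simulator"

def g(m_ens):
    return m_ens @ G.T

n_e, sigma_d = 200, 0.1
m_prior = rng.normal(0.0, 1.0, size=(n_e, 2))  # prior ensemble, N(0, I)
d_obs = g(np.array([[1.0, -0.5]]))[0]          # data from a synthetic truth
C_e = sigma_d ** 2 * np.eye(3)                 # observation-error covariance

def es_update(m_ens):
    """One ensemble-smoother update:
    m_a = m_f + C_md (C_dd + C_e)^{-1} (d_obs + e - g(m_f))."""
    d_ens = g(m_ens)
    dm = m_ens - m_ens.mean(axis=0)
    dd = d_ens - d_ens.mean(axis=0)
    C_md = dm.T @ dd / (n_e - 1)                    # cross-covariance
    C_dd = dd.T @ dd / (n_e - 1)                    # data covariance
    K = C_md @ np.linalg.inv(C_dd + C_e)            # Kalman-type gain
    e = rng.normal(0.0, sigma_d, size=d_ens.shape)  # perturbed observations
    return m_ens + (d_obs + e - d_ens) @ K.T

m_post = es_update(m_prior)
print(m_post.mean(axis=0))  # pulled from 0 toward the truth [1.0, -0.5]
```

All covariances are estimated from the ensemble itself, which is what lets the iterative LM-ES scheme avoid adjoint calculations entirely.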
The fracturing of horizontal wells is a relatively recent technology that enables tight and shale formations to produce economically. Production data analysis of wells in such formations is frequently performed using analytical and semi-analytical methods. However, in the presence of nonlinearities such as multi-phase flow and geomechanical effects, numerical simulations are necessary for interpretation, and history-matching techniques are required for model calibration.
Reservoir history-matching techniques are usually based on the frequentist approach and provide a single solution that maximizes the likelihood function. Production forecasts using a single calibrated model cannot honor the uncertainty in the model parameters. Therefore, a Bayesian approach is suggested, in which prior knowledge about the model parameters is combined with the likelihood to update our knowledge in light of the data. The Bayesian approach is implemented by applying a Markov chain Monte Carlo process to update the prior knowledge and approximate the posterior distributions.
In this paper, one year of production data from a real gas condensate well in a Canadian tight formation (the lower Montney Formation) is considered. This is a horizontal well with eight fracture stages. A representative 2D model is constructed, characterized by 17 parameters that include relative permeability curves, capillary pressure, geomechanical effects, fracture half-length, fracture conductivity, and the permeability and water saturation in the stimulated region and the matrix. Careful analysis of the available data provides acceptable prior ranges for the model parameters using non-informative uniform distributions. A Markov chain Monte Carlo algorithm is implemented using a Gibbs sampler, and the posterior distributions are obtained. The results provide an acceptable set of models that can represent the production history. Using these distributions, a probabilistic forecast is performed and the P10, P50, and P90 estimates are computed.
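Once posterior samples are available, the probabilistic forecast reduces to pushing each sample through the forecast model and reading off percentiles. A minimal sketch with hypothetical posterior samples and a made-up forecast function (none of the numbers below come from the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for MCMC/Gibbs posterior samples of two of the 17 parameters:
# a matrix-permeability multiplier and a fracture half-length [m].
k_mult = rng.lognormal(mean=0.0, sigma=0.3, size=2000)
xf = rng.normal(100.0, 15.0, size=2000)

def forecast_eur(k_mult, xf):
    # Hypothetical decline-style forecast: EUR grows with both parameters.
    return 1.0e5 * k_mult * (xf / 100.0) ** 0.5

eur = forecast_eur(k_mult, xf)

# SPE exceedance convention: P90 is the low case (90% chance of exceeding),
# P10 the high case, so P90/P50/P10 map to the 10th/50th/90th percentiles.
p90, p50, p10 = np.percentile(eur, [10, 50, 90])
print(p90, p50, p10)
```

The ordering p90 < p50 < p10 looks inverted at first glance; it is the standard exceedance-probability convention used for reserve estimates.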
This paper highlights the limitations of the current history-matching approaches and provides a novel workflow on how to quantify the uncertainty for the shale and tight formations using numerical simulations to provide reliable probabilistic forecasts.
Yeh, T. (Shell International E&P) | Uvieghara, T. (Shell Nigeria E&P Co. Ltd.) | Jennings, J. W. (Shell International E&P.) | Chen, C. (Shell International E&P.) | Alpak, F. O. (Shell International E&P.) | Tendo, F. (Shell Nigeria E&P Co. Ltd.)
Reliable estimation of reservoir uncertainty is crucial to understanding asset valuation and making robust decisions. Conventional practices, in which history matching and production forecasting are performed on selected high-mid-low cases, do not provide a reliable estimate of forecast uncertainty. This typically shows up as a narrow range of ultimate recovery (UR) or net present value (NPV) predictions. In order to capture the inherent subsurface uncertainty, it is necessary to use an ensemble of models that spans the full uncertainty space.
The Probabilistic History Matching (PHM) workflow is an ensemble-based workflow aimed at improving forecast uncertainty estimation. One of the biggest challenges is that it typically requires a large ensemble to span the uncertainty space, due to the limited information available (e.g., core data or well logs). This requirement may render the Assisted History Matching (AHM) exercise infeasible when computational resources are limited. It is therefore necessary to reduce the number of models to a manageable size before performing AHM. Here we implement the Dynamic Fingerprinting workflow to select a set of representative models from the ensemble while preserving the uncertainty of the variables of interest. In this methodology, time-of-flight (TOF) and drainage time (DRT) information, which provides direct estimates of swept and undrained volumes, is used to characterize each model. A small subset of models is then selected based on their dissimilarity in flow pattern and used for AHM and forecasting.
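One simple way to realize dissimilarity-based selection is greedy maximin (farthest-point) picking on per-model flow fingerprints. This is a sketch of the idea, not the paper's exact algorithm; the random feature vectors stand in for TOF/DRT-derived summaries of each model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical flow "fingerprints": one feature vector per ensemble model.
n_models, n_feat = 810, 20
fingerprints = rng.normal(size=(n_models, n_feat))

def select_representatives(X, k):
    """Greedy maximin selection: repeatedly pick the model farthest
    (in flow-response space) from everything already chosen."""
    chosen = [0]                              # seed with the first model
    d_min = np.linalg.norm(X - X[0], axis=1)  # distance to the chosen set
    while len(chosen) < k:
        nxt = int(np.argmax(d_min))           # farthest from current set
        chosen.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

reps = select_representatives(fingerprints, 10)
print(reps)  # indices of 10 mutually dissimilar models
```

Each pass only updates a running minimum-distance array, so the selection costs O(n_models * k) distance evaluations and scales easily to ensembles of hundreds of models.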
The workflow was applied to a deepwater West Africa reservoir. An ensemble of 810 models was generated to represent the subsurface uncertainty. Ten models that were highly dissimilar in flow response were selected from the ensemble for AHM and estimation of forecast uncertainties. The AHM was performed using an Experiment Design (ED) - Response Surface Modeling (RSM) - Markov-Chain Monte-Carlo (MCMC) workflow. For validation purposes, a different AHM workflow was run on each of the 810 models using a derivative-free optimization algorithm. The comparison between the results supports both the choice of representatives from the Dynamic Fingerprinting workflow and the history matching conclusions from the ED-RSM-MCMC workflow.
Chen, Chaohui (Shell International Exploration & Production Inc.) | Li, Ruijian (Shell Exploration & Production Co.) | Gao, Guohua (Shell Global Solutions (US) Inc.) | Vink, Jeroen C. (Shell Global Solutions International B.V.) | Cao, Richard (Shell Exploration & Production Co.)
For unconventional reservoirs, it is very difficult to determine the values of key parameters or properties that govern fluid flow in the subsurface, due to unknown fracture growth and rock properties. These parameters generally have quite large uncertainty ranges and need to be calibrated with available production data. Using an ensemble of history-matched reservoir models to predict the Estimated Ultimate Recovery (EUR) has become a popular approach as parallel computing facilities become ever cheaper. The Randomized Maximum Likelihood (RML) method has proven quite robust for generating multiple realizations conditioned to production data. However, it is still expensive to apply traditional optimization algorithms to find a conditional realization by minimizing the objective function defined within a Bayesian framework, especially when adjoint derivatives are unavailable. Generating multiple conditional realizations efficiently is critically important, but remains a very challenging task for proper uncertainty quantification.
In this paper, a novel approach that hybridizes direct pattern search and the Gauss-Newton algorithm is developed to generate multiple conditional realizations simultaneously. The proposed method is applied to history match a real unconventional liquid-rich shale reservoir stimulated by multi-stage hydraulic fractures. In this example, the uncertain parameters include those characterizing reservoir properties (matrix permeability, permeability reduction coefficient, porosity, initial water saturation, and pressure) and those characterizing the hydraulic fractures (height, width, length, and effective permeability of the SRV zone). The uncertainty of production forecasts is quantified with both unconditional and conditional realizations.
The case study indicates that the new method is very efficient and robust. Uncertainty ranges of parameters and production forecasts before and after conditioning to production data are quantified and compared. The new approach enhances confidence in the EUR assessment and therefore significantly reduces risks for unconventional asset development.
For unconventional reservoirs, the key reservoir properties that govern fluid flow in the subsurface, such as effective flowing fracture length (Xf), effective fracture height (Hf), permeability and permeability reduction coefficient, fracture conductivity (FCD), and drainage area (A), are very difficult to obtain due to unknown fracture growth in tight rock. Uncertainties associated with these parameters are usually quite large. Understanding the uncertainty of the subsurface model helps define how to drill wells and determine fracture-stage spacing or the number of wells. There are three main categories of Estimated Ultimate Recovery (EUR) prediction methodologies for unconventionals:
Reservoir history matching is a computationally expensive process requiring multiple simulation runs. Therefore, there is a constant quest for more efficient sampling algorithms that can provide an ensemble of equally good history-matched models with a diverse range of predictions using fewer simulations. We introduce a novel stochastic Gaussian Process (GP) approach for assisted history matching in which realizations are treated as Gaussian random variables. The GP benefits from a small initial population and selects the next best samples by maximizing the expected improvement (EI). Maximizing the EI function is computationally cheap and is performed with the Differential Evolution (DE) algorithm. The algorithm is successfully applied to a structurally complex faulted reservoir with 12 unknown parameters, 8 production wells, and 4 injection wells. We show that the GP algorithm with EI maximization can significantly reduce the number of simulations required for history matching. The ensemble is then used to estimate the posterior distributions by performing Markov chain Monte Carlo (MCMC) with a cross-validated GP model. The hybrid workflow presents an efficient and computationally cheap mechanism for history matching and uncertainty quantification of complex reservoir models.
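The expected-improvement criterion that drives the sampling has a closed form under the GP posterior. A minimal sketch for minimization follows (the function name and test values are illustrative, not from the paper):

```python
from math import erf, sqrt, pi, exp

def expected_improvement(mu, sigma, f_best):
    """Expected improvement for minimization when the GP predicts
    N(mu, sigma^2) at a candidate point and f_best is the incumbent."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)          # no predictive uncertainty
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

# A point predicted well below the incumbent has high EI; a confidently
# worse-than-incumbent point has EI near zero.
print(expected_improvement(mu=0.5, sigma=0.2, f_best=1.0))
print(expected_improvement(mu=1.5, sigma=0.01, f_best=1.0))
```

The first term rewards predicted improvement, the second rewards predictive uncertainty; this exploitation-exploration balance is why EI needs only a small initial population, and since it is cheap to evaluate it can be maximized by DE without further simulator calls.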