Abstract In this work, we investigate different approaches for history matching of imperfect reservoir models while accounting for model error. The first approach (the base-case scenario) relies on direct Bayesian inversion using iterative ensemble smoothing with annealing schedules, without accounting for model error. In the second approach, the residual obtained after calibration is used to iteratively update the covariance matrix of the total error, i.e., the combination of model error and data error. In the third approach, a PCA-based error model is used to represent the model discrepancy during history matching. However, the prior for the PCA weights is quite subjective and generally hard to define; here, the prior statistics of the model-error parameters are estimated using pairs of accurate and inaccurate models. The fourth approach, inspired by Köpke et al. (2017), relies on building an orthogonal basis for the error-model misfit component, which is obtained from the difference between the PCA-based error model and the corresponding actual realizations of the prior error. The fifth approach is similar to the third, except that an additional covariance matrix of the error-model misfit is computed from the prior model-error statistics and added to the covariance matrix of the measurement error. The sixth approach, inspired by Oliver and Alfonzo (2018), combines the second and third approaches: a PCA-based error model is used together with iterative updating of the total-error covariance matrix during history matching. Based on the results, we conclude that a good parameterization of the error model is needed to obtain good estimates of the physical model parameters and to provide better predictions. In this study, the last three approaches (4, 5, and 6) outperform the others in the quality of the estimated parameters and in prediction accuracy (the reliability of the calibrated models).
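The prior construction used in the third approach (estimating model-error statistics from pairs of accurate and inaccurate model runs, then building a PCA basis for the discrepancy) can be sketched as follows. This is a minimal illustration assuming NumPy and hypothetical paired simulation outputs, not the authors' implementation:

```python
import numpy as np

def build_pca_error_model(accurate, coarse, n_comp):
    """Build a PCA basis for model error from paired accurate/coarse runs.

    accurate, coarse: (n_runs, n_data) arrays of simulated data from paired
    high-fidelity and simplified models (hypothetical inputs).
    Returns the mean error, an orthonormal PCA basis (n_data, n_comp), and
    the weights of each training error in that basis, whose sample statistics
    serve as the prior for the error-model parameters.
    """
    err = accurate - coarse                    # realizations of prior model error
    mean = err.mean(axis=0)
    centered = err - mean
    # SVD of the centered errors gives the principal directions.
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_comp].T                      # (n_data, n_comp), orthonormal columns
    weights = centered @ basis                 # prior samples of the PCA weights
    return mean, basis, weights

# Toy paired runs: the "accurate" model differs by a bias plus small noise.
rng = np.random.default_rng(0)
coarse = rng.normal(size=(50, 20))
accurate = coarse + 0.5 + 0.1 * rng.normal(size=(50, 20))
mean, basis, w = build_pca_error_model(accurate, coarse, n_comp=3)
# Errors reconstructed from the leading components approximate the originals.
recon = mean + w @ basis.T
```

The difference between `recon` and the actual error realizations is exactly the error-model misfit component that approaches 4 and 5 go on to treat explicitly.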
Summary Owing to the complex nature of hydrocarbon reservoirs, the numerical model constructed by geoscientists is always a simplified version of reality: for example, it may lack resolution because of discretization and lack accuracy in modeling some physical processes. This flaw in the model, which causes a mismatch between actual observations and simulated data even when “perfect” model parameters are used as model inputs, is known as “model error”. Even when the model is a perfect representation of reality, the inputs to the model are never completely known. During a typical model-calibration procedure, only a subset of model inputs is adjusted to improve the agreement between model responses and historical data. The remaining model inputs, which are not calibrated and are likely fixed at incorrect values, result in model error in much the same way as the imperfect-model scenario. Assimilation of data without accounting for model error can result in incorrect adjustment of model parameters, underestimation of prediction uncertainties, and bias in forecasts. In this paper, we investigate the benefit of recognizing and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated “total error” (a combination of model error and observation error) is estimated from the data residual after standard history matching using the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the estimation of model parameters and the quantification of prediction uncertainty. We first illustrate the method using a synthetic 2D five-spot example, where some model errors are deliberately introduced, and the results are closely examined against the known “true” model. Then, the Norne field case is used to further evaluate the method.
The Norne model has previously been history-matched using the LM-EnRML (Chen and Oliver 2014), where cell-by-cell properties (permeability, porosity, net-to-gross, vertical transmissibility) and parameters related to fault transmissibility, depths of water/oil contacts, and the relative-permeability function are adjusted to honor historical data. In that study, the authors highlighted the importance of including large numbers of model parameters, the proper use of localization, and heuristic adjustment of the data noise to account for modeling error. In this paper, we improve the last aspect by quantitatively estimating model error using residual analysis.
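The residual-based estimation of the correlated total error described above can be sketched roughly as below. The specific estimator shown (bias plus sample covariance of the post-calibration residuals, with the observation-error covariance added as a floor) is an illustrative assumption, not the exact estimator of the paper:

```python
import numpy as np

def total_error_covariance(residuals, c_obs):
    """Estimate a correlated total-error covariance from post-calibration
    residuals (a sketch of the idea, not the authors' exact estimator).

    residuals: (n_ens, n_data) array of d_obs - g(m_j) for the final ensemble.
    c_obs:     (n_data, n_data) observation-error covariance.
    The sample covariance of the bias-corrected residuals captures the
    correlated part attributed to model error; the observation error is kept
    so the total error never drops below it.
    """
    bias = residuals.mean(axis=0)              # systematic model-error component
    centered = residuals - bias
    c_model = centered.T @ centered / (len(residuals) - 1)
    return bias, c_model + c_obs

# Toy residuals with a constant bias of 2 and independent noise of std 0.5.
rng = np.random.default_rng(1)
res = rng.normal(loc=2.0, scale=0.5, size=(200, 4))
bias, c_tot = total_error_covariance(res, 0.25 * np.eye(4))
```

In a subsequent assimilation pass, `c_tot` would replace the pure observation-error covariance in the LM-EnRML update, and the bias term could be subtracted from the residual.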
Abstract Ensemble-based methods (especially various forms of iterative ensemble smoothers) have proven effective in calibrating multiple reservoir models so that they are consistent with historical production data. However, due to the complex nature of hydrocarbon reservoirs, the model is never perfect: it is always a simplified version of reality, with a coarse representation and unmodeled physical processes. This flaw in the model, which causes a mismatch between actual observations and simulated data even when ‘perfect’ model parameters are used as model input, is known as ‘model error’. Assimilation of data without accounting for this model error can result in incorrect adjustment of model parameters, underestimation of prediction uncertainties, and bias in forecasts. In this paper, we investigate the benefit of recognising and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated ‘total error’ (a combination of model error and observation error) is estimated from the data residual after standard history matching using the Levenberg-Marquardt form of iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the model prediction and uncertainty quantification from the final updated model ensemble. We first illustrate the method using a synthetic 2D five-spot case, where some model errors are deliberately introduced, and the results are closely examined against the known ‘true’ model. Then the Norne field case is used to further evaluate the method. The Norne model has previously been history matched using the LM-EnRML (Chen and Oliver, 2014), where cell-by-cell properties (permeability, porosity, net-to-gross, vertical transmissibility) and parameters related to fault transmissibility, depths of water-oil contacts, and the relative permeability function are adjusted to honour historical data.
In that study, the authors highlighted the importance of including large numbers of model parameters, the proper use of localization, and adjustment of data noise to account for modelling error. In the current study, we further improve the quantification of model error. The results show the promising benefit of a systematic procedure of model diagnostics, model improvement, and model-error quantification during data assimilation.
Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | Khamra, Yaakoub El (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Summary Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with its strengths and weaknesses. In this work, we develop a robust uncertainty-quantification workflow by seamless integration of a distributed Gauss-Newton (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches. Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of the GMM samples by rejecting unqualified ones. The AR-GMM samples are independent, identically distributed samples that can be used directly for uncertainty quantification of model parameters and production forecasts. The proposed method is first validated on 1D nonlinear synthetic problems with multiple MAP points, where the AR-GMM samples are better than the original GMM samples. The method is then tested on a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but at a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data and 235 uncertain parameters.
A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. There are 105 AR-GMM samples accepted from the 1,000 original GMM samples, and they are used to quantify the uncertainty of the production forecasts. The proposed method is further validated by the fact that the production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period. The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated workflow.
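The core AR-GMM step (candidates drawn from a GMM proposal, thinned by acceptance/rejection against the target density) can be sketched in a 1D toy setting. The bimodal target, the proposal parameters, and the bound `m_bound` are all illustrative assumptions; the field workflow distributes this loop over many cores:

```python
import numpy as np

def gmm_pdf(x, weights, means, stds):
    """Density of a 1D Gaussian mixture (the proposal q)."""
    comps = np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return comps @ weights

def ar_gmm_sample(target_pdf, weights, means, stds, m_bound, n, rng):
    """Vectorized acceptance/rejection with a GMM proposal (a sketch of the
    AR-GMM idea). m_bound must satisfy target_pdf(x) <= m_bound * q(x)."""
    comp = rng.choice(len(weights), size=n, p=weights)   # pick mixture components
    cand = rng.normal(means[comp], stds[comp])           # draw proposal candidates
    # Accept each candidate with probability p(x) / (M q(x)).
    accept = rng.uniform(size=n) < target_pdf(cand) / (m_bound * gmm_pdf(cand, weights, means, stds))
    return cand[accept]

# Toy bimodal "posterior" and a deliberately over-dispersed GMM proposal.
rng = np.random.default_rng(2)
w = np.array([0.5, 0.5]); mu = np.array([-2.0, 2.0]); sd = np.array([1.5, 1.5])
target = lambda x: 0.5 * (np.exp(-0.5 * (x + 2) ** 2) + np.exp(-0.5 * (x - 2) ** 2)) / np.sqrt(2 * np.pi)
samples = ar_gmm_sample(target, w, mu, sd, m_bound=2.0, n=5000, rng=rng)
```

With both densities normalized, the expected acceptance rate is 1/`m_bound`, which is why a GMM fitted to the posterior (rather than a generic proposal) keeps the rejection cost low.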
Abstract In this work, a Bayesian data-assimilation methodology for the simultaneous estimation of channelized facies and petrophysical properties (e.g., permeability fields) is explored. Based on the work of Zhao et al. (2016a,b), common-basis DCT is used for the parameterization of the facies fields in order to extract model features and reduce the dimensionality of the inverse problem. An iterative ensemble smoother, together with a post-processing technique, is employed to simultaneously update the parameterized facies model (i.e., the DCT coefficients) and the permeability values within each facies in order to match the reservoir production data. Two synthetic examples are designed and investigated to evaluate the performance of the proposed history-matching workflow under different types of prior uncertainty. One example is a 2D three-facies reservoir with sinuous channels; the other involves a 3D three-facies five-layer reservoir with two different geological zones. The computational results indicate that the posterior realizations calibrated by the proposed workflow correctly estimate the key geological features and permeability distributions of the true model, with good data-match results. It is known that the reliability of prior models is essential in solving dynamic inverse problems for subsurface characterization. However, the prior realizations are usually obtained using data from various sources with different levels of uncertainty, which creates great challenges in the history-matching process. Thus, in this paper, we investigate several particular cases with different prior uncertainties, including fluvial channels conditioned to uncertain hard-data information or generated by diverse geological-continuity models. The proposed methodology is desirably robust against these prior uncertainties, which occur frequently in practical applications.
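The DCT parameterization with post-processing can be illustrated on a toy 2D facies field: retain only a low-frequency block of DCT coefficients (the reduced parameters the smoother would update) and threshold the smooth reconstruction back to facies labels. The field size, truncation level, and threshold below are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_parameterize(field, n_keep):
    """Low-dimensional DCT parameterization of a 2D facies field (a sketch of
    the common-basis-DCT idea: retain only the low-frequency coefficients).

    Returns the retained coefficient block and the reconstructed field."""
    coeffs = dctn(field, norm='ortho')
    kept = np.zeros_like(coeffs)
    kept[:n_keep, :n_keep] = coeffs[:n_keep, :n_keep]   # low-frequency block
    recon = idctn(kept, norm='ortho')
    return coeffs[:n_keep, :n_keep], recon

# Toy channel-like binary field: a horizontal band of "channel" facies.
field = np.zeros((32, 32))
field[12:20, :] = 1.0
theta, recon = dct_parameterize(field, n_keep=8)
# Post-processing: threshold the smooth reconstruction back to facies labels.
facies = (recon > 0.5).astype(int)
```

Here the 32x32 field is described by an 8x8 coefficient block, a 16-fold reduction; the smoother updates `theta` while the thresholding step restores discrete facies before the flow simulation.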