Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with their strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamless integration of a distributed-Gauss-Newton (GN) (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested with a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but at a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data, with 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. There are 105 AR-GMM samples accepted from the 1,000 original GMM samples, and they are used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
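The core of this work flow, a GMM proposal filtered by acceptance/rejection into posterior samples, can be sketched in 1D. The bimodal target density and the hand-built two-component proposal below are illustrative stand-ins for the actual posterior PDF and the GMM constructed from DGN simulation results; the bound M is estimated on a coarse grid for simplicity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1D bimodal posterior (unnormalized), standing in for the
# PDF conditioned to production data.
def posterior(x):
    return (np.exp(-0.5 * ((x - 1.5) / 0.3) ** 2)
            + 0.6 * np.exp(-0.5 * ((x + 1.0) / 0.4) ** 2))

# GMM proposal approximating the posterior, one component per MAP point.
weights = np.array([0.6, 0.4])
means = np.array([1.5, -1.0])
sigmas = np.array([0.35, 0.45])

def gmm_pdf(x):
    return sum(w * stats.norm.pdf(x, m, s)
               for w, m, s in zip(weights, means, sigmas))

def gmm_sample(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], sigmas[comp])

# Acceptance/rejection: accept x with probability posterior(x) / (M * q(x)),
# where M bounds the ratio posterior/q (here estimated on a coarse grid).
grid = np.linspace(-4.0, 5.0, 2001)
M = np.max(posterior(grid) / gmm_pdf(grid))

candidates = gmm_sample(5000)          # embarrassingly parallel in practice
u = rng.uniform(size=candidates.size)
accepted = candidates[u < posterior(candidates) / (M * gmm_pdf(candidates))]
print(f"accepted {accepted.size} of {candidates.size} GMM proposals")
```

Because each candidate is tested independently, the AR step parallelizes trivially, which is the efficiency gain the abstract describes.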
Araujo, Mariela (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell International Exploration and Production Inc.) | Jennings, Jim (Shell International Exploration and Production Inc.) | Ramirez, Benjamin (Shell International Exploration and Production Inc.) | Xu, Zhihua (ExxonMobil) | Yeh, Tzu-hao (Shell International Exploration and Production Inc.) | Alpak, Faruk Omer (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell International Exploration and Production Inc.)
Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps of reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for applications to production forecasting, supporting the reservoir-management decision-making process.
Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested-sampling method MultiNest, the integrated global-search Distributed Gauss-Newton (DGN) optimizer with Randomized Maximum Likelihood (RML), the integrated local-search DGN optimizer with a Gaussian Mixture Model (GMM), and two advanced Bayesian-inference-based methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov-Chain Monte Carlo (MCMC) method, which is known to produce accurate results although it is too expensive for most practical problems, and a design-of-experiments (DoE) proxy-based method that is widely used and available in some form in most commercial simulation packages.
The methods were tested on three different cases of increasing complexity: a simple 1D model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results, but the most useful included the total number of simulation runs, sample size, objective-function distributions, cumulative-oil-production forecast distributions, and marginal posterior parameter distributions.
MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, but it is much more efficient than MCMC and it may be affordable for some practical applications. The proxy-based method is the lowest-cost solution. However, its accuracy is unacceptably poor.
DGN-RML and DGN-GMM seem to have the best compromise between accuracy and efficiency, and the best of these two is DGN-GMM. These two methods may produce some poor-quality samples that should be rejected for the final uncertainty quantification.
The results from the benchmark study are somewhat surprising and raise awareness in the reservoir-engineering community of the quality and efficiency of the advanced and most traditional methods used for AHM and UQ. Our recommendation is to use DGN-GMM instead of the traditional proxy-based methods for most practical problems, and to consider using the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
Permeability estimation in carbonate reservoirs is challenging, and it generally relies on core-calibrated algorithms applied to open-hole logs. Moreover, because of inherent multi-scale heterogeneities, apparent permeability from production logging tools (PLTs) is usually necessary to make the static log-based prediction honor dynamic data. The correspondence between dynamic corrections and carbonate rock types is a long-standing problem, and an elegant solution is presented here by integrating advanced nuclear magnetic resonance (NMR) log modeling with multi-rate PLT interpretation.
The methodology, demonstrated on an oil-bearing carbonate reservoir, starts with a rigorous mapping between NMR responses and pore-size distribution, mainly determined by special core analyses (SCAL). A robust porosity-partition template and a physically based permeability formula are then established downhole, relying on the quantitative integration of SCAL and advanced NMR modeling. Multi-rate PLT and well-test data are then analyzed to evaluate the boost needed for log permeability to match the dynamic behavior of the wells. Finally, porosity-partition outcomes are used as pointwise predictors of dynamic permeability enhancement by means of a probabilistic approach.
In detail, a system built upon mercury-injection capillary-pressure measurements, representative of the entire reservoir, shows a well-defined pore structure consisting of micropores, mesopores, and macropores. At the same time, a quantitative link is established between NMR transverse relaxation time and pore-size distributions through an effective surface-relaxivity parameter, at both laboratory and reservoir conditions. This allows micro-, meso-, and macroporosity to be discriminated downhole. Effective surface relaxivity also plays a critical role in the subsequent NMR permeability estimation, which is based on a capillary-tube model of the porous media and exploits the full NMR/pore-size distributions. Although the match with core data proves the reliability of the comprehensive rock characterization, log permeability values underestimate the actual dynamic performance from well tests. Therefore, the standard apparent-permeability method from multi-rate PLT interpretation provides the necessary correction from the dynamic standpoint. Macro-porosity content is demonstrated to be the driver for a quantitative estimation of the excess in matrix permeability, and an additional term complements the original NMR permeability predictor in order to honor the dynamic evidence. The approach makes use of a probabilistic framework aimed at accounting for the uncertainties in the simultaneous static and dynamic characterization.
The presented innovative methodology addresses the well-known issue of quantitatively incorporating dynamic log modeling into a purely static workflow, thus leading to a more accurate permeability estimation. This is fundamental for production optimization and reservoir modeling purposes in highly heterogeneous carbonate environments.
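A minimal sketch of the capillary-tube permeability predictor described above, assuming a synthetic T2 distribution, an illustrative surface relaxivity and tortuosity, and an arbitrary 750-ms macroporosity cutoff; none of these values are the paper's calibrated parameters.

```python
import numpy as np

# Hypothetical T2 distribution (ms) for one depth level; rho2 is the
# effective surface relaxivity calibrated against SCAL/MICP data in the
# real workflow.  All values here are illustrative.
t2_ms = np.logspace(-1, 4, 64)                      # 0.1 ms .. 10 s bins
f = np.exp(-0.5 * ((np.log10(t2_ms) - 2.0) / 0.5) ** 2)
f /= f.sum()                                        # unit pore-volume fractions
phi = 0.22                                          # total porosity (v/v)
rho2_um_per_s = 0.3                                 # effective surface relaxivity
tau = 2.5                                           # tortuosity factor

# Capillary-tube mapping: for a cylindrical pore, 1/T2 = rho2*(S/V) = 2*rho2/r,
# hence pore radius r = 2 * rho2 * T2 (micrometres, with T2 in seconds).
r_um = 2.0 * rho2_um_per_s * (t2_ms * 1e-3)

# Bundle-of-capillary-tubes permeability using the full distribution:
# k = (phi / (8 * tau)) * sum_i f_i * r_i^2, converted from um^2 to mD.
k_um2 = (phi / (8.0 * tau)) * np.sum(f * r_um ** 2)
k_md = k_um2 / 9.869e-4                             # 1 mD ~ 9.869e-4 um^2

# Macroporosity fraction (illustrative T2 > 750 ms cutoff) as the candidate
# pointwise predictor of the dynamic permeability boost seen on multi-rate PLT.
macro_frac = f[t2_ms > 750.0].sum()
print(f"NMR log permeability ~ {k_md:.0f} mD, macroporosity fraction {macro_frac:.2f}")
```

In the paper's workflow, the dynamic correction term inferred from PLT/well-test data would then be regressed against this macroporosity fraction.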
Noise is an inherent feature of seismic data, especially in land acquisition. Because of the presence of noise, observations can be treated as random variables with associated uncertainty. An accurate estimate of data uncertainty is important for data interpretation and also for imaging and tomography. For challenging inverse problems, such as full-waveform inversion, an accurate estimate of data uncertainty can lead to robust qualitative estimates of posterior uncertainty and can also guide the selection of key parameters, e.g., the regularization strength. Uncertainty estimation can be a challenging task for seismic data because not enough repeat measurements are usually available to estimate reliable statistics. Based on a land seismic field experiment with repeated shots, we quantify seismic data uncertainty (with standard deviation as a proxy) and the short-term repeatability of reflection data. We find that the uncertainty for coherent seismic events is proportional to their amplitude, while the distributions characterizing the main events differ from Gaussians.
Presentation Date: Monday, October 15, 2018
Start Time: 1:50:00 PM
Location: 210C (Anaheim Convention Center)
Presentation Type: Oral
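The repeated-shot uncertainty estimate described above can be sketched with synthetic data. Below, repeated recordings of one trace are simulated with noise whose level grows with the signal amplitude (an assumption mimicking the reported field behaviour), and the per-sample standard deviation across repeats serves as the uncertainty proxy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for n_rep repeated recordings of one trace: a coherent
# event plus noise with an amplitude-proportional part and a small floor.
n_rep, n_t = 20, 500
t = np.arange(n_t)
signal = np.exp(-0.5 * ((t - 250) / 15.0) ** 2) * np.sin(0.3 * t)
traces = signal + (0.1 * np.abs(signal) + 0.01) * rng.standard_normal((n_rep, n_t))

mean_trace = traces.mean(axis=0)
std_trace = traces.std(axis=0, ddof=1)   # per-sample standard deviation proxy

# A positive correlation between |amplitude| and estimated std reproduces
# the amplitude-proportional uncertainty reported for coherent events.
corr = np.corrcoef(np.abs(mean_trace), std_trace)[0, 1]
print(f"correlation(|amplitude|, std) = {corr:.2f}")
```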
Li, Kun (China University of Petroleum (East China)) | Zhu, Ming (Shenzhen Branch of CNOOC Ltd) | Du, Jiayuan (Shenzhen Branch of CNOOC Ltd) | Liu, Daoli (Shenzhen Branch of CNOOC Ltd) | Yin, Xingyao (China University of Petroleum (East China)) | Zong, Zhaoyun (China University of Petroleum (East China))
Lithology prediction and geofluid discrimination are the ultimate objectives of rock-physics analysis and prestack seismic inversion. For prestack Bayesian estimation and geostatistical simulation, the prior probability density of the model parameters is usually influenced by subsurface lithologies and geofluid facies, and consists of several Gaussian probability components with different means and covariances. Under the assumption of a Gaussian-mixture prior, an improved prestack EVA (elastic-impedance variation with angle) inversion, conditioned on seismic and well data in a mixed domain, is proposed to estimate discrete lithofacies and continuous geofluid parameters. The number of peaks in the prior Gaussian probability density equals the number of sedimentary-lithology classes. To improve the resolution of the seismic inversion, a sequential simulation algorithm is used to sample the posterior probability distributions. In addition, low-frequency regularization and a nonlinear bounding-constraint strategy are introduced into the proposed method, which enhance the stability of the prestack EVA inversion and suppress unrealistic solutions for the elastic parameters. Finally, model tests and applications to field prestack seismic data verify the effectiveness and practicability of the proposed algorithm for geofluid discrimination.
Presentation Date: Thursday, October 18, 2018
Start Time: 8:30:00 AM
Location: 206A (Anaheim Convention Center)
Presentation Type: Oral
He, Dongyang (China University of Petroleum-East China) | Yin, Xingyao (China University of Petroleum-East China) | Zong, Zhaoyun (China University of Petroleum-East China) | Li, Kun (China University of Petroleum-East China)
Summary A Gaussian mixture model can be used to describe the multimodal behaviour of reservoir properties arising from their variation across different discrete variables, such as facies. The weights of the Gaussian components represent the probabilities of the discrete variables. However, Bayesian linear inversion based on a Gaussian mixture may misclassify discrete variables at some points, which can lead to a poor inversion result. In this study, we consider the spatial variability of the discrete variables and combine the Gaussian mixture model with sequential indicator simulation to determine the weight of each discrete variable in sequential Bayesian linear inversion problems. We can then obtain the analytical solution of the Bayesian linear inverse problem and simultaneously classify the discrete variables.
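A minimal sketch of the Gaussian-mixture ingredient of such a Bayesian linear inversion, without the sequential-indicator coupling: for a linear forward operator, each mixture component admits a conjugate Gaussian update, and the posterior component weights (the facies probabilities) are the prior weights times the marginal data likelihood, normalized. All operators, means, and covariances below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Linear forward model d = G m + e with Gaussian noise, and a two-facies
# Gaussian-mixture prior on m (all numbers are illustrative).
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
C_e = 0.05 * np.eye(3)
priors = [  # (weight, mean, covariance) per facies
    (0.5, np.array([0.0, 0.0]), 0.3 * np.eye(2)),
    (0.5, np.array([2.0, 1.0]), 0.3 * np.eye(2)),
]

m_true = np.array([2.1, 0.9])            # drawn near facies 2 in this example
d = G @ m_true + 0.05 * rng.standard_normal(3)

post = []
for w, mu, C in priors:
    # Conjugate Gaussian update for this mixture component.
    S = G @ C @ G.T + C_e                 # marginal data covariance
    K = C @ G.T @ np.linalg.inv(S)        # Kalman-type gain
    mu_post = mu + K @ (d - G @ mu)
    C_post = C - K @ G @ C
    evidence = multivariate_normal.pdf(d, mean=G @ mu, cov=S)
    post.append((w * evidence, mu_post, C_post))

# Posterior facies weights: prior weight times marginal likelihood, normalized.
z = sum(p[0] for p in post)
weights_post = [p[0] / z for p in post]
print("posterior facies probabilities:", np.round(weights_post, 3))
```

The misclassification issue the abstract mentions arises when these pointwise weights pick the wrong component; the sequential-indicator coupling constrains them with spatial facies statistics.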
SUMMARY Compared with conventional imaging methods, least-squares migration (LSM) can produce images with improved resolution and reduced migration artifacts. However, LSM has been mainly restricted to the imaging of acoustic wavefields. In this abstract, we use Gaussian beams as the wave propagator to formulate a least-squares migration scheme for elastic waves in isotropic media. With this method, not only can we obtain better-illuminated multi-wave images with higher resolution, but we can also achieve a good balance between computational cost and imaging accuracy, which are both important requirements for iterative LSM methods. A synthetic Marmousi 2 dataset is used to demonstrate the validity and effectiveness of our proposed method.
With a finite recording aperture, a limited source spectrum, and irregular acquisition geometry, conventional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because conventional migration is commonly formulated as the adjoint, instead of the inverse, of the forward-modeling operator. We propose a time-domain least-squares Gaussian beam migration, which helps us to balance subsurface illumination and improve image resolution. Based on the Born approximation for the acoustic wave equation, we first derive a linear time-domain Gaussian beam modeling operator. Then, we formulate its adjoint operator, i.e., Gaussian beam migration, as the gradient of an objective function measuring the data misfit.
Presentation Date: Tuesday, October 16, 2018
Start Time: 1:50:00 PM
Location: 207A (Anaheim Convention Center)
Presentation Type: Oral
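The least-squares principle behind such a scheme can be sketched with a generic linear operator standing in for the Gaussian-beam Born modeling operator: conventional migration applies the adjoint once, while LSM iterates de-migration/migration pairs to minimize the data misfit. The toy operator and sparse reflectivity model below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear system standing in for the Born modeling operator L:
# reflectivity model m -> data d.  The real operator is a beam-based
# wave propagator; a random matrix suffices to show the scheme.
n_data, n_model = 120, 60
L = rng.standard_normal((n_data, n_model)) / np.sqrt(n_data)
m_true = np.zeros(n_model)
m_true[[15, 30, 45]] = [1.0, -0.7, 0.5]    # sparse reflectivity spikes
d = L @ m_true

# Conventional migration = adjoint applied once to the data (blurred image).
m_mig = L.T @ d

# Least-squares migration: minimize 0.5*||L m - d||^2 by steepest descent;
# each gradient costs one de-migration (L) plus one migration (L^T).
m = np.zeros(n_model)
for _ in range(200):
    r = L @ m - d                          # de-migrate and form residual
    g = L.T @ r                            # migrate the residual (gradient)
    alpha = (g @ g) / (np.linalg.norm(L @ g) ** 2 + 1e-12)  # exact line search
    m -= alpha * g

res = np.linalg.norm(L @ m - d) / np.linalg.norm(d)
print(f"relative data residual after LSM: {res:.2e}")
```

In practice the iteration count is limited by the cost of each wave-equation (here matrix) application, which is why the computational efficiency of the beam propagator matters.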
Summary To improve seismic inversion, we propose to use prior information from empirical estimates of spatial rock-property distributions obtained from outcrop training images. Using photographic images of chalk outcrops, we derive empirical distributions of spatial rock-property gradients and estimates of spatial correlations. We use this information as a priori information in probabilistic seismic inversion. Our results are realistic, high-resolution posterior samples of subsurface models, whose variability can be used as an estimate of model uncertainties. Introduction Statistics of the subsurface reflectivity and its influence on seismic inversion have been debated for decades.
Locci-Lopez, Daniel (School of Geosciences, University of Louisiana at Lafayette) | Zhang, Rui (School of Geosciences, University of Louisiana at Lafayette) | Oyem, Arnold (Department of Earth and Atmospheric Sciences, University of Houston) | Castagna, John (Department of Earth and Atmospheric Sciences, University of Houston)
Summary Multi-resolution spectral decomposition methods, such as the S-transform and the continuous wavelet transform, are known to distort spectral attributes such as peak frequency. We introduce a spectral decomposition approach via a multi-scale Fourier transform that utilizes a frequency-dependent temporal window to achieve any desired combination of temporal and frequency resolution. We investigate a specific frequency-dependent window that focuses the analysis on the full width at half maximum of a frequency-dependent Gaussian function. The resulting time-frequency analysis has significantly improved time resolution relative to the S-transform. This is demonstrated on real seismic data in the Permian Basin.
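A minimal sketch of such a multi-scale Fourier transform, assuming a Gaussian window whose full width at half maximum spans a fixed number of periods of each analysis frequency; the exact parameterization of the published method may differ, and the two-burst trace is synthetic.

```python
import numpy as np

# Toy trace with two narrowband bursts (25 Hz and 60 Hz); dt in seconds.
dt, n = 0.002, 1000
t = np.arange(n) * dt
trace = (np.sin(2 * np.pi * 25 * t) * np.exp(-(((t - 0.4) / 0.05) ** 2))
         + np.sin(2 * np.pi * 60 * t) * np.exp(-(((t - 1.2) / 0.04) ** 2)))

def multiscale_ft(trace, t, freqs, taus, cycles=2.0):
    """Fourier transform with a frequency-dependent Gaussian window whose
    FWHM spans `cycles` periods of the analysis frequency.  The `cycles`
    setting is an illustrative assumption."""
    fwhm_to_sigma = 2.0 * np.sqrt(2.0 * np.log(2.0))
    dt_local = t[1] - t[0]
    tf = np.zeros((len(freqs), len(taus)), dtype=complex)
    for i, f in enumerate(freqs):
        sigma = cycles / (f * fwhm_to_sigma)     # narrower window at higher f
        for j, tau in enumerate(taus):
            w = np.exp(-0.5 * ((t - tau) / sigma) ** 2)
            tf[i, j] = np.sum(trace * w * np.exp(-2j * np.pi * f * t)) * dt_local
    return tf

freqs = np.arange(10.0, 91.0, 5.0)
taus = t[::20]                                   # coarse output time grid
amp = np.abs(multiscale_ft(trace, t, freqs, taus))

# Peak frequency at each burst time should track 25 Hz and then 60 Hz.
j1 = np.argmin(np.abs(taus - 0.4))
j2 = np.argmin(np.abs(taus - 1.2))
peak1, peak2 = freqs[amp[:, j1].argmax()], freqs[amp[:, j2].argmax()]
print(f"peak frequency at 0.4 s: {peak1} Hz, at 1.2 s: {peak2} Hz")
```

Because the window shrinks with frequency, high-frequency events are localized more sharply in time, which is the improvement over the fixed-relative-resolution S-transform that the abstract targets.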