Collaborating Authors
Gao, Guohua
Benchmarking of Advanced Methods for Assisted History Matching and Uncertainty Quantification
Araujo, Mariela (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell International Exploration and Production Inc.) | Jennings, Jim (Shell International Exploration and Production Inc.) | Ramirez, Benjamin (Shell International Exploration and Production Inc.) | Xu, Zhihua (ExxonMobil) | Yeh, Tzu-hao (Shell International Exploration and Production Inc.) | Alpak, Faruk Omer (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell International Exploration and Production Inc.)
Abstract Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps of reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for applications to production forecasting, supporting the reservoir-management decision-making process. Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested-sampling method MultiNest, the integrated global-search Distributed Gauss-Newton (DGN) optimizer with Randomized Maximum Likelihood (RML), the integrated local-search DGN optimizer with a Gaussian Mixture Model (GMM), and two advanced Bayesian inference-based methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov-chain Monte Carlo (MCMC) method, which is known to produce accurate results although it is too expensive for most practical problems, and a DoE-proxy-based method that is widely used and available in some form in most commercial simulation packages. The methods were tested on three different cases of increasing complexity: a simple 1D model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results, but the most useful included the total number of simulation runs, sample size, objective-function distributions, cumulative oil production forecast distributions, and marginal posterior parameter distributions.
MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, but it is much more efficient than MCMC and may be affordable for some practical applications. The proxy-based method is the lowest-cost solution; however, its accuracy is unacceptably poor. DGN-RML and DGN-GMM offer the best compromise between accuracy and efficiency, with DGN-GMM the better of the two. These two methods may produce some poor-quality samples that should be rejected before the final uncertainty quantification. The results of the benchmark study are somewhat surprising and raise awareness in the reservoir-engineering community of the quality and efficiency of the advanced and traditional methods used for AHM and UQ. Our recommendation is to use DGN-GMM instead of the traditional proxy-based methods for most practical problems, and to consider the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
Enhancing the Performance of the Distributed Gauss-Newton Optimization Method by Reducing the Effect of Numerical Noise and Truncation Error With Support-Vector Regression
Guo, Zhenyu (University of Tulsa) | Chen, Chaohui (Shell International Exploration and Production Incorporated) | Gao, Guohua (Shell Global Solutions US Incorporated) | Vink, Jeroen (Shell Global Solutions International)
Summary Numerical optimization is an integral part of many history-matching (HM) workflows. However, optimization performance can be affected negatively by the numerical noise present in the forward models when the gradients are estimated numerically. An unavoidable part of reservoir simulation, numerical noise refers to the error caused by incomplete convergence of linear or nonlinear solvers or by truncation errors caused by different timestep cuts. More precisely, the allowed solver tolerances and allowed changes of pressure and saturation imply that simulation results no longer change smoothly with changing model parameters. For HM with the linear distributed Gauss-Newton (L-DGN) method, this discontinuity of simulation results can make the sensitivity matrix computed by linear interpolation less accurate, which might result in slow convergence or, even worse, failure to converge. Recently, we developed an HM workflow that integrates support-vector regression (SVR) with the distributed-Gauss-Newton (DGN) optimization method, referred to as SVR-DGN. Unlike L-DGN, which computes the sensitivity matrix with a simple linear proxy, SVR-DGN computes the sensitivity matrix by taking the gradient of the SVR proxies. In this paper, we provide theoretical analysis and case studies to show that SVR-DGN can compute a more accurate sensitivity matrix than L-DGN and that SVR-DGN is insensitive to the negative influence of numerical noise. We also propose a cost-saving training procedure that replaces bad training points, which correspond to relatively large values of the objective function, with training-data points (simulation data) that have smaller values of the objective function and were generated in the most recent iterations. Both the L-DGN approach and the newly proposed SVR-DGN approach are first tested with a 2D toy problem to show the effect of numerical noise on their convergence performance.
We find that their performance is comparable when the toy problem is free of numerical noise. As the numerical-noise level increases, the performance of the L-DGN degrades sharply. By contrast, the SVR-DGN performance is quite stable. Then, both methods are tested using a real-field HM example. The convergence performance of the SVR-DGN is quite robust for both the tight and loose numerical settings, whereas the performance of the L-DGN degrades significantly when loose numerical settings are applied.
- Europe (0.67)
- North America > United States > Texas (0.46)
Integration of Support Vector Regression With Distributed Gauss-Newton Optimization Method and Its Applications to the Uncertainty Assessment of Unconventional Assets
Guo, Zhenyu (University of Tulsa) | Chen, Chaohui (Shell Exploration and Production Company Incorporated) | Gao, Guohua (Shell Global Solutions US Incorporated) | Cao, Richard (Shell Exploration and Production Company Incorporated) | Li, Ruijian (Shell Exploration and Production Company Incorporated) | Liu, Chunlei (Shell Exploration and Production Company Incorporated)
Summary Reservoir model parameters generally have very large uncertainty ranges and need to be calibrated by history matching (HM) available production data. Properly assessing the uncertainty of production forecasts (e.g., with an ensemble of calibrated models that are conditioned to production data) has a direct impact on business decision making. It requires performing numerous reservoir simulations in a distributed computing environment. Because of the current low-oil-price environment, there is strong demand to reduce the computational cost of generating multiple realizations of history-matched models without compromising forecasting quality. To meet this challenge, a novel and more efficient optimization method (referred to as SVR-DGN) is proposed in this paper, which replaces the less accurate linear proxy of the distributed Gauss-Newton (DGN) optimization method (referred to as L-DGN) with a more accurate response-surface model based on support vector regression (SVR). Like L-DGN, the proposed SVR-DGN optimization method can be applied to find multiple local minima of the objective function in parallel. In each iteration, SVR-DGN proposes an ensemble of search points or reservoir-simulation models, and the flow responses of these reservoir models are simulated on high-performance-computing (HPC) clusters concurrently. All successfully simulated cases are recorded in a training data set. Then, an SVR proxy is constructed for each simulated response using all data points available in the training data set. Finally, the sensitivity matrix at any point can be calculated analytically by differentiating the SVR models. SVR-DGN computes more accurate sensitivity matrices, proposes better search points, and converges faster than L-DGN. The quality of the SVR proxy is validated with a toy problem. The proposed method is applied to a real-field HM example of a Permian liquid-rich shale reservoir.
The uncertain parameters include reservoir static properties, hydraulic-fracture properties, and parameters defining relative permeability curves. The performance of the proposed SVR-DGN optimization method is compared with the L-DGN optimizer and the hybrid Gauss-Newton with direct-pattern-search (GN-DPS) optimizer, using the same real field example. Our numerical tests indicate that the SVR-DGN optimizer can find better solutions with smaller values of the objective function at a lower computational cost (approximately one-third that of L-DGN and 1/30 that of GN-DPS). Finally, the proposed method is applied to generate multiple conditional realizations for the uncertainty quantification of production forecasts.
Global-Search Distributed-Gauss-Newton Optimization Method and Its Integration With the Randomized-Maximum-Likelihood Method for Uncertainty Quantification of Reservoir Performance
Chen, Chaohui (Shell International Exploration and Production Company) | Gao, Guohua (Shell Global Solutions US Incorporated) | Li, Ruijian (Shell Exploration and Production Company) | Cao, Richard (Shell Exploration and Production Company) | Chen, Tianhong (Shell Exploration and Production Company) | Vink, Jeroen C. (Shell Global Solutions International) | Gelderblom, Paul (Shell Global Solutions International)
Summary Although it is possible to apply traditional optimization algorithms together with the randomized-maximum-likelihood (RML) method to generate multiple conditional realizations, the computational cost is high. This paper presents a novel method that enhances the global-search capability of the distributed-Gauss-Newton (DGN) optimization method and integrates it with the RML method to generate multiple realizations conditioned to production data synchronously. RML generates samples from an approximate posterior by minimizing a large ensemble of perturbed objective functions in which the observed data and the prior mean values of the uncertain model parameters have been perturbed with Gaussian noise. Rather than performing these minimizations in isolation, using large sets of simulations to evaluate the finite-difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among different minimization tasks whenever these results help a specific minimization task converge to its global minimum. To improve the sharing of results, we relax the accuracy of the finite-difference approximations of the gradients by using more widely spaced simulation results. To avoid becoming trapped in local optima, a novel method to enhance the global-search capability of the DGN algorithm is developed and integrated seamlessly with the RML formulation. In this way, we can improve the quality of RML conditional realizations that sample the approximate posterior. The proposed work flow is first validated with a toy problem and then applied to a real-field unconventional asset. Numerical results indicate that the new method is very efficient compared with traditional methods. Hundreds of data-conditioned realizations can be generated in parallel within 20 to 40 iterations. The computational cost (central-processing-unit usage) is reduced significantly compared with the traditional RML approach.
The real-field case studies involve a history-matching study to generate history-matched realizations with the proposed method and an uncertainty quantification of production forecasting using those conditioned models. All conditioned models generate production forecasts that are consistent with real-production data in both the history-matching period and the blind-test period. Therefore, the new approach can enhance the confidence level of the estimated-ultimate-recovery (EUR) assessment using production-forecasting results generated from all conditional realizations, resulting in significant business impact.
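The RML construction described above (perturb the observed data and the prior mean with Gaussian noise, then minimize each perturbed objective function) can be illustrated with a minimal sketch. The toy forward model, the covariances, and the use of a generic optimizer below are illustrative assumptions; the paper's workflow uses a reservoir simulator and the concurrent DGN optimizer instead.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def forward(m):
    # Hypothetical nonlinear forward model standing in for a reservoir simulator.
    return np.array([m[0] * np.exp(-m[1]), m[0] + m[1] ** 2])

d_obs = np.array([0.8, 1.5])          # observed production data
C_D = np.diag([0.01, 0.01])           # data-error covariance
m_prior = np.array([1.0, 0.5])        # prior mean of uncertain parameters
C_M = np.diag([0.25, 0.25])           # prior covariance

C_D_inv, C_M_inv = np.linalg.inv(C_D), np.linalg.inv(C_M)

def rml_realization():
    # Perturb both the observed data and the prior mean with Gaussian noise,
    # then minimize the resulting perturbed objective function.
    d_j = rng.multivariate_normal(d_obs, C_D)
    m_j = rng.multivariate_normal(m_prior, C_M)
    def obj(m):
        r = forward(m) - d_j
        p = m - m_j
        return 0.5 * (r @ C_D_inv @ r + p @ C_M_inv @ p)
    return minimize(obj, m_prior, method="Nelder-Mead").x

# An ensemble of conditional realizations; the paper runs these
# minimizations concurrently and shares simulation results among them.
ensemble = np.array([rml_realization() for _ in range(50)])
```

Each minimizer here works in isolation; the paper's contribution is precisely to share simulation results across these tasks and to add a global-search mechanism on top of DGN.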
- North America > United States (1.00)
- Europe (1.00)
- Overview > Innovation (0.54)
- Research Report (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.84)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.70)
Robust Uncertainty Quantification through Integration of Distributed Gauss-Newton Optimization with Gaussian Mixture Model and Parallelized Sampling Algorithms
Gao, Guohua (Shell Global Solutions, US Inc.) | Vink, Jeroen C. (Shell Global Solutions International B.V.) | Chen, Chaohui (Shell International Exploration & Production Inc.) | Araujo, Mariela (Shell International Exploration & Production Inc.) | Ramirez, Benjamin (Shell International Exploration & Production Inc.) | Jennings, Jim W. (Shell International Exploration & Production Inc.) | Khamra, Yaakoub El (Shell Global Solutions, US Inc.) | Ita, Joel (Shell Global Solutions, US Inc.)
Abstract Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with their strengths and weaknesses. In this work, we develop a robust uncertainty quantification workflow by seamless integration of a distributed Gauss-Newton (DGN) optimization method with Gaussian Mixture Model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches. Multiple local maximum-a-posteriori (MAP) estimates are located with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability density function, by fitting simulation results generated during the DGN minimization process. The traditional acceptance-rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically-distributed (i.i.d.) samples that can be directly used for uncertainty quantification of model parameters and production forecasts. The proposed method is first validated with 1-D nonlinear synthetic problems having multiple MAP points. The AR-GMM samples are better than the original GMM samples. Then, it is tested with a synthetic history-matching problem using the SPE-1 reservoir model with 8 uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, e.g., Markov chain Monte Carlo (MCMC) and global search DGN combined with the Randomized Maximum Likelihood (RML) approach, but have a much lower computational cost (by a factor of 5 to 100). Finally, it is applied to a real field reservoir model with synthetic data, having 235 uncertain parameters. 
A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. 105 AR-GMM samples are accepted from the 1000 original GMM samples, and are used to quantify uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history matching period. The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated workflow.
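The acceptance-rejection step applied to GMM proposals can be sketched on a 1-D bimodal target. The unnormalized posterior, the two-component GMM fitted around the MAP points, and the envelope constant below are all illustrative assumptions, not the paper's field model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical bimodal (unnormalized) posterior with two MAP points, as in
# the 1-D synthetic validation problems.
def posterior(x):
    return np.exp(-0.5 * ((x + 2) / 0.5) ** 2) + 0.7 * np.exp(-0.5 * ((x - 2) / 0.7) ** 2)

# GMM proposal fitted around the two local MAP estimates.
weights = np.array([0.6, 0.4])
means, sigmas = np.array([-2.0, 2.0]), np.array([0.6, 0.8])

def gmm_pdf(x):
    return sum(w * norm.pdf(x, mu, s) for w, mu, s in zip(weights, means, sigmas))

def gmm_sample(n):
    comp = rng.choice(2, size=n, p=weights)
    return rng.normal(means[comp], sigmas[comp])

# Envelope constant M such that posterior(x) <= M * gmm_pdf(x) on a grid.
grid = np.linspace(-6.0, 6.0, 2001)
M = np.max(posterior(grid) / gmm_pdf(grid))

proposals = gmm_sample(1000)
keep = rng.uniform(size=1000) < posterior(proposals) / (M * gmm_pdf(proposals))
accepted = proposals[keep]   # i.i.d. samples from the normalized posterior
```

The better the GMM approximates the posterior, the closer M is to the normalization constant and the higher the acceptance ratio, which is why the paper's GMM proposal makes the parallelized AR step affordable.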
- Europe (1.00)
- North America > United States > Texas (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.88)
A Gauss-Newton Trust-Region Solver for Large-Scale History-Matching Problems
Gao, Guohua (Shell Global Solutions (US) Incorporated) | Jiang, Hao (Shell Global Solutions (US) Incorporated) | Hagen, Paul van (Shell Global Solutions International B.V.) | Vink, Jeroen C. (Shell Global Solutions International B.V.) | Wells, Terence (Shell Global Solutions International B.V.)
Summary Solving the Gauss-Newton trust-region subproblem (TRS) with traditional solvers involves solving a symmetric linear system whose dimension equals the number of uncertain parameters, which is extremely computationally expensive for history-matching problems with a large number of uncertain parameters. A new trust-region (TR) solver is developed to save both memory usage and computational cost, and its performance is compared with the well-known direct TR solver using factorization and the iterative TR solver using the conjugate-gradient approach. By applying the matrix inverse lemma, the original TRS is transformed into a new problem that involves solving a linear system whose dimension equals the number of observed data. For history-matching problems in which the number of uncertain parameters is much larger than the number of observed data, both memory usage and central-processing-unit (CPU) time can be reduced significantly compared with solving the original problem directly. An auto-adaptive power-law transformation technique is developed to transform the original strongly nonlinear function into a new function that behaves more like a linear function. Finally, the Newton-Raphson method with some modifications is applied to solve the TRS. The proposed approach is applied to find best-match solutions in Bayesian-style assisted-history-matching (AHM) problems. It is first validated on a set of synthetic test problems with different numbers of uncertain parameters and different numbers of observed data. In terms of efficiency, the new approach is shown to significantly reduce both the computational cost and memory usage compared with the direct TR solver of the GALAHAD optimization library. In terms of robustness, the new approach significantly reduces the risk of failing to find the correct solution, compared with the iterative TR solver of the GALAHAD optimization library.
Our numerical results indicate that the new solver can solve large-scale TRSs with reasonably small amounts of CPU time (in seconds) and memory (in MB). Compared with the CPU time and memory used for completing one reservoir simulation run for the same problem (in hours and in GB), the cost for finding the best-match parameter values using our new TR solver is negligible. The proposed approach has been implemented in our in-house reservoir simulation and history-matching system, and has been validated on a real-reservoir-simulation model. This illustrates the main result of this paper: the development of a robust Gauss-Newton TR approach, which is applicable for large-scale history-matching problems with negligible extra cost in CPU and memory.
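The dimension reduction obtained from the matrix inverse lemma can be demonstrated with a small sketch: solving a Gauss-Newton step equation of the form (diag(d) + J^T J) x = b then requires only a linear solve of size n_data instead of n_param. The sizes and random matrices below are illustrative; the actual TRS additionally involves the trust-region constraint and the power-law transformation, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_param, n_data = 2000, 20               # many more parameters than observed data

J = rng.normal(size=(n_data, n_param))   # Gauss-Newton sensitivity matrix
d = rng.uniform(1.0, 2.0, n_param)       # diagonal part (regularization + TR shift)
b = rng.normal(size=n_param)             # right-hand side of the step equation

# Direct solve of (diag(d) + J^T J) x = b: an n_param x n_param system.
x_direct = np.linalg.solve(np.diag(d) + J.T @ J, b)

# Matrix-inverse-lemma solve:
# (D + J^T J)^{-1} b = D^{-1} b - D^{-1} J^T (I + J D^{-1} J^T)^{-1} J D^{-1} b,
# which needs only an n_data x n_data linear solve.
Dinv_b = b / d
Dinv_Jt = J.T / d[:, None]
small = np.eye(n_data) + J @ Dinv_Jt
x_woodbury = Dinv_b - Dinv_Jt @ np.linalg.solve(small, J @ Dinv_b)

assert np.allclose(x_direct, x_woodbury)
```

Memory follows the same pattern: the direct solve materializes an n_param x n_param matrix, while the lemma-based solve never forms anything larger than J itself.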
- North America > United States (0.46)
- Europe (0.46)
- Asia > Middle East (0.46)
- Europe > United Kingdom (0.89)
- Europe > Netherlands (0.89)
- Asia > Middle East > Syria (0.89)
- Africa > Nigeria (0.89)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
Applying Support Vector Regression to Reduce the Effect of Numerical Noise and Enhance the Performance of History Matching
Guo, Zhenyu (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell Global Solutions US Inc.) | Vink, Jeroen (Shell Global Solutions International B. V.)
Abstract Numerical optimization is an integral part of many history matching (HM) workflows. To be efficient, these model-based optimization methods often use numerically computed gradients, which are difficult to calculate accurately because of numerical noise in simulation results. In this paper, Support Vector Regression (SVR) is integrated with a model-based optimization algorithm, Distributed Gauss-Newton (DGN), to provide accurate gradients that are insensitive to the negative influence of this type of numerical noise. Previously, we developed a parallelized DGN optimization method that uses an ensemble of reservoir simulation models to compute the required gradients with a simple linear interpolation or regression method. Numerical noise is unavoidable in reservoir simulation. More precisely, the allowed solver tolerances imply that simulation results no longer change smoothly with changing model parameters. By setting tight convergence criteria, these discontinuities can be reduced, but the overall simulation run time then increases, which obviously jeopardizes optimization efficiency. Furthermore, the inaccurate gradients degrade the convergence performance of the original linear DGN (L-DGN) significantly or, even worse, may cause it to fail to converge. In this paper, we use the simulation results to build SVR models, which are then used to compute the required gradients. The accuracy of the SVR models is further improved by reusing simulation results from preceding iterations. Starting from an initial ensemble of models, new search points for each realization are generated with a modified Gauss-Newton trust-region method using the sensitivity matrix estimated with SVR. The SVR proxies are updated when simulation results of new search points become available. The procedure is repeated until the distributed optimization process has converged.
Both our L-DGN approach and the newly proposed SVR-DGN approach are first tested with a two-dimensional toy problem to show the effect of numerical noise on their convergence performance. We find that their performance is comparable when the toy problem is free of numerical noise. When the numerical-noise level increases, the performance of L-DGN degrades sharply. In contrast, SVR-DGN performance is quite stable. Both methods are similarly tested using a real field history matching example. Here too, the convergence performance of SVR-DGN is not affected by different solver settings (i.e., noise levels), whereas the performance of L-DGN degrades significantly when loose numerical settings are applied. Moreover, the overall convergence rate is faster when the SVR-computed gradients are used. Our results show that SVR can be utilized efficiently and robustly to obtain accurate gradients from numerically computed, noisy simulation results. The SVR approach can also be integrated with other derivative-free optimization methods that require building accurate and robust proxy models to guide iterative parameter updates.
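The key idea of differentiating an SVR proxy to obtain smooth gradients from noisy simulation output can be sketched in one dimension. The training data, kernel settings, and use of scikit-learn's SVR below are illustrative assumptions; the paper builds its proxies from actual reservoir-simulation responses.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Noisy "simulation results" of a smooth 1-D response; the noise mimics
# solver-tolerance and timestep-truncation errors.
x_train = np.linspace(0.0, 2.0, 60)[:, None]
y_train = x_train.ravel() ** 2 + rng.normal(0.0, 0.02, 60)

svr = SVR(kernel="rbf", C=100.0, gamma=1.0, epsilon=0.02).fit(x_train, y_train)

def svr_gradient(x):
    # Analytic derivative of the RBF-kernel expansion
    # f(x) = sum_i alpha_i * exp(-gamma * ||x - x_i||^2) + b:
    # grad f(x) = sum_i alpha_i * (-2 * gamma) * (x - x_i) * k_i(x).
    sv = svr.support_vectors_
    alpha = svr.dual_coef_.ravel()
    diff = x - sv                                   # (n_sv, dim)
    k = np.exp(-svr.gamma * np.sum(diff ** 2, axis=1))
    return -2.0 * svr.gamma * (alpha * k) @ diff    # smooth gradient estimate

g = svr_gradient(np.array([1.0]))   # true derivative of x**2 at x=1 is 2
```

A finite difference of the noisy data at a small spacing would amplify the noise, whereas the proxy gradient stays smooth; this is the mechanism behind SVR-DGN's robust sensitivity matrices.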
- North America > United States > Texas (0.69)
- Europe (0.68)
A Direct Overparameterize and Optimize Method for Stratigraphically Consistent Assisted History Matching of Object-Based Geomodels: Algorithm and Field Application
Alpak, Faruk O. (Shell International Exploration and Production Inc.) | Jennings, James W. (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell Global Solutions International B.V.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell Global Solutions International Inc.) | Du, Kuifu (Shell Brasil Exploration and Production)
Summary Object-based static models are typically constructed for stratigraphically complex reservoirs. In this approach, the stratigraphic architecture is represented by use of distinct objects with specific geometric attributes and petrophysical characteristics. It is a major challenge to condition such models to production data while simultaneously maintaining the geologic realism and static conditioning. A novel work flow is developed for the assisted history matching (AHM) of object-based geomodels where the uncertain object locations and attributes are directly modified without resorting to (post-geomodeling) reparameterization techniques. It contains an object-modeling algorithm for channels and levees that provides direct access to the preraster geomodeling parameters for individual object locations and attributes. These parameters are gradually modified subject to physical constraints to achieve a history match. A fully integrated protocol is used in this process that automatically couples the static modeling algorithm with the reservoir simulator. The resulting work flow is moderated by a massively parallel and highly efficient iterative data-integration algorithm. In the AHM work flow, static and dynamic conditioning operations are respectively driven by separate objective functions and are performed at the iteration level in a sequential fashion. The static-conditioning step may add and remove objects in the geomodel, which changes the number of active AHM parameters over the course of the iterative search. The work flow handles such operations with minimal impact on the robustness of the search. 
A potential application of the direct AHM work flow for object-based geomodels is the identification of locations and attributes of channels in deepwater turbidite reservoirs, where the channels are typically below the resolution of the seismic data; the well spacing is typically larger than the characteristic object dimension, yet the production data exhibit strong sensitivity to channel connectivity. The concept of gradually adjusting the channel locations with the information in the production data (while maintaining static conditioning) is demonstrated on a real data set for a deepwater channelized-turbidite reservoir. The models proposed by the new AHM work flow not only improve the difficult-to-history-match injected-gas-breakthrough profiles but also provide geologically based explanations for them, taking into account the channel connectivity. The proposed AHM work flow ensures consistency across static and dynamic models by integrating multidisciplinary data with an easily auditable and replicable capability.
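The sequential static-then-dynamic conditioning performed at each iteration can be caricatured with a toy alternating loop. The two quadratic misfits and the random-perturbation search below are hypothetical stand-ins for the paper's object-based geomodeling step and reservoir-simulation step.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-ins for the two objective functions: a static misfit (well-log and
# geometric constraints on channel objects) and a dynamic misfit (production
# data), both acting on a vector of pre-raster object parameters.
def static_misfit(params):
    return np.sum((params - 1.0) ** 2)      # hypothetical static target

def dynamic_misfit(params):
    return np.sum((params - 1.2) ** 2)      # hypothetical production-data target

def condition(params, misfit, step=0.1, n_steps=5):
    # Crude accept-if-better perturbation search standing in for the paper's
    # massively parallel iterative data-integration algorithm.
    for _ in range(n_steps):
        trial = params + rng.normal(0.0, step, params.shape)
        if misfit(trial) < misfit(params):
            params = trial
    return params

params = rng.normal(0.0, 1.0, 6)
params0 = params.copy()
for _ in range(50):
    params = condition(params, static_misfit)    # static conditioning step
    params = condition(params, dynamic_misfit)   # dynamic conditioning step
```

In the real work flow the static step may also add or remove objects, changing the number of active parameters between iterations, which this fixed-size sketch does not capture.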
- Europe (1.00)
- Asia (0.92)
- North America > United States > Texas (0.46)
- Geology > Sedimentary Geology > Depositional Environment > Marine Environment > Deep Water Marine Environment (1.00)
- Geology > Geological Subdiscipline (1.00)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock (0.88)
- Geophysics > Seismic Surveying (1.00)
- Geophysics > Time-Lapse Surveying > Time-Lapse Seismic Surveying (0.67)
- North America > United States > Louisiana > China Field (0.89)
- Europe > Netherlands (0.89)
- Asia > Malaysia (0.89)
- Asia > India (0.89)
EUR Assessment of Unconventional Assets Using Machine Learning and Distributed Computing Techniques
Guo, Zhenyu (Shell International Exploration & Production Inc.) | Chen, Chaohui (Shell International Exploration & Production Inc.) | Gao, Guohua (Shell Global Solutions (US) Inc.) | Cao, Richard (Shell Exploration & Production Co.) | Li, Ruijian (Shell Exploration & Production Co.) | Liu, Chunlei (Shell Exploration & Production Co.)
Summary Reservoir model parameters generally have quite large uncertainty ranges and need to be calibrated by history matching available production data. The ensemble of calibrated models has a direct impact on business decision making, such as EUR assessment and well spacing optimization. Multi-realization history matching (MHM) techniques have been applied to quantify EUR uncertainty. In practice, MHM requires performing a large number of reservoir simulations in a distributed computing environment. Given the current low oil price environment, there is strong demand to reduce the computational cost of MHM without compromising forecasting quality. To meet this challenge, this paper proposes a novel EUR assessment method that integrates a machine learning technique with an advanced distributed computing technique in a history matching workflow. Starting from an initial ensemble of reservoir models, each realization is calibrated iteratively with the previously published Distributed Gauss-Newton (DGN) method. Responses of the reservoir realizations are generated by running simulations on a high-performance-computing (HPC) cluster concurrently. The responses generated during iterations are added to the training data set, which is used to train a set of support vector regression (SVR) models. Because the sensitivity matrix for each realization can be estimated analytically from the SVR models, DGN can use it to generate better search points so that the objective function value decreases more rapidly. The procedure is repeated until convergence. The proposed method is applied to assess EUR for several wells in a Permian liquid-rich shale reservoir with complicated subsurface oil/water/gas multiphase flows. The uncertain parameters include reservoir static properties, hydraulic-fracture properties, and parameters defining dynamic properties such as relative permeabilities.
With the integration of SVR into DGN, the new method saves about 65% of the simulation runs compared with the method without SVR. The efficiency gain comes from learning from the simulation results of previous iterations. The case study indicates that the new method provides faster EUR forecasts and comparable uncertainty ranges relative to those obtained without SVR. More importantly, the new method enhances the statistical learning of reservoir performance and therefore significantly increases capital efficiency for exploiting unconventional resources.
- North America > United States > Texas > Permian Basin > Yeso Formation (0.98)
- North America > United States > Texas > Permian Basin > Yates Formation (0.98)
- North America > United States > Texas > Permian Basin > Wolfcamp Formation (0.98)
- (21 more...)
Integration of Distributed Gauss-Newton with Randomized Maximum Likelihood Method for Uncertainty Quantification of Reservoir Performance
Chen, Chaohui (Shell International E&P Co.) | Gao, Guohua (Shell Global Solutions US Inc.) | Li, Ruijian (Shell Exploration & Production Co) | Cao, Richard (Shell Exploration & Production Co) | Chen, Tianhong (Shell Exploration & Production Co) | Vink, Jeroen C. (Shell Global Solutions International) | Gelderblom, Paul (Shell Global Solutions International)
Abstract Although it is possible to apply traditional optimization algorithms together with the Randomized Maximum Likelihood (RML) method to generate multiple conditional realizations, the computational cost is high. This paper presents a novel method that integrates the Distributed Gauss-Newton (DGN) method with the RML method to generate multiple realizations conditioned to production data synchronously. RML generates samples from an approximate posterior by finding a large ensemble of maximum a posteriori (MAP) points of a distribution function in which the data and prior mean values have been perturbed with Gaussian noise. Rather than performing these optimizations in isolation, using large sets of simulations to evaluate the finite difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among optimizations whenever these results help a specific optimization converge. To improve the sharing of results, we relax the accuracy of the finite difference approximations of the gradients by using more widely spaced simulation results. To avoid becoming trapped in local optima, a novel global search algorithm integrated with DGN and RML is applied. In this way we can significantly increase the number of conditional realizations that sample the approximate posterior, while reducing the total number of simulations needed to converge the optimization processes used to obtain these realizations. The proposed workflow has been applied to field examples of liquid-rich shale or tight oil reservoirs developed with hydraulically fractured horizontal wells. The uncertain parameters include stimulated-rock-volume (SRV) and matrix properties, such as permeability and porosity, and hydraulic-fracture properties, such as conductivity, height, and half-length.
The case studies involve a sensitivity analysis to identify key parameters, a history matching study to generate history-matched realizations with the proposed method, and an uncertainty quantification of production forecasting based on those conditioned models. The new approach is able to enhance the confidence level of the Estimated Ultimate Recovery (EUR) assessment by accounting for production forecasting results generated from all history-matched realizations. Numerical results indicate that the new method is very efficient compared with traditional methods. Hundreds of history-matched, or rather data-conditioned, realizations can be generated in parallel within 20-40 iterations. The computational cost (CPU usage) is reduced by a factor of 10 to 25 when compared to the traditional RML approach.
- Europe (1.00)
- North America > United States > Texas (0.46)
- Research Report (0.68)
- Overview > Innovation (0.54)
- Geology > Geological Subdiscipline > Economic Geology > Petroleum Geology (0.48)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock (0.34)