Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with its strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamless integration of a distributed-Gauss-Newton (GN) (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
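The AR step described above can be sketched in a few lines. The example below uses a hypothetical 1D bimodal posterior and a two-component GMM proposal centered on two assumed MAP points; all numbers are invented for illustration and do not come from the paper, whose GMM is built from DGN simulation results.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical 1D bimodal posterior (unnormalized); a stand-in for the true
# posterior p(m | d_obs), not the paper's actual model.
def posterior(x):
    return (0.6 * np.exp(-0.5 * (x - 1.5) ** 2 / 0.1)
            + 0.4 * np.exp(-0.5 * (x + 1.0) ** 2 / 0.2))

# Two-component GMM proposal centered on the two (assumed) MAP points,
# as the DGN step would provide.
weights = np.array([0.6, 0.4])
means = np.array([1.5, -1.0])
stds = np.sqrt(np.array([0.1, 0.2]))

def gmm_pdf(x):
    return sum(w * norm.pdf(x, loc=m, scale=s)
               for w, m, s in zip(weights, means, stds))

def gmm_sample(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[comp], stds[comp])

# Acceptance/rejection: accept x with probability p(x) / (c * q(x)), where
# c bounds p/q from above (estimated numerically on a grid here).
grid = np.linspace(-5.0, 5.0, 4001)
c = np.max(posterior(grid) / gmm_pdf(grid))

proposals = gmm_sample(10_000)
u = rng.uniform(size=proposals.size)
accepted = proposals[u < posterior(proposals) / (c * gmm_pdf(proposals))]
acceptance_ratio = accepted.size / proposals.size
```

Because each proposal is tested independently, this loop-free AR step parallelizes trivially across proposals, which is the parallelization the abstract refers to.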
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested on a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, at a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data and 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. Of the 1,000 original GMM samples, 105 AR-GMM samples are accepted, and they are used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
Gao, Guohua (Shell Global Solutions US, Inc.) | Vink, Jeroen C. (Shell Global Solutions International B.V.) | Chen, Chaohui (Shell International Exploration & Production Inc.) | Tarrahi, Mohammadali (Shell Global Solutions US, Inc.) | El Khamra, Yaakoub (Shell Global Solutions US, Inc.)
Properly quantifying the uncertainty of model parameters and production forecasts after conditioning to production data is crucially important for decision making, yet remains an extremely challenging task. A novel approach is proposed to generate approximate conditional realizations using the distributed Gauss-Newton (DGN) method together with a multiple-local-Gaussian approximation technique. Results are compared with those obtained from other approaches such as Randomized Maximum Likelihood (RML), the Ensemble Kalman Filter (EnKF), and Markov Chain Monte Carlo (MCMC) simulation.
The DGN method is developed to find multiple local minima of the objective function in parallel, by collecting and sharing information from dispersed regions in parameter space dynamically. Around each local minimum, the estimated Hessian obtained from the Gauss-Newton approximation along with the prior inverse covariance matrix is used as a local approximation of the posterior inverse covariance matrix. The posterior joint PDF can then be approximated as a weighted linear superposition of multiple local Gaussian distributions, which can be sampled very efficiently without having to resort to expensive MCMC methods.
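The linear algebra behind this local Gaussian approximation can be sketched as follows. The dimensions, sensitivity matrices, minima, and mixture weights below are all invented toy values, not the paper's; the sketch only illustrates forming each local inverse covariance as the prior inverse covariance plus the Gauss-Newton term, then sampling the weighted mixture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: 2 uncertain parameters, 3 observed data points.
C_M_inv = np.linalg.inv(np.array([[1.0, 0.3], [0.3, 1.0]]))  # prior inverse covariance
C_D_inv = np.diag([4.0, 4.0, 4.0])                           # data-error inverse covariance

# Suppose DGN found two local minima m_k with sensitivity matrices G_k.
local_minima = [np.array([0.5, -0.2]), np.array([-1.0, 0.8])]
sensitivities = [np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 1.0]]),
                 np.array([[0.8, 0.2], [0.1, 0.9], [0.3, 0.3]])]

# Gauss-Newton approximation of the posterior inverse covariance at each minimum:
#   H_k = C_M^{-1} + G_k^T C_D^{-1} G_k
covariances = [np.linalg.inv(C_M_inv + G.T @ C_D_inv @ G) for G in sensitivities]

# Mixture weights, e.g. from the relative posterior mass at each mode (assumed here).
mix_weights = np.array([0.7, 0.3])

def sample_mixture(n):
    """Draw n samples from the weighted superposition of local Gaussians."""
    comp = rng.choice(len(mix_weights), size=n, p=mix_weights)
    return np.array([rng.multivariate_normal(local_minima[k], covariances[k])
                     for k in comp])

samples = sample_mixture(5000)
```

Sampling the mixture is cheap (pick a component, then draw one multivariate normal), which is why no MCMC chain is needed at this stage.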
The proposed approach is first validated using a nonlinear toy history-matching problem with multiple modes. In terms of efficiency, the new approach significantly reduces the computational cost and accelerates the uncertainty-quantification process compared with the traditional RML method or traditional MCMC approaches. In terms of accuracy, uncertainty characteristics obtained from the proposed approach are comparable to those generated by MCMC simulation, and they are much better than those obtained from EnKF or RML. The approach is then applied to a real-field history-matching problem, in which the dynamic system of multiphase flow in the reservoir exhibits strongly nonlinear behavior and the objective function has multiple local minima. Uncertainty ranges of production forecasts for the real-field case are quantified by generating an ensemble of conditional realizations. The production forecasts for all conditional realizations are consistent with the production data observed after the history-matching period, which further validates the applicability of the proposed method to real-field problems.
Its high efficiency makes the new approach practical for large-scale problems, for which methods based on design of experiments break down. Furthermore, as was argued by
Across many fields of science and engineering, computers now play a significant role in scientific discovery through both large-scale simulation and real-time data acquisition. As scientific simulations increasingly require new levels of complexity and fidelity and leverage increasing computational capabilities, scientists are migrating their applications back and forth among workstations, supercomputers, high-performance clusters, and new distributed grids of computers. For applications that depend on live real-world data, migration to varied and distributed resources poses additional challenges: individual sensor communications and data-path integrity must be reassigned and tested before the application is ready for a real-time operating environment.
This paper presents a general approach for communicating live real-world data to simulations deployed on high-end computational resources. The architecture and design are presented for generic applications before focusing on a particular scenario: drilling-dynamics optimization. In this scenario, simulations of drilling behavior depend on real-time data from on-site (remote) sensors. Sensor data are streamed to a simulation framework, which (available on a desktop computer or a supercomputer) displays relevant information, starts simulations with real-time inputs, and then presents results and recommendations. For our application, we use the Cactus Code (www.cactuscode.org), an open-source high-performance scientific-computing framework, as the simulation framework, and LabVIEW (www.ni.com/LabVIEW) as the data-acquisition software. The advances in architecture portability enabled by the framework are summarized, and experiences of system use are presented. The discussion includes opportunities found and lessons learned from using this platform in various environments, highlighting drilling optimization.
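As a minimal illustration of the streaming pattern only (not the actual Cactus/LabVIEW implementation), live sensor readings can be pushed over a TCP socket as JSON lines and consumed by a simulation loop. The port choice, message format, and field name `weight_on_bit` are all invented for this sketch.

```python
import json
import socket
import threading

HOST = "127.0.0.1"

# Consumer side: listen on an OS-assigned free port (port 0).
server = socket.socket()
server.bind((HOST, 0))
server.listen(1)
port = server.getsockname()[1]

def sensor(port):
    """Stand-in for an on-site sensor pushing readings as JSON lines."""
    with socket.create_connection((HOST, port)) as s:
        for step in range(5):
            reading = {"step": step, "weight_on_bit": 10.0 + step}
            s.sendall((json.dumps(reading) + "\n").encode())

t = threading.Thread(target=sensor, args=(port,))
t.start()

conn, _ = server.accept()
buffer = b""
results = []
while len(results) < 5:
    buffer += conn.recv(1024)
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        data = json.loads(line)
        # A real deployment would feed each reading into the running
        # drilling-dynamics simulation; here we just record it.
        results.append(data["weight_on_bit"])

t.join()
conn.close()
server.close()
```

Newline-delimited JSON keeps the consumer's framing logic trivial; the same loop shape applies whether the consumer is a desktop tool or a simulation running on a cluster.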
Li, Xin (Louisiana State University) | Lei, Zhou (Louisiana State University) | Huang, Dayong (Louisiana State University) | El Khamra, Yaakoub (CCT) | Allen, Gabrielle (Louisiana State University) | White, Christopher David (Louisiana State University) | Kim, Jong Gyun (Argonne National Lab)
Reservoir simulators are computationally costly and produce diverse, voluminous results. These features motivate the need for distributed computing systems with high-level methods to manage computation, data, and workflow; the Grid computing community is working to provide such tools.
We begin by discussing the features of Grid computing environments, emphasizing those that enhance reliability and usability and that can contribute to decreased computing costs via a future market in secure Grid computing. A Grid-based workflow for reservoir simulation is then outlined, which uses software such as Condor and the Globus Toolkit to build and manage workflows, the Grid Security Infrastructure (GSI) for security, GridSphere for portal creation and management, and Cactus for parallelization.
The workflow is demonstrated for a geostatistical study of three-dimensional displacements in heterogeneous reservoirs using a parallel IMPES reservoir simulator. A suite of 5,120 simulations assesses the effects of geostatistical methods. Much of the pre- and post-processing is automated in this workflow, which is based on experimental design. Multiple runs are simultaneously executed using Grid computing. Grid services manage security, data acquisition, resource brokerage and allocation, response analysis, and visualization; the reservoir engineer is freed from micromanaging these workflow components. Flow-response analyses indicate that efficient, widely used sequential geostatistical simulation methods may overestimate flow-response variability, compared to more computationally costly direct methods such as LU decomposition.
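The direct method referred to can be sketched as follows: a Cholesky (LU-type) factor of the full covariance matrix turns independent standard normals into a correlated Gaussian field. The grid size and exponential covariance model below are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Direct (LU/Cholesky) unconditional Gaussian simulation on a small 1D grid.
n = 50
x = np.arange(n, dtype=float)
# Exponential covariance with an assumed correlation range of 10 grid cells.
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)

# Factor once (with a small jitter for numerical stability), then each
# realization costs only a matrix-vector product.
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
realization = L @ rng.normal(size=n)  # one field with covariance ~ C

# Many realizations at once: columns are independent correlated fields.
ensemble = L @ rng.normal(size=(n, 200))
emp_cov = ensemble @ ensemble.T / 200  # empirical covariance, approximates C
```

The factorization cost grows as O(n^3) in the number of grid cells, which is why this exact approach becomes expensive compared with sequential simulation on large 3D grids.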
The workflow is extended to automatic history matching using the ensemble Kalman filter in the Grid environment. Users need only choose parameters for stochastic simulations in a Web-based Grid portal. A two-dimensional problem with one injector and four producers is created to investigate the performance of the Grid ensemble Kalman filter.
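A schematic ensemble-Kalman-filter analysis step is shown below, with toy dimensions and a random linear map standing in for the reservoir simulator; none of the sizes or values come from the paper's 2D waterflood problem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dimensions: 100 ensemble members, 5 model parameters, 3 observations.
n_ens, n_param, n_obs = 100, 5, 3

# Ensemble of model parameters (columns = members) and their predicted data.
M = rng.normal(size=(n_param, n_ens))
G = rng.normal(size=(n_obs, n_param))  # assumed linear forward model
D = G @ M

d_obs = np.array([1.0, -0.5, 0.3])     # invented observations
C_D = 0.1 * np.eye(n_obs)              # observation-error covariance

# Ensemble anomalies (deviations from the ensemble mean).
A_m = M - M.mean(axis=1, keepdims=True)
A_d = D - D.mean(axis=1, keepdims=True)

# Sample cross-covariance (params vs. data) and data auto-covariance.
C_md = A_m @ A_d.T / (n_ens - 1)
C_dd = A_d @ A_d.T / (n_ens - 1)

# Kalman gain, then update each member against perturbed observations.
K = C_md @ np.linalg.inv(C_dd + C_D)
D_obs = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), C_D, size=n_ens).T
M_updated = M + K @ (D_obs - D)
```

The perturbed-observation form keeps the updated ensemble spread consistent with the observation error, which is what makes the filter usable for uncertainty quantification rather than only for a single best estimate.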
Grid computing offers a promising path toward effective use of computing resources within and between organizations. Existing tools support parallelization of code and construction of workflows. The software developed here can be adapted to drive other simulation tools, and is available for downloading.
Problem Statement
Reservoir simulation integrates extensive geoscience and engineering data with complex process models to examine reservoir behavior. Reservoir-performance predictions commonly consider many scenarios, cases, and realizations, which require a large number of simulation runs (10^2 to 10^5 or more) and vast storage resources. Future production prediction therefore depends on computational efficiency to improve reservoir characterization and the adoption of new oil-recovery technologies. But reservoir-simulation processes, such as designed simulation and history matching, are time-consuming and expensive. Moreover, the volume of simulation-ensemble results can reach many terabytes.
Reservoir simulation technology has improved vastly in recent decades, but the large number of simulations and the complexity of geological and flow models pose computational and data-storage requirements that challenge high-performance platforms. Typical computing environments are inadequate for such data- and computation-intensive purposes. These resource-intensive computational challenges can be attacked using emerging Grid computing technology.