Oilfield tubulars have been traditionally designed using a deterministic working stress design (WSD) approach, which is based on multipliers called safety factors (SFs). The primary role of a safety factor is to account for uncertainties in the design variables and parameters, primarily the load effect and the strength or resistance of the structure.
Dynamic data is information that changes asynchronously as the information is updated. Unlike static data, which is infrequently accessed and unlikely to be modified, or streaming data, which has a constant flow of information, dynamic data involves updates that may come at any time, with sporadic periods of inactivity in between. In the context of reservoir engineering, dynamic data is used during the creation of a reservoir model in conjunction with historical static data. When modeled accurately, any sampling from the conditional distribution would produce accurate static and dynamic characteristics. When a permanence of ratios hypothesis is employed, the conditional probability P(A|B,C) can be expressed in terms of P(A), P(A|B), and P(A|C).
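The permanence-of-ratios combination rule mentioned above can be sketched as follows. This is a minimal illustration assuming the standard formulation (Journel's tau model with tau = 1); the function names are hypothetical, not from the abstract.

```python
# Sketch of the permanence-of-ratios rule for combining conditional
# probabilities: P(A|B,C) expressed via P(A), P(A|B), and P(A|C).
# Illustrative assumption of the standard formulation, not this paper's code.

def odds(p):
    """Distance-to-event ratio (1 - p) / p used by the rule."""
    return (1.0 - p) / p

def permanence_of_ratios(p_a, p_a_given_b, p_a_given_c):
    """Combine two conditional probabilities under permanence of ratios."""
    a = odds(p_a)
    b = odds(p_a_given_b)
    c = odds(p_a_given_c)
    x = b * c / a           # relative odds updates assumed to multiply
    return 1.0 / (1.0 + x)  # back-transform to P(A|B,C)

# If C is uninformative (P(A|C) == P(A)), the rule returns P(A|B) unchanged.
print(round(permanence_of_ratios(0.5, 0.8, 0.5), 6))  # → 0.8
```

A useful sanity check on the rule: when both data sources agree with the prior, the combined probability equals the prior.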
Nandi Formentin, Helena (Durham University and University of Campinas) | Vernon, Ian (Durham University) | Avansi, Guilherme Daniel (University of Campinas) | Caiado, Camila (Durham University) | Maschio, Célio (University of Campinas) | Goldstein, Michael (Durham University) | Schiozer, Denis José (University of Campinas)
Reservoir simulation models incorporate physical laws and reservoir characteristics. They represent our understanding of sub-surface structures based on the available information. Emulators are statistical representations of simulation models, offering fast evaluations of a sufficiently large number of reservoir scenarios to enable a full uncertainty analysis. Bayesian History Matching (BHM) aims to find the range of reservoir scenarios that are consistent with the historical data, in order to provide a comprehensive evaluation of reservoir performance and consistent, unbiased predictions incorporating realistic levels of uncertainty, as required for full asset management. We describe a systematic approach for uncertainty quantification that combines reservoir simulation and emulation techniques within a coherent Bayesian framework.
Our systematic procedure is an alternative and more rigorous tool for reservoir studies dealing with probabilistic uncertainty reduction. It comprises the design of sets of simulation scenarios to facilitate the construction of emulators capable of accurately mimicking the simulator with known levels of uncertainty. Emulators can be used to accelerate the steps that require the large numbers of input-space evaluations needed for statistical validity. Via implausibility measures, we compare emulated outputs with historical data, incorporating major process uncertainties. We then iteratively identify regions of input parameter space unlikely to provide acceptable matches, performing more runs and reconstructing more accurate emulators at each wave, an approach that benefits from several efficiency improvements. We provide a workflow covering each stage of this procedure.
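The implausibility measure at the heart of this procedure can be sketched as below. This is a minimal illustration of the standard BHM form (observed-minus-emulated distance standardized by the combined variances); the variable names and numbers are illustrative assumptions, not the paper's case study.

```python
import numpy as np

# Minimal sketch of the implausibility measure used to rule out regions of
# input space in Bayesian History Matching. Names and values are illustrative.

def implausibility(z_obs, emulator_mean, emulator_var, obs_var, discrepancy_var):
    """Standardized distance between observed data and emulator prediction,
    accounting for emulator, observation, and model-discrepancy uncertainty."""
    total_var = emulator_var + obs_var + discrepancy_var
    return np.abs(z_obs - emulator_mean) / np.sqrt(total_var)

# A scenario is deemed implausible (ruled out) if I(x) exceeds a cutoff,
# commonly 3 by appeal to Pukelsheim's three-sigma rule.
I = implausibility(z_obs=120.0, emulator_mean=100.0, emulator_var=16.0,
                   obs_var=4.0, discrepancy_var=5.0)
print(I > 3.0)  # → True
```

Each wave re-evaluates this measure with more accurate emulators, so the non-implausible region shrinks iteratively.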
The procedure was applied to reduce uncertainty in a complex reservoir case study with 25 injection and production wells. The case study contains 26 uncertain attributes representing petrophysical, rock-fluid and fluid properties. We selected phases of evaluation considering specific events during reservoir management, improving the efficiency of simulation resource use. We identified and addressed data patterns untracked in previous studies: simulator targets,
We advance the applicability of Bayesian History Matching for reservoir studies with four deliverables: (a) a general workflow for systematic BHM; (b) the use of phases to progressively evaluate the historical data; (c) the integration of two-class emulators in the BHM formulation; and (d) a demonstration of internal discrepancy as a source of error in the reservoir model.
Probabilistic and deterministic methods for reserves and resources evaluation are commonly used in isolation and often considered mutually exclusive. Subsurface uncertainties are critical factors impacting projects and reserves/resources, especially in projects and areas where large sums of capital investment are required. Probabilistic methods allow a rigorous use of information on the ranges of uncertainty in key reservoir parameters, such as porosity, water saturation, permeability, and aquifer size, for reserves estimation. A key output of probabilistic methods is the confidence levels associated with the reserves. Deterministic methods cannot provide confidence levels for reserves and resources assessments, which is why their successful application often relies on the expert knowledge of the evaluator and the strict use of reserves or resources definitions. Technological advances in computing in recent decades have played a key role in advancing computationally intensive probabilistic methodologies, including artificial intelligence. These advances have allowed integrated teams to perform studies using sophisticated workflows in feasible timeframes.
Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with their strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamless integration of a distributed-Gauss-Newton (GN) (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
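The acceptance/rejection step with a Gaussian-mixture proposal can be illustrated with a 1D toy version. The bimodal target density, mixture parameters, and envelope constant below are illustrative assumptions standing in for the posterior PDF and DGN-derived GMM of the actual workflow.

```python
import numpy as np

# Toy sketch of acceptance/rejection (AR) sampling with a Gaussian-mixture
# proposal, mirroring the AR-GMM idea: draw from the GMM approximation of the
# posterior, then reject samples to recover i.i.d. draws from the target.

rng = np.random.default_rng(0)

def gmm_pdf(x, weights, means, stds):
    comps = [w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
             for w, m, s in zip(weights, means, stds)]
    return sum(comps)

def gmm_sample(n, weights, means, stds):
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.array(means)[idx], np.array(stds)[idx])

# Bimodal "posterior" standing in for the true PDF (two MAP points).
def target_pdf(x):
    return 0.5 * gmm_pdf(x, [1.0], [-2.0], [0.7]) + 0.5 * gmm_pdf(x, [1.0], [2.0], [0.7])

weights, means, stds = [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0]  # GMM approximation
M = 2.0  # envelope constant chosen so that target <= M * proposal here

proposals = gmm_sample(5000, weights, means, stds)
u = rng.uniform(size=proposals.size)
accepted = proposals[u < target_pdf(proposals) / (M * gmm_pdf(proposals, weights, means, stds))]

# Accepted samples are i.i.d. draws from the target, usable for forecasting.
print(0 < accepted.size < proposals.size)  # → True
```

The better the GMM approximates the posterior, the smaller M can be and the higher the acceptance ratio, which is why reusing DGN simulation results to fit the GMM pays off.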
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested with a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but at a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data, with 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. There are 105 AR-GMM samples accepted from the 1,000 original GMM samples, and they are used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
Development and management of oilfields involve several sources of uncertainty that complicate an already challenging decision-making process. Two main sources of uncertainty are related to the geologic description of reservoirs and future development scenarios. While geologic uncertainty has been widely studied and robust optimization methods have been developed to account for it, the uncertainty in future development plans has not been considered in optimization problems. Future development strategies have been included as decision variables in field development optimization problems; however, in practice, future field development plans tend to deviate from the solutions obtained in past optimization problems. Therefore, a more prudent and realistic approach toward oilfield optimization is to consider the uncertainty in both geology and future development plans to obtain robust solutions. We develop a closed-loop stochastic field development optimization formulation to account for the uncertainty in geologic description and future infill drilling scenarios. The proposed approach optimizes the decision variables for the current stage of planning (e.g., well locations and operational settings) while accounting for geologic and future development uncertainties, where the former is represented by several reservoir model realizations and the latter through drilling scenario trees and probabilistic descriptions of future drilling events and parameters. In the developed method, prior to each decision-making stage the reservoir is operated based on the current optimal strategy until dynamic data becomes available to calibrate the geological models. After each data assimilation step, a new optimization is performed to adjust controllable decision variables for the current well configuration (e.g., well rates or BHPs) using the updated models and potentially revised future development scenarios.
Using a multi-stage stochastic optimization workflow, this process is repeated after each decision stage. Several numerical experiments are presented to discuss various aspects of the proposed closed-loop stochastic optimization formulation and to compare the solutions from different methods adopted for treatment of future development plans. The results indicate that stochastic treatment of future development events (1) can hedge against uncertain future development activities by obtaining optimization solutions that are robust against changes in future decisions, and (2) considerably reduces the performance losses that can result from field development when uncertainty is disregarded.
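The optimize-operate-assimilate cycle described above can be sketched with a deliberately simple toy problem. Everything here is a hypothetical stand-in: the real workflow uses reservoir simulation, ensemble data assimilation, and scenario trees, not a one-parameter quadratic objective.

```python
import numpy as np

# Toy sketch of the closed-loop cycle: optimize -> operate -> assimilate ->
# re-optimize. All functions are illustrative stand-ins for the real workflow.

rng = np.random.default_rng(1)

def optimize_controls(realizations):
    """Robust choice: control maximizing expected value over the ensemble.
    Toy objective value(u; k) = -(u - k)^2 has optimum u = mean(k)."""
    return float(np.mean(realizations))

def operate_and_observe(true_k, noise=0.1):
    """Operating the field yields noisy dynamic data about the true parameter."""
    return true_k + rng.normal(0.0, noise)

def assimilate(realizations, observed, gain=0.5):
    """Simplistic ensemble update nudging each realization toward the data."""
    return realizations + gain * (observed - realizations)

true_k = 2.0
realizations = rng.normal(0.0, 1.0, size=50)   # prior geologic ensemble
for stage in range(5):                          # decision stages
    u = optimize_controls(realizations)         # robust strategy for this stage
    observed = operate_and_observe(true_k)      # new dynamic data arrives
    realizations = assimilate(realizations, observed)

# After several assimilation cycles, the robust control approaches the truth.
print(abs(optimize_controls(realizations) - true_k) < 0.3)  # → True
```

The key structural point is the loop itself: each stage's optimization uses the ensemble as calibrated up to that stage, so decisions adapt as uncertainty is reduced.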
Reservoir-simulation models play an essential role in generating optimal field-development strategies, but they need to be history-matched before they can be used for reliable forecasting. Traditional history matching of a reservoir involves matching observed production and pressure data at well locations by changing the uncertain parameters in the reservoir model within the acceptable range. The parameters can be classified broadly as static and dynamic. Static parameters include permeability, porosity, and net to gross, among many others. Dynamic parameters may include oil/water contacts, fault transmissibilities, relative permeability curves, and flow pathways.
Bayesian inference provides a convenient framework for history matching and prediction. In this framework, prior knowledge, system nonlinearity, and measurement errors can be directly incorporated into the posterior distribution of the parameters. The Markov-chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior distribution. However, the MCMC method usually requires a large number of forward simulations. Hence, it can be a computationally intensive task, particularly when dealing with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model outputs in the form of polynomials using the stochastic collocation method (SCM). In addition, we use interpolation with the nested sparse grids and adaptively take into account the different importance of parameters for high-dimensional problems. Furthermore, we introduce an additional transform process to improve the accuracy of the surrogate model in case of strong nonlinearities, such as a discontinuous or unsmooth relation between the input parameters and the output responses. Once the surrogate system is built, we can evaluate the likelihood with little computational cost. Numerical results demonstrate that the proposed method can efficiently estimate the posterior statistics of input parameters and provide accurate results for history matching and prediction of the observed data with a moderate number of parameters.
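The surrogate-accelerated MCMC idea can be sketched in one dimension: evaluate the expensive likelihood through a cheap polynomial stand-in. The forward model, prior, and noise level below are toy assumptions, and a least-squares polynomial fit is used in place of the paper's stochastic collocation on nested sparse grids.

```python
import numpy as np

# Sketch: build a polynomial surrogate from a few "simulation" runs, then run
# Metropolis MCMC against the surrogate likelihood instead of the simulator.

rng = np.random.default_rng(0)

def forward_model(k):            # stand-in for an expensive reservoir simulation
    return np.tanh(k) + k        # smooth, monotone toy response

# Train the surrogate on a handful of forward-model evaluations.
train_k = np.linspace(-1.0, 3.0, 9)
coeffs = np.polyfit(train_k, forward_model(train_k), deg=4)
surrogate = lambda k: np.polyval(coeffs, k)

# Metropolis MCMC using the surrogate in the likelihood (Gaussian prior/noise).
obs, sigma = forward_model(1.5), 0.05   # synthetic observation at true k = 1.5
def log_post(k):
    return -0.5 * (k / 2.0) ** 2 - 0.5 * ((surrogate(k) - obs) / sigma) ** 2

k, chain = 0.0, []
for _ in range(5000):
    prop = k + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(k):
        k = prop
    chain.append(k)

# The posterior concentrates near parameters whose surrogate output matches obs.
print(abs(np.median(chain[1000:]) - 1.5) < 0.5)  # → True
```

Each MCMC step here costs a polynomial evaluation rather than a simulation run, which is the source of the computational savings the abstract describes.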
Yamada, Tatsuya (ADNOC Offshore) | Yamamoto, Kei (INPEX Corporation) | Alqubaisi, Alyazia (ADNOC Offshore) | Al Jasmi, Sami (ADNOC Offshore) | Uematsu, Hiroshi (ADNOC Offshore) | Kojima, Keitaro (ADNOC Offshore) | Shibasaki, Toshiaki (ADNOC Offshore) | Al-Jenaibi, Faisal (ADNOC)
Reservoir simulation is widely used for field development planning in many fields, and evaluating the uncertainty range of production forecasts is indispensable for making decisions on further investment. A reservoir simulation model consists of geological, petrophysical and reservoir engineering parameters for each cell and cell boundary. These reservoir model parameters are usually defined based on limited available data in consideration of their uncertainty range. Therefore, the identification of influential parameters and the reduction of their uncertainty ranges are key to mitigating prediction uncertainty.
An Upper Jurassic carbonate reservoir in Field A, located offshore Abu Dhabi, has a long production history of more than 30 years. Field A has experienced several development schemes, including natural depletion, crestal gas injection and crestal water injection. The current reservoir simulation model reasonably replicates historical performance on pressure, water cut evolution and GOR trend at field and well-by-well scales. On the other hand, we identified that some reservoir model parameters have high uncertainty due to reservoir complexity and a lack of reliable data.
In this study, we focused on identifying influential parameters on the production forecast and reducing parameter uncertainty ranges using an experimental design approach. More than 200 simulation cases were generated with different combinations of selected parameters using the Latin Hypercube Sampling method. For each case, we evaluated history matching quality at field scale and the relationship between history matching quality and each parameter. We found that some parameters correlate with history matching quality independently of the other parameter settings. This means that the uncertainty ranges of those parameters can be reduced to achieve an acceptable history match irrespective of the other parameters. Furthermore, the prediction uncertainty range was analyzed using the selected cases showing reasonable history matching quality, to investigate the relationship between cumulative oil production and each parameter. The results indicated that some parameters have a stronger impact on the production forecast and that their uncertainty ranges need to be reduced by further data gathering or other mitigation plans. This study successfully demonstrated that the proposed multiple-parameter sensitivity analysis, through effective use of an experimental design approach, enables reduction of parameter uncertainty ranges and identification of the key influential parameters. Furthermore, the study results contribute to the prioritization and optimization of the future data gathering plan in Field A.
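Latin Hypercube Sampling, used above to generate the simulation cases, can be sketched with a short numpy-only implementation: each of the n equal-probability strata of every parameter range is sampled exactly once, with independent random pairings across parameters. The parameter names and ranges below are illustrative assumptions, not Field A's actual uncertain attributes.

```python
import numpy as np

# Minimal numpy-only Latin Hypercube Sampling: stratify each parameter range
# into n_samples equal intervals and draw one point per interval, shuffling
# the interval order independently per parameter.

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, bounds):
    """bounds: dict of parameter -> (low, high). Returns dict of sample arrays."""
    samples = {}
    for name, (low, high) in bounds.items():
        # One uniform draw inside each of the n equal-probability strata...
        u = (rng.permutation(n_samples) + rng.uniform(size=n_samples)) / n_samples
        # ...rescaled to the parameter's uncertainty range.
        samples[name] = low + u * (high - low)
    return samples

cases = latin_hypercube(200, {
    "porosity_multiplier": (0.8, 1.2),   # illustrative uncertain attributes
    "kv_kh_ratio": (0.01, 0.5),
    "aquifer_size": (1.0, 10.0),
})

# Check the stratification: every one of the 200 strata is hit exactly once.
u_sorted = (np.sort(cases["kv_kh_ratio"]) - 0.01) / (0.5 - 0.01)
print(np.array_equal(np.floor(u_sorted * 200), np.arange(200)))  # → True
```

Compared with plain Monte Carlo, this guarantees each parameter's full uncertainty range is covered even with a modest case count, which is what makes a ~200-run sensitivity study feasible.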