Hong, Aojie (National IOR Centre of Norway and University of Stavanger) | Bratvold, Reidar B. (National IOR Centre of Norway and University of Stavanger) | Lake, Larry W. (University of Texas at Austin) | Ruiz Maraggi, Leopoldo M. (University of Texas at Austin)
Summary

Decline-curve analysis (DCA) for unconventional plays requires a model that can capture the characteristics of different flow regimes, and various models have been proposed for this purpose. Traditionally, in probabilistic DCA, an analyst chooses a single model that is believed to best fit the data. However, several models might fit the data almost equally well, and the one that best fits the data might not best represent the flow characteristics. Therefore, uncertainty remains regarding which model is "best." This work aims to integrate model uncertainty into probabilistic DCA for unconventional plays. Instead of identifying a single "best" model, we propose to regard any model as potentially good, with goodness characterized by a probability. The probability of a model being good is interpreted as a measure of the relative truthfulness of this model compared with the other models, and this probability is subsequently used to weight the model forecast. Bayes' law is used to assess the model probabilities given the data. Multiple samples of the model-parameter values are obtained using maximum likelihood estimation (MLE) with Monte Carlo simulation. The probabilistic forecasts of the individual models are thus aggregated into a single probabilistic forecast that incorporates model uncertainty along with the intrinsic uncertainty (i.e., the measurement errors) in the given data. We demonstrate and conclude that the proposed approach can mitigate over- and underestimates resulting from using a single decline-curve model for forecasting. The proposed approach performs well in propagating model uncertainty to uncertainty in production forecasting; that is, we determine a forecast that represents the uncertainty given multiple possible models conditioned to the data. The field data show that no single model is the most probable to be good for all wells. The novelties of this work are that probability is used to describe the goodness of a model; a Bayesian approach is used to integrate model uncertainty into probabilistic DCA; the approach is applied to actual field data to identify the most probable model given the data; and we demonstrate the value of using this approach to consider multiple models in probabilistic DCA for unconventional plays.

Introduction

Although numerical techniques for forecasting hydrocarbon production have developed rapidly over the past decades, DCA remains an industry-accepted method and is used extensively in the oil and gas industry. Decline-curve models are computationally attractive because only production data, which can be easily acquired, are required to determine a few parameter values through history matching.
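To make the forecast-weighting step described in the summary concrete, the following minimal Python sketch applies Bayes' law with equal prior model probabilities to two illustrative decline-curve models (Arps hyperbolic and Duong). The data, fitted parameter values, and Gaussian measurement-error model are hypothetical assumptions for illustration, not those of the paper.

```python
import numpy as np

# Minimal sketch of Bayesian model averaging for decline-curve forecasts.
# Models, fitted parameters, and error model are illustrative only.

def arps_hyperbolic(t, qi, Di, b):
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def duong(t, qi, a, m):
    return qi * t ** (-m) * np.exp(a / (1.0 - m) * (t ** (1.0 - m) - 1.0))

def log_likelihood(q_obs, q_model, sigma):
    # Gaussian measurement-error model
    return (-0.5 * np.sum(((q_obs - q_model) / sigma) ** 2)
            - q_obs.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

t = np.arange(1, 25)                                   # months on production
q_obs = 900.0 * np.exp(-0.05 * t) + np.random.normal(0.0, 20.0, t.size)

# Rates predicted by each candidate model at its MLE parameter values
# (parameters assumed already fitted for this sketch)
candidates = {
    "hyperbolic": arps_hyperbolic(t, 950.0, 0.06, 0.8),
    "duong":      duong(t, 950.0, 1.2, 1.1),
}

logL = {name: log_likelihood(q_obs, q, sigma=20.0) for name, q in candidates.items()}

# Bayes' law with equal prior model probabilities:
# P(model | data) is proportional to P(data | model)
shift = max(logL.values())
weights = {name: np.exp(v - shift) for name, v in logL.items()}
total = sum(weights.values())
post_prob = {name: w / total for name, w in weights.items()}

# Model-averaged forecast of rate at a future time
t_future = 60.0
forecast = (post_prob["hyperbolic"] * arps_hyperbolic(t_future, 950.0, 0.06, 0.8)
            + post_prob["duong"] * duong(t_future, 950.0, 1.2, 1.1))
print(post_prob, forecast)
```

Repeating this over many Monte Carlo samples of the parameter values would yield the aggregated probabilistic forecast that carries both parameter and model uncertainty.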
A challenge in oil-reservoir studies is evaluating the ability of geomechanical, statistical, and geophysical methods to predict discrete geological features. This problem arises frequently with fracture corridors, which are discrete, tabular subvertical fracture clusters. Fracture corridors can be inferred from well data such as horizontal-borehole-image logs. Unfortunately, well data, and especially borehole image logs, are sparse, and predictive methods are needed to fill in the gap between wells. One way to evaluate such methods is to compare predicted and inferred fracture corridors statistically, using chi-squared and contingency tables.
In this article, we propose a modified contingency table to validate fracture-corridor-prediction techniques. We introduce two important modifications to capture special aspects of fracture corridors. The first is the incorporation of exclusion zones where no fracture corridors can exist, and the second is accounting for the fuzzy nature of fracture-corridor indicators from wells, such as circulation losses. An indicator is fuzzy when it has more than one possible interpretation. The reliability of an indicator is the probability that it correctly suggests a fracture corridor. Indicators with a reliability of unity are hard indicators, whereas "soft" and "fuzzy" indicators are those with reliability less than unity.
A structural grid is overlaid on the reservoir top in an oil field. Each cell of the grid is examined for the presence and reliability of inferred fracture corridors and exclusion zones and for the confidence level of predicted fracture corridors. The results are summarized in a contingency table and used to calculate the chi-squared statistic and the conditional probability of an actual fracture corridor given a predicted fracture corridor.
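As an illustration of the tally and statistics described above, the following Python sketch computes the chi-squared statistic and the conditional probability of an actual corridor given a predicted one from a 2x2 contingency table. The cell counts are hypothetical; fuzzy indicators with reliability less than unity could enter the table as fractional counts, and exclusion-zone cells are assumed to have been removed beforehand.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical grid-cell counts (exclusion-zone cells already excluded).
# Rows: inferred fracture corridor present / absent.
# Columns: predicted fracture corridor present / absent.
observed = np.array([[34.0, 21.0],
                     [48.0, 297.0]])

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row * col / observed.sum()

chi_sq = ((observed - expected) ** 2 / expected).sum()
p_value = chi2.sf(chi_sq, df=1)          # (rows-1)*(cols-1) degrees of freedom

# Conditional probability of an actual corridor given a predicted corridor
p_actual_given_predicted = observed[0, 0] / observed[:, 0].sum()
print(chi_sq, p_value, p_actual_given_predicted)
```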
Three actual case studies are included to demonstrate how single or joint predictive methods can be statistically evaluated and how conditional probabilities are calculated using the modified contingency tables. The first example tests seismic faults as indicators of fracture corridors. The other examples test fracture corridors predicted by a simple geomechanical method.
Wheeler, Mary F. (The University of Texas at Austin, USA) | Srinivasan, Sanjay (Pennsylvania State University, USA) | Lee, Sanghyun (Florida State University, USA) | Singh, Manik (Pennsylvania State University, USA)
Optimal design of hydraulic fractures is controlled by the distribution of natural fractures in the reservoir. Because of sparse information, there is uncertainty associated with the prediction of the natural fracture system. Our objectives here are to (i) quantify the uncertainty associated with the prediction of natural fractures using micro-seismic data and a Bayesian model-selection approach, and (ii) use fracture probability maps to implement a finite-element phase-field approach for modeling interactions of propagating fractures with natural fractures.
The proposed approach employs state-of-the-art numerical modeling of natural and hydraulic fractures using a diffusive, adaptive finite-element phase-field method. The diffusive phase field is defined using the probability map describing the uncertainty in the spatial distribution of natural fractures. That probability map is computed using a model-selection procedure that utilizes a suite of prior models for the natural-fracture network and a fast proxy to quickly evaluate the forward seismic response corresponding to slip events along fractures. Using indicator functions, diffusive fracture networks are generated with an accurate adaptive-mesh scheme based on a posteriori error estimators.
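The toy sketch below conveys, at a much simpler level than the adaptive finite-element scheme of the paper, how a fracture probability map might be converted into a diffusive indicator field suitable for initializing a phase-field variable. The grid size, sampling scheme, and smoothing length are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative conversion of a natural-fracture probability map into a
# diffusive (smoothed) indicator field; all settings are hypothetical.
nx, ny = 200, 200
prob_map = np.random.rand(nx, ny)        # stand-in for the Bayesian probability map

# Sample a crisp fracture indicator consistent with the probability map
indicator = (np.random.rand(nx, ny) < prob_map).astype(float)

# Diffuse the indicator over a regularization length (in grid cells) so the
# phase-field variable varies smoothly across fracture boundaries
eps_cells = 3.0
phase_field = np.clip(gaussian_filter(indicator, sigma=eps_cells), 0.0, 1.0)
```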
The coupled algorithm was validated against existing benchmark problems, including prototype computations of fracture propagation and reservoir flow in a highly heterogeneous reservoir with natural fractures. Implementation of an algorithm for computing the fracture probability map from synthetic micro-seismic data mimicking a Fort Worth basin data set reveals consistency between the interpreted fracture sets and those observed in the reference. Convergence of the iterative solvers and the numerical efficiency of the methods were tested on different examples, including field-scale problems. The results reveal that interpreting the uncertainty pertaining to the presence of fractures, and utilizing that uncertainty within the phase-field approach to simulate the interactions between induced and natural fractures, yields complex structures that include fracture branching, fracture hooking, etc.
The novelty of this work lies in the efficient integration of phase-field fracture-propagation models with diffusive natural-fracture networks and a stochastic representation of the uncertainty associated with the prediction of natural fractures in a reservoir. The presented method enables practicing engineers to design hydraulic-fracturing treatments while accounting for the uncertainty associated with the location and spatial variation of natural fractures. Together with an efficient parallel implementation, our approach provides a cost-efficient means of optimizing production processes in the field.
In this work, we investigate different approaches for history matching of imperfect reservoir models while accounting for model error. The first approach (the base-case scenario) relies on direct Bayesian inversion using iterative ensemble smoothing with annealing schedules, without accounting for model error. In the second approach, the residual obtained after calibration is used to iteratively update the covariance matrix of the total error, which is a combination of model error and data error. In the third approach, a PCA-based error model is used to represent the model discrepancy during history matching. However, the prior for the PCA weights is quite subjective and generally hard to define. Here, the prior statistics of the model-error parameters are estimated using pairs of accurate and inaccurate models. The fourth approach, inspired by
We present a novel sampling algorithm for characterization and uncertainty quantification of heterogeneous multiple facies reservoirs. The method implements a Bayesian inversion framework to estimate physically plausible porosity distributions. This inversion process incorporates data matching at the well locations and constrains the model space by adding
The proposed workflow uses an ensemble-based Markov chain Monte Carlo approach combined with sampling from probability distributions that are physically meaningful. Moreover, the method targets geostatistical modeling to specific zones in the reservoir. Accordingly, it better satisfies the stationarity assumption inherent in geostatistical simulation techniques. Parameter sampling and geostatistical simulations are carried out through an inversion process. In other words, the models fit the known porosity field at the well locations and are structurally consistent with the main reservoir compartments, zones, and layers obtained from the seismic-impedance volume. The new sampling algorithm ensures that the automated history-matching algorithm maintains diversity among the ensemble members, avoiding underestimation of the uncertainty in the posterior probability distribution.
We evaluate the efficiency of the sampling methodology on a synthetic model of a waterflooded field. The predictive capability of the assimilated ensemble is assessed using production data and dynamic measurements. In addition, the quality of the results is examined by comparing the geological realism of the assimilated ensemble with the reference probability distribution of the model parameters and by computing the predicted dynamic-data mismatch. Our numerical examples show that incorporating the seismically constrained models as prior information results in an efficient model-update scheme and favorable history matching.
Araujo, Mariela (Shell International Exploration and Production Inc.) | Chen, Chaohui (Shell International Exploration and Production Inc.) | Gao, Guohua (Shell International Exploration and Production Inc.) | Jennings, Jim (Shell International Exploration and Production Inc.) | Ramirez, Benjamin (Shell International Exploration and Production Inc.) | Xu, Zhihua (ExxonMobil) | Yeh, Tzu-hao (Shell International Exploration and Production Inc.) | Alpak, Faruk Omer (Shell International Exploration and Production Inc.) | Gelderblom, Paul (Shell International Exploration and Production Inc.)
Increased access to computational resources has allowed reservoir engineers to include assisted history matching (AHM) and uncertainty quantification (UQ) techniques as standard steps of reservoir management workflows. Several advanced methods have become available and are being used in routine activities without a proper understanding of their performance and quality. This paper provides recommendations on the efficiency and quality of different methods for applications to production forecasting, supporting the reservoir-management decision-making process.
Results from five advanced methods and two traditional methods were benchmarked in the study. The advanced methods include the nested-sampling method MultiNest, the integrated global-search Distributed Gauss-Newton (DGN) optimizer with Randomized Maximum Likelihood (RML), the integrated local-search DGN optimizer with a Gaussian Mixture Model (GMM), and two advanced Bayesian-inference-based methods from commercial simulation packages. Two traditional methods were also included for some test problems: the Markov chain Monte Carlo (MCMC) method, which is known to produce accurate results although it is too expensive for most practical problems, and a design-of-experiments (DoE) proxy-based method that is widely used and available in some form in most commercial simulation packages.
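For readers unfamiliar with one of the techniques named above, the following hedged Python sketch illustrates the randomized maximum likelihood (RML) idea on a toy problem: both the observed data and the prior mean are perturbed, and the resulting objective is minimized once per posterior sample. The forward model, covariances, and sample count are placeholders, not the benchmark setups of the study.

```python
import numpy as np
from scipy.optimize import minimize

def forward(m):
    # toy nonlinear forward model mapping parameters to predicted data
    return np.array([m[0] ** 2 + m[1], np.exp(0.5 * m[1])])

d_obs = np.array([2.0, 1.5])
C_d = np.diag([0.05, 0.05])            # data-error covariance (assumed)
m_prior = np.array([1.0, 0.5])
C_m = np.diag([0.25, 0.25])            # prior covariance (assumed)

C_d_inv, C_m_inv = np.linalg.inv(C_d), np.linalg.inv(C_m)
rng = np.random.default_rng(0)

samples = []
for _ in range(50):
    # perturb both the data and the prior mean, then minimize the RML objective
    d_pert = rng.multivariate_normal(d_obs, C_d)
    m_pert = rng.multivariate_normal(m_prior, C_m)

    def objective(m):
        r_d = forward(m) - d_pert
        r_m = m - m_pert
        return 0.5 * r_d @ C_d_inv @ r_d + 0.5 * r_m @ C_m_inv @ r_m

    res = minimize(objective, m_pert, method="Nelder-Mead")
    samples.append(res.x)

posterior_samples = np.array(samples)   # approximate posterior samples
```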
The methods were tested on three cases of increasing complexity: a simple 1D model based on an analytical function with one uncertain parameter, a simple injector-producer well pair in the SPE01 model with eight uncertain parameters, and an unconventional reservoir model with one well and 24 uncertain parameters. A collection of benchmark metrics was considered to compare the results, but the most useful included the total number of simulation runs, sample size, objective-function distributions, cumulative-oil-production forecast distributions, and marginal posterior parameter distributions.
MultiNest and MCMC were found to produce the most accurate results, but MCMC is too costly for practical problems. MultiNest is also costly, but it is much more efficient than MCMC and it may be affordable for some practical applications. The proxy-based method is the lowest-cost solution. However, its accuracy is unacceptably poor.
DGN-RML and DGN-GMM seem to offer the best compromise between accuracy and efficiency, with DGN-GMM being the better of the two. Both methods may produce some poor-quality samples that should be rejected before the final uncertainty quantification.
The results from the benchmark study are somewhat surprising and raise awareness in the reservoir-engineering community of the quality and efficiency of the advanced and traditional methods used for AHM and UQ. Our recommendation is to use DGN-GMM instead of the traditional proxy-based methods for most practical problems, and to consider the more expensive MultiNest when the cost of running the reservoir models is moderate and high-quality solutions are desired.
We present a new digital solution based on a novel technique to predict the remaining performance of acid-gas membranes from field data. Gas membranes are widely used onshore and offshore for acid-gas removal from natural gas because of their efficiency and compactness. These systems are proven and well accepted; however, their performance depends heavily on field operating practices and on the conditions of the natural gas stream feeding the system. If operating conditions are not controlled, system performance can deteriorate. Weakened performance can lead to off-specification product gas, contractual penalties, unexpected downtime, and ultimately the risk of environmental impact. On the other hand, maintenance anxiety and uncertainty can lead to overspending on membrane-element replacements, increasing overall operating expenditures.

We developed the new technique over the past two years to allow the system operator to anticipate performance upsets through predictive monitoring and active machine learning using field operations data from gas membrane systems. The technique adopts a recursive Bayesian estimation method, linear Kalman filtering, and allows operators to proactively predict and manage remaining membrane performance in the field, thereby optimizing membrane-replacement expenditure.
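The minimal Python sketch below illustrates the kind of linear Kalman filter described above, tracking a normalized membrane-performance index and its decline rate from noisy field readings and projecting the state forward. The state model, noise levels, measurements, and forecast horizon are illustrative assumptions, not the actual solution.

```python
import numpy as np

F = np.array([[1.0, 1.0],      # performance carried forward by one decline step
              [0.0, 1.0]])     # decline rate assumed locally constant
H = np.array([[1.0, 0.0]])     # only the performance index is measured
Q = np.diag([1e-4, 1e-6])      # process-noise covariance (assumed)
R = np.array([[1e-2]])         # measurement-noise covariance (assumed)

x = np.array([1.0, -0.01])     # initial state: normalized performance, decline/step
P = np.eye(2) * 0.1

measurements = [0.98, 0.97, 0.95, 0.94, 0.92]   # hypothetical field readings

for z in measurements:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

# Project the filtered state forward to anticipate when performance may fall
# below a replacement threshold
steps_ahead = 12
forecast = (np.linalg.matrix_power(F, steps_ahead) @ x)[0]
print(x, forecast)
```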
This work presents a new workflow to obtain a better-constrained reservoir-scale model for the design of an Alkaline-Surfactant-Polymer (ASP) injection pilot. We explain how the impact of uncertain parameters related to ASP flooding can be quantified using a calibrated core-scale simulation based on experimental results, and how the ranges of the influential parameters for future reservoir-scale simulation can be determined. Computational costs at the core scale are much lower, and the final reservoir model is better constrained.
ASP-flooding feasibility assessment involves core-scale studies, in which chemical formulations are validated in the laboratory under field conditions. For the purpose of pilot design, a numerical model is constructed and calibrated to history match the coreflood sequences: remaining oil saturation (ROS) measurement, surfactant-polymer (SP) and polymer-alkaline (PA) injection, and finally the chase-water slug. To quantify the impact of the ASP chemical parameters on the history match, a global sensitivity analysis (GSA) was performed using response-surface modeling (RSM). To obtain the acceptable ranges of the influential parameters for future reservoir-scale simulation, Bayesian optimization is used.
Applying this methodology to a real reservoir core, the laboratory measurements are accurately reproduced. Nevertheless, once the core-scale model is matched, the transition to the reservoir-scale model must be made. Because of the large number of parameters and their associated uncertainties, this transition is not straightforward. Thus, an additional step is included in our workflow. A new methodology is applied, first, to quantify the impact of the uncertain parameters related to ASP flooding (adsorption of surfactant on the rock, critical micelle concentration, water-mobility reduction by polymer, etc.). To do so, RSM is used and the influential parameters are identified. In this study, the surfactant-adsorption coefficients are the most influential parameters, while the other ASP-related parameters have little impact on matching the experimental results. Second, the acceptable ranges of the influential parameters for future reservoir-scale simulation and feasibility study are obtained through Bayesian optimization. Thus, instead of using wide (prior) ranges of uncertain parameter values, refined (posterior) distributions can be used for the future reservoir model.
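As an illustration of the response-surface ranking step described above, the following Python sketch fits a quadratic response surface to a stand-in coreflood-mismatch function over three hypothetical ASP parameters and ranks their influence by effect magnitude. The parameter names, ranges, design, and mismatch function are placeholders, not those of the study.

```python
import numpy as np
from itertools import product

param_names = ["surf_adsorption", "cmc", "polymer_mobility_reduction"]
lows = np.array([0.1, 0.05, 1.0])
highs = np.array([0.9, 0.50, 3.0])

def coreflood_mismatch(p):
    # stand-in for the core-scale simulator mismatch against lab data
    return (p[0] - 0.6) ** 2 + 0.1 * (p[1] - 0.2) ** 2 + 0.02 * (p[2] - 2.0) ** 2

# three-level full-factorial design in normalized [-1, 1] coordinates
levels = [-1.0, 0.0, 1.0]
X = np.array(list(product(levels, repeat=3)))
params = lows + (X + 1.0) / 2.0 * (highs - lows)
y = np.array([coreflood_mismatch(p) for p in params])

# quadratic response surface: constant, linear, and pure quadratic terms
A = np.column_stack([np.ones(len(X)), X, X ** 2])
coeff, *_ = np.linalg.lstsq(A, y, rcond=None)

# rank parameter influence by the magnitude of linear plus quadratic effects
effects = np.abs(coeff[1:4]) + np.abs(coeff[4:7])
for name, e in sorted(zip(param_names, effects), key=lambda t: -t[1]):
    print(name, round(e, 4))
```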
While the classical approach consists of matching experimental results to obtain calibrated values of certain properties (which are then entered into the reservoir model) and finally determining the influential parameters at the reservoir scale, here the choice was made to determine the influential parameters and characterize their impact at the core scale. This step helps to better constrain the reservoir model. Ongoing work is using the results of this workflow for pilot design and risk analysis.
This paper considers Bayesian methods to discriminate between models depending on posterior model probability. When applying ensemble-based methods for model updating or history matching, the uncertainties in the parameters are typically assumed to be univariate Gaussian random fields. In reality, however, there often might be several alternative scenarios that are possible a priori. We take that into account by applying the concepts of model likelihood and model probability and suggest a method that uses importance sampling to estimate these quantities from the prior and posterior ensembles. In particular, we focus on the problem of conditioning a dynamic reservoir-simulation model to frequent 4D-seismic data (e.g., permanent-reservoir-monitoring data) by tuning the top reservoir surface given several alternative prior interpretations with uncertainty. However, the methodology can easily be applied to similar problems, such as fault location and reservoir compartmentalization. Although the estimated posterior model probabilities will be uncertain, the ranking of models according to estimated probabilities appears to be quite robust.
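The following hedged Python sketch conveys the ranking idea on a toy problem: each prior interpretation's model evidence is approximated by a Monte Carlo average of the data likelihood over its prior ensemble, and Bayes' law converts these into posterior model probabilities. The paper's importance-sampling estimator based on prior and posterior ensembles is replaced here by this simpler stand-in, and the data, forward model, and scenarios are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d_obs = np.array([4.8, 5.1, 5.0])          # hypothetical 4D-seismic attributes
sigma_d = 0.3                              # assumed observation error

def likelihood(d_pred):
    return np.exp(-0.5 * np.sum(((d_obs - d_pred) / sigma_d) ** 2))

def forward(top_surface_shift):
    # toy forward model from a top-surface perturbation to predicted attributes
    return np.array([5.0, 5.0, 5.0]) + top_surface_shift

# two alternative prior interpretations of the top reservoir surface
scenarios = {
    "interpretation_A": rng.normal(0.0, 0.2, size=200),
    "interpretation_B": rng.normal(0.5, 0.2, size=200),
}
prior_prob = {"interpretation_A": 0.5, "interpretation_B": 0.5}

# crude Monte Carlo estimate of the model evidence for each scenario
evidence = {name: np.mean([likelihood(forward(s)) for s in ens])
            for name, ens in scenarios.items()}

unnorm = {name: prior_prob[name] * evidence[name] for name in scenarios}
total = sum(unnorm.values())
post_model_prob = {name: v / total for name, v in unnorm.items()}
print(post_model_prob)
```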
Bayesian inference provides a convenient framework for history matching and prediction. In this framework, prior knowledge, system nonlinearity, and measurement errors can be directly incorporated into the posterior distribution of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior distribution. However, the MCMC method usually requires a large number of forward simulations. Hence, it can be a computationally intensive task, particularly when dealing with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model outputs in the form of polynomials using the stochastic collocation method (SCM). In addition, we use interpolation with nested sparse grids and adaptively take into account the different importance of the parameters for high-dimensional problems. Furthermore, we introduce an additional transform process to improve the accuracy of the surrogate model in cases of strong nonlinearity, such as a discontinuous or nonsmooth relation between the input parameters and the output responses. Once the surrogate system is built, we can evaluate the likelihood with little computational cost. Numerical results demonstrate that the proposed method can efficiently estimate the posterior statistics of the input parameters and provide accurate results for history matching and prediction of the observed data with a moderate number of parameters.
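A minimal Python sketch of the surrogate-accelerated MCMC idea follows: a cheap polynomial surrogate replaces the forward model inside a Metropolis loop. An ordinary least-squares polynomial fit stands in for the sparse-grid stochastic collocation of the paper, and the forward model, prior, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(k):
    # expensive forward model (e.g., flow simulation); toy nonlinear stand-in
    return np.array([np.exp(-k), k ** 2])

# --- offline stage: build the surrogate from a modest number of model runs ---
k_train = np.linspace(0.0, 2.0, 9)                   # collocation-like nodes
d_train = np.array([forward(k) for k in k_train])    # shape (9, 2)
V = np.vander(k_train, 5)                            # degree-4 polynomial basis
coeff, *_ = np.linalg.lstsq(V, d_train, rcond=None)  # one fit per output

def surrogate(k):
    return np.vander(np.atleast_1d(k), 5) @ coeff    # cheap evaluation

# --- online stage: Metropolis sampling with the surrogate likelihood ---
d_obs = forward(0.8) + rng.normal(0.0, 0.02, size=2)
sigma = 0.02

def log_post(k):
    if not 0.0 <= k <= 2.0:                          # uniform prior on [0, 2]
        return -np.inf
    r = (d_obs - surrogate(k).ravel()) / sigma
    return -0.5 * np.sum(r ** 2)

k, chain = 1.0, []
lp = log_post(k)
for _ in range(5000):
    k_new = k + rng.normal(0.0, 0.1)
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:
        k, lp = k_new, lp_new
    chain.append(k)

print(np.mean(chain[1000:]), np.std(chain[1000:]))
```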