An ensemble-based history-matching framework is proposed to enhance the characterization of petroleum reservoirs through the assimilation of crosswell electromagnetic (EM) data. As an advanced reservoir-surveillance technology, crosswell EM tomography can provide a cross-sectional conductivity map, and hence a saturation profile, at the interwell scale by exploiting the sharp contrast in conductivity between hydrocarbons and saline water. Incorporating this new information into reservoir simulation, in combination with other available observations, is therefore expected to enhance the forecasting capability of reservoir models and to lead to better quantification of uncertainty.
The proposed approach applies ensemble-based data-assimilation methods to build a robust and flexible framework under which various sources of available measurements can be readily integrated. Because the assimilation of crosswell EM data can be implemented in different ways (e.g., components of EM fields or inverted conductivity), a comparative study is conducted. The first approach integrates crosswell EM data in their original form, which entails establishing a forward model that simulates the observed EM responses. In this work, the forward model is based on Archie's law, which links fluid properties to formation conductivity, and Maxwell's equations, which describe how EM fields behave given the spatial distribution of conductivity. Alternatively, the formation conductivity obtained from the original EM data through inversion, using an adjoint gradient-based optimization method, can be used for history matching. Because the inverted conductivity is usually high-dimensional and very noisy, an image-oriented distance parameterization utilizing fluid-front information is applied to assimilate the conductivity field efficiently and robustly. Numerical experiments on test cases of increasing complexity are carried out to examine the performance of the proposed integration schemes and the potential of crosswell EM data for improving the estimation of relevant model parameters. The results demonstrate the efficiency of the developed history-matching workflow and the added value of crosswell EM data in enhancing the characterization of reservoir models and the reliability of model forecasts.
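Archie's law, the petrophysical link mentioned above, is simple enough to sketch. The snippet below uses a common form, σ = σ_w φ^m S_w^n; the brine conductivity and the cementation/saturation exponents are illustrative values, not the paper's:

```python
def archie_conductivity(porosity, s_w, sigma_w=5.0, m=2.0, n=2.0):
    """Bulk conductivity (S/m) from a simple form of Archie's law:
    sigma = sigma_w * phi**m * s_w**n. The brine conductivity sigma_w
    and the exponents m, n are illustrative values, not field-calibrated."""
    return sigma_w * porosity**m * s_w**n

# The conductivity contrast that crosswell EM exploits:
swept = archie_conductivity(0.25, 0.9)   # water-swept zone: 0.253125 S/m
oil = archie_conductivity(0.25, 0.2)     # oil-filled zone:  0.0125 S/m
```

Because conductivity scales with S_w to a power near 2, a swept zone is roughly an order of magnitude more conductive than an oil-filled one at the same porosity, which is what makes the tomograms informative about the fluid front.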
It is well known that oil- and gas-field development is a high-risk venture. Uncertainties originating from geological models (e.g., structure, stratigraphy, channels, and geobodies) are coupled with uncertainties of reservoir models (e.g., distribution of permeability and porosity in the reservoir) and uncertainties of economic parameters (e.g., oil and gas prices and costs associated with drilling and other operations). It is critically important to properly quantify the uncertainty of such parameters and their effect on production forecasts and economic evaluations. Recently, multiobjective-optimization techniques have been developed to maximize expectations of some economic indicators (e.g., net present value) and, at the same time, to minimize associated uncertainty or risk. Because of limited access to the subsurface reservoir (e.g., it is impossible to measure the permeability and porosity at the location of each gridblock of a simulation model), reservoir properties have quite large uncertainties.
This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 181611, “Uncertainty Quantification for History-Matching Problems With Multiple Best Matches Using a Distributed Gauss-Newton Method,” by Guohua Gao, SPE, Jeroen C. Vink, SPE, Chaohui Chen, SPE, Mohammadali Tarrahi, SPE, and Yaakoub El Khamra, Shell, prepared for the 2016 SPE Annual Technical Conference and Exhibition, Dubai, 26–28 September. The paper has not been peer reviewed.
The oil and gas industry has been a backbone of the world's economy over the last century and will remain so in the decades to come. With demand increasing and conventional reservoirs depleting, new oil-industry projects have become more complex and expensive, operating in areas previously considered impossible or uneconomical. Good reservoir management is therefore key to the economic success of complex projects, and it requires dependable uncertainty estimates for reliable production forecasts and for optimizing reservoir exploitation. Reservoir history matching plays a key role here, incorporating production, seismic, electromagnetic, and logging data to forecast reservoir development and depletion. With the advances of the last decade, electromagnetic techniques such as crosswell electromagnetic tomography have enabled engineers to map reservoirs more precisely and to understand their evolution. Incorporating this large amount of data efficiently while reducing forecast uncertainty has been one of the key challenges of reservoir management. In particular, computing the field's conductivity distribution by solving the inverse problem, and using it to adjust forecast parameters, has been difficult because of the strong ill-posedness of the inversion and the extensive manual calibration required, which has prevented its inclusion in an efficient history-matching and forecasting workflow. In the presented research, we have developed a novel finite-difference time-domain (FDTD) method for incorporating electromagnetic data directly into the reservoir simulator. Based on an extended Archie relationship, EM simulations are performed for both the forecasted conductivity and the conductivity retrieved from porosity and saturation, and the results are incorporated directly into an update step for the reservoir parameters.
This novel direct-update method has significant advantages: it avoids the expensive and ill-conditioned inversion process, it applies to arbitrary reservoir geometries, and it enables efficient integration with real-field crosswell EM data.
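The FDTD forward model itself is beyond a short sketch, but the kind of direct update step it feeds can be illustrated with a generic Kalman-style ensemble analysis. This is a standard ensemble-assimilation formula, not the authors' exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_update(M, D, d_obs, sigma_d):
    """Kalman-style ensemble analysis: update the parameter ensemble M
    (n_param x n_ens) using simulated responses D (n_data x n_ens),
    observations d_obs, and observation-noise std sigma_d."""
    n_ens = M.shape[1]
    Ma = M - M.mean(axis=1, keepdims=True)          # parameter anomalies
    Da = D - D.mean(axis=1, keepdims=True)          # response anomalies
    C_md = Ma @ Da.T / (n_ens - 1)                  # cross-covariance
    C_dd = Da @ Da.T / (n_ens - 1)                  # response covariance
    C_e = sigma_d**2 * np.eye(len(d_obs))           # obs-error covariance
    K = C_md @ np.linalg.inv(C_dd + C_e)            # Kalman gain
    # Perturbed observations keep the posterior spread statistically correct
    d_pert = d_obs[:, None] + sigma_d * rng.standard_normal(D.shape)
    return M + K @ (d_pert - D)

# Toy check: scalar parameter, linear "forward model" d = 2m, true m = 1
M = rng.standard_normal((1, 200))
M_post = ensemble_update(M, 2.0 * M, np.array([2.0]), sigma_d=0.1)
```

In the abstract's setting, M would hold reservoir parameters and D the simulated EM responses, so no intermediate conductivity inversion is needed.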
The oil and gas industry has a long history of model construction relying on the integration of multiple disciplines, including seismic, sedimentology, geology, and petrophysics. It is undeniable that all of these steps are required to achieve a good characterization of the studied reservoir and to provide reliable, realistic predictions. However, the time needed to finalize each step often stretches the modeling process over several years. This paper therefore discusses an alternative workflow, or Fast-Track modeling approach, which allows a representative, matched reservoir model to be generated in considerably less time. This approach consists of:
This workflow ensures better consistency of petrophysical properties among logs, cores, and models; yields more-representative asset-volume estimates; and avoids the use of nonmeasured parameters such as irreducible water saturation (Swc) or unproven, extensive permeability multipliers. Altogether, this reduces convergence problems and provides a more consistent dynamic-model setup.
This workflow was applied to a complex offshore reservoir consisting of a large gas cap and a significant oil rim. The results and achievements are presented in this paper and demonstrate the suitability of the workflow as a short-term alternative and shortcut to complex simulation modeling while all the necessary studies are completed and integrated into a detailed reservoir model.
Uncertainty in future reservoir performance is usually evaluated from the simulated performance of a small number of reservoir models. Unfortunately, most of the methods for generating reservoir models conditional to production data are known to create a distribution of realizations that is only approximately correct. The adequacy of the approximations is unknown, although several previous investigations of the approximate algorithms have suggested that the distributions of realizations could be badly misleading. In this paper, we evaluate the ability of the various sampling methods to correctly assess the uncertainty in reservoir predictions by comparing the distribution of realizations with a standard distribution from a Markov chain Monte Carlo method.
This study compares the ensembles of realizations from five sampling algorithms for a synthetic, one-dimensional, single-phase flow problem, in order to establish the best algorithm under controlled conditions. The small test problem was chosen so that a large enough number of realizations could be generated from each method to ensure the statistical validity of the comparisons. Five thousand realizations were generated from each of the approximate sampling algorithms. The approximate sampling methods evaluated were linearization about the maximum a posteriori model (the square-root-of-the-covariance-matrix method), randomized maximum likelihood, and two pilot-point methods, with six and nine pilot-point locations. Realizations were also generated by a Markov chain Monte Carlo method with local perturbations, and an attempt was made to generate realizations from a rejection sampling algorithm. The distributions of realizations from the approximate methods were compared with the distributions from the exact methods. While the approximate sampling methods performed relatively well for evaluating uncertainty in average reservoir porosity and effective steady-state permeability, most failed to adequately assess uncertainty in other functions of the reservoir model, such as the distribution of extreme permeability values or the data mismatch. In general, the method of randomized maximum likelihood performed better than the other approximate methods.
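For a linear-Gaussian problem, randomized maximum likelihood reduces to a closed-form minimization, which makes its sampling logic easy to illustrate. This is a toy sketch under those assumptions, not the paper's reservoir implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rml_sample(G, d_obs, m_pr, C_m, C_d):
    """One randomized-maximum-likelihood realization for a linear model
    d = G m: perturb the prior mean and the data, then minimize the
    resulting objective (which has a closed form in the linear case)."""
    m_star = rng.multivariate_normal(m_pr, C_m)     # perturbed prior mean
    d_star = rng.multivariate_normal(d_obs, C_d)    # perturbed observations
    Cm_inv, Cd_inv = np.linalg.inv(C_m), np.linalg.inv(C_d)
    H = Cm_inv + G.T @ Cd_inv @ G                   # Gauss-Newton Hessian
    b = Cm_inv @ m_star + G.T @ Cd_inv @ d_star
    return np.linalg.solve(H, b)

# Toy problem: d = 2m, prior N(0, 1), observation d = 2 with variance 0.01
G = np.array([[2.0]])
samples = np.array([rml_sample(G, np.array([2.0]), np.array([0.0]),
                               np.eye(1), 0.01 * np.eye(1)).item()
                    for _ in range(500)])
```

In this linear-Gaussian case the RML realizations sample the posterior exactly; the nonlinearity of reservoir flow models is what makes the method only approximate in practice.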
The only practical methods for quantifying uncertainty in reservoir performance require the generation of multiple random reservoir models conditional to available data. By simulating the future production from each realization, an empirical distribution of production characteristics is obtained. The validity of this method for quantifying uncertainty depends strongly on the quality of the distribution of reservoir models generated. Methods for sampling from the a posteriori probability density function (pdf) of reservoir flow models conditioned to production data have been widely reported in the literature. Rigorous methods of sampling from the a posteriori distribution for reservoir properties have been applied by Oliver et al.,1 Bonet-Cunha et al.,2 and Omre et al.3 Most other attempts to quantify uncertainty in reservoir performance are based on approximate sampling algorithms. The purpose of this study is to evaluate the distribution of samples from several of these approximate methods. The same assumptions and models were used for all methods in this study, as differences in the model assumptions have made it difficult to draw quantitative conclusions on the reliability of the methods in previous studies.4,5,6
This paper (SPE 50991) was revised for publication from paper SPE 36566, first presented at the 1996 SPE Annual Technical Conference and Exhibition held in Denver, Colorado, 6-9 October. Original manuscript received for review 24 September 1996. Revised manuscript received 10 February 1998. Revised manuscript approved 28 April 1998.
It is necessary to construct and sample the a posteriori probability density functions (pdf's) for the rock-property fields to properly evaluate the uncertainty in reservoir performance predictions. In this work, the a posteriori pdf is constructed from prior means and variograms (covariance functions) for log-permeability and from multiwell pressure data. Within the context of sampling the probability density function, we argue that the notion of equally probable realizations is the wrong paradigm for reservoir characterization. If the simulation of Gaussian random fields with a known variogram is the objective, it is shown that the variogram should not be incorporated directly into the objective function if simulated annealing (SA) is applied either to sample the a posteriori pdf or to estimate a global minimum of the associated objective function. It is shown that the hybrid Markov chain Monte Carlo (MCMC) method provides a way to explore more fully the set of plausible log-permeability fields and does not suffer from the high rejection rates of more standard MCMC methods.
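The rejection behavior that motivates the hybrid MCMC variant comes from the standard accept/reject step, which a minimal random-walk Metropolis sampler makes concrete. This is a generic sketch; the hybrid method discussed in the text augments this basic scheme to lower the rejection rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(log_post, m0, step, n_steps):
    """Minimal random-walk Metropolis sampler for a log a-posteriori
    density log_post; returns the chain and the acceptance rate."""
    m = np.asarray(m0, dtype=float)
    lp = log_post(m)
    chain, n_acc = [], 0
    for _ in range(n_steps):
        m_prop = m + step * rng.standard_normal(m.shape)  # local perturbation
        lp_prop = log_post(m_prop)
        if np.log(rng.random()) < lp_prop - lp:           # accept/reject
            m, lp = m_prop, lp_prop
            n_acc += 1
        chain.append(m.copy())
    return np.array(chain), n_acc / n_steps

# Sanity check: sample a standard normal posterior
chain, acc_rate = metropolis(lambda m: -0.5 * np.sum(m**2),
                             [0.0], step=1.0, n_steps=5000)
```

For high-dimensional permeability fields, naive proposals of this kind are rejected almost always, which is exactly the problem the hybrid method addresses.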
Recently, we have shown that reservoir descriptions conditioned to multiwell pressure data and univariate and bivariate statistics for permeability and porosity can be obtained by techniques developed from inverse problem theory. The techniques yield estimates of well skin factors and porosity and permeability fields that honor both the spatial statistics and the pressure data. Embedded in the methodology is the application of the Gauss-Newton method to construct the maximum a posteriori estimate of the reservoir parameters. If one wishes to determine permeability and porosity values at thousands of gridblocks for use in a reservoir simulator, then inversion of the Hessian matrix at each iteration of the Gauss-Newton procedure becomes computationally expensive. In this work, we present two methods to reparameterize the reservoir model to improve the computational efficiency. The first method uses spectral (eigenvalue/eigenvector) decomposition of the prior model. The second method uses a subspace method to reduce the size of the matrix problem that must be solved at each iteration of the Gauss-Newton method. It is shown that proper implementation of the reparameterization techniques significantly decreases the computational time required to generate realizations of the reservoir model, i.e., the porosity and permeability fields and well skin factors, conditioned to prior information on porosity and permeability and multiwell pressure data.
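The first reparameterization idea, truncating the spectral decomposition of the prior covariance, can be sketched in a few lines. The energy threshold and the test covariance below are generic illustrative choices, not the paper's:

```python
import numpy as np

def spectral_basis(C_m, energy=0.95):
    """Reduced basis from the eigendecomposition of the prior covariance
    C_m: keep the leading modes capturing `energy` of the total variance,
    so that m = m_pr + B @ alpha with a low-dimensional alpha."""
    vals, vecs = np.linalg.eigh(C_m)
    order = np.argsort(vals)[::-1]                  # largest first
    vals, vecs = vals[order], vecs[:, order]
    k = np.searchsorted(np.cumsum(vals) / vals.sum(), energy) + 1
    return vecs[:, :k] * np.sqrt(vals[:k])          # scaled basis B

# Exponential covariance on a 50-cell 1D grid, correlation length 10 cells
n = 50
C = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 10.0)
B = spectral_basis(C)                               # far fewer than n columns
```

Optimizing over the low-dimensional coefficients alpha instead of all gridblock values is what shrinks the Hessian that the Gauss-Newton iteration must invert.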
Proper integration of static data (core, log, seismic, and geologic information) with dynamic data (production and well tests) is critical for reservoir characterization. It is known that ignoring prior information obtained from static data when history matching production data yields nonunique solutions, i.e., widely different estimates of the set of reservoir parameters may all yield an acceptable match of the production history. As early as 1976, Gavalas et al. recognized that incorporating prior data when history matching production data would reduce the variation in the estimates of gridblock values of porosity and permeability.
Inverse problem theory provides a methodology for incorporating prior information when history matching production data. The standard application of inverse problem theory depends on the assumption that prior information on the model (the set of reservoir parameters to be estimated) satisfies a multinormal distribution and that measurement errors in production data can be treated as Gaussian random variables with zero mean and known variance. Under these assumptions, the most probable model conditioned to both prior information and production data can be obtained by minimizing an objective function derived directly from the a posteriori probability density function. Because the a posteriori probability density function is derived from Bayes's theorem, this approach is often referred to as Bayesian estimation. It is convenient to minimize the objective function with a gradient method to obtain an approximation to the most probable model, which is referred to as the maximum a posteriori estimate.
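Under these Gaussian assumptions, the objective function derived from the a posteriori pdf takes the standard form of Bayesian inverse theory (generic notation, not the paper's):

```latex
O(m) = \frac{1}{2}\,(m - m_{\mathrm{pr}})^{T} C_{M}^{-1}\,(m - m_{\mathrm{pr}})
     + \frac{1}{2}\,\bigl(g(m) - d_{\mathrm{obs}}\bigr)^{T} C_{D}^{-1}\,\bigl(g(m) - d_{\mathrm{obs}}\bigr)
```

where $m_{\mathrm{pr}}$ is the prior mean, $C_{M}$ the prior covariance, $g(m)$ the simulated production response, $d_{\mathrm{obs}}$ the observed data, and $C_{D}$ the data-error covariance; the minimizer of $O(m)$ is the maximum a posteriori estimate.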
Gavalas et al. used Gaussian-type expressions for the covariance functions of porosity and permeability, the cross covariance between them, and the prior estimates of the means of porosity and permeability to incorporate prior information into the objective function when history matching multiwell pressure data from a synthetic one-dimensional reservoir under single-phase flow conditions. They showed that incorporating the prior information reduced the errors in the estimates of permeability and porosity and also improved the convergence properties of the minimization algorithms considered.