Hui, Mun-Hong (Chevron Energy Technology Company) | Dufour, Gaelle (Chevron Energy Technology Company) | Vitel, Sarah (Chevron Energy Technology Company) | Muron, Pierre (Chevron Energy Technology Company) | Tavakoli, Reza (Chevron Energy Technology Company) | Rousset, Matthieu (Chevron Energy Technology Company) | Rey, Alvaro (Chevron Energy Technology Company) | Mallison, Bradley (Chevron Energy Technology Company)
Traditionally, fractured reservoir simulations use Dual-Porosity, Dual-Permeability (DPDK) models that can idealize fractures and misrepresent connectivity. The Embedded Discrete Fracture Modeling (EDFM) approach improves flow predictions by integrating a realistic fracture network grid within a structured matrix grid. However, small fracture cells with high conductivity that pose a challenge for simulators can arise, and ad hoc strategies to remove them can alter connectivity or fail for field-scale cases. We present a new gridding algorithm that controls the geometry and topology of the fracture network while enforcing a lower bound on the fracture cell sizes. It honors connectivity and systematically removes cells below a chosen fidelity factor. Furthermore, we implemented a flexible grid coarsening framework based on aggregation and flow-based transmissibility upscaling to convert EDFMs to various coarse representations for simulation speedup. Here, we consider pseudo-DPDK (pDPDK) models to evaluate potential DPDK inaccuracies and the impact of strictly honoring EDFM connectivity via Connected Component within Matrix (CCM) models. We combine these components into a practical workflow that can efficiently generate upscaled EDFMs from stochastic realizations of thousands of geologically realistic natural fractures for ensemble applications.
We first consider a simple waterflood example to illustrate our fracture upscaling to obtain coarse (pDPDK and CCM) models. The coarse simulation results show biases consistent with the underlying assumptions (e.g., pDPDK can over-connect fractures). Preserving fracture connectivity via the CCM aggregation strategy provides better accuracy relative to the fine EDFM forecast while maintaining computational speedup. We then demonstrate the robustness of the proposed EDFM workflow for practical studies through application to an improved oil recovery (IOR) study for a fractured carbonate reservoir. Our automatable workflow enables quick screening of many possibilities since the generation of full-field grids (comprising almost a million cells) and their preprocessing for simulation complete in a few minutes per model. The EDFM simulations, which account for complicated multiphase physics, can generally be performed within hours, while coarse simulations are several times faster. The comparison of ensemble fine and coarse simulation results shows that, on average, a DPDK representation can lead to high upscaling errors in well oil and water production as well as breakthrough time, while a more advanced strategy like CCM provides greater accuracy. Finally, we illustrate the use of the Ensemble Smoother with Multiple Data Assimilation (ESMDA) approach to account for field-measured data and provide an ensemble of history-matched models with calibrated properties.
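The ESMDA approach mentioned above performs several smoother updates, each with the observation-error covariance inflated by a factor alpha (the inverses of the alphas summing to one). A minimal sketch under assumed toy values follows; the function name, the identity forward model, and all numbers are illustrative, not from the paper:

```python
import numpy as np

def esmda(m_ens, forward, d_obs, C_d, alphas):
    """Ensemble Smoother with Multiple Data Assimilation (sketch).
    m_ens : (n_params, n_ens) prior ensemble; forward maps it to predicted data.
    Performs len(alphas) updates; sum(1/alpha) should equal 1."""
    rng = np.random.default_rng(0)
    n_ens = m_ens.shape[1]
    for a in alphas:
        D = forward(m_ens)                              # predicted data (n_data, n_ens)
        Mp = m_ens - m_ens.mean(axis=1, keepdims=True)  # parameter anomalies
        Dp = D - D.mean(axis=1, keepdims=True)          # data anomalies
        C_md = Mp @ Dp.T / (n_ens - 1)                  # cross-covariance
        C_dd = Dp @ Dp.T / (n_ens - 1)                  # data covariance
        K = C_md @ np.linalg.inv(C_dd + a * C_d)        # gain with inflated noise
        noise = rng.multivariate_normal(np.zeros(len(d_obs)), a * C_d, n_ens).T
        m_ens = m_ens + K @ (d_obs[:, None] + noise - D)
    return m_ens
```

With a linear forward model, repeated inflated updates are statistically equivalent to a single smoother update, which is the motivation for the multiple-data-assimilation scheme.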
Ensemble-based algorithms have been successfully implemented for history matching of geological models. However, their performance is optimal only if the prior-state vector is linearly related to the predicted data and if the joint distribution of the prior-state vector is multivariate Gaussian. Moreover, the number of degrees of freedom is only as large as the ensemble size, so the assimilation of large amounts of production or seismic data may lead to ensemble collapse, which results in inaccurate predictions of future performance. In this paper, we introduce a methodology that combines model classification with multidimensional scaling (MDS) and the ensemble smoother algorithm to efficiently history match fluvial and channelized reservoir models. The dynamic responses (production and seismic data) of the different ensemble members are used to compute a dissimilarity matrix. This dissimilarity matrix is then transformed into a lower-dimensional space by the use of MDS. Model classification is then performed based on the distances between the mapped responses in the lower-dimensional space and the actual observed response. In the proposed method, the transformed lower-dimensional data are used instead of the original observations in the update equation to update the cluster of ensemble members that is closest to the observed response. In this manner, a limited number of ensemble members is enough to assimilate large amounts of observed data without triggering the ensemble collapse problem. The updated subset of models (cluster) is used to infer a probability map and/or new hard conditioning data to resample new conditional members for the next iteration or next data-assimilation step. The proposed algorithm is tested by assimilating production and time-lapse seismic data into channelized reservoir models.
The presented computational results show significant improvements in terms of preserving channelized features and in terms of reliability of predictions compared to the standard implementation of ensemble-based algorithms.
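The dissimilarity-matrix transformation described above is classical multidimensional scaling. A minimal sketch follows, with hypothetical response values standing in for the dynamic responses of the ensemble members:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n-by-n dissimilarity matrix D into k dimensions
    via classical (Torgerson) multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:k]         # largest eigenvalues first
    L = np.sqrt(np.maximum(eigvals[order], 0.0))  # guard against tiny negatives
    return eigvecs[:, order] * L                  # n-by-k coordinates

# Hypothetical "dynamic responses" of 4 ensemble members; in practice the
# dissimilarities would come from production/seismic misfits.
responses = np.array([[0.0, 1.0], [0.1, 1.1], [3.0, 0.2], [3.1, 0.1]])
D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=2)
coords = classical_mds(D, k=2)
```

For Euclidean dissimilarities, the pairwise distances in the embedded space reproduce the entries of D exactly when k matches the intrinsic dimension of the responses.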
Applying an ensemble Kalman filter (EnKF) is an effective method for reservoir history matching. The underlying principle is that an initial ensemble of stochastic models can be progressively updated to reflect measured values as they become available. The EnKF performance is only optimal, however, if the prior-state vector is linearly related to the predicted data and if the joint distribution of the prior-state vector is multivariate Gaussian. Therefore, it is challenging to implement the filtering scheme for non-Gaussian random fields, such as channelized reservoirs, in which the continuity of permeability extremes is well-preserved. In this paper, we develop a methodology by combining model classification with multidimensional scaling (MDS) and the EnKF to rapidly update models of a channelized reservoir. A dissimilarity matrix is computed by use of the dynamic responses of ensemble members. This dissimilarity matrix is transformed into a lower-dimensional space by use of MDS. Responses mapped in the lower-dimensional space are clustered, and on the basis of the distances between the models in a cluster and the actual observed response, the models closest to the observed response are retrieved. Model updates within the closest cluster are performed using EnKF equations. The results of an update are used to resample new models for the next step. Two-dimensional waterflooding examples of channelized reservoirs are provided to demonstrate the applicability of the proposed method. The obtained results demonstrate that the proposed algorithm is viable both for sequentially updating reservoir models and for preserving channel features after the data-assimilation process.
Ensemble Kalman filtering (EnKF) is an effective method for reservoir history matching. The underlying principle is that an initial ensemble of stochastic models can be progressively updated to reflect the measured values as they become available. However, the ensemble Kalman filter updating scheme restricts the method to multivariate Gaussian random fields. Therefore, it is challenging to implement the filtering scheme for non-Gaussian random fields such as reservoirs where there is a strong contrast in permeability between different geological facies. In this paper, we develop a methodology that combines multidimensional scaling and the ensemble Kalman filter to rapidly update models of a channelized reservoir. A dissimilarity matrix is computed using the dynamic responses of ensemble members. This dissimilarity matrix is transformed into a lower-dimensional space using multidimensional scaling. The responses mapped in the lower-dimensional space are clustered, and based on the distances between the models in a cluster and the actual observed response, the models closest to the observed response are retrieved. Updating of models within the closest cluster is performed using EnKF equations. The results of the update are used to resample new models for the next step. A two-dimensional waterflooding example of a channelized reservoir is set up to demonstrate the applicability of the proposed method. The obtained results demonstrate that the proposed algorithm is viable for sequentially updating reservoir models and for preserving the channel features after the data-assimilation process.
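The classification step in the abstracts above, retrieving the cluster of members closest to the mapped observed response, can be sketched as follows; the coordinates, labels, and function name are illustrative and assume the clustering itself has already been done:

```python
import numpy as np

def closest_cluster(coords, labels, obs_coord):
    """Return indices of the ensemble members belonging to the cluster
    whose centroid lies closest to the mapped observed response."""
    best, best_dist = None, np.inf
    for c in np.unique(labels):
        centroid = coords[labels == c].mean(axis=0)   # cluster center in MDS space
        dist = np.linalg.norm(centroid - obs_coord)   # distance to observation
        if dist < best_dist:
            best, best_dist = c, dist
    return np.flatnonzero(labels == best)

# Toy 2-D MDS coordinates for 6 members in two pre-computed clusters
coords = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1],
                   [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
labels = np.array([0, 0, 0, 1, 1, 1])
obs = np.array([0.05, 0.0])                           # mapped observed response
members = closest_cluster(coords, labels, obs)
```

Only the returned subset of members would then be passed to the EnKF update equations, which is what limits ensemble collapse in the proposed method.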
The ensemble Kalman filter (EnKF) has been successfully implemented to assimilate data in reservoir history matching problems. In the EnKF method, a suite of reservoir models (set of ensemble members) runs independently forward in time (forecast step), and is continuously updated as new data becomes available (analysis step). In this paper, an efficient implementation of the EnKF is presented in which three-level parallelization is employed.
The first level of parallelization is during the forecast step, where each ensemble member runs on a separate processor. This is very efficient for a large number of ensemble members, but without additional parallelization, the memory of a single processor constrains the size of the reservoir simulation. Therefore, a second level of parallelization which uses a parallel reservoir simulator for each realization is implemented. The analysis step requires collecting a state vector from each ensemble member. If this data is collected on a single processor, this poses an additional limitation on the size of the EnKF problem in terms of both memory and computation time. Therefore, we propose an algorithm in which a third level of parallelization is achieved for the analysis step. The main computational gain of parallelization of the analysis step comes from the fact that the matrix-vector multiplications can be parallelized efficiently.
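As a minimal illustration of why the analysis-step matrix-vector products parallelize well, a row-block partitioning can be sketched in serial NumPy; in the actual implementation each block would live on a separate processor (e.g., under MPI), and the function name and sizes here are illustrative:

```python
import numpy as np

def distributed_matvec(A, x, n_workers=4):
    """Row-block matrix-vector product: each 'worker' owns a contiguous
    block of rows of A and computes its slice of A @ x independently,
    followed by a gather of the partial results."""
    blocks = np.array_split(A, n_workers, axis=0)  # one row block per worker
    partials = [B @ x for B in blocks]             # embarrassingly parallel step
    return np.concatenate(partials)                # gather step

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))                    # stand-in for an analysis-step matrix
x = rng.standard_normal(5)
y = distributed_matvec(A, x)
```

Because the row blocks share only the (small) vector x, communication is limited to the final gather, which is why the analysis step scales well with this decomposition.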
The parallel EnKF algorithm is applied to a set of reservoir history matching problems. The effect of ensemble sizes on the history matching results is investigated. We present computational results that show the efficiency is greatly enhanced by moving from a serial to a parallel implementation of the EnKF. The initial testing of parallel EnKF has been done on the massively parallel machines Ranger and Lonestar, at the Texas Advanced Computing Center (TACC), and the cluster Bevo2, at the Institute for Computational Engineering and Sciences (ICES) at the University of Texas at Austin.
In recent years the EnKF has been gaining popularity in the petroleum engineering literature (as summarized by Aanonsen et al. (2009)) and has been employed as a reservoir management tool for sequential data assimilation and assessment of uncertainty in future forecasts (Gu and Oliver, 2005; Gao et al., 2006; Evensen et al., 2007; Skjervheim et al., 2007). Evensen (1994) introduced the EnKF algorithm as a better alternative to solving the computationally extremely demanding equation used in the extended Kalman filter. The EnKF algorithm is a Monte Carlo formulation of the Kalman filter, in which an ensemble of reservoir models is used to estimate the correlation between predicted data (such as production rates and bottom-hole pressures) and reservoir variables (static model parameters such as porosity and permeability, as well as dynamic variables such as pressure and saturation for two-phase flow). In addition, the ensemble is used to provide an estimate of uncertainty in future reservoir performance.
The sequential EnKF algorithm consists of a forecast step and an analysis (update) step. The forecast step uses a reservoir simulator for each of the ensemble members to predict data at the next data assimilation time step. The analysis step uses the new measurements and the Kalman update equation to update the model parameters and simulation variables. This process is continuously repeated in time as new dynamic data (production data) becomes available.
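The analysis (update) step described above can be sketched as the standard perturbed-observation Kalman update, with the gain estimated from the ensemble; the array shapes, the fixed random seed, and the function name are illustrative choices, not from the paper:

```python
import numpy as np

def enkf_analysis(M, D, d_obs, R):
    """One EnKF analysis step (sketch).
    M : (n_state, n_ens) forecast ensemble of state vectors
    D : (n_data, n_ens) predicted data for each member
    d_obs : (n_data,) observations; R : (n_data, n_data) obs-error covariance
    """
    n_ens = M.shape[1]
    Mp = M - M.mean(axis=1, keepdims=True)        # state anomalies
    Dp = D - D.mean(axis=1, keepdims=True)        # data anomalies
    C_md = Mp @ Dp.T / (n_ens - 1)                # state-data cross-covariance
    C_dd = Dp @ Dp.T / (n_ens - 1)                # data covariance
    K = C_md @ np.linalg.inv(C_dd + R)            # Kalman gain
    rng = np.random.default_rng(0)
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=n_ens).T    # perturbed observations
    return M + K @ (D_obs - D)                    # updated ensemble
```

The forecast step then simply advances each updated member with the reservoir simulator until the next assimilation time, at which point this update is applied again.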
Even though the standard implementation of the EnKF algorithm is far more efficient for dynamic data assimilation than automatic history matching based on optimization algorithms, the sequential run of reservoir simulation for each of the ensemble members is still computationally demanding and time consuming, especially for large-scale numerical models. In addition, the memory of a single processor dictates the size of a reservoir model (an ensemble member) that can be used in practice. For these reasons, it is advantageous to parallelize the EnKF application. In this work, our main focus is on development and implementation of a parallel EnKF framework for assimilation of production data.
In gradient-based automatic history matching, calculation of the derivatives (sensitivities) of all production data with respect to gridblock rock properties and other model parameters is not feasible for large-scale problems. Thus, the Gauss-Newton (GN) method and Levenberg-Marquardt (LM) algorithm, which require calculation of all sensitivities to form the Hessian, are seldom viable. For such problems, the quasi-Newton and nonlinear conjugate gradient algorithms present reasonable alternatives because these two methods do not require explicit calculation of the complete sensitivity matrix or the Hessian. Another possibility, the one explored here, is to define a new parameterization to radically reduce the number of model parameters.
We provide a theoretical argument that indicates that reparameterization based on the principal right singular vectors of the dimensionless sensitivity matrix provides an optimal basis for reparameterization of the vector of model parameters. We develop and illustrate algorithms based on this parameterization. Like limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS), these algorithms avoid explicit computation of individual sensitivity coefficients. Explicit computation of the sensitivities is avoided by using a partial singular value decomposition (SVD) based on a form of the Lanczos algorithm. At least for all synthetic problems that we have considered, the reliability, computational efficiency, and robustness of the methods presented here are as good as those obtained with quasi-Newton methods.
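A minimal sketch of the reparameterization idea can be given with SciPy's iterative partial SVD (`svds`, a Lanczos-type method) applied to a matrix-free operator, so that only products with the sensitivity matrix and its transpose are needed, never the matrix itself. The toy sizes, the dense G, and the names `m_prior`/`alpha` are assumptions for illustration:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

# Toy dimensionless sensitivity matrix G (n_data x n_params). In practice G is
# never formed explicitly -- only the actions G @ v and G.T @ v are available
# (e.g., from forward and adjoint simulations), which is all svds requires.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 200))

Gop = LinearOperator(G.shape,
                     matvec=lambda v: G @ v,      # forward action
                     rmatvec=lambda v: G.T @ v,   # adjoint action
                     dtype=np.float64)

k = 5                                # number of retained singular triplets
U, s, Vt = svds(Gop, k=k)            # partial SVD of the k largest triplets
V_k = Vt.T                           # principal right singular vectors (200 x k)

# Reparameterize the model update as m = m_prior + V_k @ alpha:
# k free parameters instead of 200.
alpha = rng.standard_normal(k)
m_update = V_k @ alpha
```

Searching only in the span of V_k is what reduces the optimization to k parameters while capturing the directions to which the data are most sensitive.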
In gradient-based automatic history matching, calculation of the derivatives of all production data with respect to gridblock rock properties (sensitivities) and other model parameters is not feasible for large-scale problems. Thus, the Gauss-Newton method and Levenberg-Marquardt algorithm, which require calculation of all sensitivities to form the Hessian, are seldom viable. For such problems, the quasi-Newton and nonlinear conjugate gradient algorithms present reasonable alternatives, as these two methods do not require explicit calculation of the complete sensitivity matrix or the Hessian. Another possibility, the one explored here, is to define a new parameterization to radically reduce the number of model parameters.
We provide a theoretical argument which indicates that reparameterization based on the principal right singular vectors of the dimensionless sensitivity matrix provides an optimal basis for reparameterization of the vector of model parameters. We present and apply to example problems two algorithms for using this parameterization. Like LBFGS, these algorithms avoid explicit computation of individual sensitivity coefficients. Explicit computation of the sensitivities is avoided by using a partial singular value decomposition based on a form of the Lanczos algorithm. At least for all synthetic problems that we have considered, the reliability, computational efficiency, and robustness of the methods presented here are as good as those obtained with quasi-Newton methods.