Reliability of subsurface assessment for different field development scenarios depends on how effectively the uncertainty in production forecasts is quantified. There is a substantial body of work in the literature on methods to quantify this uncertainty. The objective of this paper is to revisit and compare these probabilistic uncertainty quantification techniques through their application to assisted history matching of a deep-water offshore waterflood field. The paper addresses the benefits, limitations, and criteria for applicability of each technique.
Three probabilistic history matching techniques commonly practiced in the industry are discussed: Design of Experiments (DoE) with rejection sampling from a proxy, the Ensemble Smoother (ES), and the Genetic Algorithm (GA). The model used for this study is an offshore waterflood field in the Gulf of Mexico. Posterior distributions of global subsurface uncertainties (e.g., regional pore volume and oil-water contact) were estimated with each technique, conditioned to the injection and production data.
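The DoE-with-proxy workflow named above can be illustrated with a minimal sketch: fit a cheap proxy to a handful of designed simulator runs, then draw posterior samples by rejection sampling against a Gaussian likelihood of the data mismatch. All names, values, and the one-parameter "simulator" below are hypothetical, not taken from the field study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-parameter example: response vs. an uncertain pore-volume
# multiplier x, observed with noise (all values illustrative).
def simulator(x):
    return 100.0 * x + 5.0 * x**2   # stand-in for a full reservoir simulation

d_obs, sigma = 110.0, 3.0           # observed production datum and its noise std

# 1) Design of Experiments: a handful of simulator runs at chosen design points
x_design = np.linspace(0.5, 1.5, 7)
y_design = simulator(x_design)

# 2) Fit a cheap proxy (here, a quadratic polynomial) to the DoE runs
proxy = np.poly1d(np.polyfit(x_design, y_design, deg=2))

# 3) Rejection sampling from the proxy: accept prior samples with probability
#    proportional to the Gaussian likelihood of the data mismatch
x_prior = rng.uniform(0.5, 1.5, size=20000)
mismatch = (proxy(x_prior) - d_obs) ** 2 / (2.0 * sigma**2)
accept = rng.uniform(size=x_prior.size) < np.exp(-mismatch)
x_post = x_prior[accept]

print(f"accepted {x_post.size} of {x_prior.size}, "
      f"posterior mean x = {x_post.mean():.3f}")
```

Because the proxy is evaluated instead of the simulator, tens of thousands of prior samples can be screened at negligible cost, which is what makes rejection sampling practical in this workflow.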
The three probabilistic history matching techniques were applied to a deep-water field with 13 years of production history. The first 8 years of production data were used for history matching and for estimating the posterior distribution of uncertainty in geologic parameters. While the convergence behavior and the shapes of the posterior distributions differed, consistent posterior means were obtained from the Bayesian workflows (DoE and ES). In contrast, the GA produced different posterior distributions of the geological uncertainty parameters, especially for parameters with small sensitivity to the production data. We then conducted production forecasts that included infill wells and evaluated production performance using sample means of the posterior geologic uncertainty parameters. The robustness of the solutions was examined by repeating the history matching with different initial sample points (i.e., different random seeds). This confirmed that heuristic optimization techniques such as the GA were unstable, since the parameter setup of the optimizer had a large impact on both uncertainty characterization and predicted production performance.
This study provides guidelines for obtaining stable solutions from these history matching techniques under different conditions, such as the number of simulation model realizations, the number of uncertainty parameters, and the number of data points (e.g., the maturity of the reservoir development). These guidelines support the decision-making process when selecting among development options.
Hjeij, Dawood (Division of Sustainable Development, College of Science and Engineering, Hamad Bin Khalifa University) | Abushaikha, Ahmad (Division of Sustainable Development, College of Science and Engineering, Hamad Bin Khalifa University)
Most commercially available simulators use the simple two-point flux approximation (TPFA) method for flux computation. However, the TPFA gives consistent solutions only on K-orthogonal grids. In general, multi-point flux approximation (MPFA) methods perform better under both heterogeneous and anisotropic conditions. The mimetic finite difference (MFD) method is designed to preserve properties on unstructured polyhedral grids, and its development for simulating full-tensor permeabilities is also a crucial step. This paper compares the performance, accuracy, and efficiency of these schemes for simulating complex synthetic and realistic hydrocarbon reservoirs.
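The TPFA construction mentioned above can be sketched in a few lines: each cell contributes a half-transmissibility built from its permeability tensor and the geometry of the shared face, and the face transmissibility is their harmonic average. The function names and the two-cell example are illustrative, not from any particular simulator.

```python
import numpy as np

def half_transmissibility(K, c_to_f, area_normal):
    """TPFA half-transmissibility for one cell/face pair.

    K           : cell permeability tensor (2x2 or 3x3)
    c_to_f      : vector from cell centroid to face centroid
    area_normal : outward face normal scaled by face area
    """
    d = np.asarray(c_to_f, float)
    return float(d @ K @ np.asarray(area_normal, float)) / float(d @ d)

def face_transmissibility(T1, T2):
    # Harmonic average of the two half-transmissibilities
    return 1.0 / (1.0 / T1 + 1.0 / T2)

# Two unit cells sharing a face at x = 1, isotropic permeabilities 1 and 4
K1, K2 = np.eye(2), 4.0 * np.eye(2)
n = np.array([1.0, 0.0])                  # unit face area, normal in +x
T1 = half_transmissibility(K1, np.array([0.5, 0.0]), n)
T2 = half_transmissibility(K2, np.array([-0.5, 0.0]), -n)
T = face_transmissibility(T1, T2)
print(T)  # 1.6
```

The scheme's K-orthogonality restriction is visible here: only the component of `K @ area_normal` along the centroid-to-face vector enters, so when the grid is not aligned with the permeability tensor, the discarded transverse component makes the flux inconsistent.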
A challenging problem of automated history-matching workflows is ensuring that, after applying updates to previous models, the resulting history-matched models remain geologically consistent. Several related papers address this and neighboring problems:
- an adaptive localization scheme that exploits the correlations between model variables and observations, to enhance the applicability of localization to various history-matching problems;
- a novel approach to generate approximate conditional realizations using the distributed Gauss-Newton (DGN) method together with a multiple local Gaussian approximation technique;
- a systematic and rigorous approach of reservoir decomposition combined with the ensemble Kalman smoother, to overcome the complexity and computational burden of history matching field-scale reservoirs in the Middle East;
- a comparison of existing workflows together with a practically driven approach, referred to as "drill and learn," that uses elements and concepts from existing workflows to quantify the value of learning (VOL).
High-resolution discretizations can be advantageous in compositional simulation to reduce excessive numerical diffusion that tends to mask shocks and fingering effects. In this work, we outline a fully implicit, dynamic, multilevel, high-resolution simulator for compositional problems on unstructured polyhedral grids. We rely on four ingredients: (i) sequential splitting of the full problem into a pressure and a transport problem, (ii) ordering of grid cells based on intercell fluxes to localize the nonlinear transport solves, (iii) higher-order discontinuous Galerkin (dG) spatial discretization with order adaptivity for the component transport, and (iv) a dynamic coarsening and refinement procedure. For purely cocurrent flow, and in the absence of capillary forces, the nonlinear transport system can be permuted to a lower block-triangular form. With counter-current flow caused by gravity or capillary forces, the nonlinear system of discrete transport equations will contain larger blocks of mutually dependent cells on the diagonal. In either case, the transport subproblem can be solved efficiently cell-by-cell or block-by-block because of the natural localization in the dG scheme. In addition, we discuss how adaptive grid and order refinement can effectively improve accuracy. We demonstrate the applicability of the proposed solver through a number of examples, ranging from simple conceptual problems with PEBI grids in two dimensions, to realistic reservoir models in three dimensions. We compare our new solver to the standard upstream-mobility-weighting scheme and to a second-order WENO scheme.
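Ingredient (ii) above, the flux-based cell ordering, amounts to a topological sort of the cells with respect to the direction of intercell flow, so that each cell can be solved after all of its upstream neighbors. The sketch below uses Kahn's algorithm and assumes purely cocurrent flow (an acyclic flow graph), the triangular case described in the abstract; function and variable names are my own.

```python
from collections import defaultdict, deque

def flux_ordering(n_cells, faces, flux):
    """Topologically order cells so each cell is solved after its upstream
    neighbors. faces[i] = (c1, c2); flux[i] > 0 means flow from c1 to c2.
    Assumes purely cocurrent flow (no cycles), i.e. the triangular case."""
    downstream = defaultdict(list)
    indeg = [0] * n_cells
    for (c1, c2), q in zip(faces, flux):
        if q == 0.0:
            continue
        up, down = (c1, c2) if q > 0 else (c2, c1)
        downstream[up].append(down)
        indeg[down] += 1
    queue = deque(c for c in range(n_cells) if indeg[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for d in downstream[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return order  # solve transport cell-by-cell in this order

# Four cells in a line with flow 0 -> 1 -> 2 -> 3
print(flux_ordering(4, [(0, 1), (1, 2), (2, 3)], [1.0, 1.0, 1.0]))  # [0, 1, 2, 3]
```

With counter-current flow, the flow graph contains cycles; in that case the strongly connected components take the place of single cells, giving the larger diagonal blocks the abstract mentions.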
Accurate numerical modeling of fluid transport is essential in reservoir management. Higher-order methods help to improve accuracy by reducing the numerical diffusion that is common to all first-order methods. In this paper, we present an implementation of a MUSCL-type second-order finite volume method and demonstrate its capabilities on 2D and 3D unstructured grids, including the corner-point grids that are typically used in reservoir modeling.
A second-order finite volume method is compared to the standard first-order method in terms of accuracy, performance, and the ability to handle nonlinearities. There are several ways to build a second-order finite volume method. In this paper we choose an optimization-based strategy that computes the steepest possible linear reconstruction. A steepness-limiting procedure is included in the optimization as a constraint, which ensures that the steepest reconstruction that does not lead to oscillations is computed. As a result, sharper fronts are obtained compared to standard schemes.
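To make the limited-reconstruction idea concrete, here is a minimal 1D sketch using the classic minmod limiter rather than the optimization-based reconstruction described in the paper: slopes are reconstructed per cell, limited to avoid new extrema, and used in an upwind flux. Everything below is an illustrative toy, not the OPM implementation.

```python
import numpy as np

def minmod(a, b):
    # Classic slope limiter: zero at extrema, smallest-magnitude slope otherwise
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_step(u, v, dx, dt):
    """One explicit MUSCL update for linear advection u_t + v u_x = 0 (v > 0)
    with periodic boundaries. Limited linear reconstruction sharpens fronts
    compared to the piecewise-constant first-order scheme."""
    du_left = u - np.roll(u, 1)           # backward differences
    du_right = np.roll(u, -1) - u         # forward differences
    slope = minmod(du_left, du_right)     # limited cell slope
    u_face = u + 0.5 * slope              # reconstructed value at right face
    flux = v * u_face                     # upwind flux (v > 0)
    return u - dt / dx * (flux - np.roll(flux, 1))

# Advect a square pulse one step; the scheme is conservative by construction
u0 = np.zeros(100)
u0[40:60] = 1.0
u1 = muscl_step(u0, v=1.0, dx=0.01, dt=0.005)
print(abs(u1.sum() - u0.sum()) < 1e-12)  # True
```

The limiter is what keeps second-order accuracy from producing oscillations: at a discontinuity the backward and forward differences disagree in sign, the slope collapses to zero, and the scheme falls back to first-order upwinding locally.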
The paper demonstrates the described method on several benchmark cases, with emphasis on test cases relevant for practical reservoir simulation. In particular, we use the Norne field open data set, which enables cross-validation with other implementations. We test the method on a transport case with a known analytical solution to verify convergence behavior and to isolate the errors. Furthermore, the performance of the first- and second-order methods is compared on multiphase flow problems typical for improved oil recovery: solvent and CO2 injection. The second-order method shows superior accuracy.
This paper verifies the desirable properties of higher-order methods for reservoir simulation. Moreover, all the described implementations are available in the open-source reservoir simulator Open Porous Media (OPM). As a result, these methods are accessible to reservoir engineers and can be used with industry-standard modeling setups.
Modelling multiscale, multiphysics geology at field scales is non-trivial due to limits on computational resources and data availability. At such scales it is common to use implicit modelling approaches, as they remain a practical method of understanding the first-order processes of complex systems. In this work we introduce a numerical framework for the simulation of geomechanical dual-continuum materials. Our framework is written as part of the open-source MATLAB Reservoir Simulation Toolbox (MRST). We discretise the flow and mechanics problems using the finite volume method (FVM) and the virtual element method (VEM), respectively. The result is a framework that ensures local mass conservation for the flow and is robust with respect to gridding. Solution of the coupled linear system can be achieved with either a fully coupled or a fixed-stress split solution strategy. We demonstrate our framework on an analytical comparison case and on a 3D geological grid case. In the former we observe a good match between analytical and numerical results for both the fully coupled and fixed-stress split strategies. In the latter, the geological model is gridded using a corner-point grid that contains degenerate cells as well as hanging nodes. For the geological case, we observe physically plausible and intuitive results given the boundary conditions of the problem. Our initial testing suggests that the FVM-VEM discretisation has potential for conducting practical geomechanical studies of multiscale systems.
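The fixed-stress split mentioned above can be illustrated on a single-cell poroelastic toy problem: the flow equation is solved with the volumetric stress held fixed (which adds a stabilization term alpha^2/K_dr to the flow diagonal), then the mechanics equation is solved, and the two are iterated. This is a conceptual sketch in Python, not the MRST implementation, and all coefficient values are illustrative.

```python
def fixed_stress_solve(c_t, alpha, K_dr, b_f, b_m, tol=1e-10, max_iter=100):
    """Fixed-stress split on a single-cell poroelastic toy problem:
        flow:      c_t * p + alpha * e = b_f
        mechanics: K_dr * e - alpha * p = b_m
    where p is pressure and e is volumetric strain. Holding the volumetric
    stress fixed during the flow solve introduces the stabilization term
    alpha**2 / K_dr, which makes the split unconditionally convergent.
    """
    p, e = 0.0, 0.0
    stab = alpha**2 / K_dr
    for i in range(max_iter):
        # Flow solve with lagged strain and the fixed-stress stabilization
        p_new = (b_f - alpha * e + stab * p) / (c_t + stab)
        # Mechanics solve with the updated pressure
        e_new = (b_m + alpha * p_new) / K_dr
        if abs(p_new - p) < tol and abs(e_new - e) < tol:
            return p_new, e_new, i + 1
        p, e = p_new, e_new
    return p, e, max_iter

p, e, iters = fixed_stress_solve(c_t=1.0, alpha=0.8, K_dr=2.0, b_f=1.0, b_m=0.0)
print(p, e, iters)
```

The alternative named in the abstract, the fully coupled strategy, would instead assemble and solve the 2x2 (in general, block) linear system in one shot; the split trades that monolithic solve for cheaper, smaller solves per iteration.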
Ensemble-based methods (especially various forms of iterative ensemble smoothers) have proven effective in calibrating multiple reservoir models so that they are consistent with historical production data. However, due to the complex nature of hydrocarbon reservoirs, the model calibration is never perfect; the model is always a simplified version of reality, with coarse representation and unmodeled physical processes. This flaw, which causes a mismatch between actual observations and simulated data even when 'perfect' model parameters are used as input, is known as 'model error'. Assimilating data without accounting for model error can result in incorrect adjustments to model parameters, underestimation of prediction uncertainties, and bias in forecasts.
In this paper, we investigate the benefit of recognising and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated 'total error' (the combination of model error and observation error) is estimated from the data residual after a standard history matching with the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the prediction and uncertainty quantification of the final updated model ensemble. We first illustrate the method on a synthetic 2D five-spot case, where some model errors are deliberately introduced and the results can be closely examined against the known 'true' model. The Norne field case is then used to further evaluate the method.
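The role of the total-error covariance can be seen in a plain (non-iterative) ensemble smoother update, where it enters the Kalman-type gain and the observation perturbations. The sketch below is a generic ES update, not the LM-EnRML used in the paper, and the toy linear problem is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def es_update(M, D, d_obs, C_e):
    """Plain ensemble smoother update (a simplified stand-in for LM-EnRML).

    M     : (n_param, n_ens) ensemble of model parameters
    D     : (n_data, n_ens) simulated data for each ensemble member
    d_obs : (n_data,) observations
    C_e   : (n_data, n_data) total-error covariance
            (observation error plus estimated model error)
    """
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)        # parameter-data cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)        # data covariance
    # Perturb the observations with samples drawn from the total-error covariance
    E = rng.multivariate_normal(np.zeros(len(d_obs)), C_e, size=n_ens).T
    K = C_md @ np.linalg.inv(C_dd + C_e)  # Kalman-type gain
    return M + K @ (d_obs[:, None] + E - D)

# Toy linear problem: d = 2*m, true m = 3, prior N(0, 1), obs d = 6
M = rng.normal(0.0, 1.0, size=(1, 500))
D = 2.0 * M
M_post = es_update(M, D, np.array([6.0]), np.array([[0.1]]))
print(M_post.mean())  # close to the true value 3
```

Inflating `C_e` with the estimated model error widens the posterior and damps the update, which is exactly how accounting for model error counteracts the over-confident forecasts described above.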
The Norne model has previously been history matched using the LM-EnRML.
Luo, Xiaodong (International Research Institute of Stavanger) | Lorentzen, Rolf J. (International Research Institute of Stavanger) | Valestrand, Randi (International Research Institute of Stavanger) | Evensen, Geir (International Research Institute of Stavanger and Nansen Environmental and Remote Sensing Center)
Ensemble-based methods are among the state-of-the-art history-matching algorithms. However, in practice, they often suffer from ensemble collapse, a phenomenon that deteriorates history-matching performance. It is customary to equip an ensemble history-matching algorithm with a localization scheme to prevent ensemble collapse. Conventional localization methods use distances between the physical locations of model variables and observations to modify the degree of the observations’ influence on model updates. Distance-based localization methods work well in many problems, but they also suffer from dependence on the physical locations of both model variables and observations, the challenges in dealing with nonlocal and time-lapse measurements, and the nonadaptivity to handling different types of model variables.
To enhance the applicability of localization to various history-matching problems, we adopt an adaptive localization scheme that exploits the correlations between model variables and simulated observations. We elaborate on how correlation-based adaptive localization can overcome or mitigate issues arising in conventional distance-based localization.
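The core of correlation-based localization can be sketched as follows: compute the sample correlation between each model variable and each simulated observation, and screen out gain entries whose correlation is indistinguishable from the spurious ~1/sqrt(n_ens) noise expected between independent variables. This is a simplified hard-threshold sketch, not the authors' exact tapering scheme; all names and the threshold rule are illustrative.

```python
import numpy as np

def correlation_taper(dM, dD, n_ens, n_std=3.0):
    """Binary localization mask from parameter-observation correlations.

    dM, dD : mean-removed ensemble anomalies of parameters and simulated data,
             shapes (n_param, n_ens) and (n_data, n_ens)
    For independent variables, sample correlations scatter with a standard
    deviation of roughly 1/sqrt(n_ens); entries below n_std times that level
    are treated as spurious and screened out (mask entry 0, otherwise 1).
    """
    sm = dM.std(axis=1, keepdims=True)
    sd = dD.std(axis=1, keepdims=True)
    corr = (dM @ dD.T) / (n_ens * sm * sd.T)   # (n_param, n_data) correlations
    threshold = n_std / np.sqrt(n_ens)
    return (np.abs(corr) > threshold).astype(float)

rng = np.random.default_rng(2)
n_ens = 100
m = rng.normal(size=(2, n_ens))          # two model parameters
d = np.vstack([3.0 * m[0],               # obs 1 depends only on parameter 1
               rng.normal(size=n_ens)])  # obs 2 depends on neither
dM = m - m.mean(axis=1, keepdims=True)
dD = d - d.mean(axis=1, keepdims=True)
mask = correlation_taper(dM, dD, n_ens)
print(mask)  # 1 = keep the gain entry, 0 = screen it out
```

Because the mask is built from the ensemble itself rather than physical distances, it applies unchanged to nonlocal and time-lapse observations and to model variables without a physical location, which is the practical advantage argued for above.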
To demonstrate the efficacy of correlation-based adaptive localization, we adopt an iterative ensemble smoother (iES) with the proposed localization scheme to history match the real production data of the Norne Field model, and we compare the results with those obtained using the iES with distance-based localization. Our study indicates that correlation-based localization not only achieves comparable or better performance in terms of data mismatch, but is also more convenient to use in practical history-matching problems. The proposed correlation-based localization scheme may therefore serve as a viable alternative to conventional distance-based localization.
Compositional simulation is attractive for a wide variety of applications in reservoir simulation, and it is especially valuable when modeling gas injection for enhanced oil recovery. Since the nonlinear behavior of gas injection is sensitive to the resolution of the simulation grid used, it is important to use a fine grid to accurately resolve the gas front and the pressure propagation. Unfortunately, discretizing a compositional flow system with many components on a high-resolution geological model leads to very large and poorly conditioned linear systems, and the high computational cost of solving these systems tends to render field-scale simulations infeasible. An additional challenge is the need for phase-equilibrium calculations, which often represent a large fraction of the computational time when both gas and liquid are present. We present a multiscale solver for compositional three-phase flow problems in which the behavior of the liquid and vapor phases is described by generalized cubic equations of state. The solver relies on a sequential solution strategy for the flow and transport equations, based on total mass and overall composition, and uses restricted smoothing to compute multiscale basis functions on unstructured grids with general polyhedral cell geometries. The resulting method computes approximate pressure solutions (to within a prescribed residual tolerance) that have conservative fluxes on the reduced computational grid, the original fine-scale grid, or any intermediate partition. The new method is verified against existing compositional simulators on conceptual models and validated on more complex cases represented on both unstructured and corner-point grids with strong heterogeneity, faults, pinched and eroded cells, etc. The resulting implementation is the first demonstrated multiscale method applicable to general compositional problems relevant for the petroleum industry, including a cubic equation of state and stratigraphic grids.
A feasibility study of time-lapse CSEM reservoir monitoring with vertical dipoles is performed in a realistic setting. The starting point is a full-field reservoir model of the Norne field. Reservoir simulation is used to calculate the changes in pressure and saturation over time, and these calculated changes are assumed to mimic the real changes in the reservoir. A resistivity model is created from the saturation and porosity data using Archie's law, after a necessary upscaling from the reservoir model. Inverted resistivities are calculated at several points in time. It is found that the changes in resistivity should be observable after inversion after about 4 years of production, taking measurement noise and other sources of error into account. The 3D CSEM forward and inversion code of Newman and Commer (2005) is used.
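The Archie's-law step in the workflow above maps each cell's porosity and water saturation to a bulk resistivity. A minimal sketch follows; the brine resistivity and the empirical constants a, m, n below are common textbook defaults, not values calibrated to the Norne field.

```python
def archie_resistivity(porosity, s_w, r_w=0.1, a=1.0, m=2.0, n=2.0):
    """Archie's law: R_t = a * R_w * phi**(-m) * S_w**(-n).

    porosity : fractional porosity phi
    s_w      : water saturation S_w (fraction)
    r_w      : brine resistivity in ohm-m (illustrative default)
    a, m, n  : empirical constants (textbook defaults, not Norne-specific)
    """
    return a * r_w * porosity ** (-m) * s_w ** (-n)

# Waterflooding raises S_w, lowering resistivity in the swept zone;
# this contrast is what the time-lapse CSEM inversion must resolve.
before = archie_resistivity(porosity=0.25, s_w=0.3)   # oil leg, high R_t
after = archie_resistivity(porosity=0.25, s_w=0.8)    # swept zone, low R_t
print(before, after)
```

Running the mapping cell by cell over the simulated saturation fields at successive report times produces the sequence of resistivity models that feeds the forward CSEM modeling and inversion.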
Presentation Date: Wednesday, October 19, 2016
Start Time: 4:25:00 PM
Presentation Type: ORAL