History matching is the most time-consuming phase in any reservoir-simulation study. As a means of accelerating reservoir simulations, a 2018 study proposed an approach in which a reservoir is treated as a combination of multiple interconnected compartments that, under a range of uncertainty, can capture the reservoir’s response during a recovery process. In this work, the authors extend that approach to represent a reservoir in a multiscale form consisting of multiple interconnected segments. To identify segments of the reservoir, spatial, temporal, and spatiotemporal unsupervised data-mining clustering techniques are used. Then, a novel nonlocal formulation for flow in porous media is presented in which the reservoir is represented by an adjacency matrix describing the neighbor and non-neighbor connections of the constituent compartments.
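As a rough illustration of this segment-identification step, the sketch below clusters per-cell pressure histories into segments with k-means and builds an adjacency matrix from neighboring cells. The grid size, the synthetic time series, and the use of k-means specifically are assumptions made here for illustration; they are not taken from the paper.

```python
# Hedged sketch: identify reservoir segments by clustering cell-level time
# series, then build an adjacency matrix of neighbor connections between
# segments. Grid dimensions and the placeholder data are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

nx, ny, n_steps = 20, 20, 50
rng = np.random.default_rng(0)
# Placeholder spatiotemporal data: one pressure history per grid cell.
cell_time_series = rng.normal(size=(nx * ny, n_steps))

# Spatiotemporal features: cell coordinates plus the time series.
xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
coords = np.column_stack([xx.ravel(), yy.ravel()])
features = np.hstack([coords, cell_time_series])

n_segments = 6
labels = KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(nx, ny)

# Adjacency matrix: segments are connected if any of their cells share a grid face.
adjacency = np.zeros((n_segments, n_segments), dtype=int)
for i in range(nx):
    for j in range(ny):
        for di, dj in ((1, 0), (0, 1)):
            if i + di < nx and j + dj < ny:
                a, b = label_map[i, j], label_map[i + di, j + dj]
                if a != b:
                    adjacency[a, b] = adjacency[b, a] = 1
print(adjacency)
```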
In the complete paper, the authors present a novel methodology to model interwell connectivity in mature waterfloods and achieve an improved reservoir-energy distribution and sweep pattern to maximize production performance by adjusting injection and production strategy on the well-control level.
A Drilling Advisory System (DAS) is a rig-based drilling-surveillance and -optimization platform that encourages regular drilloff tests, carefully monitors drilling performance, and provides recommendations for controllable drilling parameters to help improve the overall drilling process.
This paper proposes a framework based on proxies and rejection sampling (filtering) to perform multiple history-matching runs with a manageable number of reservoir simulations.
The industry increasingly relies on forecasts from reservoir models for reservoir management and decision making. However, because forecasts from reservoir models carry large uncertainties, calibrating them as soon as data come in is crucial. The complete paper explores the use of multilevel derivative-free optimization for history matching, with model properties described using principal component analysis (PCA)-based parameterization techniques. The results of the authors’ research showed promising benefits from a systematic procedure of model diagnostics, model improvement, and model-error quantification during data assimilation. A challenging problem of automated history-matching workflows is ensuring that, after updates to previous models are applied, the resulting history-matched models remain geologically consistent.
This course covers introductory and advanced concepts in streamline simulation and its applications. We will review the theory of streamlines and streamtubes in multiple dimensions. Applications include flow visualization, swept-volume calculations, rate allocation and pattern balancing, waterflooding management and optimization, solvent flooding, ranking geostatistical realizations, upscaling/upgridding, history matching, and dynamic reservoir characterization. Discussions will include the strengths and limitations of streamline modeling compared with finite-difference simulation. PC Windows-based computer programs are used to illustrate the concepts.
Relative permeability and capillary pressure are the key parameters of multiphase flow in a reservoir. To ensure an accurate determination of these functions in the areas of interest, the core-flooding and centrifuge experiments on the relevant core samples need to be interpreted meticulously. In this work, relative permeability and capillary pressure functions are determined jointly by history matching multiple experiments simultaneously, increasing the precision of the results through the additional constraints provided by the extra measurements. To account for the underlying physics without making crude assumptions, the Special Core Analysis (SCAL) experiments are simulated directly instead of relying on well-known simplified analytical or semianalytical solutions. The corresponding numerical models are implemented with the MRST library (Lie, 2019). The history matching approach is based on the adjoint gradient method for the constrained optimization problem. In the current implementation, the relative permeability and capillary pressure curves that are the targets of the history matching can be represented in a variety of forms, such as Corey, LET, B-splines, and NURBS. To analyze the influence of the assumed correlations on the history matching results, interpretation with analytical correlations is compared to history matching based on a generic NURBS representation of the relevant functions.
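For illustration, the following minimal sketch (Python rather than the MRST/MATLAB setting used by the authors) fits Corey relative permeability parameters to synthetic SCAL-like measurements via a least-squares misfit. The parameter names, bounds, and synthetic data are assumptions; the paper itself simulates the full experiments and uses adjoint gradients rather than a direct fit to measured curves.

```python
# Hedged sketch of a Corey-type parameterization and the kind of least-squares
# misfit minimized in SCAL history matching. All data here are synthetic.
import numpy as np
from scipy.optimize import minimize

def corey_krw(sw, swc, sor, krw_max, nw):
    """Water relative permeability from a Corey model."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)
    return krw_max * swn ** nw

# Synthetic "measured" points standing in for SCAL observations.
sw_obs = np.linspace(0.2, 0.8, 10)
krw_obs = corey_krw(sw_obs, 0.15, 0.2, 0.4, 2.5)

def misfit(theta):
    swc, sor, krw_max, nw = theta
    krw_sim = corey_krw(sw_obs, swc, sor, krw_max, nw)
    return np.sum((krw_sim - krw_obs) ** 2)

result = minimize(misfit, x0=[0.1, 0.25, 0.5, 2.0],
                  bounds=[(0.0, 0.3), (0.1, 0.4), (0.1, 1.0), (1.0, 5.0)])
print(result.x)  # recovered Corey parameters
```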
Polymer flooding offers the potential to recover more oil from reservoirs but requires significant investment, which necessitates a robust analysis of the economic upsides and downsides. Key uncertainties in designing a polymer flood are often the reservoir geology and polymer degradation. The objective of this study is to understand the impact of geological uncertainties and history matching techniques on designing the optimal strategy and quantifying the economic risks of polymer flooding in a heterogeneous clastic reservoir.
We applied two different history matching techniques (adjoint-based and a stochastic algorithm) to match data from a prolonged waterflood in the Watt Field, a semi-synthetic reservoir that contains a wide range of geological and interpretational uncertainties. An ensemble of reservoir models is available for the Watt Field, and history matching was carried out for the entire ensemble using both techniques. Next, sensitivity studies were carried out to identify first-order parameters that impact the Net Present Value (NPV). These parameters were then deployed in an experimental design study using a Latin Hypercube to generate training runs from which a proxy model was created. The proxy model was constructed using polynomial regression and validated using further full-physics simulations. A particle swarm optimisation algorithm was then used to optimise the NPV for the polymer flood. The same approach was used to optimise a standard water flood for comparison. Optimisations of the polymer flood and water flood were performed for the history matched model ensemble and the original ensemble.
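The sketch below illustrates this proxy-based optimisation loop under assumed control variables and with a toy placeholder (`run_simulation`) standing in for the full-physics training runs: a Latin Hypercube design, a polynomial-regression proxy for NPV, and a simple particle swarm optimiser run on the proxy.

```python
# Hedged sketch of the Latin Hypercube + polynomial proxy + particle swarm
# workflow. Control variables, ranges, and the toy NPV function are assumptions.
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
lower = np.array([500.0, 0.0])      # e.g. polymer concentration (ppm), injection start (days)
upper = np.array([3000.0, 1500.0])

def run_simulation(x):
    # Placeholder for a full-physics simulation returning NPV.
    return -((x[0] - 2000.0) / 1000.0) ** 2 - ((x[1] - 600.0) / 500.0) ** 2

# 1) Latin Hypercube experimental design for the training runs.
design = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(n=30), lower, upper)
npv_train = np.array([run_simulation(x) for x in design])

# 2) Polynomial-regression proxy for NPV.
proxy = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
proxy.fit(design, npv_train)

# 3) Minimal particle swarm optimisation (maximization) of the proxy.
n_particles, n_iter = 20, 100
pos = rng.uniform(lower, upper, size=(n_particles, 2))
vel = np.zeros_like(pos)
best_pos, best_val = pos.copy(), proxy.predict(pos)
g_best = best_pos[np.argmax(best_val)]
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
    pos = np.clip(pos + vel, lower, upper)
    val = proxy.predict(pos)
    improved = val > best_val
    best_pos[improved], best_val[improved] = pos[improved], val[improved]
    g_best = best_pos[np.argmax(best_val)]
print(g_best)  # proxy-optimal polymer concentration and injection start time
```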
The sensitivity studies showed that polymer concentration, location of polymer injection wells, and time to commence polymer injection are key to optimising the polymer flood. The optimal strategy to deploy the polymer flood and maximise NPV varies with the history matching technique. The average NPV is predicted to be higher with the stochastic history matching than with the adjoint technique, and the variance in NPV is also higher for the stochastic technique. This is because the stochastic algorithm explores the parameter space more broadly, creating cases in which the oil in place is shifted upwards, resulting in higher NPV. Optimising the history matched ensemble leads to a narrower variance in absolute NPV than optimising the original ensemble, because the uncertainties associated with polymer degradation are not captured during history matching. The cross comparison, in which the optimal polymer design strategy for one ensemble member is deployed to the other ensemble members, predicted a decline in NPV but, surprisingly, still showed that the overall NPV is higher than for an optimised water flood. This indicates that a polymer flood could be beneficial compared to a water flood, even if geological uncertainties are not captured properly.
History matching of field performance is a time-consuming, complex, and non-unique inverse problem that yields multiple plausible solutions because of the inherent uncertainty associated with geological and flow modeling. History matching must be performed diligently, with the ultimate objective of providing reliable prediction tools for managing oil and gas assets. Our work capitalizes on the latest developments in ensemble Kalman techniques, namely the Ensemble Kalman Filter and Smoother (EnKF/S), to properly quantify and manage reservoir-model uncertainty throughout the process of model calibration and history matching.
Sequential and iterative EnKF/S algorithms have been developed to overcome the shortcomings of existing methods, such as the lack of data-assimilation capabilities, the limited ability to quantify and manage uncertainties, and the huge number of simulation runs required to complete a study. An initial ensemble of 40 to 50 equally probable reservoir models was generated with variable areal and vertical permeability and porosity. The initial ensemble captured the most influential reservoir properties, which are propagated and honored by the subsequent ensemble iterations. Data misfits between the historical field data and the simulated data are calculated for each realization to quantify the impact of reservoir uncertainty and to determine the changes to horizontal and vertical permeability and porosity values for the next iteration. Each iteration of the optimization process reduces the data misfit relative to the previous one, and the process continues until a satisfactory field-level and well-level history match is reached or no further improvement is obtained.
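For illustration, the sketch below shows a single Ensemble Kalman Filter analysis step on a synthetic ensemble. The ensemble size, linear observation operator, and noise level are assumptions, not the field configuration used in this study.

```python
# Hedged sketch of one EnKF analysis step updating an ensemble of model
# parameters (e.g. log-permeability, porosity) from observed data.
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_state, n_obs = 50, 200, 10   # realizations, model parameters, data points

# Prior ensemble of model parameters.
X = rng.normal(size=(n_state, n_ens))
# Simulated data for each realization (stand-in for reservoir simulator output).
H = rng.normal(size=(n_obs, n_state)) / np.sqrt(n_state)
Y = H @ X
# Observed field data and its error covariance.
d_obs = rng.normal(size=n_obs)
R = 0.1 * np.eye(n_obs)

# Ensemble anomalies and cross-covariances.
Xa = X - X.mean(axis=1, keepdims=True)
Ya = Y - Y.mean(axis=1, keepdims=True)
C_xy = Xa @ Ya.T / (n_ens - 1)
C_yy = Ya @ Ya.T / (n_ens - 1)

# Kalman gain and update with perturbed observations.
K = C_xy @ np.linalg.inv(C_yy + R)
D = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
X_post = X + K @ (D - Y)
print(X_post.shape)  # updated ensemble of reservoir parameters
```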
In this study, an application of EnKF/S is demonstrated for history matching of a faulted reservoir model under waterflooding conditions. The different implementations of EnKF/S were compared. EnKF/S preserved key geological features of the reservoir model throughout the history matching process. During this study, EnKF/S served as a bridge between classical control theory solutions and Bayesian probabilistic solutions of sequential inverse problems. EnKF/S methods demonstrated good tracking qualities while giving some estimate of uncertainty as well.
The updated reservoir properties (horizontal and vertical permeability and porosity values) are conditioned throughout the EnKF/S processes (cycles), maintaining consistency with the initial geological understanding. The workflow resulted in enhanced history match quality in a shorter turnaround time, with far fewer simulation runs than traditional genetic or evolutionary algorithms. The geological realism of the model is retained for robust prediction and development planning.
One of the final goals of any reservoir characterization study is to deliver reliable production forecasts. This is a challenging task, as the fluid-flow dynamics are governed by nonlinear equations: a small perturbation in the reservoir-model inputs might have a large impact on the modelling outputs and thus on the forecasts. In addition, depending on the maturity of a project, engineers have various amounts and types of data to deal with and to history match through an optimization process.
Considering the case of a mature asset, for which massive datasets of various types are available, the standard history matching process is based on the minimization of a single objective function (the history matching criterion), computed through a weighted least-squares formulation. The difficulty is then for the user to properly define the weight of each data set before the terms are summed into the single objective function.
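As an illustration of this formulation, a common weighted least-squares objective (written here under the simplifying assumption of independent measurement errors per data point, which is not stated in the paper) takes the form

$$
J(m) \;=\; \sum_{k} w_k \sum_{i=1}^{N_k} \left( \frac{d^{\mathrm{obs}}_{k,i} - d^{\mathrm{sim}}_{k,i}(m)}{\sigma_{k,i}} \right)^{2},
$$

where $m$ denotes the reservoir-model parameters, $d^{\mathrm{obs}}_{k,i}$ and $d^{\mathrm{sim}}_{k,i}(m)$ are the observed and simulated values of the $i$-th point of data type $k$, $\sigma_{k,i}$ is the measurement standard deviation, and $w_k$ is the user-defined weight whose choice is the difficulty discussed above.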
To avoid this difficulty, two optimization techniques that are available but still relatively unexplored in geoscience applications are considered. The first is based on the definition of multiple objective functions (based on data types and/or location in the field) coupled to an optimization process. If all the objective functions decrease together during minimization, the user still has the flexibility to assess the individual objective functions one by one. If, on the contrary, the objective functions show competing trends (e.g., because of noisy data), then the derived Pareto front (in the two-objective case) identifies the location of the optimal compromises. The second is a sequential optimization approach based on single-objective constrained optimizations: each objective is optimized in turn, with constraints on the other objectives whose thresholds are derived from the results of the previous optimizations. This pragmatic approach allows the objectives to be prioritized and the expected accuracy on each data type to be tuned.
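As a small illustration of the Pareto-front idea in the first technique, the sketch below extracts the non-dominated set from a synthetic ensemble of candidate models scored on two objective functions; the objective values here are placeholders.

```python
# Hedged sketch of extracting a Pareto front from candidate history-match
# models, each scored on two objectives (e.g. a pressure misfit and a
# saturation misfit). Lower values are better for both objectives.
import numpy as np

rng = np.random.default_rng(3)
# Each row: (objective_1, objective_2) for one candidate model.
objectives = rng.random((200, 2))

def pareto_mask(obj):
    """Boolean mask of non-dominated points (minimization of all objectives)."""
    n = obj.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if mask[i]:
            # Point i is dominated if some other point is <= in all objectives
            # and strictly < in at least one.
            dominated = np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
            if dominated.any():
                mask[i] = False
    return mask

front = objectives[pareto_mask(objectives)]
print(front[np.argsort(front[:, 0])])  # optimal compromises, sorted along objective 1
```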
Both approaches are applied to a real gas storage field with more than 40 years of exploitation history and various data types (e.g., pressures, saturations, and gas breakthrough at control wells), leading to the definition of multiple (possibly more than two) objective functions. Both show promising results in terms of history matching quality and in terms of flexibility, as the user is able to consider, define, and dynamically update alternative history matching strategies. These approaches might be considered alternatives to the standard approach for the history matching process, preliminary to the computation of production forecasts, even if the associated challenges and complexity grow with the number of objective functions.
Ferreira, Carla Janaina (Durham University and University of Campinas) | Vernon, Ian (Durham University) | Caiado, Camila (Durham University) | Formentin, Helena Nandi (Durham University and University of Campinas) | Avansi, Guilherme Daniel (University of Campinas) | Goldstein, Michael (Durham University) | Schiozer, Denis José (University of Campinas)
When performing classic uncertainty reduction according to dynamic data, a large number of reservoir simulations need to be evaluated at high computational cost. As an alternative, we construct Bayesian emulators that mimic the dominant behavior of the reservoir simulator and that are several orders of magnitude faster to evaluate. We combine these emulators within an iterative procedure that involves substantial but appropriate dimensional reduction of the output space (which represents the reservoir's physical behavior, such as production data), enabling a more effective and efficient uncertainty reduction on the input space (representing uncertain reservoir parameters) than traditional methods, with a more comprehensive understanding of the associated uncertainties. This study uses emulation-based Bayesian history-matching (BHM) uncertainty analysis for the uncertainty reduction of complex models, designed to address problems with a high number of both input and output parameters. We detail how to efficiently choose sets of outputs that are suitable for emulation and highly informative for reducing the input-parameter space, and we investigate different classes of outputs and objective functions. We use output emulators and implausibility analysis iteratively to perform uncertainty reduction in the input-parameter space, and we discuss the strengths and weaknesses of certain popular classes of objective functions in this context.
We demonstrate our approach through an application to a benchmark synthetic model (built using public data from a Brazilian offshore field) in an early stage of development, using 4 years of historical data and four producers. This study investigates traditional simulation outputs (e.g., production data) as well as novel classes of outputs, such as misfit indices and summaries of outputs. We show that, despite there being a large number (2,136) of possible outputs, only very few (16) were sufficient to represent the available information; these informative outputs were emulated with fast and efficient emulators at each iteration (or wave) of the history match to perform the uncertainty-reduction procedure successfully. Using this small set of outputs, we were able to substantially reduce the input space by removing 99.8% of its original volume. We found that a small set of physically meaningful individual production outputs was the most informative at early waves and, once emulated, resulted in the highest uncertainty reduction in the input-parameter space, while more complex but popular objective functions that combine several outputs were only modestly useful at later waves. This is because objective functions such as misfit indices have complex surfaces that can lead to low-quality emulators and hence noninformative outputs.
We present an iterative emulator-based Bayesian uncertainty-reduction process in which all possible input-parameter configurations that lead to statistically acceptable matches between the simulated and observed data are identified. This methodology has four central characteristics: incorporation of a powerful dimension reduction on the output space, resulting in significantly increased efficiency; effective reduction of the input space; computational efficiency; and provision of a better understanding of the complex geometry of the input and output spaces.
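As a schematic illustration of one wave of such an emulator-and-implausibility procedure (not the authors' implementation), the sketch below trains a Gaussian-process emulator for a single output on a small design of toy simulator runs and rules out input-space regions using a conventional implausibility cutoff of 3. The toy simulator, dimensionalities, and variance terms are assumptions.

```python
# Hedged sketch of one wave of emulator-based Bayesian history matching with
# an implausibility measure. All quantities here are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

def simulator(x):
    # Placeholder for one scalar reservoir output (e.g. a well's oil rate).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Small design of simulator runs over a 2D uncertain-parameter space.
X_train = rng.uniform(-1, 1, size=(30, 2))
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True).fit(X_train, y_train)

# Observed value and its variance (measurement plus model-discrepancy terms).
z_obs, var_obs = 0.3, 0.05 ** 2

# Implausibility over a dense candidate set of input configurations.
X_cand = rng.uniform(-1, 1, size=(20000, 2))
mean, std = gp.predict(X_cand, return_std=True)
implausibility = np.abs(z_obs - mean) / np.sqrt(std ** 2 + var_obs)

non_implausible = X_cand[implausibility < 3.0]  # conventional cutoff of 3
print(f"{100 * (1 - len(non_implausible) / len(X_cand)):.1f}% of input space ruled out")
```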