Reliability of subsurface assessment for different field development scenarios depends on how effectively the uncertainty in production forecasts is quantified. There is a substantial body of work in the literature on methods to quantify this uncertainty. The objective of this paper is to revisit and compare these probabilistic uncertainty quantification techniques through their application to assisted history matching of a deep-water offshore waterflood field. The paper addresses the benefits, limitations, and applicability criteria of each technique.
Three probabilistic history matching techniques commonly practiced in the industry are discussed: Design of Experiments (DoE) with rejection sampling from a proxy, the Ensemble Smoother (ES), and the Genetic Algorithm (GA). The model used for this study is an offshore waterflood field in the Gulf of Mexico. Posterior distributions of global subsurface uncertainties (e.g., regional pore volume and oil-water contact) were estimated with each technique, conditioned to the injection and production data.
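For intuition, the sketch below shows a minimal version of the rejection-sampling step from a proxy; the quadratic proxy, parameter ranges, and noise model are illustrative assumptions, not the workflow actually used in the paper.

```python
# Minimal rejection-sampling sketch (hypothetical quadratic proxy and priors).
import numpy as np

rng = np.random.default_rng(42)

def proxy_mismatch(theta):
    # Hypothetical proxy: quadratic response surface fitted from DoE runs,
    # returning a data mismatch for theta = (pore-volume multiplier, OWC shift).
    return 4.0 * (theta[0] - 1.1) ** 2 + 2.0 * (theta[1] - 0.3) ** 2

n_prior = 100_000
# Uniform priors over assumed plausible ranges of the two global uncertainties.
prior = np.column_stack([rng.uniform(0.8, 1.4, n_prior),
                         rng.uniform(-1.0, 1.0, n_prior)])

# Gaussian-error likelihood from the proxy mismatch.
log_like = np.array([-0.5 * proxy_mismatch(t) for t in prior])
# Rejection step: accept sample i with probability L_i / max(L).
accept = np.log(rng.uniform(size=n_prior)) < (log_like - log_like.max())
posterior = prior[accept]
print(f"accepted {accept.sum()} samples; posterior mean = {posterior.mean(axis=0)}")
```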
The three probabilistic history matching techniques were applied to a deep-water field with 13 years of production history. The first 8 years of production data were used for history matching and for estimating the posterior distribution of uncertainty in geologic parameters. While the convergence behavior and shape of the posterior distributions differed, consistent posterior means were obtained from the Bayesian workflows (DoE and ES). In contrast, the application of GA yielded different posterior distributions of the geological uncertainty parameters, especially those with low sensitivity to the production data. We then conducted production forecasts including infill wells and evaluated production performance using the sample means of the posterior geologic uncertainty parameters. The robustness of the solution was examined by performing history matching multiple times with different initial sample points (i.e., random seeds). This confirmed that heuristic optimization techniques such as GA were unstable, since the parameter setup of the optimizer had a large impact on uncertainty characterization and production performance.
This study provides guidelines for obtaining stable solutions from these history matching techniques under different conditions, such as the number of simulation model realizations, the number of uncertainty parameters, and the number of data points (e.g., the maturity of the reservoir development). These guidelines will greatly help the decision-making process when selecting among development options.
Zalavadia, Hardikkumar (Texas A&M University) | Sankaran, Sathish (Anadarko Petroleum Corporation) | Kara, Mustafa (Anadarko Petroleum Corporation) | Sun, Wenyue (Anadarko Petroleum Corporation) | Gildin, Eduardo (Texas A&M University)
Model-based field development planning and optimization often require computationally intensive reservoir simulations, where the models must be run many times to account for input uncertainty or to seek optimal results. Reduced Order Modeling (ROM) methods are a class of techniques applied to reservoir simulation to reduce model complexity and speed up computations, which is especially useful for large-scale or complex models in such optimization problems. While intrusive ROM methods (such as proper orthogonal decomposition (POD) and its extensions, trajectory piecewise linearization (TPWL), and the Discrete Empirical Interpolation Method (DEIM)) have been proposed for reservoir simulation problems, they remain inaccessible or unusable for many practical applications that rely on commercial simulators.
In this paper, we describe a novel application of a non-intrusive ROM method, namely dynamic mode decomposition (DMD). We specifically target the time complexity of the well control optimization problem, using a variant of DMD called DMDc (DMD with control). We propose a workflow that uses a training dataset from the wells to predict the state solution (pressure and saturation) for new bottomhole pressure profiles encountered during the optimization runs. We use a novel strategy to select the basis dimensions to prevent unstable solutions. Since the objective function of the optimization problem is usually based on fluid production profiles, we propose a strategy to predict fluid production rates from the DMDc-predicted states using machine learning techniques. The features for this machine learning problem are designed based on the physics of fluid flow through well perforations, which results in very accurate rate predictions. We compare the proposed methodology against another variant of DMD called ioDMD (input-output DMD), used for system identification to predict output production flow rates.
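For readers unfamiliar with DMDc, the sketch below shows the core least-squares regression on a synthetic linear system; the data, shapes, and fixed truncation rank are assumptions for illustration, and the paper's basis-selection strategy and rate-prediction step are not reproduced here.

```python
# Minimal DMD-with-control (DMDc) sketch: fit x_{k+1} ~ A x_k + B u_k.
import numpy as np

def dmdc(X, U, rank):
    """X: (n_states, m) state snapshots; U: (n_controls, m-1) control inputs."""
    X1, X2 = X[:, :-1], X[:, 1:]
    Omega = np.vstack([X1, U])                      # stacked data matrix [X1; U]
    Uo, s, Vt = np.linalg.svd(Omega, full_matrices=False)
    Uo, s, Vt = Uo[:, :rank], s[:rank], Vt[:rank]   # truncate to chosen basis
    G = X2 @ Vt.T @ np.diag(1.0 / s) @ Uo.T         # G = [A B] via pseudoinverse
    n = X1.shape[0]
    return G[:, :n], G[:, n:]                       # split into A and B

# Synthetic linear system standing in for pressure/saturation snapshots.
rng = np.random.default_rng(0)
A_true = 0.95 * np.eye(8) + 0.01 * rng.standard_normal((8, 8))
B_true = rng.standard_normal((8, 2))
x, X, U = rng.standard_normal(8), [], []
for _ in range(200):
    u = rng.standard_normal(2)                      # e.g., BHP controls
    X.append(x); U.append(u)
    x = A_true @ x + B_true @ u
X, U = np.array(X).T, np.array(U[:-1]).T
A_hat, B_hat = dmdc(X, U, rank=10)
print("A recovery error:", np.linalg.norm(A_hat - A_true))
```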
The methodology is demonstrated on a benchmark case and a Gulf of Mexico deepwater field, showing a significant time reduction in the production control optimization problem, with a speedup of about 30 to 40 times using the proposed DMDc workflow compared to fine-scale simulations, while preserving the accuracy of the solutions. The proposed "non-intrusive" method for reducing model complexity can substantially broaden the range of application of ROM methods for practical field development and reservoir management.
Downhole fluid sampling is ubiquitous during exploration and appraisal because formation fluid properties have a strong impact on field development decisions. Efficient planning of sampling operations and interpretation of obtained data require a model-based approach. We present a framework for forward and inverse modeling of filtrate contamination cleanup during fluid sampling. The framework consists of a deep learning (DL) proxy forward model coupled with a Markov Chain Monte Carlo (MCMC) approach for the inverse model.
The DL forward model is trained using precomputed numerical simulations of immiscible filtrate cleanup over a wide range of in situ conditions. The forward model consists of a multilayer neural network with both recurrent and linear layers, where inputs are defined by a combination of reservoir and fluid properties. A model training and selection process is presented, including an assessment of the impact of network depth and layer size. The inverse framework consists of an MCMC algorithm that stochastically explores the solution space using the likelihood of the observed data, computed from the mismatch between the observations and the model predictions.
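A minimal sketch of such a Metropolis-Hastings loop follows, with a hypothetical exponential water-cut decline standing in for the DL forward model; the parameterization, noise level, and proposal width are all assumptions.

```python
# Metropolis-Hastings sketch of the mismatch-driven inversion loop.
import numpy as np

rng = np.random.default_rng(1)

def forward(theta, t):
    # Hypothetical proxy: water cut decaying during cleanup, parameterized
    # by initial contamination level theta[0] and cleanup rate theta[1].
    return theta[0] * np.exp(-theta[1] * t)

t_obs = np.linspace(0.1, 10.0, 40)
d_obs = forward([0.8, 0.4], t_obs) + 0.02 * rng.standard_normal(40)
sigma = 0.02                                        # assumed data-error level

def log_like(theta):
    r = d_obs - forward(theta, t_obs)
    return -0.5 * np.sum((r / sigma) ** 2)

theta, chain = np.array([0.5, 0.2]), []
ll = log_like(theta)
for _ in range(20_000):
    prop = theta + 0.01 * rng.standard_normal(2)    # random-walk proposal
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:        # accept/reject step
        theta, ll = prop, ll_prop
    chain.append(theta.copy())
chain = np.array(chain[5_000:])                     # discard burn-in
print("posterior mean:", chain.mean(axis=0), "std:", chain.std(axis=0))
```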
The developed DL forward model achieved up to 50% higher accuracy than prior proxy models based on Gaussian process regression. Additionally, the new approach reduced the memory footprint by a factor of ten. The same model architecture and training process proved applicable to multiple sampling probe geometries without compromising performance. These attributes, combined with the speed of the model, enabled its use in real-time inversion applications. Furthermore, the DL forward model is amenable to incremental improvements as new training data become available.
Flowline measurements acquired during cleanup and sampling hold valuable information about formation and fluid properties that can be uncovered through an inversion process. Using measurements of water cut and pressure, the MCMC inverse model required 93% fewer calls to the forward model than conventional gradient-based optimization, with comparable history-match quality. Moreover, by estimating the full posterior parameter distributions, the presented model enables more robust uncertainty quantification.
We develop a novel ensemble model-maturation method based on the Randomized Maximum Likelihood (RML) technique and adjoint-based computation of objective-function gradients. The new approach is especially relevant for rich data sets with time-lapse information content. The inversion method that solves the model-maturation problem takes advantage of adjoint-based computation of objective-function gradients for a very large number of model parameters at the cost of one forward and one backward (adjoint) simulation. The inversion algorithm calibrates model parameters to arbitrary types of production data, including time-lapse reservoir-pressure traces, by use of a weighted and regularized objective function. We have also developed a new and effective multigrid preconditioning protocol for accelerated iterative linear solution of the adjoint-simulation step for models with multiple levels of local grid refinement. The protocol is based on a geometric multigrid (GMG) preconditioning technique. Within the model-maturation workflow, a machine-learning technique is applied to establish links between the mesh-based inversion results (e.g., permeability-multiplier fields) and geologic modeling parameters inside a static model (e.g., object dimensions). Our workflow integrates the learnings from inversion back into the static model, thereby ensuring the geologic consistency of the static model while improving the quality of the ensuing dynamic model in terms of honoring production and time-lapse data and reducing forecast uncertainty. This use of machine learning to post-process the model-maturation outcome effectively converts the conventional continuous-parameter history-matching result into a discrete tomographic inversion result constrained to geological rules encoded in training images.
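For reference, the RML objective for the j-th ensemble member commonly takes the following weighted, regularized form (standard RML notation; the paper's exact weighting, regularization, and parameterization may differ):

```latex
O_j(m) = \tfrac{1}{2}\big(g(m)-d_{\mathrm{uc},j}\big)^{\top} C_D^{-1}\big(g(m)-d_{\mathrm{uc},j}\big)
       + \tfrac{1}{2}\big(m-m_{\mathrm{uc},j}\big)^{\top} C_M^{-1}\big(m-m_{\mathrm{uc},j}\big),
\quad d_{\mathrm{uc},j}=d_{\mathrm{obs}}+\varepsilon_j,\ \varepsilon_j\sim\mathcal{N}(0,C_D),\ m_{\mathrm{uc},j}\sim\mathcal{N}(m_{\mathrm{pr}},C_M).
```

Here g(m) is the simulated data, C_D and C_M are the data-error and prior model covariances, and minimizing O_j over an ensemble of perturbed data/prior pairs yields approximate posterior samples; the adjoint method supplies the gradient of O_j at the cost of one forward and one backward simulation, which is what makes very large parameter counts tractable.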
We demonstrate the practical utilization of the adjoint-based model-maturation method on a large time-lapse reservoir-pressure data set using an ensemble of full-field models from a reservoir case study. The model-maturation technique effectively identifies permeability modification zones that are consistent with alternative geological interpretations and proposes updates to the static model. With these updates, the model not only agrees better with the time-lapse reservoir-pressure data but also better honors the tubing-head pressure and production logging data. We also provide computational performance indicators that demonstrate the accelerated convergence characteristics of the new iterative linear solver for the adjoint equations.
Reservoir simulation optimization under uncertainty typically invokes a sense of anxiety, mainly because of the lack of a systematic criterion for choosing between development scenarios under uncertainty, the difficulty of placing wells and optimizing well controls in the face of a large ensemble of static realisations, and, most of all, the large number of simulation runs that potentially need to be conducted. This is exacerbated when the models are large and require many hours to run. Moreover, even with the prevalence of distributed and parallel computing clusters, the computing resources available to each reservoir engineer within a company remain limited. Time and budget constraints further complicate the process. Furthermore, the requirement of an inordinately large number of simulation runs raises the dilemma of which optimizer to choose to help speed up the process.
This paper starts with a brief background on historical attempts to tackle this problem by delving into the literature. It then discusses a rigorous criterion for optimization under uncertainty, viz. stochastic dominance, hitherto little known or used in the industry. A commonly used greenfield case study, an ensemble of uncertainty realisations, is then introduced, on which the rest of the paper is based. The ensemble is a pre-generated set of fifty realisations designed specifically for this problem. Two challenging areas are then addressed, viz. well placement optimisation under uncertainty and well controls optimisation under uncertainty.
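As an illustration of the stochastic dominance criterion (not code from the paper), the sketch below tests first-order stochastic dominance between two development scenarios evaluated over an ensemble; the NPV samples and scenario names are synthetic.

```python
# First-order stochastic dominance (FSD) test between two scenarios,
# each evaluated over a 50-realisation ensemble (synthetic NPV samples).
import numpy as np

rng = np.random.default_rng(7)
npv_a = rng.normal(100.0, 15.0, 50)    # scenario A NPV across realisations
npv_b = rng.normal(90.0, 25.0, 50)     # scenario B NPV across realisations

def dominates_fsd(a, b):
    """True if a first-order stochastically dominates b, i.e. a's empirical
    CDF lies at or below b's everywhere (higher outcomes at least as likely)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.all(cdf_a <= cdf_b)

print("A dominates B:", dominates_fsd(npv_a, npv_b))
print("B dominates A:", dominates_fsd(npv_b, npv_a))
```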
Finally, a comparison between the simplex, proxy response surface, differential evolution, and particle swarm optimization methods is made for the optimization of well controls. The paper thus aims to give a complete picture of how to approach reservoir simulation optimization under uncertainty with a drastically reduced number of computational runs. Practical and sensible formulation of the optimization problem can go a long way toward making the process more understandable and easier to implement.
Data-driven subsurface modeling technology has, over the past few years, been proven to yield technical and commercial success in several oil fields worldwide. A data-driven model was constructed for the first time for an oil field onshore Abu Dhabi and used to evaluate a reservoir with substantial reserves and a comprehensive development plan, with the purpose of predicting production rates, dynamic reservoir pressure, and water saturation; improving reservoir understanding; supporting field development optimization; and identifying optimum infill well locations. The objective is to provide the asset with a decision-support tool for better field development planning and management.
The subject reservoir is a low-permeability carbonate reservoir characterized by lateral and vertical variations in its rocks and fluid properties. More than 8 years of Phase-I development and production/injection data, together with an extensive amount of well test and log data (SCAL, PVT, MDT) from more than 37 wells, were used to construct the data-driven model for this asset.
This new modeling technology (TDM) integrates reservoir engineering analytical techniques with Artificial Intelligence, Machine Learning, and Data Mining to formulate an empirical, spatiotemporally calibrated full-field model. In this work, it is combined with other conventional reservoir modeling and management tools such as streamline modeling, isobaric maps, and flooding conformance.
Several analyses were performed using the full-field data-driven model, complementing the existing conventional numerical model. The accomplishments of the data-driven reservoir model for this project included, but were not limited to, comprehensive history matching (including blind validation) followed by forecasts of oil rate, GOR, water cut, reservoir pressure, and water saturation; injection optimization; and choke size optimization. The results generated by the data-driven model proved quite eye-opening for asset management, as the model was able to identify potential areas for improving field efficiency and reducing cost.
When combined with numerical techniques, the calibrated data-driven model helps obtain reliable short-term forecasts in less time and supports quick decisions on day-to-day operational optimization. The use of facts (all field measurements) instead of human biases, preconceived notions, and gross approximations distinguishes data-driven modeling from other existing modeling technologies. Its innovative combination of Artificial Intelligence and Machine Learning (technologies that are transforming all industries in the 21st century) with reservoir engineering, reservoir modeling, and reservoir management clearly demonstrates the potential that these pattern recognition technologies offer the upstream oil and gas industry in its realistic digital transformation.
A statistical screening methodology is presented to address uncertainty related to the main geological assumptions in green-field modeling. The goals are to identify the entire range of uncertainty on production, to learn which uncertain geological inputs are most influential, and to understand the relationships between geological scenarios and classes of dynamic behavior.
The paper presents the methodology and an example application to a green-field case study. The method is applied to an ensemble of reservoir models created by combining geological parameters across their ranges of uncertainty. The ensemble of models is then simulated with a selected development strategy, and the dynamic responses are grouped into classes of outcome through clustering algorithms, as sketched below. Ensemble responses are visualized on a multidimensional stacking plot as a function of the geological inputs, and the most influential parameters are identified by sorting the axes of the plot. Geological scenarios are then classified by dynamic response through classification-tree algorithms. Finally, a representative set of models is selected from the geological scenarios.
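The sketch below illustrates the response-clustering step with k-means on synthetic recovery profiles; the feature definitions only echo the clustering features named in the results, and everything else (profiles, cluster count) is assumed.

```python
# Grouping ensemble dynamic responses into outcome classes with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_models, n_steps = 200, 120
t = np.linspace(0, 1, n_steps)
# Synthetic cumulative-recovery profiles standing in for simulated responses.
profiles = np.array([r * (1 - np.exp(-k * t))
                     for r, k in zip(rng.uniform(0.2, 0.8, n_models),
                                     rng.uniform(2, 10, n_models))])

# Illustrative features: final recovery, plateau strength (early slope),
# and a breakthrough-like time (here: time to reach 50% of final recovery).
final = profiles[:, -1]
slope = profiles[:, 10] / t[10]
t50 = t[np.argmax(profiles >= 0.5 * final[:, None], axis=1)]
features = np.column_stack([final, slope, t50])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
for c in range(4):
    print(f"class {c}: {np.sum(labels == c)} models, "
          f"mean recovery {final[labels == c].mean():.2f}")
```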
The example application shows a final oil recovery uncertainty range of 4:1, which is reasonable for a green field with scarce data. Such a wide range of uncertainty could hardly be found by common risk assessments based on fixed geological assumptions, which often tend to underestimate forecast uncertainty. Ensemble outcomes are grouped into four classes by oil recovery, plateau strength, produced water, and breakthrough time. The adoption of these clustering features gives a broad understanding of the reservoir's dynamic response. Among the examined structural and sedimentological parameters, the most influential geological inputs in the example application prove to be fault orientation and channel fraction. This screening result highlights the main drivers of geological uncertainty and is useful for the subsequent scenario classification phase. Classification of the geological scenarios leads to five classes of geological parameter sets, each linked to a main class of dynamic behavior, and finally to five representative models. These five models constitute an effective sampling of the geological uncertainty space that also captures the different types of dynamic response.
This paper contributes to widening engineering experience with the use of machine learning for risk analysis by presenting an application to a real field case study that explores the relationship between geological uncertainty and reservoir dynamic behavior.
In this work we discuss the successful application of our previously developed automated scenario-reduction approach to life-cycle optimization of a real field case. The inherent uncertainty in the description of reservoir properties motivates the use of an ensemble of model scenarios to achieve an optimized robust reservoir development strategy. To accurately span the range of uncertainties, it is imperative to build a relatively large ensemble of model scenarios, and the size of the ensemble is directly proportional to the computational effort required in robust optimization. For high-dimensional, complex field-case models, this implies that a large ensemble of model scenarios, although it accurately captures the inherent uncertainties, would be computationally infeasible to use for robust optimization. One way to circumvent this problem is to work with a reduced subset of model scenarios. Methods based on heuristics and ad hoc rules exist to select this reduced subset; however, in most cases the optimal number of model realizations must be known upfront. An excessively small number of realizations may result in a subset that does not capture the span of uncertainties present, leading to sub-optimal optimization results. This raises the question of how to select a subset with an optimal number of realizations that both captures the uncertainties present and allows for computationally efficient robust optimization.

To answer this question, we have developed an automated framework to select the reduced ensemble, which has been applied to an original ensemble of 300 equiprobable model scenarios of a real field case. The methodology relies on the fact that, ideally, the distance between the cumulative distribution functions (CDFs) of the objective function (OF) of the full and reduced ensembles should be minimal. This allows the method to determine the smallest subset of realizations that both spans the range of uncertainties and provides an OF CDF representative of the full ensemble, based on a statistical metric. In this real field case application, we optimize the injection rates throughout the asset's life cycle with expected cumulative oil production as the OF. The newly developed framework selected a small subset of 17 model scenarios out of the original ensemble, which was used for robust optimization. The optimal injection strategy achieved an average increase of 6% in cumulative oil production with a significant reduction, approximately 90%, in computational effort. Validation of this optimal strategy over the original ensemble led to very similar improvements in cumulative oil production, highlighting the reliability and accuracy of our framework.
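The sketch below illustrates the CDF-matching idea with a simple greedy variant using a Kolmogorov-Smirnov distance; this is our own simplification for illustration, and the paper's framework, metric, and stopping rule may differ.

```python
# Greedy scenario reduction: grow a subset until its objective-function (OF)
# CDF is within a KS-distance tolerance of the full ensemble's CDF.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(11)
of_full = rng.lognormal(mean=3.0, sigma=0.4, size=300)   # OF of 300 scenarios

def reduce_ensemble(of, tol=0.05):
    chosen, remaining = [], list(range(of.size))
    while remaining:
        # Pick the scenario whose addition most reduces the KS distance
        # between the subset CDF and the full-ensemble CDF.
        best = min(remaining,
                   key=lambda i: ks_2samp(of, of[chosen + [i]]).statistic)
        chosen.append(best)
        remaining.remove(best)
        if len(chosen) > 1 and ks_2samp(of, of[chosen]).statistic < tol:
            break
    return chosen

subset = reduce_ensemble(of_full)
print(f"selected {len(subset)} of {of_full.size} scenarios")
```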
Waterflooding is the main technique for recovering hydrocarbons from reservoirs. For a given set of wells (injectors and producers), the choice of injection/production parameters such as pressures, flow rates, and the locations of these boundary conditions has a significant impact on the operating life of the wells. As a large number of combinations of these parameters is possible, one of the critical decisions is to identify an optimal set of parameters. Because using the reservoir simulator directly to evaluate each set is unrealistic given the required number of simulations, a common approach consists of using response surfaces to approximate the reservoir simulator outputs. Several techniques involving proxy models (e.g., kriging, polynomials, and artificial neural networks) have been suggested to replace the reservoir simulations. This paper focuses on the application of artificial neural networks (ANNs), as it is commonly accepted that ANNs are the most efficient due to their universal approximation capacity, i.e., their capacity to reproduce any continuous function. This paper presents a complete workflow to optimize well parameters under waterflooding using an artificial neural network as a proxy model. The proposed methodology allows evaluating different production configurations that maximize the NPV for a given risk. The optimized solutions can be analyzed with the efficient frontier plot and Sharpe ratios. An application of the workflow to the Brugge field is presented to optimize the waterflooding strategy.
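A minimal sketch of the proxy idea follows, with a scikit-learn MLP trained on a hypothetical NPV function standing in for the reservoir simulator; the four-control parameterization and the NPV surface are assumptions, not the Brugge setup.

```python
# ANN proxy replacing the simulator in a well-control screening loop.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)

def simulator_npv(x):
    # Hypothetical stand-in for a full reservoir simulation: NPV as a
    # smooth function of four well controls (e.g., rates and BHPs).
    return 10.0 - np.sum((x - 0.6) ** 2, axis=-1) + 0.5 * np.prod(x, axis=-1)

# Training set: a modest number of "simulation" runs.
X_train = rng.uniform(0.0, 1.0, (200, 4))
y_train = simulator_npv(X_train)

scaler = StandardScaler().fit(X_train)
proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0).fit(scaler.transform(X_train), y_train)

# Cheap proxy evaluation of many candidate control settings.
candidates = rng.uniform(0.0, 1.0, (100_000, 4))
npv_hat = proxy.predict(scaler.transform(candidates))
best = candidates[np.argmax(npv_hat)]
print("best candidate:", best, "proxy NPV:", npv_hat.max(),
      "true NPV:", simulator_npv(best))
```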
Al-Jenaibi, Faisal (ADNOC - Upstream) | Shelepov, Konstantin (Rock Flow Dynamics) | Kuzevanov, Maksim (Rock Flow Dynamics) | Gusarov, Evgenii (Rock Flow Dynamics) | Bogachev, Kirill (Rock Flow Dynamics)
The application of intelligent algorithms that use clever simplifications and methods to solve computationally complex problems is rapidly displacing traditional methods in the petroleum industry. The latest forward-thinking approaches in history matching and uncertainty quantification were applied to a dynamic model with an unknown permeability model. The original perm-poro profile was constructed from synthetic data so that the Assisted History Matching (AHM) approach could be compared to the exact solution. It is assumed that relative permeabilities, endpoints, and all parameters other than absolute permeability cannot be modified when matching oil/water/gas rates, gas-oil ratio, water injection rate, water cut, and bottomhole pressure.
The standard approach to matching a model via permeability variation is to split the grid into several regions. However, this process is largely guesswork, as it is unclear in advance how to select the regions, and the geological prerequisites for such a split usually do not exist. Moreover, the values of permeability and porosity in different grid blocks are correlated, and changing these values independently for each region distorts the correlations or makes the model unphysical.
The proposed alternative involves decomposing the permeability model into spectral amplitudes using the Discrete Cosine Transform (DCT), a form of the Fourier transform. The sum of all DCT basis functions, weighted by their amplitudes, reconstructs the original property distribution. Constructing an uncertainty matrix for a permeability model typically involves subjective judgment and several optimization runs; however, the proposed multi-objective Particle Swarm Optimization (PSO) helps reduce randomness and find an optimal solution, undominated in any objective, with fewer runs. Further tuning of the Flexi-PSO algorithm is performed on its constituent components, such as swarm size, inertia, nostalgia, sociality, damping factor, neighbor count, neighborliness, proportion of explorers, egoism, community, and relative critical distance, to increase the speed of convergence. Additionally, a clustering technique combined with Principal Component Analysis (PCA) is suggested as a means to reduce the dimensionality of the resulting solution space while ensuring the diversity of the selected cluster centers.
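The sketch below illustrates the DCT parameterization on a synthetic field: a log-permeability map is transformed to DCT amplitudes, the spectrum is truncated to a few low-frequency modes, and the field is reconstructed. The field, grid size, and truncation choice are illustrative assumptions.

```python
# Parameterizing a permeability field by truncated 2-D DCT amplitudes.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(9)
nx, ny = 60, 60
# Synthetic log-permeability field with smooth large-scale structure.
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
logk = np.sin(3 * x) + 0.5 * np.cos(5 * y) + 0.1 * rng.standard_normal((ny, nx))

amps = dctn(logk, norm="ortho")          # full spectrum of DCT amplitudes
mask = np.zeros_like(amps)
mask[:8, :8] = 1.0                       # keep only low-frequency modes
logk_hat = idctn(amps * mask, norm="ortho")

err = np.linalg.norm(logk - logk_hat) / np.linalg.norm(logk)
print(f"kept {int(mask.sum())} of {amps.size} amplitudes; rel. error {err:.3f}")
```

Optimizing a few dozen retained amplitudes instead of per-cell values preserves the spatial correlation of the property field, which is the motivation stated above for replacing region-based multipliers.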
The presented set of methods helps achieve a qualitative and quantitative match with respect to any property, reduce the number of uncertainty parameters, and set up a generic and efficient approach to assisted history matching.