Recent studies have indicated that Huff-n-Puff (HNP) gas injection has the potential to recover an additional 30-70% of oil from multi-fractured horizontal wells in shale reservoirs. Nonetheless, this technique is very sensitive to production constraints and is affected by uncertainty stemming from measurement quality (particularly frequency and resolution) and from a lack of constraining data. In this paper, a Bayesian workflow for optimizing the HNP process under uncertainty is presented, using a Duvernay shale well as an example.
Compositional simulations are conducted that incorporate a tuned PVT model and a set of measured cyclic injection/compaction pressure-sensitive permeability data. Markov chain Monte Carlo (MCMC) is used to estimate the posterior distributions of the uncertain model variables by matching the primary production data. The MCMC process is accelerated by an accurate kriging proxy model that is updated with a highly adaptive sampling algorithm. Gaussian processes are then used to optimize the HNP control variables by maximizing the lower confidence bound (μ − σ) of cumulative oil production (after 10 years) across a fixed ensemble of uncertain variables sampled from the posterior distributions.
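To make this step concrete, the following minimal Python sketch performs a lower-confidence-bound search with a Gaussian-process surrogate, under one plausible reading in which μ and σ are the surrogate's predictive mean and standard deviation at a candidate control setting. The objective stub, control variables, and ranges are illustrative assumptions, not the authors' actual simulator or setup; scikit-learn serves as a stand-in surrogate library.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
# Hypothetical bounds for (injection time, production time, soaking time) in months.
LO, HI = np.array([0.5, 1.0, 0.0]), np.array([3.0, 4.0, 1.0])

def ensemble_objective(controls):
    """Toy stand-in for running the compositional HNP model over the posterior
    ensemble and returning the statistic of cumulative oil to be maximized."""
    inj_time, prod_time, soak_time = controls
    return -(inj_time - 1.5) ** 2 - (prod_time - 2.5) ** 2 - soak_time

# Initial design of experiments over the control-variable space.
X = rng.uniform(LO, HI, size=(10, 3))
y = np.array([ensemble_objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):                       # sequential surrogate-based search
    gp.fit(X, y)
    cand = rng.uniform(LO, HI, size=(500, 3))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu - sigma)]  # maximize the lower confidence bound
    X = np.vstack([X, x_next])
    y = np.append(y, ensemble_objective(x_next))

print("best sampled controls (months):", X[np.argmax(y)])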
The uncertain variable space includes several parameters representing reservoir and fracture properties. The posterior distributions for some parameters, such as primary fracture permeability and effective half-length, are narrow, while wider distributions are obtained for others. The results indicate that the impact of the uncertain variables on HNP performance is nonlinear: some variables (such as molecular diffusion) that show little sensitivity during primary production strongly affect gas-injection HNP performance. The optimization under uncertainty confirms that the lower confidence bound of cumulative oil production is maximized by an injection time of around 1.5 months, a production time of around 2.5 months, and very short soaking times. In addition, a maximum injection rate and a flowing bottomhole pressure around the bubble point are required to ensure maximum incremental recovery. Analysis of the objective-function surface highlights other sets of production constraints with competitive results. Finally, the optimal set of production constraints, combined with the ensemble of uncertain variables, yields a median HNP cumulative oil production that is 30% greater than that of primary production.
The application of a Bayesian framework for optimizing HNP performance in a real shale reservoir is introduced for the first time. This work provides practical guidelines for the efficient application of advanced machine learning techniques to optimization under uncertainty, supporting better decision making.
Karami Moghadam, Ali (University of Calgary/Computer Modelling Group Ltd) | Sahaf, Zahra (University of Calgary/Computer Modelling Group Ltd) | Chen, Zhangxing John (University of Calgary) | Costa Sousa, Mario (University of Calgary) | Yang, Chaodong (Computer Modelling Group Ltd) | Nghiem, Long (Computer Modelling Group Ltd)
In reservoir engineering applications, determining the impact of input parameters and parameter combinations on the decision variables of the model is critical. To accurately quantify parameter uncertainties within the reservoir modeling workflow, different types of reservoir, geophysical, and geological parameters need to be considered. However, while various sensitivity analysis methods are available to deal with the uncertainty of reservoir parameters, accounting for geological scenario uncertainty remains a challenge in reservoir modeling. This paper discusses a sensitivity analysis approach that accurately quantifies the impact of geological complexity on reservoir model behavior and production forecasts. The method is applied to a complex reservoir system with geological uncertainties, includes different types of input parameters in the study, and handles parameter interactions.
Using clustering algorithms and multidimensional scaling, the sensitivity analysis method discussed in this paper classifies the model objective functions into a set of discrete classes. This classification is done according to a similarity measure defined over the model ensemble obtained from design of experiments. The analysis is then performed by comparing the parameter frequency distribution in each class with that of the whole population. This sensitivity measure quantifies the impact of the parameters, as well as their pairwise interactions, on the model response.
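The following Python sketch illustrates the classification step under assumptions not specified above: Euclidean distance between synthetic response curves, k-means clustering in the multidimensional-scaling space, and a two-sample Kolmogorov-Smirnov statistic as one possible way to compare class-conditional and population frequency distributions. Parameter names and the response model are illustrative placeholders.

import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist, squareform
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n_models = 200
params = {"fault_scenario": rng.integers(0, 3, n_models),   # categorical input
          "perm_multiplier": rng.uniform(0.1, 10, n_models)}

# Toy responses: a cumulative-production curve for each model in the ensemble.
t = np.linspace(0, 10, 50)
responses = np.array([(p * t) / (1 + 0.3 * f * t)
                      for p, f in zip(params["perm_multiplier"],
                                      params["fault_scenario"])])

# Distance between two models = Euclidean distance between their response curves.
D = squareform(pdist(responses))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Sensitivity: how much does each class-conditional parameter distribution
# differ from the ensemble distribution? A larger KS statistic suggests a
# more influential parameter.
for name, values in params.items():
    stat = max(ks_2samp(values[labels == c], values).statistic for c in range(3))
    print(f"{name}: max class-vs-population KS statistic = {stat:.3f}")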
It is observed that when the class-conditional parameter frequency distributions do not differ significantly from the ensemble parameter frequency distribution, the model response is insensitive to the parameter; conversely, differences between the distributions indicate sensitivity of the model response to the parameter. This provides a flexible measure of sensitivity that works for all distributions and is robust for small sample sizes, offering a computational advantage even with CPU-intensive simulations. In addition, it allows a simple and intuitive interpretation of model sensitivities. The approach is first verified against a well-known complex test function. It is then used to study a large reservoir model while incorporating the reservoir's geological uncertainty in the analysis. In the case studies provided in this paper, the proposed method shows good agreement with results from other sensitivity analysis approaches.
Besides a manageable computational cost, other advantages include the ability to handle all types of inputs, including categorical or scenario-based parameters, and to quantify parameter interactions, thereby overcoming limitations of other sensitivity analysis approaches. At the same time, no assumptions are made on the nature of the parameters, their prior distributions, or the response functions.
Hajizadeh, Yasin (Computer Modelling Group) | Nghiem, Long (Computer Modelling Group) | Mirzabozorg, Arash (University of Calgary) | Yang, Chaodong (Computer Modelling Group) | Li, Heng (Computer Modelling Group) | Costa Sousa, Mario (University of Calgary)
Current frameworks for optimization and assisted history matching lack the ability to control and guide the sampling engine and to incorporate geo-engineering knowledge. Defining interactions between uncertain parameters and handling multiple constraints are also arduous tasks. Despite recent advances in adaptive population-based sampling algorithms and in gradient- and ensemble-based methods, these drawbacks have left engineers with history-matched models that are inconsistent with the physical and geological knowledge of the field. We introduce a novel rule-based framework based on fuzzy reasoning to integrate engineering knowledge into optimization and assisted history matching workflows.
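As a generic illustration of how a fuzzy rule can encode engineering knowledge as a soft constraint on sampled parameters (the actual rule syntax and membership functions of the framework are not given here, so everything below is an assumption), a single rule such as "IF permeability is HIGH THEN porosity is HIGH" might be evaluated in Python as follows, with the resulting satisfaction degree used to penalize inconsistent samples.

import numpy as np

def high(x, lo, hi):
    """Degree of membership in the fuzzy set 'HIGH': linear ramp from lo to hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def rule_satisfaction(perm_md, poro):
    """Rule: IF permeability is HIGH THEN porosity is HIGH.
    Evaluated with the Kleene-Dienes fuzzy implication max(1 - antecedent, consequent).
    Membership ranges are illustrative assumptions."""
    return np.maximum(1.0 - high(perm_md, 50.0, 200.0), high(poro, 0.10, 0.20))

# A sampler can fold this degree into the misfit so that samples violating the
# engineering knowledge (high permeability with low porosity) are penalized.
print(rule_satisfaction(np.array([500.0, 500.0, 10.0]),
                        np.array([0.25, 0.05, 0.05])))  # -> [1.0, 0.0, 1.0]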
Assisted history matching frameworks powered by stochastic population-based sampling algorithms have been a popular choice for real-life reservoir management problems over the past decade. These methods provide an ensemble of history-matched models that can be used to quantify the uncertainty of future field performance. However, population-based algorithms are generally regarded as black boxes, offering little insight into their performance during history matching. In most cases, the misfit value is the only criterion used to monitor the sampling algorithms and assess their quality.
This paper applies three recently developed multidimensional projection schemes as a novel interactive, exploratory visualization tool for gaining insight into the sampling performance of population-based algorithms and for comparing multiple runs in history matching. We use Least Square Projection (LSP), Projection by Clustering (ProjClus), and Principal Component Analysis (PCA) to examine the relationship between exploration of the search space and the uncertainty in predictions of reservoir production. These projection techniques map the high-dimensional search space into a 2D space while attempting to preserve the distance relationships between sampled points. The application of multidimensional projection is illustrated for history matching of the benchmark PUNQ-S3 model using the ant colony, differential evolution, particle swarm, and neighbourhood algorithms.
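A minimal sketch of the projection idea follows, with synthetic samples and a toy misfit; LSP and ProjClus are not available in standard libraries, so PCA and metric MDS from scikit-learn serve as stand-ins for distance-preserving 2D mappings.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
samples = rng.normal(size=(300, 20))            # e.g. 300 sampled models, 20 parameters
misfit = np.sum((samples - 0.5) ** 2, axis=1)   # toy misfit per sampled model

xy_pca = PCA(n_components=2).fit_transform(samples)
xy_mds = MDS(n_components=2, random_state=0).fit_transform(samples)

# Colouring the 2D scatter by misfit shows whether low-misfit models come from
# one tight region (limited exploration) or from several distinct regions.
fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, xy, title in [(axes[0], xy_pca, "PCA"), (axes[1], xy_mds, "metric MDS")]:
    sc = ax.scatter(xy[:, 0], xy[:, 1], c=misfit, s=10)
    ax.set_title(title)
fig.colorbar(sc, ax=list(axes), label="misfit")
plt.show()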
We conclude that multidimensional projection algorithms are valuable diagnostic tools that should accompany assisted history matching workflows in order to evaluate their performance and compare ensembles of history-matched models. Using the projection tools, we show that the misfit value, as an indicator of match quality, is not the only important factor in making reliable predictions. We demonstrate that exploration of the search space is also a critical element of the uncertainty quantification workflow, one that can be monitored with multidimensional projection schemes.