Characterizing fractures is an important task for improving the understanding and utilization of hydraulic fracturing. As an approach to augment and improve on existing methods, time-lapse electric potential measurements can be used to characterize subsurface features. In this study we investigated the characterization of fracture length and fracture density using time-lapse electric potential data. A new borehole ERT (electrical resistivity tomography) method designed specifically for hydraulic fracture characterization is proposed to better capture reservoir dynamics during hydraulic fracturing. This method acquires high-resolution electric potential data by placing electrodes in or near boreholes and monitoring the electric potential distribution near the horizontal fracture zone. The time-lapse electric potential data generated by this tool were simulated and subsequently used to analyze fracture characteristics. Inverse analysis was then performed on the electric potential data to estimate fracture length and fracture density. Finally, we performed a sensitivity analysis to examine the robustness of the estimates in nonideal environments. The results of this work show that time-lapse electric potential data are capable of capturing flow dynamics during the fracturing process. Using the proposed borehole ERT method, we successfully estimated the true fracture length and true fracture density of a constructed fracture model. We were able to determine the best locations in the constructed reservoir to place the electrodes, and through sensitivity analysis we found the maximum noise level in the electric potential data that still allows the proposed method to make robust fracture length and fracture density estimates.
Our proposed method offers a new approach to making robust estimates of fracture length and fracture density. In the past, electric potential data have been used mostly for well logging. This study demonstrates a novel way of using electric potential data in unconventional development and opens possibilities for further applications such as production monitoring.
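As a minimal sketch of the inverse-analysis step, the snippet below fits a fracture length to noisy synthetic potential data by misfit minimization. The forward model here is a purely hypothetical toy (a real borehole ERT forward model requires solving the governing potential equation for the subsurface conductivity field); only the fit-by-misfit structure reflects the approach described above:

```python
import numpy as np

def forward_potential(length, xs):
    # Toy forward model (purely illustrative): the time-lapse potential
    # anomaly at electrode offset x is assumed to grow with fracture
    # length and decay with distance. Real ERT modeling solves the
    # potential equation for the conductivity field.
    return length / (1.0 + (xs / length) ** 2)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 200.0, 25)            # electrode offsets, m
true_length = 80.0
data = forward_potential(true_length, xs)
data += rng.normal(0.0, 0.02 * data.max(), xs.size)  # 2% noise

# Inverse analysis by brute-force misfit minimization over candidate lengths
candidates = np.linspace(20.0, 150.0, 261)
misfits = [np.sum((forward_potential(L, xs) - data) ** 2) for L in candidates]
best = candidates[int(np.argmin(misfits))]
```

Repeating the fit at increasing noise levels is one way to locate the maximum tolerable noise mentioned in the abstract.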
Characterization of key parameters in unconventional assets continues to be challenging due to the geologic heterogeneity of such resources and the uncertainty associated with fracture geometry in stimulated rock. Limited data and the accelerating pace of asset development in plays like the Permian create an increasing need for an efficient and robust assisted-history-matching methodology that produces better insights for asset development planning decisions, e.g., well spacing.
A multi-scenario approach is presented to build an ensemble of history-matched models that account for existing uncertainty in the reservoir description and well completions. We discuss parametrization of key uncertainties in the reservoir rock, fluid properties, fracture geometry, and the effective permeability of stimulated rock. Ensemble-based assisted-history-matching algorithms are used to reduce and characterize the uncertainties in the model parameters by honoring various types of data, including field dynamic data and measurements. We discuss the implementation of automated schemes for weighting the various data types in the ensemble-based history-matching algorithms. These schemes define the history-matching objective functions from data including bottomhole pressures and the oil, water, and gas production rates. The computational results show that our adaptive scheme obtains better history-match solutions.
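A weighted history-matching objective of the kind discussed above can be sketched as follows; the inverse-variance weighting shown is one simple, common choice, not the paper's adaptive scheme (which is not specified here):

```python
import numpy as np

def hm_objective(sim, obs, weights):
    """Weighted history-matching mismatch over several data types.

    sim, obs: dicts mapping data type (e.g. 'bhp', 'oil_rate') to arrays;
    weights: dict of scalar weights per data type."""
    total = 0.0
    for key in obs:
        r = sim[key] - obs[key]
        total += weights[key] * float(r @ r)
    return total

# Hypothetical observed and simulated responses for two data types
obs = {"bhp": np.array([3000., 2950., 2900.]),
       "oil_rate": np.array([500., 480., 470.])}
sim = {"bhp": np.array([3050., 2960., 2880.]),
       "oil_rate": np.array([510., 470., 475.])}
# Inverse-variance weights put the data types on a comparable scale
weights = {k: 1.0 / np.var(v) for k, v in obs.items()}
score = hm_objective(sim, obs, weights)
```

Without such weighting, the data type with the largest numerical magnitude (here, pressure) would dominate the objective.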
The presented multi-scenario approach, coupled with the ability to efficiently run a high number of scenarios, enables better understanding of reservoir and fracture properties and shortens the learning curve for new development in unconventional assets. The presented case study illustrates a comprehensive analysis, using thousands of simulation cases, to obtain multiple history-match solutions. Given the non-uniqueness of the history-matched models presented in the scenarios, this workflow improves forecasting ability and enables robust business decision-making under uncertainty.
You, Junyu (Petroleum Recovery Research Center) | Ampomah, William (Petroleum Recovery Research Center) | Sun, Qian (Petroleum Recovery Research Center) | Kutsienyo, Eusebius Junior (Petroleum Recovery Research Center) | Balch, Robert Scott (Petroleum Recovery Research Center) | Cather, Martha (Petroleum Recovery Research Center)
In this paper, a hybrid scheme that couples an artificial neural network (ANN) with multi-objective optimizers is constructed to co-optimize oil recovery and carbon storage in CO2-EOR processes. The workflow is developed and validated on an injection-pattern-based model. A field-scale case study is presented to demonstrate the practicality of the workflow.
An injection-pattern-based reservoir model employing a compositional numerical simulator is established to develop and test the hybrid-optimization workflow. The scheme aims at optimizing objective functions including the oil recovery factor, CO2 storage, and project net present value (NPV). An ANN expert system is trained and employed as a proxy of the high-fidelity model in the optimization process. The ANN model is trained by a robust optimization procedure that is capable of finding the best architecture. Particle swarm optimization (PSO) is coupled with the developed proxy model to optimize a weight-aggregated objective function, and the multiple objective functions via a Pareto-front approach. A field case study is included in this paper. The reservoir model is well tuned via a rigorous history-matching process using the available field data. The aforementioned workflow is deployed to optimize the tertiary recovery stage of the field development.
In this paper, the validation results of the proxy model are compared against results from the high-fidelity numerical models. Investigations focus on comparing the optimum solution found with the aggregative objective function and the solution repository (Pareto front) generated by the multi-objective optimization process. The optimization results provide significant insight for the decision-making process of CO2-EOR projects when multiple objective functions are considered.
This study develops a novel hybrid-optimization workflow for CO2-EOR projects considering multiple objective functions. The robustness of the development is confirmed via a field case study. Moreover, this work investigates the relationship between the solutions of the aggregative objective function and the Pareto front, which provides constraints and reduces the uncertainties introduced by the multi-objective optimization process.
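The PSO-plus-proxy coupling can be sketched in a few lines; the trained ANN proxy is replaced here by a cheap synthetic response with a known optimum, and the swarm coefficients are illustrative defaults rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def proxy_npv(x):
    # Stand-in for a trained ANN proxy of the simulator: a smooth
    # synthetic response with a known optimum at x = (0.5, 0.5).
    return -np.sum((x - 0.5) ** 2, axis=-1)

# Minimal particle swarm maximizing the proxy over normalized controls
n, dim, iters = 30, 2, 60
pos = rng.random((n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), proxy_npv(pos)
gbest = pbest[np.argmax(pbest_val)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # inertia 0.7, cognitive and social coefficients 1.5 (illustrative)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = proxy_npv(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()
```

Because each proxy evaluation is nearly free, the swarm can afford thousands of evaluations that would be infeasible against the compositional simulator directly.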
We develop a novel ensemble model-maturation method that is based on the Randomized Maximum Likelihood (RML) technique and adjoint-based computation of objective-function gradients. The new approach is especially relevant for rich data sets with time-lapse information content. The inversion method that solves the model-maturation problem takes advantage of the adjoint-based computation of objective-function gradients for a very large number of model parameters at the cost of one forward and one backward (adjoint) simulation. The inversion algorithm calibrates model parameters to arbitrary types of production data, including time-lapse reservoir-pressure traces, by use of a weighted and regularized objective function. We have also developed a new and effective multigrid preconditioning protocol for accelerated iterative linear solutions of the adjoint-simulation step for models with multiple levels of local grid refinement. The protocol is based on a geometric multigrid (GMG) preconditioning technique. Within the model-maturation workflow, a machine-learning technique is applied to establish links between the mesh-based inversion results (e.g., permeability-multiplier fields) and geologic modeling parameters inside a static model (e.g., object dimensions, etc.). Our workflow integrates the learnings from inversion back into the static model and thereby ensures the geologic consistency of the static model while improving the quality of the ensuing dynamic model in terms of honoring production and time-lapse data and reducing forecast uncertainty. This use of machine learning to post-process the model-maturation outcome effectively converts the conventional continuous-parameter history-matching result into a discrete tomographic inversion result constrained to geological rules encoded in training images.
We demonstrate the practical utilization of the adjoint-based model-maturation method on a large time-lapse reservoir-pressure data set using an ensemble of full-field models from a reservoir case study. The model-maturation technique effectively identifies permeability-modification zones that are consistent with alternative geological interpretations and proposes updates to the static model. With these updates, the model not only agrees better with the time-lapse reservoir-pressure data but also better honors the tubing-head pressure as well as production-logging data. We also provide computational performance indicators that demonstrate the accelerated convergence characteristics of the new iterative linear solver for adjoint equations.
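The RML idea behind the maturation method can be sketched on a toy linear forward model, where each perturbed-data, perturbed-prior minimization has a closed form (for a nonlinear reservoir simulator, the adjoint supplies the gradient instead); all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear forward model d = G m (stand-in for the reservoir simulator)
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
m_true = np.array([2.0, -1.0])
sd, sm = 0.1, 1.0                     # data / prior standard deviations
d_obs = G @ m_true + rng.normal(0.0, sd, 3)
m_prior = np.zeros(2)

def rml_sample():
    # One RML sample: perturb both the data and the prior mean, then
    # minimize ||G m - d*||^2/sd^2 + ||m - m*||^2/sm^2. Here the
    # minimizer follows from the normal equations; in the paper an
    # adjoint code provides the gradient for this weighted, regularized
    # objective at the cost of one forward and one backward simulation.
    d_star = d_obs + rng.normal(0.0, sd, 3)
    m_star = m_prior + rng.normal(0.0, sm, 2)
    A = G.T @ G / sd**2 + np.eye(2) / sm**2
    b = G.T @ d_star / sd**2 + m_star / sm**2
    return np.linalg.solve(A, b)

samples = np.array([rml_sample() for _ in range(500)])
```

The spread of the samples approximates posterior uncertainty, which is what makes RML attractive for ensemble maturation rather than a single best-fit match.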
Reservoir simulation optimization under uncertainty typically invokes a sense of anxiety, mainly because of the lack of a systematic criterion for choosing between different development scenarios under uncertainty, the difficulty of well placement and well-control optimization in the face of a large uncertainty ensemble of static realisations, and, most of all, the large number of simulation runs that may need to be conducted. This is exacerbated when the models are large and require many hours to run. Moreover, even with the prevalence of distributed and parallel computing clusters, the computing resources available are still limited when spread over the number of reservoir engineers within a company. Time and budget constraints also complicate this process. Furthermore, with the requirement of an inordinately large number of simulation runs comes the dilemma of which optimizer to choose to help speed up the process.
This paper starts with a brief background on historical attempts at tackling this problem by delving into the literature. It then discusses a rigorous criterion for optimization under uncertainty, viz. stochastic dominance, hitherto little known or used in the industry. A commonly used greenfield case study, consisting of an ensemble of uncertainty realisations, is then introduced, on which the rest of the paper is based. The ensemble is a pre-generated set of fifty realisations designed specifically for this problem. Two challenging areas are then addressed, viz. well placement optimization under uncertainty and well controls optimization under uncertainty.
Finally, a comparison between the simplex, proxy response surface, differential evolution, and particle swarm optimization methods is made for the optimization of well controls. The paper thus aims to give a complete picture of how to go about reservoir simulation optimization under uncertainty with a drastically reduced number of simulation runs. Practical and sensible formulation of the optimization problem can go a long way toward making this process more understandable and easier to implement.
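The stochastic-dominance criterion mentioned above can be made concrete for first-order dominance: strategy A dominates B if A's empirical CDF of outcomes (e.g. NPV over the uncertainty ensemble) lies at or below B's everywhere. A small sketch, with hypothetical per-realisation NPVs:

```python
import numpy as np

def dominates_fosd(a, b):
    """First-order stochastic dominance: `a` dominates `b` if, at every
    threshold, a's probability of falling at or below that value never
    exceeds b's (F_a(x) <= F_b(x) everywhere, with strict inequality
    somewhere). Inputs are per-realisation outcomes such as NPV."""
    grid = np.union1d(a, b)
    fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return bool(np.all(fa <= fb)) and bool(np.any(fa < fb))

# Hypothetical NPV outcomes of two strategies over five realisations
npv_a = np.array([110., 120., 130., 140., 150.])
npv_b = np.array([100., 115., 125., 135., 145.])
```

Here `dominates_fosd(npv_a, npv_b)` is true: every percentile of strategy A's NPV distribution is at least as good as B's, giving a systematic basis for preferring A regardless of risk attitude.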
Improved reservoir knowledge is key to extracting additional value from existing oil and gas assets. However, given the uncertainty in the subsurface, it is always a question whether our current development strategy is the most robust choice, or whether there are alternatives that can further increase the value of our field. This paper presents a novel solution that enables the asset team to answer these questions in a new way. Furthermore, the solution helps teams quickly identify and screen new opportunities that ultimately increase both subsurface understanding and the value of the field. The solution combines a quasi-Newton gradient-based numerical optimization scheme with a stochastic simplex approximate gradient (StoSAG) algorithm. Because the algorithm is non-intrusive with respect to the fluid-flow simulator, we can apply the solution directly to any flow optimization problem without access to the simulator source code. The solution is implemented using a microservice architecture that allows for efficient scaling and deployment on either cloud-based or internal systems. We demonstrate the proposed solution on a field containing 11 oil producers and 7 water injectors by optimizing the water injection and oil production rates. The machine-learning algorithm allows us to quickly explore different drainage strategies, given the current understanding and associated uncertainties of the reservoir.
Specifically, the software solution suggests that 6 of the 18 pre-defined well targets are high risk and/or of little value. Running a second development scenario in which we do not drill these six wells reduces the investment cost of this field by 163 MUSD and increases the expected net present value per well of the field by 48 percent. Compared with the reactive-control drainage strategy, we increase the expected net present value of the field by 9.0% while simultaneously lowering the associated risk.
Ismail, Ahmed (ADNOC ONSHORE) | Hazem, Yasser (ADNOC ONSHORE) | Al Obaidi, Mazna Naji (ADNOC ONSHORE) | Bogachev, Kirill (Rock Flow Dynamics) | Gusarov, Evgenii (Rock Flow Dynamics) | Shelepov, Konstantin (Rock Flow Dynamics) | Kuzevanov, Maksim (Rock Flow Dynamics)
This paper proposes an assisted history matching (AHM) and uncertainty analysis workflow that was applied to facilitate the history matching of a giant carbonate reservoir in the Middle East. The objective was to identify and quantify reservoir uncertainties and assess their impact on field performance. An additional objective was to create a sufficient number of realizations so that combinations of all uncertainties could capture their combined effect.
A real field case is represented by a consistent workflow that iteratively updates the ranges and number of reservoir uncertainties constrained by the actual measurements. The process has the following steps: definition of global uncertainties, sensitivity analysis, exclusion of less influential parameters, experimental design, revision of the uncertainty matrix, and execution of optimization algorithms. The approach was first implemented at a global level and then continued at a regional level. The primary objective function consists of oil and water production mismatches; the plan is to upgrade the objective function to include more parameters for further enhancement of the history match.
Initially, the workflow was based on five uncertainty parameters. Ten sensitivity analysis cases were performed, and tornado chart analysis suggested excluding some parameters with little impact on the match quality and hence the objective function. Next, an experimental design using Latin Hypercube sampling was performed, which reveals the combined effect of the uncertainty parameters. Over several experimental design iterations, the uncertainty parameter matrix was revised and the total number of uncertainty parameters was increased from 5 to 17. A total of 260 experimental cases were completed; however, no good history-match case was obtained. Therefore, a transition from the global level to a regional level was performed. The most sensitive uncertainties identified at the global level were absolute permeability, vertical permeability anisotropy, pore volume, and fault transmissibility. At the regional level, additional permeability multipliers for well regions were added to the uncertainty matrix. After that, good-quality matched cases were obtained.
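The Latin Hypercube design used in the experimental-design step can be sketched as follows; the parameter names and ranges below are illustrative placeholders, not the study's actual uncertainty matrix:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin Hypercube sample: each parameter range is split into
    n_samples equal strata and each stratum is sampled exactly once,
    so the combined effect of the uncertainty parameters is probed
    with far fewer runs than a full factorial design."""
    rng = rng if rng is not None else np.random.default_rng()
    n_dim = len(bounds)
    # one point per stratum per dimension, then shuffle each column
    u = (rng.random((n_samples, n_dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dim):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Illustrative ranges for three uncertainty parameters
bounds = [(0.1, 10.0),   # horizontal permeability multiplier
          (0.01, 1.0),   # kv/kh anisotropy
          (0.8, 1.2)]    # pore-volume multiplier
design = latin_hypercube(260, bounds, np.random.default_rng(4))
```

With 260 cases, each parameter's range is covered by 260 non-repeating strata, which is what lets the tornado and combined-effect analyses see the whole range of each uncertainty.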
Field scale and complexity were the main drivers for implementing the AHM workflow. In a giant carbonate reservoir with a long history and complex geology, a classical history matching method yielding a unique solution cannot ensure accurate model predictability. The key advantages of this approach were the facilitation of the HM process and the reduction of the total calculation time.
In this work we discuss the successful application of our previously developed automated scenario-reduction approach to the life-cycle optimization of a real field case. The inherent uncertainty in the description of reservoir properties motivates the use of an ensemble of model scenarios to achieve an optimized, robust reservoir development strategy. In order to accurately span the range of uncertainties, it is imperative to build a relatively large ensemble of model scenarios. The size of the ensemble is directly proportional to the computational effort required for robust optimization. For high-dimensional, complex field-case models, this implies that a large ensemble of model scenarios, even one that accurately captures the inherent uncertainties, would be computationally infeasible to use for robust optimization. One way to circumvent this problem is to work with a reduced subset of model scenarios. Methods based on heuristics and ad-hoc rules exist to select this reduced subset; however, in most cases the optimal number of model realizations must be known upfront. An excessively small number of realizations may result in a subset that does not capture the span of uncertainties present, leading to sub-optimal optimization results. This raises the question of how to effectively select a subset that contains an optimal number of realizations, one that both captures the uncertainties present and allows for computationally efficient robust optimization. To answer this question, we have developed an automated framework to select the reduced ensemble, which has been applied to an original ensemble of 300 equiprobable model scenarios of a real field case. The methodology relies on the fact that, ideally, the distance between the cumulative distribution functions (CDF) of the objective function (OF) of the full and reduced ensembles should be minimal.
This allows the method to determine the smallest subset of realizations that both spans the range of uncertainties and provides an OF CDF representative of the full ensemble based on a statistical metric. In this real field application we optimize the injection rates throughout the asset's life-cycle with expected cumulative oil production as the OF. The newly developed framework selected a small subset of 17 model scenarios out of the original ensemble, which was used for robust optimization. The optimal injection strategy achieved an average increase of 6% in cumulative oil production with a significant reduction, approximately 90%, in computational effort. Validation of this optimal strategy over the original ensemble led to very similar improvements in cumulative oil production, highlighting the reliability and accuracy of our framework.
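One way to realize the CDF-matching selection described above is a greedy search using the Kolmogorov-Smirnov distance; this is a sketch of the idea, not the paper's exact algorithm or statistical metric:

```python
import numpy as np

def ks_distance(a, b):
    # Kolmogorov-Smirnov distance between the empirical CDFs of a and b
    grid = np.union1d(a, b)
    fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return float(np.max(np.abs(fa - fb)))

def select_subset(of_full, max_dist=0.1):
    """Greedily grow the subset with the realization that most reduces
    the CDF distance to the full ensemble, stopping once the distance
    falls below the tolerance."""
    chosen = []
    while True:
        best_i, best_d = None, np.inf
        for i in range(of_full.size):
            if i in chosen:
                continue
            d = ks_distance(of_full[chosen + [i]], of_full)
            if d < best_d:
                best_i, best_d = i, d
        chosen.append(best_i)
        if best_d <= max_dist or len(chosen) == of_full.size:
            return chosen, best_d

# Synthetic OF values (e.g. cumulative oil) for a 300-scenario ensemble
rng = np.random.default_rng(5)
of_full = rng.normal(100.0, 15.0, 300)
subset, dist = select_subset(of_full, max_dist=0.1)
```

The tolerance controls the trade-off the abstract describes: a tighter tolerance gives a more representative subset at the cost of more scenarios in every robust-optimization iteration.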
Waterflooding is the main technique for recovering hydrocarbons in reservoirs. For a given set of wells (injectors and producers), the choice of injection/production parameters such as pressures, flow rates, and the locations of these boundary conditions has a significant impact on the operating life of the wells. As a large number of combinations of these parameters is possible, one of the critical decisions to make is to identify an optimal set of these parameters. Because evaluating the impact of these sets with the reservoir simulator directly is unrealistic given the required number of simulations, a common approach consists of using response surfaces to approximate the reservoir simulator outputs. Several techniques involving proxy models (e.g., kriging, polynomials, and artificial neural networks) have been suggested to replace the reservoir simulations. This paper focuses on the application of artificial neural networks (ANNs), as it is commonly accepted that ANNs are the most efficient owing to their universal approximation capacity, i.e., the capacity to reproduce any continuous function. This paper presents a complete workflow to optimize well parameters under waterflooding using an artificial neural network as a proxy model. The proposed methodology allows evaluating different production configurations that maximize the NPV for a given level of risk. The optimized solutions can be analyzed with the efficient-frontier plot and Sharpe ratios. An application of the workflow to the Brugge field is presented in order to optimize the waterflooding strategy.
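The Sharpe-ratio screening of optimized solutions can be sketched as follows; the baseline value and the NPV samples are hypothetical, and the paper's exact definition may differ:

```python
import numpy as np

def sharpe_ratio(npv_samples, baseline=0.0):
    """Sharpe-style risk-adjusted score for a candidate well-control
    strategy: mean excess NPV over the uncertainty realisations divided
    by its standard deviation. `baseline` plays the role of the
    risk-free return (an assumption for this sketch)."""
    npv = np.asarray(npv_samples, dtype=float)
    return (npv.mean() - baseline) / npv.std(ddof=1)

# Hypothetical NPVs (MUSD) of two strategies across realisations:
# both average 128 MUSD, but strategy A is far less risky.
strategy_a = [120., 135., 110., 150., 125.]
strategy_b = [140., 90., 180., 70., 160.]
```

Plotting mean NPV against its standard deviation for all optimized configurations gives the efficient frontier; the Sharpe ratio picks out the frontier point with the best return per unit of risk.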
Al-Jenaibi, Faisal (ADNOC - Upstream) | Shelepov, Konstantin (Rock Flow Dynamics) | Kuzevanov, Maksim (Rock Flow Dynamics) | Gusarov, Evgenii (Rock Flow Dynamics) | Bogachev, Kirill (Rock Flow Dynamics)
The application of intelligent algorithms that use clever simplifications and methods to solve computationally complex problems is rapidly displacing traditional methods in the petroleum industry. The latest forward-thinking approaches in history matching and uncertainty quantification were applied to a dynamic model with an unknown permeability model. The original perm-poro profile was constructed from synthetic data in order to compare the Assisted History Matching (AHM) approach to the exact solution. It is assumed that relative permeabilities, endpoints, and all parameters other than absolute permeability cannot be modified to match oil/water/gas rates, gas-oil ratio, water injection rate, watercut, and bottomhole pressure.
The standard approach to matching a model via permeability variation is to split the grid into several regions. However, this process is essentially guesswork, as it is unclear in advance how to select the regions, and geological prerequisites for such splitting usually do not exist. Moreover, the values of permeability and porosity in different grid blocks are correlated; changing these values independently for each region distorts the correlations or makes the model unphysical.
The proposed alternative involves the decomposition of the permeability model into spectral amplitudes using the Discrete Cosine Transformation (DCT), a form of Fourier transform. Inverting the DCT with all amplitudes retained reproduces the original property distribution exactly. Parameterizing an uncertain permeability model typically involves subjective judgment and several optimization runs to construct the uncertainty matrix. The proposed multi-objective Particle Swarm Optimization (PSO), however, helps to reduce randomness and find optimal solutions, not dominated in any objective, with fewer runs. Further tuning of the Flexi-PSO algorithm is performed on its constituent components, such as swarm size, inertia, nostalgia, sociality, damping factor, neighbor count, neighborliness, proportion of explorers, egoism, community, and relative critical distance, to increase the speed of convergence. Additionally, a clustering technique such as Principal Component Analysis (PCA) is suggested as a means to reduce the dimensionality of the space of resulting solutions while ensuring the diversity of the selected cluster centers.
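The DCT parameterization can be sketched directly with an explicit orthonormal DCT-II basis; the synthetic field, grid size, and number of retained modes below are illustrative:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine modes)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

rng = np.random.default_rng(6)
n = 32
C = dct_matrix(n)

# Smooth synthetic log-permeability field on an n x n grid, plus noise
x = np.linspace(0.0, 1.0, n)
logk = np.outer(np.sin(2 * np.pi * x), np.cos(np.pi * x))
logk += 0.05 * rng.standard_normal((n, n))

amps = C @ logk @ C.T        # forward 2-D DCT: field -> amplitudes
keep = 8                     # retain only the low-frequency modes
amps_t = np.zeros_like(amps)
amps_t[:keep, :keep] = amps[:keep, :keep]
logk_t = C.T @ amps_t @ C    # inverse DCT from the truncated spectrum
```

Retaining all amplitudes reproduces the field exactly; truncating to the low-frequency block cuts the history-matching parameters from n*n to keep*keep (here 1024 to 64) while preserving the large-scale structure and the cell-to-cell correlations that region-based multipliers destroy.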
The presented set of methods helps to achieve a qualitative and quantitative match with respect to any property, reduce the number of uncertainty parameters, and set up a generic and efficient approach to assisted history matching.