Behm, Ed (Occidental) | Al Asimi, Mohammed (Occidental of Oman) | Al Maskari, Sara (Occidental of Oman) | Juna, Wladimir (Occidental) | Klie, Hector (DeepCast LLC) | Le, Duc (DeepCast LLC) | Lutidze, Giorgiy (Occidental) | Rastegar, Reza (Occidental) | Reynolds, Albert (University of Tulsa) | Tathed, Vinit (Occidental of Oman) | Younis, Rami (University of Tulsa) | Zhang, Yuchen (Occidental)
Occidental Mukhaizna completed a steamflood field optimization demonstration project involving about 100 Mukhaizna wells from mid-December 2018 to mid-March 2019. The field demonstration involves a data analytics process that recommends the best steam injection allocation among wells in order to improve overall steamflood performance. The process uses a low-fidelity physics-based proxy model and cloud-based parallel processing. A field optimization engineer history matches the proxy model and anchors it to current well and field operating constraints. The engineer completes hundreds of forward runs as part of an optimization algorithm to identify the scenarios most likely to increase value (oil production per steam injected) over the short term in the field, while honoring all producer and injector operating ranges. The reservoir management team vets the rate change ideas generated and provides its recommendations for changes so that the most promising and most practical overall scenario is implemented. The process is refreshed monthly so that field performance results are incorporated immediately and the optimization process is kept evergreen. The field results so far have been encouraging, yielding an increase in oil production that has exceeded expectations. This paper describes the data analytics field optimization process and workflow, presents baseline performance versus field demonstration results, and shares lessons learned.
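The allocation idea described above can be illustrated with a toy sketch: a hypothetical diminishing-returns proxy for each well's oil response to steam, and a greedy reallocation of a fixed steam budget toward the wells with the best marginal oil-per-steam, honoring per-well operating ranges. The coefficients, bounds, and the greedy scheme itself are illustrative assumptions, not the project's actual proxy model or optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy: diminishing-returns oil response per well,
# oil_i = a_i * sqrt(steam_i). The coefficients stand in for a
# history-matched physics proxy; they are illustrative, not field data.
n_wells = 10
a = rng.uniform(0.5, 2.0, n_wells)   # well productivity coefficients
lo = np.full(n_wells, 50.0)          # min steam per well
hi = np.full(n_wells, 400.0)         # max steam per well
budget = 2000.0                      # total steam available

def oil(steam):
    return a * np.sqrt(steam)

# Greedy allocation: start every well at its minimum, then hand out
# steam in small increments to the well with the best marginal oil
# response, honoring per-well operating ranges.
steam = lo.copy()
step = 10.0
while steam.sum() + step <= budget:
    marginal = np.where(steam + step <= hi,
                        oil(steam + step) - oil(steam), -np.inf)
    best = int(np.argmax(marginal))
    if marginal[best] == -np.inf:
        break
    steam[best] += step

print("allocation:", np.round(steam, 1))
print("total oil index:", round(oil(steam).sum(), 2))
```

Because the response is concave, the greedy increments approximate the allocation that equalizes marginal oil-per-steam across wells, which is the qualitative goal of the monthly reallocation cycle.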
Ismail, Ahmed (ADNOC ONSHORE) | Hazem, Yasser (ADNOC ONSHORE) | Al Obaidi, Mazna Naji (ADNOC ONSHORE) | Bogachev, Kirill (Rock Flow Dynamics) | Gusarov, Evgenii (Rock Flow Dynamics) | Shelepov, Konstantin (Rock Flow Dynamics) | Kuzevanov, Maksim (Rock Flow Dynamics)
This paper proposes an assisted history matching (AHM) and uncertainty analysis workflow that was applied to facilitate the history matching of a giant carbonate reservoir in the Middle East. The objective was to identify and quantify reservoir uncertainties and assess their impact on field performance, and to create a sufficient number of realizations so that combinations of all uncertainties capture their combined effect.
A real field case is represented by a consistent workflow that iteratively updates the ranges and number of reservoir uncertainties constrained by the actual measurements. The process has the following steps: definition of global uncertainties, sensitivity analysis, exclusion of less influential parameters, experimental design, revision of the uncertainty matrix, and running of optimization algorithms. The approach was first implemented at a global level and then continued to a regional level. The primary objective function consists of oil and water production mismatches, and the plan is to upgrade the objective function to include more parameters for further enhancement of the model HM.
Initially, the workflow was based on five uncertainty parameters. Ten sensitivity analysis cases were performed, and tornado chart analysis suggested excluding parameters that have less impact on the match quality and hence on the objective function. Next, experimental design using Latin Hypercube sampling was performed, which reveals the combined effect of the uncertainty parameters. Over several experimental design iterations, the uncertainty parameter matrix was revised and the total number of uncertainty parameters was increased from 5 to 17. Finally, a total of 260 experimental cases were completed; however, no good history match case was obtained. Therefore, a transition from the global level to a regional level was performed. The most sensitive uncertainties identified at the global level were absolute permeability, vertical permeability anisotropy, pore volume, and fault transmissibility. At the regional level, additional permeability multipliers for well regions were added to the uncertainty matrix. After that, good-quality matched cases were obtained.
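As an illustration of the experimental-design step, the sketch below draws a Latin Hypercube sample over a small uncertainty matrix. The parameter names and ranges are placeholders, not the study's actual values; only the sampling scheme itself is the point.

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, bounds, rng):
    """Stratified sampling: each parameter's range is split into
    n_samples equal bins, one point is drawn per bin, and the bin
    order is shuffled independently per parameter."""
    n_params = len(bounds)
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Illustrative uncertainty matrix (placeholder ranges, not the
# paper's actual uncertainty parameters).
bounds = [(0.5, 2.0),    # horizontal permeability multiplier
          (0.01, 0.5),   # Kv/Kh anisotropy
          (0.8, 1.2),    # pore-volume multiplier
          (0.0, 1.0)]    # fault transmissibility multiplier
design = latin_hypercube(260, bounds, rng)
print(design.shape)  # (260, 4)
```

Each of the 260 rows would correspond to one experimental simulation case, so every parameter's range is covered evenly even with a limited run budget.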
Field scale and complexity were the main drivers for implementing the AHM workflow. In a giant carbonate reservoir with a long history and complex geology, a classical history matching method with a unique solution cannot assure accurate model predictability. The key advantages of this approach were the facilitation of the HM process and the reduction of total calculation time.
In this paper we present the framework and results of a benchmarking study to validate the performance of conceptually different methods for computer-assisted history matching (AHM) under reservoir uncertainty, using the example of a complex waterflooding process.
Traditionally, reservoir models were manually reconciled with production data, relying mostly on workflows based on engineering judgment and established best practices. The main disadvantage of the manual HM process is that the reservoir simulation disengages from the geological model and fails to quantify reservoir uncertainty. The oil and gas industry has advanced in developing AHM methods that produce geologically consistent reservoir simulation models with robust uncertainty quantification and high predictive value. In our study we build distinctive, computationally intensive AHM workflows covering global multi-objective stochastic optimization and streamline sensitivity-based inversion to perform dynamic real-field model update and history matching.
We use a model of a faulted reservoir under a waterflooding improved-oil-recovery regime. A comprehensive geomodeling workflow incorporates facies, petrophysical properties, and saturation height functions, with uncertainty quantification performed on rock types, porosity, permeability, Kv/Kh, water saturation, and fault transmissibility. During the dynamic model update, the uncertainty workflows were executed both as data assimilation on a sufficiently diverse prior ensemble of geomodel realizations and as a Closed (Big) Loop process, in which geomodel realizations are parameterized and updated simultaneously (on the fly) using global optimization. The misfit function, subject to minimization, incorporates oil and water rates and static wellhead pressure.
The study concludes that different AHM techniques demonstrate different convergence performance. The overall success depends dramatically on the uncertainty quantification of the initial geomodel. Practical reservoir simulation times were achieved through Massive Parallel Processing deployed on an HPC cluster. Most importantly, however, we demonstrate that, by deploying advanced AHM workflows, the updates of geologically and structurally complex models, with tens of millions of grid cells and a large number of matched wells, can render geologically well-conditioned realizations, without the need to introduce non-geological features to achieve a satisfactory match. Thus, the validated practices add tangible business value to the process of integrated reservoir modeling by delivering a robust simulation model with high predictive value for future field development planning.
Chen, Hongquan (Texas A&M University) | Yang, Changdong (Texas A&M University) | Datta-Gupta, Akhil (Texas A&M University) | Zhang, Jianye (Institute of Exploration and Development, Tarim Oilfield Company, PetroChina) | Chen, Liqun (Institute of Exploration and Development, Tarim Oilfield Company, PetroChina) | Liu, Lei (Institute of Exploration and Development, Tarim Oilfield Company, PetroChina) | Chen, Baoxin (Institute of Exploration and Development, Tarim Oilfield Company, PetroChina) | Cui, Xiaofei (Optimization Petroleum Technology, Inc.) | Shi, Fashun (Optimization Petroleum Technology, Inc.) | Bahar, Asnul (Kelkar and Associates, Inc.)
History matching of million-cell reservoir models remains an outstanding challenge for the industry. This paper presents a hierarchical multi-scale approach to history matching high-resolution dual porosity reservoir models using a combination of an evolutionary algorithm and a streamline method. The efficacy of the approach is demonstrated through application to a high-pressure/high-temperature (HPHT) fractured gas reservoir in the Tarim basin, China, with wells at an average depth of 7,500 meters.
Our proposed multi-scale history matching approach consists of two stages: global and local. In the global stage, we calibrate coarse-scale static and dynamic parameters using an evolutionary algorithm. The global calibration uses coarse-scale simulations and applies regional multipliers to match RFT data, well bottom-hole pressures, and field average pressure. In the local stage, we calibrate fracture permeability using streamline-based sensitivities to further match well bottom-hole pressures. The streamlines are derived from the fracture cell fluxes, and the sensitivities are computed analytically for highly compressible flow. The sensitivities are validated by comparison with the perturbation method.
The proposed hierarchical multiscale history matching workflow is applied to a faulted and highly fractured deep gas reservoir in the Tarim basin, China. The excessive well cost arising from the large well depth (7,500 meters) and high pressure (18,000 psi) necessitates optimal field development with a limited number of wells. The fracture properties of the dual porosity model are upscaled from a highly dense discrete fracture network model generated from well data and seismic attributes. The history matching includes RFT data, static pressure data, and flowing bottom-hole pressure data in producing wells. Field average pressure and RFT (static pressure) data were well matched during the global stage using coarse-scale models, while flowing bottom-hole pressure is further matched during the local stage calibration using fine-scale models. Streamline methods have previously been applied mainly to incompressible or slightly compressible flow. However, in this application, the results show that the modified streamline-based sensitivities can also significantly reduce data misfit for highly compressible flow. The history matched models are used to visualize well drainage volumes using streamlines. The well drainage volumes, in conjunction with static reservoir properties, are used to define a ‘depletion capacity map’, which is then used for optimal infill well placement.
The novelty of our approach lies in the application of streamlines derived from dual porosity finite-difference simulation to facilitate history matching and well placement optimization in a tight gas reservoir. The newly developed streamline-based analytical sensitivities are suitable for highly compressible flow. To our knowledge, this is the first time streamlines have been used to facilitate history matching and optimal well placement for gas reservoirs.
QGC's current full-field reservoir model comprises hundreds to thousands of CSG wells. This presents a considerable challenge from a history-matching standpoint compared with a conventional workflow, where well-level adjustments may be made one well at a time. In QGC, a model with an improved well-level match is desired, as the resulting well forecasts will enable well-level decisions, such as the prioritization of well workovers, to be made more confidently. Previously, a field-level history-match was deemed acceptable because the model was only used for field development planning.
The method parameterizes the well-level relative error between production simulated by the model and observed production. The workflow uses these values, known as well-level modifiers, to alter subsurface properties. This has been achieved with a semi-automated workflow that makes the process efficient and repeatable while still enabling engineering judgement to be incorporated in the history-matching process. The feedback loop is also an essential component of the workflow, as it allows the well-level modifiers to be sense-checked against regional geological trends. This further encourages collaboration within a multi-disciplinary team.
These well-level modifiers can also be used to create history-match metrics, which can be spatially mapped to help target specific areas for improvement in history-match quality. The visualization techniques discussed in this paper have not only minimized the mismatch but also ensured that the characteristics of the production history and the geological trends are honoured, assuring the robustness of the history-match and the resulting model predictability. The workflow has significantly reduced the time and effort spent in delivering an improved well forecast when required. The technical development community in QGC has actively nurtured a culture of idea sharing and innovation, which made the development of this workflow possible.
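A minimal sketch of the well-level modifier idea: compute the relative error of simulated versus observed rates per well and convert it into a bounded property multiplier for the next simulation pass. The rates, the 1 + error mapping, and the clipping range are assumptions for illustration, not QGC's actual parameterization.

```python
import numpy as np

# Toy per-well rates; in practice these would come from the full-field
# simulation and the production database.
observed = np.array([120.0, 80.0, 200.0, 55.0])   # observed rate per well
simulated = np.array([100.0, 95.0, 150.0, 70.0])  # simulated rate per well

# Relative error per well: positive means the model under-predicts.
rel_error = (observed - simulated) / observed

# Turn the error into a bounded multiplier that a later simulation
# pass can apply to subsurface properties around each well; the
# result is then sense-checked against regional geological trends
# before being accepted.
modifier = np.clip(1.0 + rel_error, 0.5, 2.0)
print(np.round(modifier, 3))
```

Mapping these modifiers spatially (e.g. bubble maps coloured by modifier value) is what exposes areas of systematic mismatch versus isolated well-level noise.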
Interwell tracers have been shown to provide invaluable information about reservoir dynamics, well connectivity, and fluid flow allocation. However, tracer tests are often applied only sporadically because their immediate returns on investment are not readily apparent to a resource holder. Here, we rigorously demonstrate, using reservoir simulations on benchmark problems, that tracer data can indeed improve reservoir history matching and, more importantly, improve future production. Sensitivity studies and the limitations of tracer data are also presented.
The numerical experiments were divided into two parts. First, production data, with or without tracer data, from reference fields were collected over the first waterflooding period for history matching. Second, the history matched models from the first part were used for production optimization over the subsequent waterflooding periods. The ensemble smoother with multiple data assimilation (ES-MDA) was used for history matching in the first part of the numerical experiments, and a modified robust ensemble-based optimization (EnOpt) was adopted to maximize the net present value (NPV) in the second part.
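A compact sketch of the ES-MDA update used in the first part, applied to a toy linear forward model standing in for the reservoir simulator. The matrix G, the ensemble size, and the inflation schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def es_mda(m_prior, forward, d_obs, C_d, alphas, rng):
    """Ensemble smoother with multiple data assimilation: each pass
    inflates the measurement-error covariance by alpha and applies a
    Kalman-style update to the whole parameter ensemble."""
    m = m_prior.copy()                            # (n_ens, n_params)
    for alpha in alphas:
        d = np.array([forward(mi) for mi in m])   # (n_ens, n_data)
        dm = m - m.mean(axis=0)
        dd = d - d.mean(axis=0)
        C_md = dm.T @ dd / (len(m) - 1)           # cross-covariance
        C_dd = dd.T @ dd / (len(m) - 1)           # data covariance
        K = C_md @ np.linalg.inv(C_dd + alpha * C_d)
        noise = rng.multivariate_normal(np.zeros(len(d_obs)),
                                        alpha * C_d, size=len(m))
        m = m + (d_obs + noise - d) @ K.T
    return m

# Toy linear forward model standing in for the simulator.
G = rng.normal(size=(5, 3))            # 5 observations, 3 parameters
m_true = np.array([1.0, -0.5, 2.0])
d_obs = G @ m_true
C_d = 0.01 * np.eye(5)
m_prior = rng.normal(size=(100, 3))
alphas = [4.0, 4.0, 4.0, 4.0]          # reciprocals must sum to 1
m_post = es_mda(m_prior, lambda m: G @ m, d_obs, C_d, alphas, rng)
print("posterior mean:", np.round(m_post.mean(axis=0), 2))
```

In the paper's experiments the observation vector would additionally carry tracer breakthrough data, which is exactly where the extra constraint on interwell connectivity enters the update.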
The three-dimensional channelized "Egg Model" was chosen as the initial benchmark problem. In the first part of the numerical experiments, using the same hyperparameters, we observed that history matching including tracer data resulted in a better match of the field production rates with smaller standard deviations. In addition, history matching including tracer data produced more distinct geological features in the history matched permeability maps. In the second part of the numerical experiments, we observed that the geological models history matched with tracer data included led to better production optimization with higher NPV. In the specific case of the Egg Model, a 4.3% increase in NPV was observed.
To understand the sensitivity and limitations of the tracer data, the same numerical experiments were performed on a library of reservoir models with different fracture patterns. After the history matching and production optimization simulations, we observed that including tracer data gave positive NPV increases, ranging from +0.3% to +9.4%, in 5 of the 7 test cases. Tracers were observed to be more effective for non-homogeneously flooded reservoirs.
To the best of our knowledge, this paper is the first study to quantify the benefits of tracers in terms of improved production, measured in NPV. In a broader perspective, we believe this is the best way to test any new history matching algorithm or reservoir surveillance method. In this work, we show that tracers can yield positive NPV gains in most situations, and that oil producers running large-scale waterflooding operations would benefit from performing more tracer tests.
Xu, Zhongyi (China University of Petroleum) | Cheng, Linsong (China University of Petroleum) | Wu, Yonghui (China University of Petroleum) | Jia, Pin (China University of Petroleum) | Huang, Shijun (China University of Petroleum) | Fang, Sidong (Petroleum Exploration and Production Research Institute, Sinopec) | Hao, Junlan (China University of Petroleum) | Gao, Yinshan (The Fifth Oil Production Plant of PetroChina Changqing Oilfield Company)
Accurate assessment of the uncertainty in production performance is critical for successful planning and asset development, particularly in deepwater scenarios, where uncertainty in fluid and reservoir characterization is high because well testing, fluid sampling, and core sampling are difficult. Therefore, specific methods are needed for efficient uncertainty quantification of production in deepwater reservoirs with limited information.
In this paper, we introduce a history matching method that assimilates static geological data and production data based on the ensemble Kalman filter (EnKF). The EnKF is independent of the simulator and is suitable for uncertainty assessment, reservoir monitoring, and performance prediction. We tested this method on a deepwater case and analyzed the effects of the initial ensembles and of the length of the production history. The uncertainty of the production prediction is then quantified, and the posterior distribution of cumulative production is estimated.
The EnKF is shown to be efficient in updating fluid properties and reservoir heterogeneity. By sequentially assimilating observed data, the EnKF is suitable for reservoir monitoring and performance prediction. The results indicate that the history match with plausible geology improves as the generation of the initial ensemble improves. History matching with a longer history narrows the range of the ultimate recovery distribution and thereby decreases the uncertainty. The results also show that the parameter updates in the ensemble are largest in the first iteration and gradually become smaller as more data are assimilated. After history matching, the ultimate recovery can be computed for each set of parameters in the ensemble, and the uncertainty can be quantified from the statistical frequency distribution of recovery.
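The final step, turning the updated ensemble into a recovery uncertainty statement, can be sketched as follows. The ensemble statistics and the scaling constant are invented for illustration, not taken from the deepwater case.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior ensemble of (porosity, recovery-factor)
# pairs, standing in for the EnKF-updated parameter sets.
n_ens = 200
porosity = rng.normal(0.22, 0.02, n_ens)
rec_factor = rng.normal(0.35, 0.04, n_ens)
ooip_per_unit_porosity = 1.0e8   # illustrative volumetric scaling, bbl

# Ultimate recovery for each ensemble member, then the percentile
# summary used to quantify the remaining uncertainty.
recovery = porosity * ooip_per_unit_porosity * rec_factor
p10, p50, p90 = np.percentile(recovery, [10, 50, 90])
print(f"P10={p10:.3e}  P50={p50:.3e}  P90={p90:.3e}")
```

The P90 minus P10 spread is what shrinks as longer production histories are assimilated, which is the narrowing of the ultimate recovery distribution reported above.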
The proposed methodology provides a practical means of assessing uncertainty in history matching for deepwater fields. The method can also be applied to project and risk management.
Wei, Chenji (PetroChina) | Wang, Yuhe (Texas A&M University at Qatar) | Ding, Yutao (PetroChina) | Li, Yong (PetroChina) | Shi, Jing (PetroChina) | Liu, Shuangshuang (PetroChina) | Tian, Changbing (PetroChina) | Li, Baozhu (PetroChina) | Xiong, Lihui (PetroChina) | Zhang, Qi (PetroChina)
This paper presents an uncertainty assessment project using an Artificial Neural Network (ANN) for a giant multi-layered sandstone reservoir in the Middle East, which involves several uncertainties and associated risks. Uncertainty quantification in history matching, production forecasting, and optimization often requires hundreds of thousands of forward flow simulations to explore the uncertain parameter space, leading to prohibitive computational time, especially for large-scale reservoir models. To bypass this limitation, one can use a proxy to replace the time-consuming flow simulator. In this work, an optimized ANN is used as the proxy, and an uncertainty assessment workflow is implemented for the giant Cretaceous multi-layered sandstone reservoir using a global optimizer. Using the ANN-based uncertainty assessment framework, the impacts of the main uncertain parameters on production forecasts are assessed for this multi-layered sandstone reservoir. Field development optimization is then performed to optimize well injection and production rates so as to maximize economic measures under uncertainty.
Multiobjective history matching has gained popularity in the last decade. It provides a diverse ensemble of well-matched models that should lead to improved forecasting. Moreover, in some cases, multiobjective history matching provides faster and more robust convergence than the single-objective approach. In the multiobjective setting, objective components (usually groups of them) guide the algorithm to different areas of the objective space, which leads to a diverse set of optimal solutions. These algorithms are well established and well developed for problems with two or three objectives. As the number of objective components increases, such as in a reservoir model with multiple wells and production data, multiobjective history matching performance (convergence speed and match quality) can deteriorate. One effective approach is to group objective components to reduce the number of objectives. However, the existing literature does not provide sufficient guidance on appropriate grouping techniques and ways of combining objective components.
We present a novel technique for grouping objective components based on analysis of nonparametric conflict information obtained from a limited set of initial solutions. By grouping objectives according to the conflict between them, we aim to achieve better history matching performance. We apply this framework to history matching of an industry-standard reservoir model and a real-field case study. We also perform history matching runs with groupings of different degrees of conflict and then compare their performance using a statistical significance test.
Our extensive simulation results show that the proposed conflict-based strategy can serve as a guideline for optimally selecting a grouping of the objective components in multiobjective history matching. By calculating the conflict between objectives a priori, we can identify which grouping scheme will perform better. This technique can significantly improve the fitness quality of the matched model for the same number of flow simulations, while still obtaining a diverse set of models.
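A minimal sketch of the conflict-based grouping idea, under the assumption that conflict is measured as one minus the Spearman rank correlation of objective-component misfits across an initial solution set. The synthetic misfit matrix and the greedy merging rule are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# Synthetic misfit matrix: objective components (columns, e.g.
# per-well misfits) evaluated on an initial set of solutions (rows).
# Columns 0-2 and 3-5 are built to be internally consistent but
# mutually conflicting.
n_sol = 30
base = rng.normal(size=(n_sol, 2))
F = np.column_stack(
    [base[:, 0] + 0.1 * rng.normal(size=n_sol) for _ in range(3)] +
    [base[:, 1] + 0.1 * rng.normal(size=n_sol) for _ in range(3)])
n_obj = F.shape[1]

# Conflict proxy: 1 - rank correlation. Greedily merge the least
# conflicting pair of groups until the target group count is reached.
conflict = np.array([[1.0 - spearman(F[:, i], F[:, j])
                      for j in range(n_obj)] for i in range(n_obj)])
groups = [[j] for j in range(n_obj)]
while len(groups) > 2:
    best, pair = np.inf, None
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            c = np.mean([conflict[i, j]
                         for i in groups[a] for j in groups[b]])
            if c < best:
                best, pair = c, (a, b)
    a, b = pair
    groups[a] += groups.pop(b)
print("groups:", groups)
```

Components that rank the initial solutions the same way (low conflict) are summed into one objective, while genuinely conflicting components stay separated to preserve the diversity pressure of the multiobjective search.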
Assisted history matching has emerged over the last few decades in an attempt to make the history matching process faster and easier. Assisted history matching essentially converts the history matching problem into an optimization problem. One key aspect of assisted history matching is building a proxy model that interpolates the relationship between the objective function and the history matching parameters. Several proxy modeling techniques have been introduced in the literature; some are useful and some are not.
This paper provides a comparative study of four powerful proxy modeling techniques for assisted history matching: thin plate spline, radial basis function, Kriging, and artificial neural network. Two test problems based on different reservoir engineering approaches (material balance and reservoir simulation) are used to test and compare the performance of the studied proxy methods in solving assisted history matching problems. To make the comparison reliable, a performance indicator is developed: the relative error between the values of the history matching parameters estimated with each proxy modeling method and their exact solutions.
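One of the four proxy families, the radial basis function, together with the relative-error performance indicator, can be sketched on a toy two-parameter matching problem. The objective, design size, and kernel settings are assumptions, not the paper's actual test problems.

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf_fit(X, y, eps=4.0, jitter=1e-8):
    """Gaussian radial-basis-function proxy; the small diagonal
    jitter stabilizes the kernel solve numerically."""
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-eps * r2) + jitter * np.eye(len(X))
    w = np.linalg.solve(A, y)
    return lambda Xq: np.exp(
        -eps * ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

# Toy history matching objective with a known solution at (0.6, 0.3),
# standing in for the simulator-based misfit.
true_p = np.array([0.6, 0.3])
def misfit(P):
    return ((P - true_p) ** 2).sum(-1)

# Fit the proxy on a small design, then search the cheap proxy
# instead of the expensive objective to estimate the parameters.
X = rng.random((40, 2))
proxy = rbf_fit(X, misfit(X))
g = np.stack(np.meshgrid(np.linspace(0, 1, 101),
                         np.linspace(0, 1, 101)), -1).reshape(-1, 2)
p_hat = g[np.argmin(proxy(g))]

# Performance indicator in the spirit of the paper: relative error
# between proxy-estimated parameters and the exact solution.
indicator = np.abs(p_hat - true_p) / np.abs(true_p)
print("estimate:", p_hat, "relative error:", np.round(indicator, 3))
```

The other three proxies (thin plate spline, Kriging, ANN) slot into the same fit/predict pattern, and the indicator allows a like-for-like comparison of how closely each proxy's optimum recovers the exact matching parameters.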
The results of this work indicate that the Kriging and artificial neural network proxy techniques are more efficient than, and superior to, the thin plate spline and radial basis function techniques.
The significantly increasing use of reservoir models poses many challenges in making model design and calibration easier and achievable in a timely manner. One major challenge in building analytical or numerical reservoir models is the model calibration process. Reservoir models can be trusted and considered predictive only after good calibration against actual measured data. What makes model calibration, or history matching, challenging is that it is a non-unique inverse problem: the same history match can be achieved using several different combinations of the history matching parameters. As a consequence, assisted history matching techniques have emerged in an attempt to make the process easier.