Ensemble methods have shown the ability to recover the geometry of natural fractures in high-permeability reservoirs. However, hydraulic fractures in unconventional reservoirs are different, and the ability of these methods to invert production data for hydraulic fracture geometry is unknown. In this work, the Embedded Discrete Fracture Model is selected as the simulation method, and the EnKF and EnRML methods are chosen as the history matching methods. Several simulation cases are used to test these algorithms' performance in both conventional and unconventional reservoirs. In addition, a novel gradient-based history matching process is proposed to address part of these problems.
Comparing the history matching results of the different cases shows that the matching quality of ensemble methods is very sensitive to prior information. Production from wells in low-permeability reservoirs is not sensitive to unconnected fractures, which leads to multiple minima in the objective function and causes serious difficulties in the gradient estimation performed by ensemble methods. It is therefore even more important to incorporate prior information properly when history matching unconventional reservoirs, which can narrow the problem dramatically. The potential and limitations of using a gradient method to solve this kind of problem are also shown in this paper.
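As background for the ensemble updates discussed above, the following is a minimal sketch of a perturbed-observation EnKF analysis step in generic numpy. It is not the paper's implementation; the forward model, ensemble sizes and covariances are illustrative.

```python
import numpy as np

def enkf_update(M, D, d_obs, C_d, rng):
    """One EnKF analysis step (perturbed-observation form).
    M: (n_params, n_ens) prior parameter ensemble
    D: (n_data, n_ens) simulated data for each ensemble member
    d_obs: (n_data,) observed data
    C_d: (n_data, n_data) observation-error covariance
    """
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_MD = dM @ dD.T / (n_ens - 1)            # parameter-data cross-covariance
    C_DD = dD @ dD.T / (n_ens - 1)            # data covariance
    K = C_MD @ np.linalg.inv(C_DD + C_d)      # Kalman gain
    # perturb the observations so the posterior ensemble spread is correct
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), C_d, size=n_ens).T
    return M + K @ (D_obs - D)
```

The cross-covariance term is where the sensitivity to prior information enters: when production data carry no signal about unconnected fractures, the corresponding rows of `C_MD` are near zero and those parameters are left essentially at their prior values.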
Characterization of key parameters in unconventional assets continues to be challenging due to the geologic heterogeneity of such resources and the uncertainty associated with fracture geometry in stimulated rock. Limited data and the accelerating pace of asset development in plays like the Permian present an increasing need for an efficient and robust assisted history matching methodology that produces better insights for asset development planning decisions, e.g. well spacing.
A multi-scenario approach is presented to build an ensemble of history matched models that take into account existing uncertainty in reservoir description and well completions. We discuss parametrization of key uncertainties in the reservoir rock, fluid properties, fracture geometry and the effective permeability of stimulated rock. Ensemble-based assisted history matching algorithms are utilized to reduce and characterize the uncertainties in the model parameters by honoring various types of data, including field dynamic data and measurements. We discuss the implementation of automated schemes for weighting the various types of data in the ensemble-based history matching algorithms. These schemes are introduced to define the history matching objective functions from various types of data, including bottomhole pressure data and the oil, water and gas production rates. The computational results show that our adaptive scheme obtains better history match solutions.
The presented multi-scenario approach, coupled with the ability to efficiently run a large number of scenarios, enables better understanding of reservoir and fracture properties and shortens the learning curve for new development in unconventional assets. The case study shown illustrates a comprehensive analysis, using thousands of simulation cases, to obtain multiple history match solutions. Given the non-uniqueness of the history matched reservoir models presented in the scenarios, this workflow improves forecasting ability and enables robust business decision making under uncertainty.
PI (Productivity Index) degradation is a common issue in many oil fields. To obtain a highly reliable production forecast, it is critical to include well and completion performance in the analysis. A new workflow is developed to assess and incorporate the damage mechanisms at the wellbore, fracture and reservoir into production forecasting. Currently, most reservoir models use a skin factor to represent the combined well damage mechanisms. The skin factor is adjusted based on the user's experience or data analysis instead of physical modeling. In this workflow, a detailed model is built to explicitly simulate the damage mechanisms, assess the dynamic performance of the well and completion with depletion, and generate a physics-based proxy function for reservoir modeling. The new workflow closes the modeling gap in production forecasting and provides insights into which damage mechanisms impact PI degradation.
In the workflow, a detailed model is built, which includes an explicit wellbore, an explicit fracture and the reservoir. Subsurface rock and flow damage mechanisms are represented explicitly in the model. Running the model with an optimization tool, the damage mechanisms' impact on productivity can be assessed separately or in combination. A physics-based proxy is generated linking the change in productivity to typical well parameters such as cumulative production, drainage region depletion and drawdown. This proxy is then incorporated into a standard reservoir simulator through scripts linking the PI evolution of the well to the typical well parameters stated above. The workflow increases the reliability of generated production forecasts by incorporating the best representation of the near-wellbore flow patterns.
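The proxy idea above can be sketched as a simple regression from the stated well parameters to a PI multiplier. The linear form below is purely a placeholder assumption; the paper's physics-based proxy form is not specified here, and all function and variable names are illustrative.

```python
import numpy as np

def fit_pi_proxy(cum, depletion, drawdown, pi_mult):
    """Least-squares fit of a PI-multiplier proxy from detailed-model runs.
    Assumed placeholder form: pi_mult ~ c0 + c1*cum + c2*depletion + c3*drawdown."""
    A = np.column_stack([np.ones_like(cum), cum, depletion, drawdown])
    coef, *_ = np.linalg.lstsq(A, pi_mult, rcond=None)
    return coef

def pi_multiplier(coef, cum, depletion, drawdown):
    """Evaluate the fitted proxy, e.g. from a reservoir-simulator script."""
    return coef[0] + coef[1] * cum + coef[2] * depletion + coef[3] * drawdown
```

In the workflow described above, the training table would come from the detailed wellbore/fracture model runs, and the fitted function would be called at each simulator report step to update the well PI.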
By varying the damage mechanism inputs, the workflow is capable of history matching and forecasting the observed field behavior. The workflow has been validated for a high-permeability, over-pressured deep-water reservoir. The history match, PI prediction and damage mechanism analysis are presented in this paper. The new workflow can help assets to: (1) history match and forecast well performance under varying operating conditions; (2) identify the key damage mechanisms, which allows for potential mitigation and remediation solutions; and (3) set operational limits that reduce the likelihood of future PI degradation and maintain current performance.
Reliability of subsurface assessment for different field development scenarios depends on how effectively the uncertainty in production forecasts is quantified. There is currently a body of work in the literature on different methods to quantify this uncertainty. The objective of this paper is to revisit and compare these probabilistic uncertainty quantification techniques through their application to assisted history matching of a deep-water offshore waterflood field. The paper addresses the benefits, limitations, and best criteria for applicability of each technique.
Three probabilistic history matching techniques commonly practiced in the industry are discussed. These are Design-of-Experiments (DoE) with rejection sampling from a proxy, Ensemble Smoother (ES), and Genetic Algorithm (GA). The model used for this study is an offshore waterflood field in the Gulf of Mexico. Posterior distributions of global subsurface uncertainties (e.g., regional pore volume and oil-water contact) were estimated with each technique, conditioned to the injection and production data.
The three probabilistic history matching techniques were applied to a deep-water field with 13 years of production history. The first 8 years of production data were used for history matching and estimation of the posterior distribution of uncertainty in geologic parameters. While the convergence behavior and the shapes of the posterior distributions differed, consistent posterior means were obtained from Bayesian workflows such as DoE and ES. In contrast, the application of GA showed differences in the posterior distributions of the geological uncertainty parameters, especially those with small sensitivity to the production data. We then conducted production forecasts including infill wells and evaluated the production performance using sample means of the posterior geologic uncertainty parameters. The robustness of the solution was examined by performing history matching multiple times with different initial sample points (e.g., random seeds). This confirmed that heuristic optimization techniques such as GA were unstable, since the parameter setup for the optimizer had a large impact on uncertainty characterization and production performance.
This study provides guidelines for obtaining stable solutions from these history matching techniques under different conditions, such as the number of simulation model realizations and uncertainty parameters and the number of data points (e.g., maturity of the reservoir development). These guidelines will greatly help the decision-making process in selecting the best development options.
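One of the compared techniques, rejection sampling from a proxy, can be sketched as follows. This is an illustrative implementation, not the authors' workflow; it assumes a nonnegative data misfit so that the likelihood is bounded above by one.

```python
import numpy as np

def rejection_sample(prior_sampler, proxy_misfit, n_keep, rng, batch=1000):
    """Rejection sampling from the posterior via a cheap proxy.
    prior_sampler(n) -> (n, dim) draws from the prior
    proxy_misfit(X)  -> (n,) nonnegative data misfits from the proxy
    Accept x with probability exp(-misfit/2), i.e. L(x)/L_max with L_max = 1."""
    accepted = []
    while len(accepted) < n_keep:
        X = prior_sampler(batch)
        mis = proxy_misfit(X)
        keep = rng.uniform(size=batch) < np.exp(-0.5 * mis)
        accepted.extend(X[keep])
    return np.asarray(accepted[:n_keep])
```

Because every candidate is drawn from the prior and screened by the proxy likelihood only, the accepted set is a valid posterior sample provided the proxy is accurate; in the DoE workflow the proxy replaces full simulations for exactly this screening step.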
Downhole fluid sampling is ubiquitous during exploration and appraisal because formation fluid properties have a strong impact on field development decisions. Efficient planning of sampling operations and interpretation of obtained data require a model-based approach. We present a framework for forward and inverse modeling of filtrate contamination cleanup during fluid sampling. The framework consists of a deep learning (DL) proxy forward model coupled with a Markov Chain Monte Carlo (MCMC) approach for the inverse model.
The DL forward model is trained using precomputed numerical simulations of immiscible filtrate cleanup over a wide range of in situ conditions. The forward model consists of a multilayer neural network with both recurrent and linear layers, where inputs are defined by a combination of reservoir and fluid properties. A model training and selection process is presented, including network depth and layer size impact assessment. The inverse framework consists of an MCMC algorithm that stochastically explores the solution space using the likelihood of the observed data computed as the mismatch between the observations and the model predictions.
The developed DL forward model achieved up to 50% higher accuracy than prior proxy models based on Gaussian process regression. Additionally, the new approach reduced the memory footprint by a factor of ten. The same model architecture and training process proved applicable to multiple sampling probe geometries without compromising performance. These attributes, combined with the speed of the model, enabled its use in real-time inversion applications. Furthermore, the DL forward model is amenable to incremental improvement if new training data become available.
Flowline measurements acquired during cleanup and sampling hold valuable information about formation and fluid properties that may be uncovered through an inversion process. Using measurements of water cut and pressure, the MCMC inverse model required 93% fewer calls to the forward model than conventional gradient-based optimization, with comparable quality of history matches. Moreover, by providing estimates of the full posterior parameter distributions, the presented model enables more robust uncertainty quantification.
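The MCMC exploration described above can be sketched as a random-walk Metropolis loop. This is a generic sketch, not the authors' sampler; the proxy forward model enters only through the `log_post` callable, and the step size is illustrative.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step, rng):
    """Random-walk Metropolis sampling of model parameters.
    log_post(x) returns the unnormalized log posterior (prior plus data
    likelihood), evaluated here with a fast proxy forward model."""
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        cand = x + step * rng.normal(size=x.shape)  # symmetric proposal
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:    # Metropolis accept test
            x, lp = cand, lp_cand
        chain.append(x.copy())
    return np.array(chain)
```

Each step costs one proxy evaluation rather than one full cleanup simulation, which is what makes stochastic exploration of the solution space affordable in real time.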
Proper characterization of heterogeneous rock properties and hydraulic fracture parameters is essential for optimizing well spacing and reliable estimation of EUR in unconventional reservoirs. High resolution characterization of matrix properties and complex fracture parameters requires efficient history matching of well production and pressure response. We propose a novel reservoir model parameterization method to reduce the number of unknowns, regularize the ill-posed problem and enhance the efficiency of history matching of unconventional reservoirs.
Our proposed method makes a low-rank approximation of the spatial distribution of reservoir properties, taking into account the differing model resolution of the matrix and the hydraulic fractures. Typically, hydraulic fractures are represented with much higher resolution, through local grid refinements, than the matrix properties. In our approach, the spatial property distribution of both the matrix and the fractures is represented with a few parameters via a linear transformation with multiresolution basis functions. The parameters in the transform domain are then updated during model calibration, substantially reducing the number of unknowns. The multiresolution basis functions are constructed by eigen-decomposition of an adaptively coarsened grid Laplacian corresponding to the data resolution. Maintaining high property resolution in the area of interest through adaptive resolution control, while keeping the original grid structure, improves the quality of the history match and reduces simulation runtime.
We demonstrate the power and efficacy of our method using synthetic and field examples. First, we illustrate the effectiveness of the proposed multiresolution parameterization by comparing it with a traditional method. For the field application, an unconventional tight oil reservoir model with a multistage hydraulically fractured well is calibrated using bottomhole pressure and water cut history data. The hydraulic fractures, as well as the stimulated reservoir volume (SRV) near the well, are represented with higher grid resolution. In addition to matrix and fracture properties, the extents of the SRV and the hydraulic fractures are also adjusted through history matching using a multiobjective genetic algorithm. The calibrated ensemble of models is used to obtain bounds on the production forecast.
Our proposed method is designed to calibrate reservoir and fracture properties with higher resolution in regions that have improved data resolution and higher sensitivity to the well performance data, for example the SRV region and the hydraulic fractures. This leads to a fast and efficient history matching workflow and enables us to make optimal development/completion plans in a reasonable time frame.
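The Laplacian eigenvector basis underlying this parameterization can be illustrated on a uniform 2D grid. The paper builds the basis from an adaptively coarsened grid Laplacian; the sketch below keeps a uniform grid for brevity, and all names are illustrative.

```python
import numpy as np

def grid_laplacian_basis(nx, ny, n_basis):
    """Leading eigenvectors of a 2D grid-graph Laplacian, used as smooth
    spatial basis functions for low-rank property parameterization."""
    n = nx * ny
    L = np.zeros((n, n))
    for j in range(ny):
        for i in range(nx):
            k = j * nx + i
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nx and 0 <= jj < ny:
                    L[k, k] += 1.0            # degree term
                    L[k, jj * nx + ii] -= 1.0  # adjacency term
    # eigenvectors with the smallest eigenvalues are the smoothest modes
    w, V = np.linalg.eigh(L)
    return V[:, :n_basis]  # (n_cells, n_basis)

# A property field is then parameterized as, e.g., log_k = Phi @ theta,
# with len(theta) = n_basis far smaller than the cell count.
```

History matching then updates the few transform-domain coefficients `theta` instead of one value per cell, which is the source of the reported reduction in unknowns.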
Tripoppoom, Sutthaporn (The University of Texas at Austin / PTT Exploration and Production PLC) | Yu, Wei (The University of Texas at Austin) | Sepehrnoori, Kamy (The University of Texas at Austin) | Miao, Jijun (The University of Texas at Austin / Sim Tech LLC)
Production data, which are always available at no additional cost, can provide invaluable information about fracture geometry and reservoir properties. However, in unconventional reservoirs, a single realization is insufficient to characterize hydraulic fracture geometry and reservoir properties, because one realization cannot capture the non-uniqueness of the history matching problem or the subsurface uncertainties. Therefore, the objective of this study is to obtain multiple realizations for shale reservoirs by adopting Assisted History Matching (AHM).
We used a Neural Network-Markov Chain Monte Carlo (NN-MCMC) algorithm in the proposed AHM workflow for shale reservoirs. MCMC, a Bayesian AHM method, has the benefit of quantifying uncertainty without bias and without being trapped in local minima. In addition, using MCMC with a neural network (NN) as a proxy model removes the limitation of the infeasible number of simulation runs required by a traditional MCMC algorithm. The proposed AHM workflow also utilizes the Embedded Discrete Fracture Model (EDFM) to model fractures with higher computational efficiency than the traditional local grid refinement (LGR) method and more accuracy than the continuum approach.
We applied the proposed AHM workflow to an actual shale-gas well. We performed history matching on two cases: hydraulic fractures only, and hydraulic fractures with natural fractures. The uncertain parameters for history matching consist of fracture geometry, fracture conductivity, matrix permeability, matrix and fracture water saturation, and relative permeability curves. For the case with natural fractures, we included the number, length and conductivity of the natural fractures as additional uncertain parameters.
We found that, in this case, the NN-MCMC algorithm found history match solutions at around 30% of the total number of simulation runs. We also obtained the posterior distribution of each fracture parameter and reservoir property for both cases. Moreover, we found that the presence of natural fractures affects the posterior distributions: we observed significantly lower fracture height, lower fracture conductivity and higher fracture water saturation than in the case without natural fractures, because fluid flow is enhanced by the natural fractures. Lastly, the proposed AHM workflow using the NN-MCMC algorithm can characterize fracture geometry, reservoir properties and natural fractures in a probabilistic manner. These multiple realizations can be further used for probabilistic production forecasting, future fracturing design improvement, and infill well placement decisions.
Liu, Guoxiang (Baker Hughes a GE Company) | Stephenson, Hayley (Baker Hughes a GE Company) | Shahkarami, Alireza (Baker Hughes a GE Company) | Murrell, Glen (Baker Hughes a GE Company) | Klenner, Robert (Energy & Environmental Research Center, University of North Dakota) | Iyer, Naresh (GE Global Research) | Barr, Brian (GE Global Research) | Virani, Nurali (GE Global Research)
Optimization problems, such as optimal well spacing or completion design, can be resolved rapidly via surrogate proxy models, and these models can be built using either data-based or physics-based methods. Each approach has its strengths and weaknesses with respect to management of uncertainty, data quality and validation. This paper explores how data- and physics-based proxy models can be used together to create a workflow that combines the strengths of each approach and delivers an improved representation of the overall system. This paper presents use cases that display reduced simulation computational costs and/or reduced uncertainty in the outcomes of the models. A Bayesian calibration technique is used to improve predictability by combining numerical simulations with data regressions. Discrepancies between observations and surrogate outcomes are then used to calibrate the model, improve the prediction quality and further reduce uncertainty. Furthermore, Gaussian process regression is used to locate global minima/maxima with a minimal number of samples. To demonstrate the methodology, a reservoir model involving two wells in a drill space unit (DSU) in the Bakken Formation was constructed using publicly available data. This reservoir model was tuned by history matching the production data for the two wells. A data-based regression model was constructed using machine learning technologies on the same dataset. Both models were coupled to build a hybrid model and test the proposed process of data and physics coupling for completion optimization and uncertainty reduction. Subsequently, the Gaussian process model was used to explore optimization scenarios outside the data region of confidence and to exploit the hybrid model to further reduce uncertainty and improve prediction. Overall, both the computation time required to identify optimal completion scenarios and the uncertainty were reduced.
This technique creates a robust framework to improve operational efficiency and drive completion optimization in an optimal timeframe. The hybrid modeling workflow has also been piloted in other applications such as completion design, well placement and optimization, parent-child well interference analysis, and well performance analysis.
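The Gaussian process regression component can be sketched in a few lines. This is a textbook RBF-kernel GP, not the authors' implementation; the kernel hyperparameters and noise level are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6, length=1.0):
    """GP regression posterior mean and variance. The predictive variance
    is what guides sample-efficient search for global minima/maxima."""
    K = rbf(X_train, X_train, length) + noise * np.eye(len(X_train))
    Ks = rbf(X_train, X_test, length)
    Kss = rbf(X_test, X_test, length)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)
```

Far from existing samples the predictive variance reverts to the prior, so an acquisition rule that trades off predicted mean against variance can deliberately probe outside the data region of confidence, as described above.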
Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. This is still a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with their strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamless integration of a distributed-Gauss-Newton (GN) (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested with a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but at a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data, with 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. There are 105 AR-GMM samples accepted from the 1,000 original GMM samples, and they are used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
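The acceptance/rejection step over GMM proposals can be sketched as follows. This is illustrative, not the paper's implementation: it assumes a known envelope constant `log_c` such that the target density never exceeds `exp(log_c)` times the GMM proposal density.

```python
import numpy as np

def gmm_logpdf(X, weights, means, covs):
    """Log-density of a Gaussian mixture evaluated at points X of shape (n, d)."""
    parts = []
    for w, m, C in zip(weights, means, covs):
        d = X - m
        inv = np.linalg.inv(C)
        logdet = np.linalg.slogdet(C)[1]
        quad = np.einsum('ij,jk,ik->i', d, inv, d)  # Mahalanobis terms
        parts.append(np.log(w) - 0.5 * (quad + logdet + len(m) * np.log(2 * np.pi)))
    return np.logaddexp.reduce(parts, axis=0)

def ar_gmm(sample_gmm, gmm_logq, log_target, n, log_c, rng):
    """Accept/reject against a GMM proposal: keep x with probability
    p(x) / (c * q(x)), valid whenever p(x) <= c * q(x) everywhere."""
    X = sample_gmm(n)
    u = np.log(rng.uniform(size=len(X)))
    return X[u < log_target(X) - log_c - gmm_logq(X)]
```

Because every draw is screened independently, the accept/reject test over many candidates parallelizes trivially, which is the efficiency gain the work flow exploits.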
A well-designed pilot is instrumental in reducing uncertainty for the full-field implementation of improved oil recovery (IOR) operations. Traditional model-based approaches for brown-field pilot analysis can be computationally expensive, as they involve probabilistic history matching first to historical field data and then to probabilistic pilot data. This paper proposes a practical approach that combines reservoir simulations and data analytics to quantify the effectiveness of brown-field pilot projects.
In our approach, an ensemble of simulations is first performed on models based on prior distributions of subsurface uncertainties, and the results for simulated historical data, simulated pilot data and objective functions are assembled into a database. The distributions of simulated pilot data and objective functions are then conditioned to actual field data using the Data-Space Inversion (DSI) technique, which circumvents the difficulties of traditional history matching. The samples from DSI, conditioned to the observed historical data, are next processed using the Ensemble Variance Analysis (EVA) method to quantify the expected uncertainty reduction of the objective functions given the pilot data, which provides a metric to objectively measure the effectiveness of the pilot and compare the effectiveness of different pilot measurements and designs. Finally, the conditioned samples from DSI can also be used with the classification and regression tree (CART) method to construct signpost trees, which provide an intuitive interpretation of pilot data in terms of implications for the objective functions.
We demonstrate the practical usefulness of the proposed approach through an application to a brown-field naturally fractured reservoir (NFR) to quantify the expected uncertainty reduction and Value of Information (VOI) of a waterflood pilot following more than 10 years of primary depletion. NFRs are notoriously hard to history match due to their extreme heterogeneity and difficult parameterization; the additional need for pilot analysis in this case further compounds the problem. Using the proposed approach, the effectiveness of a pilot can be evaluated, and signposts can be constructed, without explicitly history matching the simulation model. This allows objective and efficient comparison of different pilot design alternatives and intuitive interpretation of pilot outcomes. We stress that the only input to the workflow is a reasonably sized ensemble of prior simulation runs (about 200 in this case), i.e., the difficult and tedious task of creating history-matched models is avoided. Once the simulation database is assembled, the data analytics workflow, which entails DSI, EVA, and CART, can be completed within minutes.
To the best of our knowledge, this is the first time the DSI-EVA-CART workflow has been proposed and applied to a field case. It is one of the few pilot-evaluation methods that are computationally efficient for practical cases. We expect it to be useful for engineers designing IOR pilots for brown fields with complex reservoir models.
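The EVA metric, the expected uncertainty reduction of an objective given pilot data, has a simple ensemble estimate under a linear-Gaussian assumption. The sketch below is illustrative of that idea, not the authors' exact formulation.

```python
import numpy as np

def eva_variance_reduction(H, D):
    """Ensemble Variance Analysis, linear-Gaussian estimate.
    H: (n_ens,) objective-function values across the prior ensemble
    D: (n_ens, n_d) simulated pilot-data values for the same members
    Returns the expected fraction of Var(H) resolved by observing D,
    i.e. Var(E[H | D]) / Var(H) under a linear-Gaussian approximation."""
    Hc = H - H.mean()
    Dc = D - D.mean(axis=0)
    n = len(H)
    C_dd = Dc.T @ Dc / (n - 1)                 # pilot-data covariance
    C_hd = Hc @ Dc / (n - 1)                   # objective-data cross-covariance
    explained = C_hd @ np.linalg.solve(C_dd, C_hd)
    return explained / Hc.var(ddof=1)
```

A value near one means the pilot measurement is highly informative about the objective; comparing this fraction across candidate measurements is what enables objective ranking of pilot designs from the prior ensemble alone.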