Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucial for business planning of hydrocarbon-field developments. It remains a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with its own strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamlessly integrating a distributed-Gauss-Newton (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested with a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but at a much lower computational cost (by a factor of 5 to 100). Finally, it is applied to a real-field reservoir model with synthetic data and 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. Of the 1,000 original GMM samples, 105 AR-GMM samples are accepted and used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
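The core acceptance/rejection step over a GMM proposal can be sketched in a few lines. This is a minimal 1D illustration: the bimodal target density, the GMM component parameters (built around two assumed MAP points), and the envelope constant are all illustrative assumptions, not the authors' reservoir-scale implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D bimodal posterior (unnormalized), standing in for the
# multimodal PDFs encountered in history matching.
def target_pdf(x):
    return 0.6 * np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2) \
         + 0.4 * np.exp(-0.5 * ((x - 2.0) / 0.8) ** 2)

# GMM proposal built around two assumed MAP points (means -2 and 2).
weights = np.array([0.6, 0.4])
means = np.array([-2.0, 2.0])
stds = np.array([0.6, 0.9])

def gmm_pdf(x):
    comps = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) \
            / (stds * np.sqrt(2.0 * np.pi))
    return comps.sum()

def gmm_sample(n):
    idx = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[idx], stds[idx])

# Envelope constant M such that target <= M * proposal (estimated on a grid,
# with a small safety margin).
grid = np.linspace(-6.0, 6.0, 2001)
M = max(target_pdf(x) / gmm_pdf(x) for x in grid) * 1.05

# Acceptance/rejection: each proposal x is accepted with probability
# target(x) / (M * proposal(x)); accepted samples are i.i.d. from the target.
proposals = gmm_sample(5000)
q_vals = np.array([gmm_pdf(x) for x in proposals])
u = rng.uniform(size=proposals.size)
accepted = proposals[u < target_pdf(proposals) / (M * q_vals)]
```

Because the GMM is fitted to the posterior, the acceptance ratio stays high; in this toy setup roughly two-thirds of the proposals survive the rejection step.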
Hong, Aojie (National IOR Centre of Norway and University of Stavanger) | Bratvold, Reidar B. (National IOR Centre of Norway and University of Stavanger) | Lake, Larry W. (University of Texas at Austin) | Ruiz Maraggi, Leopoldo M. (University of Texas at Austin)
Decline-curve analysis (DCA) for unconventional plays requires a model that can capture the characteristics of different flow regimes. Thus, various models have been proposed. Traditionally, in probabilistic DCA, an analyst chooses a single model that is believed to best fit the data. However, several models might fit the data almost equally well, and the one that best fits the data might not best represent the flow characteristics. Therefore, uncertainty remains regarding which is the "best" model. This work aims to integrate model uncertainty in probabilistic DCA for unconventional plays. Instead of identifying a single "best" model, we propose to regard any model as potentially good, with goodness characterized by a probability. The probability of a model being good is interpreted as a measure of the relative truthfulness of this model compared with the other models. This probability is subsequently used to weight the model forecast. Bayes' law is used to assess the model probabilities for given data. Multiple samples of the model-parameter values are obtained using maximum-likelihood estimation (MLE) with Monte Carlo simulation. Thus, the unique probabilistic forecasts of each individual model are aggregated into a single probabilistic forecast, which incorporates model uncertainty along with the intrinsic uncertainty (i.e., the measurement errors) in the given data. We demonstrate and conclude that using the proposed approach can mitigate over- or underestimates resulting from using a single decline-curve model for forecasting. The proposed approach performs well in propagating model uncertainty to uncertainty in production forecasting; that is, we determine a forecast that represents uncertainty given multiple possible models conditioned to the data.
The field data show that no single model is the most probable to be good for all wells. The novelties of this work are that probability is used to describe the goodness of a model; a Bayesian approach is used to integrate model uncertainty in probabilistic DCA; the approach is applied to actual field data to identify the most-probable model given the data; and we demonstrate the value of using this approach to consider multiple models in probabilistic DCA for unconventional plays. Although numerical techniques for forecasting hydrocarbon production have developed rapidly over the past decades, DCA remains an industry-accepted method and is used extensively in the oil and gas industry. Decline-curve models are computationally attractive because only production data, which can be easily acquired, are required for determining a few parameter values through history matching.
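The Bayes'-law model weighting described above can be illustrated with a minimal sketch. The two decline models, their parameter values, the noise level, and the synthetic "observed" data are all assumptions for illustration; the paper's actual procedure fits parameters by MLE with Monte Carlo simulation rather than using fixed values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical decline models with illustrative (not fitted) parameters.
t = np.arange(1.0, 25.0)

def exp_decline(t, qi=100.0, D=0.08):
    return qi * np.exp(-D * t)

def harmonic_decline(t, qi=100.0, D=0.10):
    return qi / (1.0 + D * t)

models = {"exponential": exp_decline(t), "harmonic": harmonic_decline(t)}

# Synthetic "observed" rates: harmonic truth plus Gaussian measurement noise.
sigma = 2.0
observed = harmonic_decline(t) + rng.normal(0.0, sigma, size=t.size)

# Bayes' law with equal priors: P(model | data) ∝ P(data | model).
# Gaussian measurement-error likelihood, evaluated in log space for stability.
log_like = {name: -0.5 * np.sum(((observed - q) / sigma) ** 2)
            for name, q in models.items()}
shift = max(log_like.values())
unnorm = {name: np.exp(ll - shift) for name, ll in log_like.items()}
total = sum(unnorm.values())
prob = {name: p / total for name, p in unnorm.items()}

# Probability-weighted (model-averaged) forecast at a future time.
t_future = 36.0
forecast = prob["exponential"] * exp_decline(t_future) \
         + prob["harmonic"] * harmonic_decline(t_future)
```

With enough informative data, the posterior weight concentrates on the model closest to the truth; with sparse or noisy data, several models retain appreciable probability and the aggregated forecast reflects that model uncertainty.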
It has been demonstrated in both laboratory measurements and field applications that tertiary polymer flooding can enhance oil recovery from heterogeneous reservoirs, primarily through macroscopic sweep (conformance). This study quantifies the effect of layering on tertiary polymer flooding as a function of layer-permeability contrast, the timing of polymer flooding, the oil/water-viscosity ratio, and the oil/polymer-viscosity ratio. This is achieved by analyzing the results from fine-grid numerical simulations of waterflooding and tertiary polymer flooding in simple layered models.
We find that there is a permeability contrast between the layers of the reservoir at which maximum incremental oil recovery is obtained, and this permeability contrast depends on the oil/water-viscosity ratio, polymer/water-viscosity ratio, and onset time for the polymer flood. Building on an earlier formulation that describes whether a displacement is understable or overstable, we present a linear correlation to estimate this permeability contrast. The accuracy of the newly proposed formulation is demonstrated by reproducing and predicting the permeability contrast from existing flow simulations and further flow simulations that have not been used to formulate the correlation.
This correlation will enable reservoir engineers to estimate the combination of permeability contrast, water/oil-viscosity ratio, and polymer/water-viscosity ratio that gives the maximum incremental oil recovery from tertiary polymer flooding in layered reservoirs, regardless of when polymer flooding starts. It could serve as a useful screening tool before starting a full-scale simulation study of polymer flooding in a given reservoir.
Field data have shown the decline of fracture conductivity during reservoir depletion. In addition, refracturing and infill drilling have recently gained much attention as efficient methods to enhance recovery in shale reservoirs. However, current approaches present difficulties in efficiently and accurately simulating such processes, especially for large-scale cases with complex hydraulic and natural fractures.
In this study, a general numerical method compatible with existing simulators is developed to model dynamic behaviors of complex fractures. The method is an extension of an embedded discrete-fracture model (EDFM). With a new set of EDFM formulations, the nonneighboring connections (NNCs) in the EDFM are treated as regular connections in traditional simulators, and the NNC transmissibility factors are linked with gridblock permeabilities. Hence, manipulating block permeabilities in simulators can conveniently control the fluid flow through fractures. Complex dynamic behaviors of hydraulic fractures and natural fractures can be investigated using this method.
The proposed methodology is implemented in a commercial reservoir simulator in a nonintrusive manner. We first present one synthetic case study in a shale-oil reservoir to verify the model accuracy and then combine the new model with field data to demonstrate its field applicability. Subsequently, four field-scale case studies with complex fractures in two and three dimensions are presented to illustrate the applicability of the method. These studies involve vertical- and horizontal-well refracturing in tight reservoirs, infill drilling, and fracture activation in a naturally fractured reservoir. The proposed approach is combined with empirical correlations and geomechanical criteria to model stress-dependent fracture conductivity and natural-fracture activation. It also shows convenience in dynamically adding new fractures or extending existing fractures during simulation. Results of these studies further confirm the significance of dynamic fracture behaviors and fracture complexity in the analysis and optimization of well performance.
A challenge in oil-reservoir studies is evaluating the ability of geomechanical, statistical, and geophysical methods to predict discrete geological features. This problem arises frequently with fracture corridors, which are discrete, tabular subvertical fracture clusters. Fracture corridors can be inferred from well data such as horizontal-borehole-image logs. Unfortunately, well data, and especially borehole image logs, are sparse, and predictive methods are needed to fill in the gap between wells. One way to evaluate such methods is to compare predicted and inferred fracture corridors statistically, using chi-squared and contingency tables.
In this article, we propose a modified contingency table to validate fracture-corridor-prediction techniques. We introduce two important modifications to capture special aspects of fracture corridors: the first is the incorporation of exclusion zones where no fracture corridors can exist, and the second is consideration of the fuzzy nature of fracture-corridor indicators from wells, such as circulation losses. An indicator is fuzzy when it has more than one possible interpretation. The reliability of an indicator is the probability that it correctly suggests a fracture corridor. Indicators with a reliability of unity are hard indicators; "soft" and "fuzzy" indicators are those with reliability less than unity.
A structural grid is overlaid on the reservoir top in an oil field. Each cell of the grid is examined for the presence and reliability of inferred fracture corridors and exclusion zones, and for the confidence level of predicted fracture corridors. The results are summarized in a contingency table and are used to calculate the chi-squared statistic and the conditional probability of an actual fracture corridor given a predicted one.
Three actual case studies are included to demonstrate how single or joint predictive methods can be statistically evaluated and how conditional probabilities are calculated using the modified contingency tables. The first example tests seismic faults as indicators of fracture corridors. The other examples test fracture corridors predicted by a simple geomechanical method.
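The contingency-table statistics described above can be sketched for a plain (unmodified) 2x2 table. The cell counts below are hypothetical; the paper's modifications (exclusion zones and fuzzy-indicator reliabilities) would enter as adjustments or weights on these counts.

```python
import numpy as np

# Hypothetical 2x2 contingency table of grid cells (counts are illustrative):
# rows = inferred corridor present / absent, cols = predicted present / absent.
table = np.array([[30.0, 10.0],
                  [15.0, 145.0]])

# Expected counts under independence, then the chi-squared statistic.
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row * col / table.sum()
chi2 = float(((table - expected) ** 2 / expected).sum())

# Conditional probability of an actual (inferred) corridor given a predicted
# one: P(inferred | predicted) = cell(present, present) / predicted-column total.
p_actual_given_predicted = table[0, 0] / table[:, 0].sum()
```

A large chi-squared value relative to the independence hypothesis indicates that the predictive method carries real information about corridor locations, and the conditional probability quantifies how much a prediction should be trusted cell by cell.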
Silva, Thiago Lima (Federal University of Santa Catarina and Norwegian University of Science and Technology) | Codas, Andrés (IBM Research) | Stanko, Milan (Norwegian University of Science and Technology) | Camponogara, Eduardo (Federal University of Santa Catarina) | Foss, Bjarne (Norwegian University of Science and Technology)
A methodology is proposed for the production optimization of oil reservoirs constrained by gathering systems. Because of differences in scale and simulation tools, production optimization involving oil reservoirs and gathering networks typically adopts standalone models for each domain. Although some reservoir simulators allow the modeling of inflow-control devices (ICDs) and deviated wells, the handling of gathering-network constraints is still limited. Disregarding such constraints might render operational plans infeasible with respect to the gathering facilities, precluding their application in real-world fields. We propose using multiple shooting (MS) to handle the output constraints from the gathering network in a scalable way. MS allows the handling of multiple output constraints because it splits the prediction horizon into several smaller intervals, enabling the use of decomposition and parallelization techniques. The novelty of this work lies in the coupling of reservoir and network models, and in the exploitation of the problem structure to cope with multiple network constraints. An explicit coupling of reservoir and network models is used to avoid the extra burden of converging the equations of the integrated system at every timestep. Instead, the inconsistencies between reservoir and network flows and pressures are modeled as constraints in the optimization formulation. Hence, all constraints regarding both reservoir and network equations are consistent at the convergence of the algorithm. The integrated-production-optimization problem is solved with a reduced-sequential-quadratic-programming (RSQP) algorithm, which is an efficient gradient-based optimization method. The MS ability to handle such constraints is assessed by a simulation analysis performed in a two-phase black-oil reservoir producing to a gathering network equipped with electrical submersible pumps (ESPs).
The results showed that the method is suitable to handle complex and numerous network constraints. Because of the nonconvex nature of the control-optimization problem, a heuristic procedure was developed to obtain a feasible initial solution for the integrated-production system. Further, a case study compared long-term optimization with short-term practices, where the latter yielded a lower net present value (NPV), arguably because it could not anticipate early water-front arrivals.
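The multiple-shooting idea of splitting the horizon into segments tied together by continuity constraints can be illustrated on a toy control problem. The scalar dynamics, cost function, and SciPy's SLSQP solver below are stand-ins for the paper's reservoir/network model and RSQP algorithm; only the structure (segment initial states as decision variables, continuity as equality constraints) mirrors the method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy scalar dynamics x_{k+1} = x_k + dt*(-a*x_k + u_k), an illustrative
# stand-in for the coupled reservoir/network system.
a, dt, n_steps, n_seg = 0.5, 0.1, 20, 4
seg_len = n_steps // n_seg
x0 = 1.0

def step(x, u):
    return x + dt * (-a * x + u)

def simulate_segment(x_init, controls):
    x = x_init
    for u in controls:
        x = step(x, u)
    return x

# Decision vector: all controls plus the initial state of each later segment
# (multiple shooting). Continuity is enforced as equality constraints, so each
# segment can be simulated independently (and, in principle, in parallel).
def unpack(z):
    u = z[:n_steps].reshape(n_seg, seg_len)
    s = np.concatenate([[x0], z[n_steps:]])
    return u, s

def objective(z):
    u, s = unpack(z)
    cost = 0.0
    for i in range(n_seg):
        x = s[i]
        for uk in u[i]:
            x = step(x, uk)
            cost += (x - 0.5) ** 2 + 0.01 * uk ** 2  # track setpoint 0.5
    return cost

def continuity(z):
    u, s = unpack(z)
    return np.array([simulate_segment(s[i], u[i]) - s[i + 1]
                     for i in range(n_seg - 1)])

z0 = np.zeros(n_steps + n_seg - 1)
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": continuity})
u_opt, s_opt = unpack(res.x)
```

At convergence the continuity residuals vanish, so the stitched-together segments form one consistent trajectory, which is the sense in which "all constraints regarding both reservoir and network equations are consistent at the convergence of the algorithm."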
The analysis of pressure-transient and rate-transient data in gas-condensate reservoirs (GCRs) operating below dewpoint pressure (Pdew) often requires using two-phase pseudovariables. These are used to handle the nonlinearities that arise in the governing partial-differential equation as a result of pressure-dependent saturation, mobility, and fluid-property changes. The incorporation of relative permeability (kr) data in the computation of such pseudovariables requires a priori knowledge of sufficiently representative pressure-saturation relationships. For this purpose, a number of analytical and semianalytical pressure-saturation prediction methods can be found in the literature.
In this study, a pressure-saturation prediction method, which relates component weight fractions (in the gas phase, the condensate phase, and the reservoir fluid mixture) and gas fractional flow [or gas to total gas-plus-condensate flow rate (GTR)] to pressure, was used to compute pressure-saturation relationships for various reservoir fluids and two rock types. The computed pressure-saturation relationships were compared to the responses observed in the near-wellbore region of corresponding single-well 1D radial homogeneous compositional reservoir-simulation models operated at constant well flowing pressures (Pwf) lower than Pdew. Sensitivities were conducted to highlight the impact of gas-condensate fluid richness, kr characteristics, degree of reservoir undersaturation (Pi – Pdew), and Pwf on observed trends in the representativity of the predicted pressure-saturation relationships.
Like other methods based on two-phase steady-state (SS) assumptions, the condensate saturations predicted by the GTR method generally matched those observed around the wellbore in the simulation models when Pi – Pdew was sufficiently high. For cases in which Pi – Pdew was not sufficiently high, an examination of radial pressure and saturation profiles from the simulation models showed that the trend of increased condensate saturation levels observed around the wellbore with increasing degrees of reservoir undersaturation, as reported in literature, represents one of three possible trends (i.e., increasing, decreasing, and almost-constant condensate-saturation levels). For cases in which the predicted GTR-based pressure-saturation curve showed lower (higher) condensate saturations than the pressure/volume/temperature (PVT) liquid-saturation curves at the given Pwf, increasing Pi – Pdew resulted in decreasing (increasing) condensate-saturation levels around the wellbore. The results indicate that at low Pi – Pdew values, trends in two-phase SS-based analysis techniques are affected by the gas-condensate fluid richness and the nature of the kr characteristics. It is also shown that for cases in which there is not much of a difference between PVT-based and SS-based pressure-saturation curves, variations in the Pi – Pdew do not affect the condensate-saturation levels around the wellbore.
Fractures often influence production in hydrocarbon reservoirs, yet the pressure transients observed in the wells might not show the conventional well-test signatures. In such cases, the effect of fractures on production can be misinterpreted or even missed entirely. The heterogeneous nature of fractured reservoirs makes them difficult to characterize and develop. In addition, the location of a producer within the fracture network also affects the pressure response; however, conventional well-test analysis assumes that the producer is located in symmetrical fracture networks.
In this paper we investigate the effects of variations in fracture conductivity and location of the producer in the fracture network on the pressure-transient responses. To overcome the limitations of the dual-porosity (DP) model, this study uses a discrete fracture/matrix (DFM) modeling technique and an unstructured-grid reservoir simulator to generate pressure transients in all analyzed fracture networks. Our rigorous and systematic geoengineering work flow enables us to correlate the pressure transients to the known geological features of the simulated reservoir model. We observed that the simulated pressure transients vary significantly depending on the location of the producer in the fracture network and the properties of the fractures that the producer intercepts. Our findings enable us to interpret some unconventional features of intersecting fractures with variable conductivity.
We observed that the behavior of two intersecting fractures, in which the well asymmetrically intercepts a finite-conductivity fracture, can be similar to that of a well intercepting a fracture in a connected fracture network with uniform fracture conductivity. Furthermore, a well intercepting a finite-conductivity fracture in naturally fractured reservoirs (NFRs) with both finite- and infinite-conductivity fractures would yield a DP response (V-shape) that might otherwise be absent if the fracture network is assumed to have uniform conductivity.
Matrix permeability is a key petrophysical parameter in reservoir evaluation and simulation. However, measurement of this parameter remains problematic for unconventional reservoirs. One of the challenges lies in the influence of fractures. Inclusion of fractures can lead to overestimation of shale matrix permeability. In this paper, new experimental and data-analysis procedures are developed for more-accurate yet relatively fast measurement of shale matrix permeability on the basis of previous work (Peng and Loucks 2016). The influence of fractures on matrix porosity and permeability is quantified and excluded. Reliability and consistency of the measurement results are confirmed through multiple means, including analytical solution back calculation and measurements for similar samples but with different plug diameters. Because the influence of fractures is explicitly excluded in data analysis, the new method is also more flexible regarding sample conditions—even broken plug samples with fractures can be used in this method. This is another advantage of the new method given the difficulty in obtaining “intact” plugs because of the fissility of shale.
He, Lang (Southwest Petroleum University, Chengdu) | Mei, Haiyan (Southwest Petroleum University, Chengdu) | Hu, Xinrui (Southwest Petroleum University, Chengdu) | Dejam, Morteza (University of Wyoming) | Kou, Zuhao (University of Wyoming) | Zhang, Maolin (Yangtze University, Wuhan)
A series of shale-gas adsorption and desorption experiments are conducted. The desorption and adsorption curves are not coincident, with the former located above the latter, which suggests that adsorption hysteresis also occurs in shale gas. The pseudodeviation factor (Z*) is revised to advance the material-balance equation (MBE) and flowing material balance (FMB). A case study of the Fuling Shale in China illustrates that the original gas in place (OGIP) of all three wells (1-HF, 2-HF, and 3-HF) calculated by conventional FMB is lower than that calculated by the refined FMB, which accounts for adsorption hysteresis. Adsorption hysteresis should therefore be accounted for to accurately determine OGIP.
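The straight-line extrapolation that underlies both MBE- and FMB-style OGIP estimates can be sketched as follows. The data are synthetic and the linear p/Z* decline is assumed; Z* here simply stands in for the adsorption-corrected pseudodeviation factor of the refined method, and the numbers are not the Fuling wells' data.

```python
import numpy as np

# Illustrative material-balance data: for a volumetric gas reservoir, p/Z*
# declines linearly with cumulative production Gp, reaching zero at the OGIP.
true_ogip = 5.0e9              # scf, assumed for this synthetic example
p_over_zstar_initial = 4000.0  # psia, assumed initial p/Z*
gp = np.linspace(0.0, 2.0e9, 10)
p_over_zstar = p_over_zstar_initial * (1.0 - gp / true_ogip)

# Fit the straight line and extrapolate to p/Z* = 0 to estimate OGIP.
slope, intercept = np.polyfit(gp, p_over_zstar, 1)
ogip_estimate = -intercept / slope
```

Using an uncorrected Z in place of Z* changes the slope of this line, which is why ignoring adsorption hysteresis biases the extrapolated OGIP, as the Fuling case study shows.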