Just as deterministic models have shortcomings that probabilistic models can avoid, the latter have pitfalls of their own. Adding uncertainty, by replacing single-estimate inputs with probability distributions, requires the user to exercise caution on several fronts. Without going into exhaustive detail, we offer a couple of illustrations. First, the probabilistic model is more complicated: it demands more documentation and more attention to logical structure.
The oil and gas industry invests significant money and other resources in projects with highly uncertain outcomes. We drill complex wells and build gas plants, refineries, platforms, and pipelines where costly problems can occur and where associated revenues might be disappointing. We may lose our investment; we may make a handsome profit. We are in a risky business. Assessing the outcomes, assigning probabilities of occurrence and associated values, is how we analyze and prepare to manage risk.
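The risk-analysis step described above, assigning probabilities of occurrence and associated values to outcomes, can be sketched as a simple expected-monetary-value (EMV) calculation. All outcomes, probabilities, and dollar values below are hypothetical illustrations, not figures from any real prospect.

```python
def expected_value(outcomes):
    """Sum of probability-weighted values; probabilities must sum to 1."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# (probability, net value in $MM): dry hole, marginal well, commercial success
prospect = [(0.6, -10.0), (0.25, 5.0), (0.15, 80.0)]
emv = expected_value(prospect)  # -6.0 + 1.25 + 12.0 = 7.25
```

A positive EMV (here 7.25 $MM) suggests the prospect is worth drilling on average, even though the most likely single outcome is a loss; this is the sense in which probabilistic assessment prepares us to manage risk.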
Abstract Using the right drilling fluid with optimal rheology and filtration properties is one of the most important factors in successful drilling and completion operations. Designing the right drilling fluid depends on a variety of factors, viz. formation lithology, wellbore geometry, temperature, pressure, and drilling objectives. To the best of the author's knowledge there is no standard drilling fluid advisory system to aid drilling engineers and scientists in formulating effective drilling fluid systems for entire well sections. The paper describes a drilling fluid advisory system based on Artificial Bayesian Intelligence. The advisory system includes a Bayesian decision network (BDN) model that receives inputs and outputs recommendations based on Bayesian probability determinations. This advisory system has been designed to aid drilling engineers in designing drilling fluids for their operations. This paper describes a module that was created in this advisory system. This module was created based on several inputs, viz. well geometry (vertical and horizontal), temperature, pressure, and productivity. To create the drilling fluids module within the advisory system, a number of drilling fluid specialists/experts were interviewed to gather the information required to determine the best practices as a function of the above inputs. These best practices were then used to build decision trees that allow the user to take an elementary data set and arrive at a decision that honors the best practices. The design process for this advisory system also included a number of standard lab tests, starting from quality assurance, through initial design, and finally the use of field samples to confirm the success of the application. The study also discusses several field cases that validate the drilling fluids advisory system.
The novel drilling fluid advisory system based on Artificial Bayesian Intelligence has been designed to aid drilling engineers and scientists in formulating effective drilling fluid systems for entire well sections.
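The core Bayesian update behind a decision-network recommendation of this kind can be sketched as follows. The fluid types, prior probabilities, and the single evidence variable ("high temperature observed") with its likelihoods are all hypothetical placeholders for illustration, not the paper's actual BDN.

```python
# Prior belief over candidate fluid systems (illustrative numbers).
priors = {"water_based": 0.5, "oil_based": 0.3, "synthetic": 0.2}

# P(high temperature section | fluid type is appropriate) -- hypothetical.
likelihood_high_temp = {"water_based": 0.2, "oil_based": 0.6, "synthetic": 0.8}

def posterior(priors, likelihood):
    """Bayes' rule: posterior proportional to prior times likelihood."""
    unnorm = {f: priors[f] * likelihood[f] for f in priors}
    z = sum(unnorm.values())
    return {f: p / z for f, p in unnorm.items()}

post = posterior(priors, likelihood_high_temp)
recommended = max(post, key=post.get)  # highest posterior probability wins
```

A full BDN chains many such evidence variables (geometry, pressure, productivity) and attaches utilities to decisions, but each node update follows this same prior-times-likelihood pattern.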
Abstract The evaluation of downhole fluid analysis (DFA) measurements of asphaltene gradients provides the ability to determine the extent of asphaltene equilibrium and the operative reservoir fluid geodynamics (RFG) processes. Typically, equilibrium of reservoir fluids indicates reservoir connectivity, a primary concern in field development planning. Currently, the modeling of asphaltene gradients is done through the manual evaluation of the DFA optical density gradients. The optical density measurements are fit to an equation of state (EOS), such as the Flory-Huggins-Zuo EOS, and evidence for asphaltene equilibrium is concluded if the inferred asphaltene diameter corresponds to that of the Yen-Mullins model for asphaltene composition. In this work, we present an automated Bayesian algorithm that proposes multiple hypotheses for the state of asphaltene equilibrium. The proposed hypotheses honor DFA measurements; physical models for asphaltenes in equilibrium, such as the Yen-Mullins model; and prior domain knowledge of the reservoir, such as geological layers, faults, and flow units. The leading hypotheses are reported, and evidence for or against asphaltene equilibrium is concluded from inferred quantities. Our proposed method provides a faster way for domain experts to explore different reservoir realizations that honor the theory of asphaltene gradients and previous knowledge about the reservoir. We verify our novel method on three case studies that are undergoing different RFG processes through comparison with the interpretations made by domain experts. While there are many reservoir complexities associated with each case study, we focus on whether the underlying RFG process corresponds to asphaltenes in equilibrium. The first case study is a light oil reservoir in the Norwegian North Sea that is mostly in fluid equilibrium with exceptions at the flanks.
The second case study is a black oil reservoir that has undergone a fault block migration after the reservoir fluids had a chance to achieve equilibrium. The last case study is a black oil reservoir in quasi-equilibrium due to biodegradation in the lower portion of the well.
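The equilibrium gradient that such hypotheses are tested against can be sketched in its simplest, gravity-only limit: the optical density ratio between two depths follows a Boltzmann distribution in the asphaltene particle volume. This omits the solubility and entropy terms of the full Flory-Huggins-Zuo EOS, and the diameter, density contrast, and temperature below are illustrative values only.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K
G = 9.81                    # m/s^2

def od_ratio(diameter_nm, delta_rho, delta_h, temp_k):
    """OD(h2)/OD(h1) for a depth increase delta_h (m, positive downward).

    diameter_nm: asphaltene particle diameter (nm), cf. the Yen-Mullins model
    delta_rho:   density contrast between asphaltene and oil (kg/m^3)
    """
    v = math.pi / 6.0 * (diameter_nm * 1e-9) ** 3  # particle volume, m^3
    return math.exp(v * delta_rho * G * delta_h / (K_BOLTZMANN * temp_k))

# Larger particles produce steeper gradients, which is why the inferred
# diameter discriminates between molecules, nanoaggregates, and clusters.
ratio = od_ratio(5.0, 200.0, 10.0, 350.0)
```

Fitting measured optical densities to this relation (plus the omitted terms) yields the inferred particle size that the Bayesian hypotheses compare against the Yen-Mullins model.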
Jahani, Nazanin (NORCE Norwegian Research Centre) | Ambía, Joaquín (The University of Texas at Austin) | Fossum, Kristian (NORCE Norwegian Research Centre) | Alyaev, Sergey (NORCE Norwegian Research Centre) | Suter, Erich (NORCE Norwegian Research Centre) | Torres-Verdín, Carlos (The University of Texas at Austin)
Abstract The cost of drilling wells on the Norwegian Continental Shelf is extremely high, and hydrocarbon reservoirs are often located in spatially complex rock formations. Optimized well placement with real-time geosteering is crucial to efficiently produce from such reservoirs and reduce exploration and development costs. Geosteering is commonly assisted by repeated formation evaluation based on the interpretation of well logs while drilling. Thus, reliable, computationally efficient, and robust workflows that can interpret well logs and capture uncertainties in real time are necessary for successful well placement. We present a formation evaluation workflow for geosteering that implements an iterative version of an ensemble-based method, namely the approximate Levenberg-Marquardt form of the Ensemble Randomized Maximum Likelihood (LM-EnRML). The workflow jointly estimates the petrophysical and geological model parameters and their uncertainties. In this paper we demonstrate joint estimation of layer-by-layer water saturation, porosity, and layer-boundary locations and inference of layers' resistivities and densities. The parameters are estimated by minimizing the statistical misfit between the simulated and the observed measurements for several logs on different scales simultaneously (i.e., shallow-sensing nuclear density and shallow to extra-deep EM logs). Numerical experiments performed on a synthetic example verified that the iterative ensemble-based method can estimate multiple petrophysical parameters and decrease their uncertainties in a fraction of the time required by classical Monte Carlo methods. Extra-deep EM measurements are known to provide the most reliable information for geosteering, and we show that they can be interpreted within the proposed workflow. However, we also observe that the parameter uncertainties noticeably decrease when deep-sensing EM logs are combined with shallow-sensing nuclear density logs.
Importantly, the estimation quality increases not only in the proximity of the shallow tool but also extends to the look-ahead range of the extra-deep EM measurements. We specifically quantify how shallow data can lead to significant uncertainty reduction of the boundary positions ahead of the bit, which is crucial for geosteering decisions and reservoir mapping.
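A single ensemble-based update of the kind this workflow iterates can be sketched with a plain ensemble-smoother (Kalman-style) step; the full LM-EnRML adds a Levenberg-Marquardt damping and iterates, which is omitted here. The linear "log response," noise level, and parameter dimensions are all toy illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_param = 200, 3

def forward(m):
    # Toy linear stand-in for the log-simulation forward model.
    return np.array([m[0] + 2.0 * m[1], m[1] - m[2]])

prior = rng.normal(0.0, 1.0, size=(n_ens, n_param))   # prior ensemble
d_obs = np.array([1.0, 0.5])                          # "observed" logs
sigma_d = 0.1                                         # measurement noise std

D = np.array([forward(m) for m in prior])             # predicted data (n_ens, 2)
d_pert = d_obs + rng.normal(0.0, sigma_d, size=D.shape)

# Sample cross- and auto-covariances, then the Kalman-type update.
Mc = prior - prior.mean(axis=0)
Dc = D - D.mean(axis=0)
C_md = Mc.T @ Dc / (n_ens - 1)
C_dd = Dc.T @ Dc / (n_ens - 1) + sigma_d**2 * np.eye(2)
posterior_ens = prior + (d_pert - D) @ np.linalg.solve(C_dd, C_md.T)
```

The posterior ensemble spread directly quantifies the remaining parameter uncertainty, which is how the workflow reports uncertainty reduction ahead of the bit.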
Abstract Log-facies classification aims to predict a vertical profile of facies at the well location with log readings or rock properties calculated in the formation evaluation and/or rock-physics modeling analysis as input. Various classification approaches are described in the literature and new ones continue to appear based on emerging Machine Learning techniques. However, most of the available classification methods assume that the inputs are accurate, and their inherent uncertainty, related to measurement errors and interpretation steps, is usually neglected. Accounting for facies uncertainty is not a mere exercise in style; rather, it is fundamental for the purpose of understanding the reliability of the classification results, and it also represents critical information for 3D reservoir modeling and/or seismic characterization processes. This is particularly true in wells characterized by high vertical heterogeneity of rock properties or thinly bedded stratigraphy. Among classification methods, probabilistic classifiers, which rely on the principle of Bayes decision theory, offer an intuitive way to model and propagate measurement/rock-property uncertainty into the classification process. In this work, the Bayesian classifier is enhanced such that the most likely classification of facies is expressed by maximizing the integral product of three probability functions. The latter describe: (1) the a priori information on facies proportions; (2) the likelihood of a set of measurements/rock properties to belong to a certain facies class; and (3) the uncertainty of the inputs to the classifier (log data or rock properties derived from them). Reliability of the classification outcome is therefore improved by accounting for both the global uncertainty, related to facies-class overlap in the classification model, and the depth-dependent uncertainty related to log data.
As derived in this work, the most interesting feature of the proposed formulation, although generally valid for any type of probability functions, is that it can be analytically solved by representing the input distributions as a Gaussian mixture model and their related uncertainty as an additive white Gaussian noise. This gives a robust, straightforward and fast approach that can be effortlessly integrated in existing classification workflows. The proposed classifier is tested in various well-log characterization studies on clastic depositional environments where Monte-Carlo realizations of rock properties curves, output of a statistical formation evaluation analysis, are used to infer rock properties distributions. Uncertainty on rock properties, modeled as an additive white Gaussian noise, are then statistically estimated (independently at each depth along the well profile) from the ensemble of Monte-Carlo realizations. At the same time, a classifier, based on a Gaussian mixture model, is parametrically inferred from the pointwise mean of the Monte Carlo realizations given an a-priori reference profile of facies. Classification results, given by the a-posteriori facies proportion and the maximum a-posteriori prediction profiles, are finally computed. The classification outcomes clearly highlight that neglecting uncertainty leads to an erroneous final interpretation, especially at the transition zone between different facies. As mentioned, this become particularly remarkable in complex environments and highly heterogeneous scenarios.
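The analytic simplification described above can be sketched for one scalar log reading: with Gaussian class likelihoods and additive white Gaussian noise on the input, the integral product collapses to a Gaussian with the two variances summed. The facies names, class means/variances, priors, and noise variance below are illustrative placeholders.

```python
import math

def gaussian(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def classify(x, classes, noise_var):
    """classes: {name: (prior, mu, var)}. Returns the posterior over facies.

    The convolution of the class likelihood N(mu, var) with input noise
    N(0, noise_var) is again Gaussian with variance var + noise_var.
    """
    unnorm = {c: p * gaussian(x, mu, var + noise_var)
              for c, (p, mu, var) in classes.items()}
    z = sum(unnorm.values())
    return {c: w / z for c, w in unnorm.items()}

# Hypothetical porosity-like log reading and two facies classes.
facies = {"sand": (0.4, 0.25, 0.002), "shale": (0.6, 0.10, 0.002)}
post = classify(0.18, facies, noise_var=0.001)
```

Increasing `noise_var` flattens the posterior toward the prior, which is exactly the behavior that makes the classifier honest near facies transitions.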
ABSTRACT The industry is facing significant challenges due to the recent downturn in oil prices, particularly for the development of tight reservoirs. It is more critical than ever to 1) identify the sweet spots with less uncertainty and 2) optimize the completion-design parameters. The overall objective of this study is to quantify and compare the effects of reservoir quality and completion intensity on well productivity. We developed a supervised fuzzy clustering (SFC) algorithm to rank reservoir quality and completion intensity, and to analyze their relative impacts on wells' productivity. We collected reservoir properties and completion-design parameters of 1,784 horizontal oil and gas wells completed in the Western Canadian Sedimentary Basin. Then, we used SFC to classify 1) reservoir quality, represented by porosity, hydrocarbon saturation, net pay thickness, and initial reservoir pressure; and 2) completion-design intensity, represented by proppant concentration, number of stages, and injected water volume per stage. Finally, we investigated the relative impacts of reservoir quality and completion intensity on wells' productivity in terms of first-year cumulative barrels of oil equivalent (BOE). The results show that in low-quality reservoirs, wells' productivity follows reservoir quality. However, in high-quality reservoirs, the role of completion design becomes significant, and the productivity can be hindered by inefficient completion design. The results suggest that in low-quality reservoirs, the productivity can be enhanced with a less intense completion design, while in high-quality reservoirs, a more intense completion significantly enhances the productivity. Keywords: reservoir quality; completion intensity; supervised fuzzy clustering; approximate reasoning; tight reservoir development
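The fuzzy ranking step can be sketched with a generic fuzzy-c-means membership rule; this is not the paper's exact SFC algorithm, and the cluster centers, fuzzifier, and normalized feature vector below are hypothetical illustrations.

```python
def fuzzy_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of point x in each pre-defined center.

    m is the fuzzifier (m -> 1 gives hard clustering; larger m is fuzzier).
    """
    d = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** 0.5 for c in centers]
    if any(di == 0.0 for di in d):           # exact hit on a center
        return [1.0 if di == 0.0 else 0.0 for di in d]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** exp for dj in d) for di in d]

# Features (normalized 0-1): porosity, saturation, net pay, pressure.
centers = [[0.2, 0.3, 0.2, 0.3],   # "low reservoir quality" center
           [0.8, 0.7, 0.8, 0.7]]   # "high reservoir quality" center
u = fuzzy_memberships([0.6, 0.6, 0.7, 0.6], centers)
```

The membership vector `u` gives each well a graded quality score rather than a hard class label, which is what allows reservoir quality and completion intensity to be compared on a continuous scale.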
Summary In this work, we investigate the efficient estimation of the optimal design variables that maximize net present value (NPV) for life-cycle production optimization during a single-well carbon dioxide (CO2) huff-n-puff (HnP) process in unconventional oil reservoirs. A synthetic unconventional reservoir model based on Bakken Formation oil composition is used. The model accounts for natural fracture and geomechanical effects. Both the deterministic (based on a single reservoir model) and robust (based on an ensemble of reservoir models) production optimization strategies are considered. The injection rate of CO2, the production bottomhole pressure (BHP), the durations of the injection and production periods in each cycle of the HnP process, and the cycle lengths for a predetermined life-cycle time can be included in the set of optimum design (or well-control) variables. During optimization, the NPV is calculated by a machine learning (ML) proxy model trained to accurately approximate the NPV that would be calculated from a reservoir simulator run. As ML methods, we use both least-squares (LS) support vector regression (SVR) and Gaussian process regression (GPR). Given a set of forward simulation runs with a commercial compositional simulator that simulates the miscible CO2 HnP process, a proxy is built based on the ML method chosen. Having the proxy model, we use it in an iterative-sampling-refinement optimization algorithm directly to optimize the design variables. As an optimization tool, the sequential quadratic programming (SQP) method is used inside this iterative-sampling-refinement optimization algorithm. Computational efficiencies of the ML proxy-based optimization methods are compared with those of the conventional stochastic simplex approximate gradient (StoSAG)-based methods. Our results show that the LS-SVR- and GPR-based proxy models are accurate and useful in approximating NPV in the optimization of the CO2 HnP process.
The results also indicate that both the GPR and LS-SVR methods exhibit very similar convergence rates, but GPR requires 10 times more computational time than LS-SVR. However, GPR provides flexibility over LS-SVR in assessing uncertainty in our NPV predictions because it considers the covariance information of the GPR model. Both ML-based methods prove to be quite efficient in production optimization, saving significant computational time (at least 4 times more efficient) compared with a stochastic gradient computed from a high-fidelity compositional simulator directly in a gradient-ascent algorithm. To our knowledge, this is the first study presenting a comprehensive review and comparison of two different ML-proxy-based optimization methods with traditional StoSAG-based optimization methods for the production optimization problem of a miscible CO2 HnP.
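The proxy-then-optimize idea can be sketched with a hand-rolled GPR mean fit to a few "simulator" NPV samples of a single design variable; a grid search stands in for the paper's SQP step, and the toy NPV function, kernel length scale, and sample points are all illustrative.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D sample arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def npv(x):
    # Toy stand-in for an expensive compositional-simulator NPV evaluation.
    return -(x - 0.6) ** 2

# A handful of "simulator runs" over the normalized design variable.
x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y_train = npv(x_train)

# GP regression mean: m(x*) = K(x*, X) K(X, X)^-1 y  (noise-free fit).
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

x_grid = np.linspace(0.0, 1.0, 201)
proxy_mean = rbf(x_grid, x_train) @ alpha
x_best = x_grid[np.argmax(proxy_mean)]   # proxy-optimal design variable
```

The iterative-sampling-refinement loop of the paper would now run the simulator at `x_best`, add that sample to the training set, refit the proxy, and repeat until convergence.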
Aslam, Usman (Emerson Automation Solutions) | Burgos, Jorge (Occidental Oil and Gas) | Williams, Craig (Occidental Oil and Gas) | McCloskey, Shawn (Occidental Oil and Gas) | Cooper, James (Occidental Oil and Gas) | Mirzaei, Mohammad (Occidental Oil and Gas) | Briz, Eduardo (Occidental Oil and Gas)
Abstract Reservoir production forecasts are inherently uncertain due to the lack of quality data available to build predictive reservoir models. Multiple data types, including historical production, well tests (RFT/PLT), and time-lapse seismic data, are assimilated into reservoir models during the history matching process to improve predictability of the model. Traditionally, a 'best estimate' for relative permeability data is assumed during the history matching process, despite there being significant uncertainty in the relative permeability. Relative permeability governs multiphase flow in the reservoir; therefore, it has significant importance in understanding the reservoir behavior as well as for model calibration and hence for reliable production forecasts. Performing sensitivities around the 'best estimate' relative permeability case covers only part of the uncertainty space, with no indication of the confidence that may be placed on these forecasts. In this paper, we present an application of a Bayesian framework for uncertainty assessment and efficient history matching of a Permian CO2 EOR field for reliable production forecasting. The study field has complex geology with over 65 years of historical data from primary recovery, waterflood, and CO2 injection. Relative permeability data from the field showed significant uncertainty, so we used uncertainties in the saturation endpoints as well as in the curvature of the relative permeability in multiple zones, by employing generalized Corey functions for relative permeability parameterization. Uncertainty in the relative permeability is propagated through a common platform integrator. An automated workflow generates the first set of relative permeability curves sampled from the prior distribution of saturation endpoints and Corey exponents, called 'scoping runs'. These relative permeability curves are then passed to the reservoir simulator.
The assumptions of uncertainties in the relative permeability data and other dynamic parameters are quickly validated by comparing the scoping runs and historical observations. By creating a mismatch or likelihood function, the Bayesian framework generates an ensemble of history-matched models calibrated to the production data, which can then be used for reliable probabilistic forecasting. Several iterations during the manual history match did not yield an acceptable solution, as uncertainty in the relative permeability was ignored. An application of Bayesian inference accelerated by a proxy model found the relative permeability data to be one of the most influential parameters during the assisted history matching exercise. Incorporating the uncertainty in relative permeability data along with other dynamic parameters not only helped speed up the model calibration process, but also led to the identification of multiple history-matched models. In addition, results show that the use of the Bayesian framework significantly reduced uncertainty in the most important dynamic parameters. The proposed approach allows incorporating previously ignored uncertainty in the relative permeability data in a systematic manner. The user-defined mismatch function increases the likelihood of obtaining an acceptable match, and the weights in the mismatch function allow both the measurement uncertainty and the effect of simulation-model inaccuracies to be accounted for. The Bayesian framework considers the whole uncertainty space and not just the history-match region, leading to the identification of multiple history-matched models.
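The sampling of uncertain relative permeability curves described above can be sketched with a Corey-type water-phase function whose endpoints and exponent are drawn from uniform priors; the parameter ranges below are illustrative placeholders, not the field's actual priors.

```python
import random

def corey_krw(sw, swc, sorw, krw_max, n_w):
    """Water relative permeability from a Corey-type function.

    swc: connate water saturation; sorw: residual oil saturation;
    krw_max: endpoint krw at sw = 1 - sorw; n_w: Corey exponent (curvature).
    """
    s = (sw - swc) / (1.0 - swc - sorw)   # normalized water saturation
    s = min(max(s, 0.0), 1.0)             # clamp outside the mobile range
    return krw_max * s ** n_w

def sample_curve(rng):
    """One prior realization of the uncertain Corey parameters."""
    return {
        "swc": rng.uniform(0.10, 0.25),      # endpoint uncertainty
        "sorw": rng.uniform(0.15, 0.35),     # endpoint uncertainty
        "krw_max": rng.uniform(0.30, 0.80),  # endpoint uncertainty
        "n_w": rng.uniform(1.5, 4.0),        # curvature uncertainty
    }

rng = random.Random(42)
params = sample_curve(rng)           # one 'scoping run' curve
kr_mid = corey_krw(0.5, **params)    # evaluate it at sw = 0.5
```

Each sampled parameter set defines one candidate curve passed to the simulator; the Bayesian framework then weights these realizations by the mismatch function to form the history-matched ensemble.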