Abstract Using the right drilling fluid with optimal rheology and filtration properties is one of the most important factors in successful drilling and completion operations. Designing the right drilling fluid depends on a variety of factors such as formation lithology, wellbore geometry, temperature, pressure, and drilling objectives. To the best of the author's knowledge, there is no standard drilling fluid advisory system to aid drilling engineers and scientists in formulating effective drilling fluid systems for all well sections. This paper describes a drilling fluid advisory system based on Artificial Bayesian Intelligence. The advisory system includes a Bayesian decision network (BDN) model that receives inputs and outputs recommendations based on Bayesian probability determinations, and it has been designed to aid drilling engineers in designing drilling fluids for their operations. The paper describes a module created within this advisory system based on several inputs, namely well geometry (vertical and horizontal), temperature, pressure, and productivity. To create the drilling fluids module, a number of drilling fluid specialists and experts were interviewed to gather the information required to determine best practices as a function of the above inputs. These best practices were then used to build decision trees that allow the user to start from an elementary data set and arrive at a decision that honors the best practices. The design process also included a number of standard laboratory tests, progressing from quality assurance through initial design and finally to field samples that confirm the success of the application. The study also discusses several field cases that validate the drilling fluids advisory system. The novel drilling fluid advisory system based on Artificial Bayesian Intelligence has been designed to aid drilling engineers and scientists in formulating effective drilling fluid systems for all well sections.
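As a hedged illustration of the Bayesian decision step this abstract describes, the sketch below performs a naive-Bayes style update over candidate fluid systems given observed well conditions. The fluid classes, conditions, priors, and likelihood tables are invented placeholders for the expert-elicited best practices, not the paper's actual BDN.

```python
# Minimal sketch of a Bayesian decision step for drilling fluid selection
# (not the authors' model): hypothetical fluid classes, conditions, and
# probabilities stand in for the expert-elicited tables described above.
import numpy as np

fluids = ["water-based", "oil-based", "synthetic-based"]
prior = np.array([0.5, 0.3, 0.2])              # hypothetical prior plausibility

# Hypothetical likelihoods P(observed condition | fluid system), one entry per fluid.
likelihood = {
    "high_temperature": np.array([0.2, 0.6, 0.7]),
    "horizontal_well":  np.array([0.3, 0.7, 0.8]),
}

def posterior(prior, observations):
    """Naive-Bayes style update: multiply likelihoods of the observed conditions."""
    p = prior.copy()
    for obs in observations:
        p *= likelihood[obs]
    return p / p.sum()

post = posterior(prior, ["high_temperature", "horizontal_well"])
for name, p in zip(fluids, post):
    print(f"{name:16s} P = {p:.2f}")
# An advisory system would recommend the fluid system with the highest posterior.
```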
Abstract The evaluation of downhole fluid analysis (DFA) measurements of asphaltene gradients provides the ability to determine the extent of asphaltene equilibrium and the operative reservoir fluid geodynamics (RFG) processes. Typically, equilibrium of reservoir fluids indicates reservoir connectivity, a primary concern in field development planning. Currently, the modeling of asphaltene gradients is done through manual evaluation of the DFA optical density gradients. The optical density measurements are fit to an equation of state (EOS), such as the Flory-Huggins-Zuo EOS, and evidence for asphaltene equilibrium is concluded if the inferred asphaltene diameter corresponds to that of the Yen-Mullins model for asphaltene composition. In this work, we present an automated Bayesian algorithm that proposes multiple hypotheses for the state of asphaltene equilibrium. The proposed hypotheses honor the DFA measurements; physical models for asphaltenes in equilibrium, such as the Yen-Mullins model; and prior domain knowledge of the reservoir, such as geological layers, faults, and flow units. The leading hypotheses are reported, and evidence for or against asphaltene equilibrium is concluded from the inferred quantities. Our proposed method provides a faster way for domain experts to explore different reservoir realizations that honor the theory of asphaltene gradients and prior knowledge about the reservoir. We verify the novel method on three case studies undergoing different RFG processes by comparing its output with interpretations made by domain experts. While there are many reservoir complexities associated with each case study, we focus on whether the underlying RFG process corresponds to asphaltenes in equilibrium. The first case study is a light oil reservoir in the Norwegian North Sea that is mostly in fluid equilibrium, with exceptions at the flanks. The second case study is a black oil reservoir that has undergone fault block migration after the reservoir fluids had a chance to achieve equilibrium. The last case study is a black oil reservoir in quasi-equilibrium due to biodegradation in the lower portion of the well.
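As a hedged sketch of the kind of inference this abstract automates, the snippet below inverts only the gravitational term of the Flory-Huggins-Zuo EOS to back out an asphaltene particle diameter from two optical-density readings; the solubility and entropy terms of the full EOS are neglected, and all numbers are illustrative rather than taken from the case studies.

```python
# Gravity-only term of the FHZ EOS: OD_deep/OD_shallow = exp(v*g*drho*dz/(kB*T)).
# Inverting it for the particle volume v gives a diameter to compare against
# Yen-Mullins sizes (~1 nm molecule, ~2 nm nanoaggregate, ~5 nm cluster).
import numpy as np

KB = 1.380649e-23   # Boltzmann constant, J/K
G = 9.81            # gravitational acceleration, m/s^2

def asphaltene_diameter(od_shallow, od_deep, dz_m, delta_rho, temp_k):
    """Return the inferred asphaltene particle diameter in nanometres."""
    v = KB * temp_k * np.log(od_deep / od_shallow) / (G * delta_rho * dz_m)
    return (6.0 * v / np.pi) ** (1.0 / 3.0) * 1e9

# Illustrative inputs: 100 m of vertical separation, 200 kg/m3 density contrast
# between asphaltene and maltene, and a reservoir temperature of 100 degC.
d_nm = asphaltene_diameter(od_shallow=0.50, od_deep=0.58, dz_m=100.0,
                           delta_rho=200.0, temp_k=373.15)
print(f"inferred diameter ~ {d_nm:.1f} nm")
# A Bayesian workflow would score such hypotheses against all DFA stations
# together with prior knowledge of layers, faults, and flow units.
```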
Jahani, Nazanin (NORCE Norwegian Research Centre) | Ambía, Joaquín (The University of Texas at Austin) | Fossum, Kristian (NORCE Norwegian Research Centre) | Alyaev, Sergey (NORCE Norwegian Research Centre) | Suter, Erich (NORCE Norwegian Research Centre) | Torres-Verdín, Carlos (The University of Texas at Austin)
Abstract The cost of drilling wells on the Norwegian Continental Shelf is extremely high, and hydrocarbon reservoirs are often located in spatially complex rock formations. Optimized well placement with real-time geosteering is crucial to produce efficiently from such reservoirs and to reduce exploration and development costs. Geosteering is commonly assisted by repeated formation evaluation based on the interpretation of well logs while drilling. Thus, reliable, computationally efficient, and robust workflows that can interpret well logs and capture uncertainties in real time are necessary for successful well placement. We present a formation evaluation workflow for geosteering that implements an iterative version of an ensemble-based method, namely the approximate Levenberg-Marquardt form of the Ensemble Randomized Maximum Likelihood (LM-EnRML). The workflow jointly estimates the petrophysical and geological model parameters and their uncertainties. In this paper we demonstrate joint estimation of layer-by-layer water saturation, porosity, and layer-boundary locations, as well as inference of layer resistivities and densities. The parameters are estimated by minimizing the statistical misfit between the simulated and the observed measurements for several logs on different scales simultaneously (i.e., shallow-sensing nuclear density and shallow to extra-deep EM logs). Numerical experiments performed on a synthetic example verify that the iterative ensemble-based method can estimate multiple petrophysical parameters and decrease their uncertainties in a fraction of the time required by classical Monte Carlo methods. Extra-deep EM measurements are known to provide the most reliable information for geosteering, and we show that they can be interpreted within the proposed workflow. However, we also observe that the parameter uncertainties decrease noticeably when deep-sensing EM logs are combined with shallow-sensing nuclear density logs. Importantly, the estimation quality improves not only in the proximity of the shallow tool but also within the look-ahead range of the extra-deep EM measurements. We specifically quantify how shallow data can lead to significant uncertainty reduction of the boundary positions ahead of the bit, which is crucial for geosteering decisions and reservoir mapping.
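To illustrate the ensemble-based update at the core of such a workflow, the following sketch performs an ensemble-smoother analysis with Levenberg-Marquardt-style damping on a toy linear forward model. It is a simplified stand-in for LM-EnRML, not the paper's implementation; the forward model, parameter count, and damping schedule are all assumptions.

```python
# Minimal sketch of a damped ensemble-based update (ES with LM-style damping),
# standing in for the LM-EnRML workflow. g() is a toy forward model, not a
# log-response simulator.
import numpy as np

rng = np.random.default_rng(0)
ne, nm, nd = 100, 3, 5              # ensemble size, model parameters, data points

def g(m):
    """Toy forward model mapping parameters to synthetic 'log' responses."""
    A = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.3],
                  [0.2, 0.0, 1.0],
                  [0.5, 0.5, 0.5],
                  [1.0, 0.0, 1.0]])
    return A @ m

m_true = np.array([1.0, -0.5, 2.0])
obs_err = 0.05
d_obs = g(m_true) + rng.normal(0.0, obs_err, nd)

# Prior ensemble of model parameters (e.g. porosity, saturation, boundary depth).
M = rng.normal(0.0, 1.0, (nm, ne))
lam = 1.0                                        # LM damping factor

for _ in range(5):
    D = np.column_stack([g(M[:, j]) for j in range(ne)])
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (ne - 1)                  # parameter/data cross-covariance
    C_dd = dD @ dD.T / (ne - 1)                  # predicted-data covariance
    C_e = (obs_err ** 2) * np.eye(nd)            # observation-error covariance
    K = C_md @ np.linalg.inv(C_dd + (1.0 + lam) * C_e)
    D_pert = d_obs[:, None] + rng.normal(0.0, obs_err, (nd, ne))
    M = M + K @ (D_pert - D)                     # damped analysis update

print("posterior mean:", M.mean(axis=1).round(2), "truth:", m_true)
```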
Abstract Log-facies classification aims to predict a vertical profile of facies at the well location, with log readings or rock properties calculated in formation evaluation and/or rock-physics modeling analysis as input. Various classification approaches are described in the literature, and new ones continue to appear based on emerging machine learning techniques. However, most of the available classification methods assume that the inputs are accurate, and their inherent uncertainty, related to measurement errors and interpretation steps, is usually neglected. Accounting for facies uncertainty is not a mere exercise in style; rather, it is fundamental for understanding the reliability of the classification results, and it also represents critical information for 3D reservoir modeling and/or seismic characterization processes. This is particularly true in wells characterized by high vertical heterogeneity of rock properties or thinly bedded stratigraphy. Among classification methods, probabilistic classifiers, which rely on the principle of Bayes decision theory, offer an intuitive way to model and propagate measurement/rock-property uncertainty into the classification process. In this work, the Bayesian classifier is enhanced such that the most likely classification of facies is expressed by maximizing the integral product of three probability functions. These describe: (1) the a-priori information on facies proportion; (2) the likelihood of a set of measurements/rock properties belonging to a certain facies class; and (3) the uncertainty of the inputs to the classifier (log data or rock properties derived from them). Reliability of the classification outcome is therefore improved by accounting for both the global uncertainty, related to facies-class overlap in the classification model, and the depth-dependent uncertainty related to log data. As derived in this work, the most interesting feature of the proposed formulation, although it is generally valid for any type of probability function, is that it can be solved analytically by representing the input distributions as a Gaussian mixture model and their related uncertainty as additive white Gaussian noise. This gives a robust, straightforward, and fast approach that can be effortlessly integrated into existing classification workflows. The proposed classifier is tested in various well-log characterization studies on clastic depositional environments, where Monte Carlo realizations of rock-property curves, output of a statistical formation evaluation analysis, are used to infer rock-property distributions. Uncertainty on rock properties, modeled as additive white Gaussian noise, is then statistically estimated (independently at each depth along the well profile) from the ensemble of Monte Carlo realizations. At the same time, a classifier based on a Gaussian mixture model is parametrically inferred from the pointwise mean of the Monte Carlo realizations, given an a-priori reference profile of facies. Classification results, given by the a-posteriori facies proportion and the maximum a-posteriori prediction profiles, are finally computed. The classification outcomes clearly highlight that neglecting uncertainty leads to an erroneous final interpretation, especially at the transition zones between different facies. As mentioned, this becomes particularly evident in complex environments and highly heterogeneous scenarios.
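The key analytic convenience described above is that convolving a Gaussian class likelihood with Gaussian input noise yields another Gaussian whose covariance is simply inflated by the noise covariance. The hedged sketch below demonstrates that idea at a single depth; the facies statistics, priors, and noise levels are invented for illustration, not taken from the paper's field studies.

```python
# Hedged sketch of the covariance-inflation idea behind the enhanced Bayesian
# classifier: input uncertainty (additive white Gaussian noise) is folded in
# analytically by adding its covariance to each facies-class covariance.
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical per-facies Gaussian components (e.g. porosity/Vshale space)
# and prior proportions, standing in for a fitted Gaussian mixture model.
facies = ["shale", "silt", "sand"]
priors = np.array([0.4, 0.3, 0.3])
means = [np.array([0.08, 0.80]), np.array([0.15, 0.45]), np.array([0.25, 0.10])]
covs = [np.diag([0.0004, 0.0100]),
        np.diag([0.0009, 0.0150]),
        np.diag([0.0016, 0.0080])]

def classify(x, noise_cov):
    """Posterior facies probabilities at one depth, with input uncertainty
    handled by inflating each class covariance."""
    lik = np.array([multivariate_normal.pdf(x, m, c + noise_cov)
                    for m, c in zip(means, covs)])
    post = priors * lik
    return post / post.sum()

x_depth = np.array([0.18, 0.35])            # measured porosity, Vshale at one depth
noise_cov = np.diag([0.0010, 0.0200])       # depth-dependent input uncertainty

print("ignoring uncertainty:", classify(x_depth, np.zeros((2, 2))).round(2))
print("with uncertainty:    ", classify(x_depth, noise_cov).round(2))
```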
ABSTRACT The industry is facing significant challenges due to the recent downturn in oil prices, particularly for the development of tight reservoirs. It is more critical than ever to 1) identify the sweet spots with less uncertainty and 2) optimize the completion-design parameters. The overall objective of this study is to quantify and compare the effects of reservoir quality and completion intensity on well productivity. We developed a supervised fuzzy clustering (SFC) algorithm to rank reservoir quality and completion intensity and to analyze their relative impacts on well productivity. We collected reservoir properties and completion-design parameters for 1,784 horizontal oil and gas wells completed in the Western Canadian Sedimentary Basin. Then, we used SFC to classify 1) reservoir quality, represented by porosity, hydrocarbon saturation, net pay thickness, and initial reservoir pressure; and 2) completion-design intensity, represented by proppant concentration, number of stages, and injected water volume per stage. Finally, we investigated the relative impacts of reservoir quality and completion intensity on well productivity in terms of first-year cumulative barrels of oil equivalent (BOE). The results show that in low-quality reservoirs, well productivity follows reservoir quality. However, in high-quality reservoirs, the role of completion design becomes significant, and productivity can be deterred by an inefficient completion design. The results suggest that in low-quality reservoirs, productivity can be enhanced with a less intense completion design, while in high-quality reservoirs, a more intense completion significantly enhances productivity. Keywords: reservoir quality; completion intensity; supervised fuzzy clustering; approximate reasoning; tight reservoir development
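As a hedged illustration of the fuzzy-membership idea behind such a ranking, the sketch below assigns graded memberships of a well to low/medium/high reservoir-quality classes whose centroids are fixed from labeled examples (the "supervised" aspect). The features, normalization, centroids, and fuzziness exponent are illustrative choices, not the paper's calibrated SFC model.

```python
# Fuzzy c-means style memberships with fixed (supervised) class centroids.
import numpy as np

# Features: porosity, hydrocarbon saturation, net pay, initial pressure,
# each assumed normalized to [0, 1] before clustering.
centroids = np.array([[0.2, 0.3, 0.2, 0.3],   # low quality
                      [0.5, 0.5, 0.5, 0.5],   # medium quality
                      [0.8, 0.8, 0.8, 0.8]])  # high quality
labels = ["low", "medium", "high"]
m = 2.0                                        # fuzziness exponent

def memberships(x):
    """Fuzzy memberships of sample x in each quality class."""
    d = np.linalg.norm(centroids - x, axis=1) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

well = np.array([0.7, 0.6, 0.4, 0.8])          # one normalized well record
u = memberships(well)
for name, ui in zip(labels, u):
    print(f"{name:6s} membership = {ui:.2f}")
# A scalar quality score can be the membership-weighted class rank; the same
# construction can be applied to the completion-intensity features.
print("quality score:", round(float(u @ np.array([0.0, 0.5, 1.0])), 2))
```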
Summary In this work, we investigate the efficient estimation of the optimal design variables that maximize net present value (NPV) for life-cycle production optimization of a single-well carbon dioxide (CO2) huff-n-puff (HnP) process in unconventional oil reservoirs. A synthetic unconventional reservoir model based on Bakken Formation oil composition is used. The model accounts for natural fractures and geomechanical effects. Both deterministic (based on a single reservoir model) and robust (based on an ensemble of reservoir models) production optimization strategies are considered. The CO2 injection rate, the production bottomhole pressure (BHP), the durations of the injection and production periods in each cycle of the HnP process, and the cycle lengths for a predetermined life-cycle time can be included in the set of optimized design (or well-control) variables. During optimization, the NPV is calculated by a machine learning (ML) proxy model trained to accurately approximate the NPV that would be calculated from a reservoir simulator run. As the ML algorithms, we use both least-squares (LS) support vector regression (SVR) and Gaussian process regression (GPR). Given a set of forward simulation runs with a commercial compositional simulator that simulates the miscible CO2 HnP process, a proxy is built with the chosen ML method. The proxy model is then used directly within an iterative-sampling-refinement optimization algorithm to optimize the design variables. As the optimization tool, the sequential quadratic programming (SQP) method is used inside this iterative-sampling-refinement algorithm. Computational efficiencies of the ML proxy-based optimization methods are compared with those of the conventional stochastic simplex approximate gradient (StoSAG)-based methods. Our results show that the LS-SVR- and GPR-based proxy models are accurate and useful in approximating NPV in the optimization of the CO2 HnP process. The results also indicate that the GPR and LS-SVR methods exhibit very similar convergence rates, but GPR requires 10 times more computational time than LS-SVR. However, GPR offers flexibility over LS-SVR in assessing uncertainty in the NPV predictions because it provides the covariance information of the GPR model. Both ML-based methods prove to be quite efficient in production optimization, saving significant computational time (at least 4 times more efficient) compared with a stochastic gradient computed from a high-fidelity compositional simulator used directly in a gradient ascent algorithm. To our knowledge, this is the first study presenting a comprehensive review and comparison of two different ML-proxy-based optimization methods with traditional StoSAG-based optimization methods for the production optimization problem of a miscible CO2 HnP process.
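The following sketch shows the general shape of a proxy-assisted, iterative-sampling-refinement loop of the kind summarized above: a GPR proxy for NPV is fit to a small sample set and maximized with an SQP-type optimizer, then refined at the proxy optimum. The npv() function is a toy stand-in for the compositional simulator, and the control variables, bounds, and iteration counts are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def npv(x):
    """Toy NPV of [normalized CO2 injection rate, injection period fraction]."""
    rate, t_inj = x
    return -((rate - 0.6) ** 2 + (t_inj - 0.4) ** 2) + 1.0

bounds = [(0.0, 1.0), (0.0, 1.0)]
X = rng.uniform(0.0, 1.0, (20, 2))             # initial 'simulator' samples
y = np.array([npv(x) for x in X])

for _ in range(5):                              # iterative sampling refinement
    gpr = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gpr.fit(X, y)
    # Maximize the proxy NPV with an SQP-type optimizer (SLSQP).
    res = minimize(lambda x: -gpr.predict(x.reshape(1, -1))[0],
                   x0=X[np.argmax(y)], method="SLSQP", bounds=bounds)
    X = np.vstack([X, res.x])                   # 'simulate' at the proxy optimum
    y = np.append(y, npv(res.x))

best = X[np.argmax(y)]
print("estimated optimal controls:", best.round(3), "NPV:", y.max().round(3))
```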
Aslam, Usman (Emerson Automation Solutions) | Burgos, Jorge (Occidental Oil and Gas) | Williams, Craig (Occidental Oil and Gas) | McCloskey, Shawn (Occidental Oil and Gas) | Cooper, James (Occidental Oil and Gas) | Mirzaei, Mohammad (Occidental Oil and Gas) | Briz, Eduardo (Occidental Oil and Gas)
Abstract Reservoir production forecasts are inherently uncertain due to the lack of quality data available to build predictive reservoir models. Multiple data types, including historical production, well tests (RFT/PLT), and time-lapse seismic data, are assimilated into reservoir models during the history matching process to improve the predictability of the model. Traditionally, a ‘best estimate’ for relative permeability data is assumed during the history matching process, despite there being significant uncertainty in the relative permeability. Relative permeability governs multiphase flow in the reservoir; therefore, it has significant importance for understanding reservoir behavior as well as for model calibration and hence for reliable production forecasts. Performing sensitivities around the ‘best estimate’ relative permeability case covers only part of the uncertainty space, with no indication of the confidence that may be placed on these forecasts. In this paper, we present an application of a Bayesian framework for uncertainty assessment and efficient history matching of a Permian CO2 EOR field for reliable production forecasting. The study field has complex geology with over 65 years of historical data from primary recovery, waterflood, and CO2 injection. Relative permeability data from the field showed significant uncertainty, so we used uncertainties in the saturation endpoints as well as in the curvature of the relative permeability in multiple zones, by employing generalized Corey functions for relative permeability parameterization. Uncertainty in the relative permeability is incorporated through a common platform integrator. An automated workflow generates the first set of relative permeability curves sampled from the prior distribution of saturation endpoints and Corey exponents, called ‘scoping runs’. These relative permeability curves are then passed to the reservoir simulator. The assumptions of uncertainties in the relative permeability data and other dynamic parameters are quickly validated by comparing the scoping runs and historical observations. By creating a mismatch or likelihood function, the Bayesian framework generates an ensemble of history-matched models calibrated to the production data, which can then be used for reliable probabilistic forecasting. Several iterations during the manual history match did not yield an acceptable solution, as uncertainty in the relative permeability was ignored. An application of Bayesian inference accelerated by a proxy model found the relative permeability data to be one of the most influential parameters during the assisted history matching exercise. Incorporating the uncertainty in relative permeability data along with other dynamic parameters not only helped speed up the model calibration process, but also led to the identification of multiple history-matched models. In addition, results show that the use of the Bayesian framework significantly reduced uncertainty in the most important dynamic parameters. The proposed approach allows incorporating previously ignored uncertainty in the relative permeability data in a systematic manner. The user-defined mismatch function increases the likelihood of obtaining an acceptable match, and the weights in the mismatch function account for both the measurement uncertainty and the effect of simulation model inaccuracies. The Bayesian framework considers the whole uncertainty space and not just the history match region, leading to the identification of multiple history-matched models.
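To make the parameterization step concrete, the hedged sketch below samples an ensemble of generalized Corey-type relative permeability curves from uniform priors on the saturation endpoints and exponents; each sampled curve corresponds to one "scoping run" to be passed to a simulator. The endpoint and exponent ranges are illustrative priors, not the field's values.

```python
import numpy as np

rng = np.random.default_rng(2)

def corey_krw(sw, swc, sorw, krw_max, nw):
    """Water relative permeability from a Corey-type function."""
    s = np.clip((sw - swc) / (1.0 - swc - sorw), 0.0, 1.0)
    return krw_max * s ** nw

def corey_kro(sw, swc, sorw, kro_max, no):
    """Oil relative permeability from a Corey-type function."""
    s = np.clip((1.0 - sw - sorw) / (1.0 - swc - sorw), 0.0, 1.0)
    return kro_max * s ** no

sw = np.linspace(0.1, 0.9, 9)
# Sample curves from uniform priors on endpoints and exponents; each member
# would define one 'scoping run' for the simulator.
for i in range(3):
    swc = rng.uniform(0.10, 0.25)
    sorw = rng.uniform(0.15, 0.35)
    nw, no = rng.uniform(1.5, 4.0, 2)
    krw = corey_krw(sw, swc, sorw, krw_max=0.4, nw=nw)
    kro = corey_kro(sw, swc, sorw, kro_max=0.9, no=no)
    print(f"member {i}: swc={swc:.2f} sorw={sorw:.2f} nw={nw:.1f} no={no:.1f}")
```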
Abstract Leveraging publicly available data is a crucial step for decision making around investing in the development of any new unconventional asset. Published reports of production performance, along with accurate petrophysical and geological characterization of the area, help operators evaluate the economics and risk profiles of new opportunities. A data-driven workflow can facilitate this process and make it less biased by enabling agnostic analysis of the data as the first step. In this work, several machine learning algorithms are briefly explained and compared in terms of their application in the development of a production evaluation tool for a target reservoir. Random forest, selected after evaluating several models, is deployed as a predictive model that incorporates geological characterization and petrophysical data along with production metrics into the production performance assessment workflow. Considering the influence of the completion-design parameters on well production performance, this workflow also facilitates the evaluation of several completion strategies to improve decision making around the best-performing completion size. Data used in this study include petrophysical parameters collected from publicly available core data, completion and production metrics, and the geological characteristics of the Niobrara formation in the Powder River Basin. Historical periodic production data are used as indicators of productivity in a given area in the data-driven model. This model, after training and evaluation, is deployed to predict the productivity of non-producing regions within the area of interest to help select the most prolific sections for drilling future wells. Tornado plots are provided to demonstrate the key performance drivers in each focus area. A supervised fuzzy clustering model is also utilized to automate the rock quality analyses for identifying the "sweet spots" in a reservoir. The output of this model is a sweet-spot map generated by evaluating multiple reservoir rock properties spatially. This map assists with combining the different reservoir rock properties into a single exhibit that indicates the average "reservoir quality" of the formation in different areas. The Niobrara shale is used as a case study in this work to demonstrate how the proposed workflow is applied to a selected reservoir formation with enough historical production data available.
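A minimal sketch of the random-forest step of such a workflow is given below: petrophysical and completion features predicting first-year production, with feature importances read off the fitted model. The synthetic data and feature list only illustrate the shape of the problem and are not the Niobrara dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([
    rng.uniform(0.02, 0.12, n),    # porosity
    rng.uniform(10, 60, n),        # net thickness, m
    rng.uniform(500, 2500, n),     # proppant per stage (illustrative units)
    rng.uniform(10, 50, n),        # number of stages
])
# Synthetic first-year cumulative production with noise (toy relationship).
y = 2e4 * X[:, 0] + 30 * X[:, 1] + 0.5 * X[:, 2] + 20 * X[:, 3] \
    + rng.normal(0, 200, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("holdout R2:", round(rf.score(X_te, y_te), 2))
print("feature importances:", rf.feature_importances_.round(2))
# Importances (or a tornado plot built from sensitivities) highlight the key
# performance drivers; the trained model is then applied to undrilled locations.
```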
Soares, Ricardo Vasconcellos (NORCE Norwegian Research Centre and University of Bergen; corresponding author) | Luo, Xiaodong (email: firstname.lastname@example.org) | Evensen, Geir (NORCE Norwegian Research Centre) | Bhakta, Tuhin (NORCE Norwegian Research Centre and Nansen Environmental and Remote Sensing Center (NERSC))
Summary In applications of ensemble-based history matching, it is common to apply Kalman gain or covariance localization to mitigate spurious correlations and excessive variability reduction resulting from the use of relatively small ensembles. An alternative strategy, not well explored in reservoir applications, is to apply a local analysis scheme, which consists of defining smaller groups of local model variables and observed data (observations) and performing history matching within each group individually. This work aims to demonstrate the practical advantages of a new local analysis scheme over Kalman gain localization in a 4D seismic history-matching problem that involves big seismic data sets. In the proposed local analysis scheme, we use a correlation-based adaptive data-selection strategy to choose observations for the update of each group of local model variables. Compared to the Kalman gain localization scheme, the proposed local analysis scheme has an improved capacity for handling big models and big data sets, especially in terms of the computer memory required to store the relevant matrices involved in ensemble-based history-matching algorithms. In addition, we show that despite the higher computational cost of performing the model update per iteration step, the proposed local analysis scheme makes the ensemble-based history-matching algorithm converge faster, reaching the same level of data mismatch at a faster pace. Meanwhile, for the same number of iteration steps, the ensemble-based history-matching algorithm equipped with the proposed local analysis scheme tends to yield better-quality estimated reservoir models than the algorithm with a Kalman gain localization scheme. As such, the proposed adaptive local analysis scheme has the potential to facilitate wider applications of ensemble-based algorithms to practical large-scale history-matching problems.
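The hedged sketch below illustrates the local analysis idea on a toy linear problem: for one group of local model variables, only observations whose ensemble correlation with the group exceeds a threshold are used, and a small local update is performed with just those observations. The forward model, correlation threshold, and group definition are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
ne, nm, nd = 50, 200, 1000          # ensemble size, model size, data size
M = rng.normal(0.0, 1.0, (nm, ne))                  # prior model ensemble
H = rng.normal(0.0, 1.0, (nd, nm)) / np.sqrt(nm)    # toy linear forward operator
obs_err = 0.1
d_obs = H @ rng.normal(0.0, 1.0, nm) + rng.normal(0.0, obs_err, nd)
D = H @ M                                           # predicted-data ensemble

def local_update(M, D, d_obs, group, threshold=0.3):
    """Update one group of model variables using only correlated observations."""
    dM = M[group] - M[group].mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    # Correlation of each observation with the group (max over group members).
    corr = np.abs(np.corrcoef(np.vstack([M[group], D]))[:len(group), len(group):])
    sel = np.where(corr.max(axis=0) > threshold)[0]
    if sel.size == 0:
        return M[group]
    C_md = dM @ dD[sel].T / (ne - 1)
    C_dd = dD[sel] @ dD[sel].T / (ne - 1) + obs_err ** 2 * np.eye(sel.size)
    D_pert = d_obs[sel, None] + rng.normal(0.0, obs_err, (sel.size, ne))
    return M[group] + C_md @ np.linalg.solve(C_dd, D_pert - D[sel])

group = np.arange(0, 10)                            # one local group of variables
M[group] = local_update(M, D, d_obs, group)
print("updated group spread:", M[group].std(axis=1).mean().round(3))
# Only the selected observation rows are held in memory for each group, which
# is what keeps the memory footprint small for big models and big data sets.
```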
Abstract In the event of an offshore oilfield blow-out, real-time quantification of the spilled volume, the recovered oil, and the environmental damage is essential, because the recovery and restoration process is costly. To develop a robust and accurate quantification, we need to consider numerous parameters, which are sometimes difficult to identify and capture. In this work, we present a new modeling technique under uncertainty that accommodates numerous parameters and the interactions among them. We begin by identifying the parameters that contribute to the process, grouped into (1) subsurface, e.g., well and reservoir characteristics, blow-out rate, and oil characteristics; (2) surface, e.g., ocean current speed, meteorological aspects, and soil properties; and (3) operations, e.g., oil-spill treatment (oil booms and skimmers). We assign a prior distribution to each parameter based on available data to capture the uncertainties. Before propagating uncertainty, we construct the objective response (amount of recovered oil) from a mass conservation equation in a data-driven and non-intrusive way, using design of experiments and a regression-based method. We propagate uncertainties using a Monte Carlo simulation approach, and the result is presented as a distribution, summarized by P10, P50, and P90 values. This work shows how to robustly calculate the amount of recovered oil under uncertainty in the event of an offshore blow-out. There are several notable challenges in the approach: 1) determining the uncertainty range of the blow-out rate in case a rupture occurs in the well, 2) obtaining data for wind and ocean current speed, since there is an interplay between local and global climate, and 3) accurately capturing the shoreline geometry. Despite these challenges, the results are in line with the physics and with several recorded blow-out cases. Through sensitivity analysis with Sobol decomposition, which apportions the output variance among the input parameters, we identify the heavy hitters; the blow-out rate is particularly important, as it shows the highest sensitivity. These heavy hitters tell us which parameters to watch most closely. In real-time quantification, this analysis can provide insight into which treatment method should be performed to efficiently recover the spill. We also discuss the sufficiency of the model for obtaining several parameter ranges, for example the blow-out rate: the model should capture the physics in sufficient detail and incorporate multiple scenarios. In the case of the blow-out rate, we extensively model the well completion and consider leakage due to unprecedented fractures or crater formation around the wellbore. We introduce a new modeling framework for real-time quantification of offshore oil spills. This framework allows inferring the causality of the process and illustrating the risk level.
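The hedged sketch below shows the Monte Carlo propagation step in miniature: uncertain inputs are sampled from assumed priors, pushed through a toy response surface for recovered oil, and summarized as P10/P50/P90 percentiles. The distributions, coefficients, and response form are illustrative stand-ins for the design-of-experiments regression built from the mass-conservation model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Assumed prior distributions for a few of the governing parameters.
blowout_rate = rng.lognormal(mean=np.log(5000.0), sigma=0.4, size=n)  # stb/d
duration_days = rng.uniform(10.0, 60.0, n)
skimmer_eff = rng.beta(4.0, 2.0, n)            # fraction of surfaced oil recovered
current_speed = rng.normal(0.5, 0.15, n)       # m/s, drives spreading losses

# Toy regression-style response surface for recovered volume (stb), standing in
# for the non-intrusive proxy built from the mass-conservation model.
spilled = blowout_rate * duration_days
recovered = spilled * skimmer_eff * np.clip(1.0 - 0.3 * current_speed, 0.0, 1.0)

p10, p50, p90 = np.percentile(recovered, [10, 50, 90])
print(f"recovered oil percentiles: {p10:,.0f} / {p50:,.0f} / {p90:,.0f} stb")
# Sobol-style sensitivity indices computed on the same response surface would
# rank the inputs ('heavy hitters') by their contribution to output variance.
```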