Abstract Using the right drilling fluid with optimal rheology and filtration properties is one of the most important factors in successful drilling and completion operations. Designing the right drilling fluid depends on a variety of factors, viz. formation lithology, wellbore geometry, temperature, pressure, and drilling objectives. To the best of the author's knowledge, there is no standard drilling fluid advisory system to help drilling engineers and scientists formulate effective drilling fluid systems for all well sections. This paper describes a drilling fluid advisory system based on Artificial Bayesian Intelligence. The advisory system includes a Bayesian decision network (BDN) model that receives inputs and outputs recommendations based on Bayesian probability determinations. This advisory system has been designed to aid drilling engineers when designing drilling fluids for their operations. This paper describes a module created within this advisory system based on several inputs, viz. well geometry (vertical and horizontal), temperature, pressure, and productivity. To create the drilling fluids module within the advisory system, a number of drilling fluid specialists/experts were interviewed to gather the information required to determine the best practices as a function of the above inputs. These best practices were then used to build decision trees that allow the user to start from an elementary data set and end up with a decision that honors the best practices. The design process for this advisory system also included a number of standard laboratory tests, proceeding from quality assurance through initial design and finally using field samples to confirm the success of the application. The study also discusses several field cases that validate the drilling fluids advisory system. 
The novel drilling fluid advisory system based on Artificial Bayesian Intelligence has been designed to help drilling engineers and scientists formulate effective drilling fluid systems for all well sections.
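The decision step of a Bayesian decision network can be illustrated with a minimal discrete example: a prior over fluid types is updated by the likelihood of each observed well condition. The fluid categories, conditions, and all probabilities below are illustrative assumptions for the sketch, not values from the advisory system described above.

```python
# Minimal sketch of a discrete Bayesian update for fluid selection.
# All categories and probabilities are illustrative assumptions.

PRIOR = {"water_based": 0.5, "oil_based": 0.3, "synthetic": 0.2}

# P(observation | fluid) for two illustrative well conditions.
LIKELIHOOD = {
    "water_based": {"high_temp": 0.2, "horizontal": 0.3},
    "oil_based":   {"high_temp": 0.6, "horizontal": 0.6},
    "synthetic":   {"high_temp": 0.7, "horizontal": 0.7},
}

def posterior(observations):
    """Naive-Bayes-style update: multiply the prior by each observation
    likelihood, then normalize to a proper probability distribution."""
    scores = {}
    for fluid, prior in PRIOR.items():
        p = prior
        for obs in observations:
            p *= LIKELIHOOD[fluid][obs]
        scores[fluid] = p
    z = sum(scores.values())
    return {f: p / z for f, p in scores.items()}

post = posterior(["high_temp", "horizontal"])
best = max(post, key=post.get)  # recommended fluid under these assumptions
```

A full BDN additionally encodes dependencies between inputs and attaches utilities to decisions; this sketch shows only the core probabilistic recommendation step.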
Abstract The evaluation of downhole fluid analysis (DFA) measurements of asphaltene gradients provides the ability to determine the extent of asphaltene equilibrium and the operative reservoir fluid geodynamics (RFG) processes. Typically, equilibrium of reservoir fluids indicates reservoir connectivity, a primary concern in field development planning. Currently, the modeling of asphaltene gradients is done through the manual evaluation of the DFA optical density gradients. The optical density measurements are fit to an equation of state (EOS), such as the Flory-Huggins-Zuo EOS, and evidence for asphaltene equilibrium is concluded if the inferred asphaltene diameter corresponds to that of the Yen-Mullins model for asphaltene composition. In this work, we present an automated Bayesian algorithm that proposes multiple hypotheses for the state of asphaltene equilibrium. The proposed hypotheses honor DFA measurements; physical models for asphaltenes in equilibrium, such as the Yen-Mullins model; and prior domain knowledge of the reservoir, such as geological layers, faults, and flow units. The leading hypotheses are reported, and evidence for or against asphaltene equilibrium is concluded from inferred quantities. Our proposed method provides a faster way for domain experts to explore different reservoir realizations that honor the theory of asphaltene gradients and previous knowledge about the reservoir. We verify our novel method on three case studies undergoing different RFG processes by comparing against the interpretations made by domain experts. While there are many reservoir complexities associated with each case study, we focus on whether the underlying RFG process corresponds to asphaltenes in equilibrium. The first case study is a light oil reservoir in the Norwegian North Sea that is mostly in fluid equilibrium with exceptions at the flanks. 
The second case study is a black oil reservoir that has undergone a fault block migration after the reservoir fluids had a chance to achieve equilibrium. The last case study is a black oil reservoir in quasi-equilibrium due to biodegradation in the lower portion of the well.
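The expected equilibrium optical density gradient follows from the gravity term of the Flory-Huggins-Zuo EOS combined with the Yen-Mullins particle sizes. The sketch below implements that gravity term alone, omitting the solubility and entropy contributions of the full EOS; the density contrast and temperature used in the example are illustrative values, not data from the case studies.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def od_ratio_gravity(diameter_nm, delta_rho, dh, temp_k):
    """Gravity-term-only optical density ratio OD(h2)/OD(h1) from the
    Flory-Huggins-Zuo EOS.

    diameter_nm: asphaltene particle diameter (Yen-Mullins: ~1.5 nm molecule,
                 ~2.7 nm nanoaggregate, ~5 nm cluster)
    delta_rho:   density contrast between asphaltene and bulk fluid, kg/m^3
    dh:          height difference h2 - h1, m (positive upward)
    temp_k:      temperature, K
    """
    d = diameter_nm * 1e-9
    v = math.pi * d**3 / 6.0  # particle volume, m^3 (sphere assumption)
    # Denser asphaltenes concentrate downward, so OD decreases upward.
    return math.exp(-v * delta_rho * 9.81 * dh / (K_B * temp_k))
```

Because the exponent scales with particle volume, a cluster-sized diameter produces a much steeper gradient than a nanoaggregate-sized one, which is the lever used to check an inferred diameter against the Yen-Mullins model.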
Jahani, Nazanin (NORCE Norwegian Research Centre) | Ambía, Joaquín (The University of Texas at Austin) | Fossum, Kristian (NORCE Norwegian Research Centre) | Alyaev, Sergey (NORCE Norwegian Research Centre) | Suter, Erich (NORCE Norwegian Research Centre) | Torres-Verdín, Carlos (The University of Texas at Austin)
Abstract The cost of drilling wells on the Norwegian Continental Shelf is extremely high, and hydrocarbon reservoirs are often located in spatially complex rock formations. Optimized well placement with real-time geosteering is crucial to efficiently produce from such reservoirs and reduce exploration and development costs. Geosteering is commonly assisted by repeated formation evaluation based on the interpretation of well logs while drilling. Thus, reliable, computationally efficient, and robust workflows that can interpret well logs and capture uncertainties in real time are necessary for successful well placement. We present a formation evaluation workflow for geosteering that implements an iterative version of an ensemble-based method, namely the approximate Levenberg-Marquardt form of the Ensemble Randomized Maximum Likelihood (LM-EnRML). The workflow jointly estimates the petrophysical and geological model parameters and their uncertainties. In this paper we demonstrate joint estimation of layer-by-layer water saturation, porosity, and layer-boundary locations and inference of layers’ resistivities and densities. The parameters are estimated by minimizing the statistical misfit between the simulated and the observed measurements for several logs on different scales simultaneously (i.e., shallow-sensing nuclear density and shallow to extra-deep EM logs). Numerical experiments performed on a synthetic example verified that the iterative ensemble-based method can estimate multiple petrophysical parameters and decrease their uncertainties in a fraction of the time required by classical Monte Carlo methods. Extra-deep EM measurements are known to provide the most reliable information for geosteering, and we show that they can be interpreted within the proposed workflow. However, we also observe that the parameter uncertainties noticeably decrease when deep-sensing EM logs are combined with shallow-sensing nuclear density logs. 
Importantly, the estimation quality increases not only in the proximity of the shallow tool but also extends to the look-ahead range of the extra-deep EM measurements. We specifically quantify how shallow data can lead to significant uncertainty reduction in the boundary positions ahead of the bit, which is crucial for geosteering decisions and reservoir mapping.
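The core of any such ensemble workflow is a Kalman-type update that pulls the parameter ensemble toward the observed logs while shrinking its spread. The sketch below shows a single ensemble-smoother update with perturbed observations, a deliberately simplified stand-in for the iterative LM-EnRML scheme; the linear forward model used in the example is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_update(M, D, d_obs, cd):
    """One Kalman-style ensemble update (simplified stand-in for LM-EnRML).

    M:     (n_param, n_ens) parameter ensemble
    D:     (n_data, n_ens) simulated measurements for each member
    d_obs: (n_data,) observed measurements
    cd:    (n_data,) observation-error variances (diagonal covariance)
    """
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    n_e = M.shape[1]
    C_md = dM @ dD.T / (n_e - 1)                   # param-data cross-covariance
    C_dd = dD @ dD.T / (n_e - 1)                   # data covariance
    K = C_md @ np.linalg.inv(C_dd + np.diag(cd))   # Kalman gain
    # Perturb observations so the posterior ensemble spread is statistically correct.
    d_pert = d_obs[:, None] + rng.normal(0.0, np.sqrt(cd)[:, None], D.shape)
    return M + K @ (d_pert - D)
```

The iterative LM-EnRML scheme repeats a damped version of this update, re-running the forward simulator between iterations, which is what allows it to handle the nonlinear EM and density forward models.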
Abstract Log-facies classification aims to predict a vertical profile of facies at the well location, with log readings or rock properties calculated in the formation evaluation and/or rock-physics modeling analysis as input. Various classification approaches are described in the literature, and new ones continue to appear based on emerging Machine Learning techniques. However, most of the available classification methods assume that the inputs are accurate, and their inherent uncertainty, related to measurement errors and interpretation steps, is usually neglected. Accounting for facies uncertainty is not a mere exercise in style; rather, it is fundamental for understanding the reliability of the classification results, and it also represents critical information for 3D reservoir modeling and/or seismic characterization processes. This is particularly true in wells characterized by high vertical heterogeneity of rock properties or thinly bedded stratigraphy. Among classification methods, probabilistic classifiers, which rely on the principles of Bayes decision theory, offer an intuitive way to model and propagate measurement/rock-property uncertainty into the classification process. In this work, the Bayesian classifier is enhanced such that the most likely classification of facies is expressed by maximizing the integral product between three probability functions. These describe: (1) the a-priori information on facies proportions, (2) the likelihood of a set of measurements/rock properties to belong to a certain facies class, and (3) the uncertainty of the inputs to the classifier (log data or rock properties derived from them). Reliability of the classification outcome is therefore improved by accounting for both the global uncertainty, related to facies-class overlap in the classification model, and the depth-dependent uncertainty related to log data. 
As derived in this work, the most interesting feature of the proposed formulation, although generally valid for any type of probability functions, is that it can be analytically solved by representing the input distributions as a Gaussian mixture model and their related uncertainty as an additive white Gaussian noise. This gives a robust, straightforward, and fast approach that can be effortlessly integrated into existing classification workflows. The proposed classifier is tested in various well-log characterization studies on clastic depositional environments, where Monte Carlo realizations of rock-property curves, output of a statistical formation evaluation analysis, are used to infer rock-property distributions. Uncertainty on rock properties, modeled as an additive white Gaussian noise, is then statistically estimated (independently at each depth along the well profile) from the ensemble of Monte Carlo realizations. At the same time, a classifier, based on a Gaussian mixture model, is parametrically inferred from the pointwise mean of the Monte Carlo realizations given an a-priori reference profile of facies. Classification results, given by the a-posteriori facies proportions and the maximum a-posteriori prediction profiles, are finally computed. The classification outcomes clearly highlight that neglecting uncertainty leads to an erroneous final interpretation, especially at the transition zone between different facies. As mentioned, this becomes particularly pronounced in complex environments and highly heterogeneous scenarios.
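The analytic convenience noted above comes from the fact that a Gaussian convolved with Gaussian noise is again Gaussian: the depth-dependent measurement uncertainty simply inflates each class variance. A one-dimensional sketch with two hypothetical facies classes on a density log (all numbers illustrative):

```python
import numpy as np

def classify(x, sigma_noise2, priors, means, variances):
    """Bayesian facies classification for a single 1-D rock-property reading.

    The convolution of the class Gaussian with the additive white Gaussian
    measurement noise is analytic: variances simply add.
    """
    var = variances + sigma_noise2  # noise convolution -> inflated variance
    log_like = -0.5 * (np.log(2.0 * np.pi * var) + (x - means) ** 2 / var)
    post = priors * np.exp(log_like)
    return post / post.sum()        # a-posteriori facies probabilities
```

With two overlapping classes, increasing the noise variance pulls the posterior back toward the prior, which is exactly the behavior at facies transition zones highlighted above: a confident but wrong hard classification becomes an honest, more uncertain one.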
Summary In this work, we investigate the efficient estimation of the optimal design variables that maximize net present value (NPV) for life-cycle production optimization during a single-well carbon dioxide (CO2) huff-n-puff (HnP) process in unconventional oil reservoirs. A synthetic unconventional reservoir model based on Bakken Formation oil composition is used. The model accounts for natural fracture and geomechanical effects. Both the deterministic (based on a single reservoir model) and robust (based on an ensemble of reservoir models) production optimization strategies are considered. The injection rate of CO2, the production bottomhole pressure (BHP), the duration of the injection and production periods in each cycle of the HnP process, and the cycle lengths for a predetermined life-cycle time can be included in the set of optimum design (or well control) variables. During optimization, the NPV is calculated by a machine learning (ML) proxy model trained to accurately approximate the NPV that would be calculated from a reservoir simulator run. As ML algorithms, we use both least-squares (LS) support vector regression (SVR) and Gaussian process regression (GPR). Given a set of forward simulation runs with a commercial compositional simulator that simulates the miscible CO2 HnP process, a proxy is built based on the ML method chosen. Having the proxy model, we use it directly in an iterative-sampling-refinement optimization algorithm to optimize the design variables. As an optimization tool, the sequential quadratic programming (SQP) method is used inside this iterative-sampling-refinement optimization algorithm. Computational efficiencies of the ML proxy-based optimization methods are compared with those of the conventional stochastic simplex approximate gradient (StoSAG)-based methods. Our results show that the LS-SVR- and GPR-based proxy models are accurate and useful in approximating NPV in the optimization of the CO2 HnP process. 
The results also indicate that both the GPR and LS-SVR methods exhibit very similar convergence rates, but GPR requires 10 times more computational time than LS-SVR. However, GPR provides flexibility over LS-SVR to assess uncertainty in our NPV predictions because it provides the covariance information of the GPR model. Both ML-based methods prove to be quite efficient in production optimization, saving significant computational time (at least four times more efficient) over a stochastic gradient computed from a high-fidelity compositional simulator directly in a gradient-ascent algorithm. To our knowledge, this is the first study presenting a comprehensive review and comparison of two different ML-proxy-based optimization methods with traditional StoSAG-based optimization methods for the production optimization problem of a miscible CO2 HnP.
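A GPR proxy of this kind can be sketched in a few lines: fit the proxy to simulator samples of NPV versus a control variable, then optimize the proxy mean over candidate controls. Here a grid search stands in for the SQP step, the NPV response is a made-up quadratic, and the single control variable is an assumption for illustration.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential (RBF) kernel for 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gpr_fit_predict(x_train, y_train, x_test, noise=1e-6):
    """Minimal 1-D Gaussian process regression used as an NPV proxy.

    Returns the posterior mean and variance at x_test; the variance is the
    covariance information that lets GPR quantify proxy uncertainty.
    """
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v**2, axis=0)
    return mean, var
```

In the full workflow, the proxy optimum is re-evaluated with the simulator, the new sample is added to the training set, and the proxy is refit, which is the iterative-sampling-refinement loop described above.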
Aslam, Usman (Emerson Automation Solutions) | Burgos, Jorge (Occidental Oil and Gas) | Williams, Craig (Occidental Oil and Gas) | McCloskey, Shawn (Occidental Oil and Gas) | Cooper, James (Occidental Oil and Gas) | Mirzaei, Mohammad (Occidental Oil and Gas) | Briz, Eduardo (Occidental Oil and Gas)
Abstract Reservoir production forecasts are inherently uncertain due to the lack of quality data available to build predictive reservoir models. Multiple data types, including historical production, well tests (RFT/PLT), and time-lapse seismic data, are assimilated into reservoir models during the history matching process to improve the predictability of the model. Traditionally, a ‘best estimate’ for relative permeability data is assumed during the history matching process, despite there being significant uncertainty in the relative permeability. Relative permeability governs multiphase flow in the reservoir; therefore, it has significant importance in understanding the reservoir behavior as well as for model calibration, and hence for reliable production forecasts. Performing sensitivities around the ‘best estimate’ relative permeability case will cover only part of the uncertainty space, with no indication of the confidence that may be placed on these forecasts. In this paper, we present an application of a Bayesian framework for uncertainty assessment and efficient history matching of a Permian CO2 EOR field for reliable production forecasts. The study field has complex geology with over 65 years of historical data from primary recovery, waterflood, and CO2 injection. Relative permeability data from the field showed significant uncertainty, so we used uncertainties in the saturation endpoints as well as in the curvature of the relative permeability in multiple zones, by employing generalized Corey functions for relative permeability parameterization. Uncertainty in the relative permeability is handled through a common platform integrator. An automated workflow generates the first set of relative permeability curves sampled from the prior distribution of saturation endpoints and Corey exponents, called ‘scoping runs’. These relative permeability curves are then passed to the reservoir simulator. 
The assumptions of uncertainties in the relative permeability data and other dynamic parameters are quickly validated by comparing the scoping runs and historical observations. By creating a mismatch or likelihood function, the Bayesian framework generates an ensemble of history matched models calibrated to the production data, which can then be used for reliable probabilistic forecasting. Several iterations during the manual history match did not yield an acceptable solution, as uncertainty in the relative permeability was ignored. An application of Bayesian inference accelerated by a proxy model found the relative permeability data to be one of the most influential parameters during the assisted history matching exercise. Incorporating the uncertainty in relative permeability data along with other dynamic parameters not only helped speed up the model calibration process, but also led to the identification of multiple history matched models. In addition, results show that the use of the Bayesian framework significantly reduced uncertainty in the most important dynamic parameters. The proposed approach allows incorporating previously ignored uncertainty in the relative permeability data in a systematic manner. The user-defined mismatch function increases the likelihood of obtaining an acceptable match, and the weights in the mismatch function allow both the measurement uncertainty and the effect of simulation model inaccuracies to be accounted for. The Bayesian framework considers the whole uncertainty space and not just the history match region, leading to the identification of multiple history matched models.
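The generalized Corey parameterization reduces each relative permeability curve to a handful of uncertain scalars (saturation endpoints and exponents), which is what makes it convenient to sample from a prior in a Bayesian workflow. A minimal sketch, with parameter values in the example chosen purely for illustration:

```python
import numpy as np

def corey_relperm(sw, swc, sor, krw_max, kro_max, nw, no):
    """Generalized Corey relative permeability curves.

    The endpoints (swc, sor, krw_max, kro_max) and exponents (nw, no) are the
    uncertain scalars that a history-matching workflow would sample from a
    prior distribution.
    """
    # Normalized water saturation, clipped to the mobile range.
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)
    krw = krw_max * swn**nw            # water relative permeability
    kro = kro_max * (1.0 - swn)**no    # oil relative permeability
    return krw, kro
```

Each prior sample of the six scalars yields a full pair of curves to pass to the simulator, so the ‘scoping runs’ described above amount to drawing parameter vectors and evaluating this function.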
Naufal, Ahmad Naufal (PETRONAS CARIGALI SDN BHD) | Samy, Samy Abdelhamid (PETRONAS CARIGALI SDN BHD) | Nenisurya, Nenisurya Hashim (PETRONAS CARIGALI SDN BHD) | Zaharuddin, Zaharuddin Muhammad (PETRONAS CARIGALI SDN BHD) | Eddy, Eddy Damsuri (PETRONAS CARIGALI SDN BHD) | Amir, Amir Ali (PETRONAS CARIGALI SDN BHD) | Hilmi, Mohd Hilmi (Universiti Teknologi PETRONAS) | Izzatdin, Izzatdin A (Universiti Teknologi PETRONAS) | Jafreezal, Jafreezal Jaafar (Universiti Teknologi PETRONAS) | Norshakirah, Norshakirah A (Universiti Teknologi PETRONAS) | Emelia, Emelia Akashah (Universiti Teknologi PETRONAS) | Amirul, Ku Amirul (Universiti Teknologi PETRONAS) | Hajar, Hajar M (Universiti Teknologi PETRONAS) | Wahyu, Ade Wahyu (Universiti Teknologi PETRONAS) | Syakirah, Nur Syakirah (Universiti Teknologi PETRONAS)
Abstract Equipment failure, unplanned downtime, and environmental damage costs represent critical challenges across the overall oil and gas business, from well and reservoir identification and drilling strategy to production and processing. Identifying and managing the risks around assets that could fail and cause redundant and expensive downtime is at the core of plant reliability in the oil and gas industry. In the current digital era, there is an essential need for innovative data-driven solutions to address these challenges, especially for monitoring and diagnosing plant equipment operations, recognizing equipment failures, avoiding unplanned downtime, repair costs, and potential environmental damage, and maintaining reliable production. Machine learning and artificial intelligence applications are being studied to develop predictive maintenance (PdM) models as innovative analytics solutions based on real-time data streaming, to reach an elevated level of situational intelligence that guides actions and provides early warnings of impending asset failures that previously remained undetected. This paper proposes novel machine learning predictive models based on extreme learning machines and support vector machines (ELM-SVM) to predict the time to failure (TTF) and when plant equipment will fail, so maintenance can be planned well ahead of time to minimize disruption. Proper visualization with deep insights (training and validation processes) of the available mountains of historian and real-time data is carried out. Comparative studies of the ELM-SVM techniques versus the most common physical-statistical regression techniques are performed using available rotating equipment (compressor) and time-to-failure data. Results are presented and are promising, showing that the new machine learning (ELM-SVM) techniques outperform the physical-statistical techniques with reliable and highly accurate predictions, which can have a high impact on the future ROI of the oil and gas industry.
Yi, Michael (Intellicess Inc.) | Ramos, Dawson (Intellicess Inc.) | Ashok, Pradeepkumar (Intellicess Inc.) | Thetford, Taylor (Apache Corporation) | Bohlander, Spencer (Apache Corporation) | Behounek, Michael (Apache Corporation)
Abstract Detecting drilling dysfunctions from surface data is not always easy, as downhole vibrations tend to be damped before they reach surface sensors. Building machine learning models to recognize patterns in the surface data requires vibration signals captured by downhole sensors for training purposes. Such datasets are not widely available, and therefore a methodology to expand these datasets is highly desirable. This work explores ways to utilize data augmentation to artificially diversify and increase datasets to build better models. Stick-slip (including full-stick), bit bounce, whirl, and bit balling are the primary dysfunctions considered in this work. Bayesian networks are used as classifiers to keep the model intuitive and to address situations where some input data is missing or unavailable. Once the dysfunction events in the downhole dataset were labeled, data augmentation techniques were used to generate synthetic data for scenarios where data was sparse. The dataset used in the project consisted of nine wells (with 19 bit runs). Most of the bit runs had a downhole vibration sensor at the bit, while some had sensors along the string as well. Of these 19 bit runs, 15 were used for training and four were used to test the models. Various data augmentation techniques were applied, and the resulting synthetic data were validated manually. In the case of full-stick event detection, the sawtooth pattern in the surface torque signal was captured and provided as an input to the classifier. The classifiers thus trained were able to detect the dysfunctions using data from surface sensors with a high level of accuracy and low false-alarm rates. This paper presents models to predict downhole dysfunctions from surface data alone. This paper also provides guidance on data augmentation techniques that use sparse downhole datasets to improve machine learning drilling advisory models. 
For identifying drilling dysfunction from surface data, the tortuosity of the well is also taken into account.
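Two of the simplest augmentation moves for labeled surface-sensor windows are amplitude scaling and Gaussian jitter, which diversify the training set while preserving the shape of a signature such as the sawtooth torque pattern of full-stick events. The sketch below uses illustrative parameters, not those validated in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def augment(signal, n_aug=5, jitter_std=0.02, scale_range=(0.9, 1.1)):
    """Generate synthetic variants of a labeled sensor window.

    Each variant is the original signal scaled by a random amplitude factor
    plus additive Gaussian jitter; both moves preserve the temporal shape of
    the dysfunction signature. Parameters are illustrative assumptions.
    """
    out = []
    for _ in range(n_aug):
        scale = rng.uniform(*scale_range)
        noise = rng.normal(0.0, jitter_std, size=signal.shape)
        out.append(scale * signal + noise)
    return np.stack(out)
```

Other common moves (time warping, window slicing, magnitude warping) follow the same pattern; the manual validation step described above is what keeps the synthetic windows physically plausible.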
Ashok, Pradeepkumar (Intellicess Inc.) | D' Angelo, John (Intellicess Inc.) | Ramos, Dawson (Intellicess Inc.) | Yi, Michael (Intellicess Inc.) | Thetford, Taylor (Apache Corp) | Younk, Nathaniel (Montana Tech) | Bohlander, Spencer (Apache Corp) | Shahri, Mojtaba (Apache Corp) | Behounek, Michael (Apache Corp)
Abstract Hole cleaning is important in preventing stuck pipe events during well construction operations. A cuttings transport model is traditionally used to determine the cleanliness of a hole, but its real-time rig site implementation is often made difficult by a lack of necessary inputs. There is a need for a simpler yet reliable approach to quantifying hole cleanliness using data readily available at the rig site. The paper proposes a method that relies on the detection of events over a long time horizon and the use of key parameters relating to such events to quantify hole cleanliness. These events are then related through duration and frequency to probabilistic features in a Bayesian network, to infer the probability that the hole cleaning process has been efficient or poor. These events are also weighted by their age to ensure that current beliefs are not strongly influenced by those that are far in the past. The method was deployed on a drilling advisory system and is currently used on rigs in North American land operations. The events and features found to be most relevant to quantifying hole cleanliness were the circulation rates during drilling, tight spots when moving the drillstring, bit hydraulics, and prolonged periods of inactivity. Proactive hole cleaning actions such as working the pipe, off-bottom circulation, and pipe rotation were also considered. The Bayesian network model used by the proposed method could be run with low computational overhead (microseconds on a standard edge device) compared to a traditional cuttings transport model. This was enabled by an event logging procedure that keeps track of hole-cleaning events over time and consolidates several hours (or days) of drilling information into relevant hole-cleaning features that can be processed quickly. The proposed method was validated with statistical methods using surface datasets from six wells involved in North American land operations. 
Through this validation it was determined that the method was highly effective in correctly characterizing hole conditions throughout the well operation. On the rig, the system was helpful not only in alerting the drillers whenever hole cleanliness deteriorated but also in providing the most likely causes of the deterioration. This provided the rig crew with real-time guidance to make actionable decisions to avoid a stuck pipe situation. The proposed method differentiates itself from the published methods of hole cleaning analysis in two main aspects. First, it does not presume to estimate the cuttings bed height or accumulation over time. Instead, it attempts to infer the probability that the hole cleaning operations are effective over time using features in data that suggest efficient or poor hole cleaning. Second, this method provides a clear indication of when hole cleaning actions are needed and why.
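The age weighting of events can be sketched as exponentially decaying log-odds evidence feeding a single belief, so a tight spot logged minutes ago dominates one logged a day ago. The event weights and half-life below are illustrative assumptions, not the calibrated values of the deployed Bayesian network.

```python
import math

def hole_cleaning_belief(events, now, half_life_hr=6.0):
    """Combine logged events into one belief that hole cleaning is poor.

    events: list of (timestamp_s, weight) pairs, where weight is log-odds
            evidence (positive = evidence of poor cleaning; both the weights
            and the half-life here are illustrative assumptions).
    now:    current timestamp in seconds.
    Evidence decays exponentially with age so old events stop dominating.
    """
    log_odds = 0.0
    for t_event, weight in events:
        age_hr = (now - t_event) / 3600.0
        log_odds += weight * 0.5 ** (age_hr / half_life_hr)
    return 1.0 / (1.0 + math.exp(-log_odds))  # P(poor cleaning)
```

With no logged events the belief sits at 0.5; the full method instead feeds these decayed event features into a Bayesian network, which also yields the most likely cause of a deterioration.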
Abstract Automated geosteering relies on logging-while-drilling data from offset wells to make inferences about the geological formation and help guide directional drilling of the subject well. When data from multiple offset wells are available, it is desirable to consistently combine typelogs from these wells to better estimate the 3D geological formation around the drilling path. We develop a quantitative typelog alignment method based on a Bayesian approach, where the alignment map between pairs of typelogs is modeled as a random function with a prior distribution. A multi-stage penalized procedure is developed that optimizes this alignment map to minimize a misfit function, while taking the prior knowledge into consideration.
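A minimal stand-in for such a penalized alignment is dynamic-programming warping with a penalty on deviation from the identity map, the penalty playing the role of the prior on the alignment function. This is a sketch of the idea only, not the multi-stage procedure of the paper, and the penalty form is an assumption.

```python
import numpy as np

def align_typelogs(log_a, log_b, lam=0.1):
    """Dynamic-programming alignment cost between two typelogs.

    Minimizes the accumulated squared misfit plus a penalty lam on deviation
    of the warping path from the identity (diagonal) map, a simple stand-in
    for a Bayesian prior favoring near-identity alignments.
    Assumes both logs have at least two samples.
    """
    n, m = len(log_a), len(log_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            misfit = (log_a[i - 1] - log_b[j - 1]) ** 2
            # Penalize distance from the diagonal in normalized depth.
            prior = lam * abs((i - 1) / (n - 1) - (j - 1) / (m - 1))
            cost[i, j] = misfit + prior + min(
                cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]
```

Backtracking through the cost matrix would recover the alignment map itself; a Bayesian treatment additionally propagates uncertainty in that map rather than returning a single optimum.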