Abstract This study explores three distinct approaches to ROP modeling: deterministic, data-driven, and hybrid models. Deterministic or physics-based models rely on a fixed equation derived from the physical principles of drilling and have been the traditional workhorse of the industry. Newer, more powerful data-driven models utilize machine learning and predictive analytics to enhance ROP prediction and optimization. However, the improved predictive accuracy achieved with statistical techniques comes at the expense of model interpretability. To overcome this disadvantage, a novel hybrid modeling technique is introduced. A novel way to formulate hybrid models is discussed by presenting two broad strategies: ensembles of a single deterministic model (hybrid-One) and ensembles of several deterministic models (hybrid-N). They provide a means to encode the drilling physics formulated in deterministic models into machine learning algorithms. Hybrid models also enable inference on ROP models, which provides valuable insight. Both classes of hybrid models predict ROP with greater accuracy than physics-based models alone; purely data-driven models perform marginally better in most cases. On the other hand, hybrid models offer higher interpretability, as they are built from deterministic models. Inference using hybrid models is discussed with a case study for the Mission Canyon Limestone. Hence, hybrid models are employed for ROP prediction and optimization by computing ideal drilling operating parameters - weight-on-bit, RPM, and flowrate - for each rock formation in the vertical section of a Bakken shale horizontal well. The study demonstrates the use of hybrid models for higher accuracy (than deterministic models) and higher interpretability (than machine learning models) - providing an optimal tradeoff.
1. Introduction Rate of penetration (ROP) has been a focal point in drilling optimization for decades: the rate at which a well is drilled is a key indicator of drilling efficiency. Higher ROP implies faster drilling, which in turn implies better rig performance and higher rig productivity. Given this inherent interest in ROP modeling, deterministic models have been developed over the past 50 years for ROP prediction based on laboratory experiments (Bingham, 1964; Bourgoyne Jr & Young Jr, 1974; Hareland & Rampersad, 1994; Motahhari, Hareland, & James, 2010). The performance of these models has been routinely questioned (Soares, Daigle, & Gray, 2016) since their applicability on new datasets is not guaranteed. Advances in computational power and machine learning over the past few years have seen the birth of many new data-driven ROP prediction models - models based purely on data statistics (C. Hegde, Daigle, Millwater, & Gray, 2017; C. Hegde & Gray, 2017). These models utilize machine learning algorithms for ROP prediction and have been shown to generalize well for different formations.
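The hybrid-One strategy described above (a machine-learning ensemble wrapped around a single deterministic model) can be sketched as a residual learner: the physics equation makes a base prediction, and a random forest learns what the equation misses. Everything below is an illustrative assumption, not the paper's actual formulation: the Bingham-style power law, its coefficients, and the synthetic drilling data are hypothetical stand-ins.

```python
# Minimal sketch of a "hybrid-One" ROP model: a deterministic (physics-based)
# equation plus a random-forest residual learner. All equations, coefficients,
# and data here are synthetic placeholders, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
wob = rng.uniform(5, 30, n)        # weight-on-bit (synthetic)
rpm = rng.uniform(50, 200, n)      # rotary speed (synthetic)
depth = np.linspace(1000, 3000, n)

def deterministic_rop(wob, rpm):
    # Hypothetical Bingham-style power law with assumed coefficients.
    return 0.5 * wob**0.8 * rpm**0.5

# Synthetic "true" ROP: the physics term plus a depth-dependent formation
# effect that the deterministic equation cannot capture, plus noise.
formation = 10 * np.sin(depth / 300)
rop = deterministic_rop(wob, rpm) + formation + rng.normal(0, 1, n)

idx = rng.permutation(n)
tr, te = idx[:400], idx[400:]
X = np.column_stack([wob, rpm, depth])
det = deterministic_rop(wob, rpm)

# hybrid-One: fit a random forest on the residuals of the physics model.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[tr], (rop - det)[tr])

pred_det = det[te]                      # physics alone
pred_hybrid = det[te] + rf.predict(X[te])  # physics + learned residual

mse = lambda p: float(np.mean((rop[te] - p) ** 2))
print(mse(pred_det), mse(pred_hybrid))  # hybrid should be the smaller error
```

Because the base prediction always comes from the deterministic equation, the hybrid retains that equation's interpretability; the forest only explains the residual, which is where the data-driven accuracy gain comes from.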
Abstract Predicting rate of penetration (ROP) has always been of fundamental interest to the drilling industry. Early predictions can assist the engineer in changing parameters to reduce non-productive time (NPT) and achieve optimum ROP. This paper illustrates methods to predict ROP in a computationally efficient manner using only data available at the surface. These methods can then be incorporated into real-time drilling operations, first through a passive diagnostic tool, and then an integrated real-time control loop. In this work, statistical learning techniques such as trees, bagged trees, and random forests (RF) are used to predict ROP. Trees provide easy interpretability and hence are favored over other non-linear techniques. However, accuracy is imperative in this procedure. Accuracy can be increased by using bootstrap aggregating (bagging) or RF. These techniques are applied using the statistical computing package R and its numerous libraries. Statistical learning techniques have been applied to a data set with nine predictors. Applying trees to a data set yields good visualization of the data, but trees lack accuracy and can result in substantial overfitting. This shortcoming was rectified using bagging or RF methods to substantially increase accuracy. The results were promising in all cases and acceptable for real-time predictions. Scalability is another concern for real-time operations. The computational efficiency of the methods was evaluated, and the best method was selected based on a combination of computational efficiency and accuracy. Potential time savings from applying the model in real-time optimization, and a demonstration of the power of machine learning techniques, are included in this paper. Future improvements will be incorporated in real-time prediction during drilling.
State of the art statistical learning and machine learning techniques are applied to prediction of ROP, whereas previous prediction methods have not been based on real-time drilling data. The result is a computationally efficient model which can determine the right features for prediction at each step, while also incorporating engineering judgement and maintaining integrity of the statistical principles being employed. These methods can easily be extended to other drilling parameters such as MSE or Torque and Drag.
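The tree → bagging → random forest progression described above can be sketched briefly. The paper works in R; the snippet below is an equivalent scikit-learn sketch on synthetic data, and the nine features are generic stand-ins, not the paper's actual surface predictors.

```python
# Sketch of the tree -> bagging -> random forest comparison for ROP-style
# regression. Data and the response function are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(0, 1, (n, 9))   # nine generic surface predictors
y = 5 * X[:, 0] + np.sin(6 * X[:, 1]) + X[:, 2] * X[:, 3] + rng.normal(0, 0.1, n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "tree": DecisionTreeRegressor(random_state=0),          # interpretable, overfits
    "bagging": BaggingRegressor(n_estimators=100, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
# Held-out R^2 for each method; the averaged ensembles should beat the
# single fully grown tree.
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
print(scores)
```

The design tradeoff mirrors the abstract: a single tree is easy to visualize but high-variance, while bagging and RF average many trees to trade some interpretability for accuracy.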
Summary There is a great deal of interest in the oil and gas industry (OGI) in seeking ways to implement machine learning (ML) to provide valuable insights for increased profitability. With buzzwords such as data analytics, ML, artificial intelligence (AI), and so forth, the curiosity of typical drilling practitioners and researchers is piqued. While a few review papers summarize the application of ML in the OGI, such as Noshi and Schubert (2018), they only provide simple summaries of ML applications without detailed and practical steps that benefit OGI practitioners interested in incorporating ML into their workflow. This paper addresses this gap by systematically reviewing a variety of recent publications to identify the problems posed by oil and gas practitioners and researchers in drilling operations. Analyses are also performed to determine which algorithms are most widely used and in which areas of oilwell-drilling operations these algorithms are being used. Deep dives are performed into representative case studies that use ML techniques to address the challenges of oilwell drilling. This study summarizes which ML techniques are used to resolve the challenges faced, and what input parameters are needed for these ML algorithms. The optimal size of the necessary data set is included and, in some cases, so is where to obtain the data set for efficient implementation. Thus, we break down the ML workflow into the three phases commonly used in the input/process/output model. Simplifying the ML applications into this model is expected to help define the appropriate tools to be used for different problems. In this work, data on the required input, appropriate ML method, and the desired output are extracted from representative case studies in the literature of the last decade.
The results show that artificial neural networks (ANNs), support vector machines (SVMs), and regression are the most used ML algorithms in drilling, accounting for 18%, 17%, and 13%, respectively, of all the cases analyzed in this paper. Of the representative case studies, 60% implemented these and other ML techniques to predict the rate of penetration (ROP), differential pipe sticking (DPS), drillstring vibration, or other drilling events. Prediction of rheological properties of drilling fluids and estimation of formation properties was performed in 22% of the publications reviewed. Some other aspects of drilling in which ML was applied were well planning (5%), pressure management (3%), and well placement (3%). From the results, the top ML algorithms used in the drilling industry are versatile algorithms that are easily applicable in almost any situation. The presentation of the ML workflow in different aspects of drilling is expected to help both drilling practitioners and researchers. Several step-by-step guidelines available in the publications reviewed here will guide the implementation of these algorithms in the resolution of drilling challenges.
Abstract With the persistent quest for better prediction accuracies of petroleum reservoir properties, research in Computational Intelligence (CI) continues to evolve new techniques to meet this noble objective of petroleum reservoir characterization. In previous presentations, it was established that individual CI techniques are limited in their performance, as they have their respective areas of strengths and weaknesses. The concept of Hybrid CI (HCI) was presented to overcome this problem, as it utilizes the strengths of two or more techniques to complement their respective weaknesses. However, the HCI techniques are not able to integrate the various expert opinions on the optimization of CI techniques and those that exist in their respective fields of application. The ensemble learning paradigm is presented here as a possible solution. The ensemble learning paradigm, also called the committee of learning machines, is the latest development in Computational Intelligence and Machine Learning technologies. It is the method of combining the output of several individual learners with different hypotheses employed to solve the same problem in order to produce an overall best result. The success of this paradigm is based on the belief that the decision of a committee of experts is better than that of a single expert. The ensemble method has been successfully applied in other fields such as bioinformatics, hydrology, time series forecasting, soil science, and control systems. Its benefits have not been well utilized in petroleum engineering. As a continuation of what has become an "SPE Computational Intelligence Lecture Series" over the past two years, this paper presents an overview of the ensemble learning paradigm, a review of its successful application in other fields, a justification of its necessity in petroleum engineering, and a general framework for its successful application in reservoir characterization.
This paper will benefit readers interested in exploring the exciting world of computational intelligence and in appreciating the benefits of its latest developments.
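The "committee of learning machines" idea, combining the outputs of several individual learners fitted to the same problem, can be sketched in a few lines. The three member models and the synthetic data below are illustrative assumptions, not the paper's framework; the combination rule shown is simple output averaging.

```python
# Minimal sketch of an ensemble ("committee") of heterogeneous learners:
# each member fits the same regression problem and their predictions are
# averaged. Models and data are illustrative, not from the paper.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (600, 3))
y = X[:, 0] ** 2 + np.cos(3 * X[:, 1]) + rng.normal(0, 0.2, 600)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

committee = [
    Ridge(),                                            # linear hypothesis
    KNeighborsRegressor(n_neighbors=5),                 # local hypothesis
    DecisionTreeRegressor(max_depth=6, random_state=0), # piecewise hypothesis
]
preds = np.column_stack([m.fit(Xtr, ytr).predict(Xte) for m in committee])
ensemble = preds.mean(axis=1)  # combine the committee's outputs

mse = lambda p: float(np.mean((yte - p) ** 2))
member_mses = [mse(preds[:, i]) for i in range(preds.shape[1])]
print(member_mses, mse(ensemble))
```

By convexity of squared error, the averaged prediction can never have a higher mean squared error than the average of the members' errors, which is one formal justification for the committee-of-experts belief stated above.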
Galkina, Alena Vladimirovna (Institute of Geology and Development of Fossil Fuels IGIRGI) | Yalaev, Tagir Rustamovich (Institute of Geology and Development of Fossil Fuels IGIRGI) | Lisitsyna, Marina Yurievna (Institute of Geology and Development of Fossil Fuels IGIRGI) | Rakhimov, Timur Rinatovich (Institute of Geology and Development of Fossil Fuels IGIRGI)
Abstract This study presents an approach based on machine learning to predict the lithology at the bit while drilling. This is necessary for (near) real-time adjustment of the well path while providing reservoir characterization (reservoir rock, non-reservoir rock, and tight rock). The approach is based on an ensemble of decision trees and gradient boosting. Work on this project included analysis and preprocessing of the data, selection of features for training, creation of additional features to improve the quality of the model, comparison of machine learning algorithms to identify the most effective for the task, and optimization of hyperparameters. After that, the final model is built on a training dataset with desired outputs obtained from LWD formation evaluation, and the quality of the algorithm is evaluated on a test dataset with the selected metrics. The computer program developed from the proposed approach receives drilling data as input and provides a reservoir characterization. A high-quality model is necessary for successful geosteering of horizontal wells, due to the rapid detection of lithology changes in the reservoir, and increases the efficiency of well drilling by minimizing penetration through non-reservoir rock, which can further increase oil production. The proposed approach provides an accuracy of 80-90% for a number of oil fields.
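The gradient-boosted classification step above can be sketched as follows. The features, the three-way labels, and the data are synthetic stand-ins (the paper trains on real drilling data with labels from LWD formation evaluation), so treat this only as an illustration of the model family.

```python
# Illustrative sketch of gradient-boosted trees mapping drilling parameters
# to a three-way reservoir characterization. Features, labels, and data are
# synthetic placeholders, not the paper's field data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 900
X = np.column_stack([
    rng.uniform(5, 30, n),    # weight-on-bit (synthetic)
    rng.uniform(50, 200, n),  # RPM (synthetic)
    rng.uniform(10, 60, n),   # ROP (synthetic)
])
# Synthetic labels: 0 = reservoir, 1 = tight rock, 2 = non-reservoir,
# loosely tied to ROP so the classes are learnable.
y = np.digitize(X[:, 2] + rng.normal(0, 3, n), [25, 45])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)  # held-out classification accuracy
print(round(acc, 3))
```

In a real geosteering workflow the predicted class for each new increment of drilling data would drive the (near) real-time well-path adjustment the abstract describes.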