Downhole vibration remains a major challenge for drillers. Technology now makes it possible to look at the problem from a new perspective: evaluating machine learning algorithms to predict downhole vibrations. Prediction is the first step in a longer road map; the ultimate goal is to find an optimal combination of revolutions per minute (RPM) and weight on bit (WOB) that remedies drilling vibration in real time, hence closing the loop. Drilling mechanics data for thousands of wells, acquired over more than ten years, were analyzed. The data required preparation. Cleaning was performed first, including corrections for the time-dependent nature of the data, imputation of missing values, and handling of outliers and anomalies. This was followed by feature engineering, which included adding variables based on company-wide drilling domain expertise, variables to capture data patterns, and variables to better capture the time-series dependencies.
This paper discusses the methodologies and general rules tested for preparing unstructured drilling data. Gradient boosting and random forest are among the machine learning algorithms used as building blocks of the full solution; deep learning models were also tested, and the value of each is compared. The results were compiled to select the best algorithm, which could then be fine-tuned for optimum performance. The time-series aspect of the data is captured in a moving window, and the performance of each algorithm varied as the window size increased. The benefits and drawbacks of each algorithm for drilling predictions are evaluated in detail. Ways to improve the accuracy of downhole vibration prediction are also suggested, with reference to the results showing the logic behind each recommendation. A summary of each finding and a short discussion of the way forward for the industry conclude the paper.
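The moving-window treatment of time-series drilling data described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the channel name `rpm`, the window width, and the particular features (rolling mean, standard deviation, and a crude trend) are all placeholders for the engineered variables the paper describes.

```python
from statistics import mean, stdev

def window_features(series, width):
    """Rolling mean/std/trend features over a fixed-width moving window.

    Illustrative sketch of time-series feature engineering; real drilling
    channels (RPM, WOB, torque, ...) would each contribute their own
    feature columns, and window width would be tuned per the paper.
    """
    feats = []
    for i in range(width, len(series) + 1):
        w = series[i - width:i]
        feats.append({
            "mean": mean(w),
            "std": stdev(w),
            "delta": w[-1] - w[0],  # crude trend across the window
        })
    return feats

# toy RPM channel; one feature row per complete window
rpm = [120, 118, 125, 130, 122, 119, 127]
feats = window_features(rpm, width=4)
```

Feature rows built this way can then be fed to gradient boosting, random forest, or a deep model, which is how the window size ends up affecting each algorithm's performance.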
Low rates of penetration (ROP) were experienced in an area with well-known lithology. The vast drilling experience and similarity of drilling conditions in the area provided the operator with enough data to improve the well schedule and cost performance through the use of machine learning.
Machine learning, specifically artificial neural networks (ANNs), is a statistical tool for finding relations among multiple inputs. Details that would have been missed, or treated as outliers, by a conventional mathematical model can be accounted for and explained in the ANN model. The ANN was trained on thousands of real-time data points recorded from selected wells in a specific depth interval. Typical drilling parameters such as weight on bit, rotary speed, bit hydraulics, lithological properties, and dogleg severity were chosen as model inputs to generate ROP. Once the model was calibrated to historical data, it was used to find the parameters that maximize ROP.
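For readers unfamiliar with the mechanics, a fully connected network of the kind described maps drilling parameters to ROP through a few layers of weighted sums and nonlinear activations. The sketch below shows only the forward pass with placeholder weights; the calibrated weights, layer sizes, and tanh activation choice here are illustrative assumptions, not the paper's fitted model.

```python
import math

def ann_forward(x, layers):
    """Forward pass of a small fully connected network with tanh hidden
    units. `layers` is a list of (weights, biases) pairs; the weights
    below are arbitrary placeholders, not a calibrated ROP model."""
    for k, (W, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bk
             for row, bk in zip(W, b)]
        if k < len(layers) - 1:          # hidden layers use tanh
            x = [math.tanh(v) for v in x]
    return x

# toy 2-input -> 2-node hidden -> 1-output network (illustrative only)
layers = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),
    ([[1.0, 1.0]], [0.0]),
]
rop = ann_forward([1.0, 2.0], layers)[0]
```

Training consists of adjusting the weights so predicted ROP matches the historical data, which is the calibration step the abstract refers to.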
R-squared values were 0.729 and 0.675 for the 12.25-in. and 17.5-in. sections, respectively. This was achieved with an ANN structure of two hidden layers of five nodes each. Sensitivity analysis identified bit hydraulics, weight on bit, and rotary speed as the major parameters impacting ROP. The ROP model was used to conduct a "virtual drill-off test" to identify drilling parameters that maximize ROP. ROP dependency on weight on bit, together with lithological analysis, suggests the bit design can be further improved. The bit hydraulics analysis showed that a higher flow rate was needed in sections with higher overbalance. The optimum drilling parameters were tested on four wells and resulted in more than 50% higher ROP compared to the original field data.
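The "virtual drill-off test" idea can be sketched as a simple search over a calibrated surrogate: rather than varying WOB and RPM on the rig, the model is evaluated offline and the best-predicted combination kept. The `rop_model` below is a hypothetical stand-in for the trained ANN, and the toy surrogate is only there to make the sketch runnable.

```python
import itertools

def virtual_drill_off(rop_model, wob_range, rpm_range):
    """Grid-search a trained ROP surrogate over operating parameters and
    return the (WOB, RPM) pair with the highest predicted ROP.
    `rop_model` stands in for the calibrated ANN (an assumption here)."""
    return max(itertools.product(wob_range, rpm_range),
               key=lambda p: rop_model(*p))

# toy concave surrogate with an interior optimum (illustration only)
toy_model = lambda wob, rpm: -(wob - 25) ** 2 - 0.5 * (rpm - 140) ** 2
wob, rpm = virtual_drill_off(toy_model, range(10, 41, 5), range(100, 201, 20))
```

A grid search suffices for two parameters; with more inputs, gradient-based or evolutionary search over the surrogate would be the natural extension.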
In an industry increasingly dominated by big data, separating the clean data from the "noise" will be a vital topic. This paper aims to provide a blueprint for the use of machine learning to optimize ROP in a manner that is simple and easily replicated.
Abbas, Ahmed K. (Iraqi Drilling Company) | Assi, Amel H. (Baghdad University) | Abbas, Hayder (Missan Oil Company) | Almubarak, Haidar (King Saud University) | Al Saba, Mortadha (Australian College of Kuwait)
The drill bit is the most essential tool in drilling operations, and optimum bit selection is one of the main challenges in planning and designing new wells. Conventional bit selection is mostly based on the historical performance of similar bits in offset wells; alternatively, it is done by various techniques based on offset well logs. However, these methods are time-consuming and do not depend on actual drilling parameters. The main objective of this study is to optimize bit selection in order to achieve the maximum rate of penetration (ROP). In this work, a model that predicts ROP was developed using artificial neural networks (ANNs) based on 19 input parameters. For the modeling part, one-dimensional mechanical earth model (1D MEM) parameters, drilling fluid properties, and rig- and bit-related parameters were included as inputs. An optimization process was then performed to propose the optimum drilling parameters and select the drill bit that provides the maximum possible ROP. To achieve this, the mathematical function corresponding to the ANN model was embedded in a procedure using a genetic algorithm (GA) to obtain the operating parameters that lead to maximum ROP. The output proposes an optimal bit selection that provides the maximum ROP along with the best drilling parameters. Statistical analysis of the predicted bit types and optimum drilling parameters, compared with the actual field-measured values, showed a low root mean square error (RMSE), a low average absolute percentage error (AAPE), and a high correlation coefficient (R2). The proposed methodology provides drilling engineers with more choices to determine the best-case scenario for planning and/or drilling future wells. Meanwhile, the newly developed model can be used to optimize drilling parameters, maximize ROP, estimate drilling time, and ultimately reduce total field development expenses.
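The GA-over-surrogate step described above can be sketched as follows. This is a minimal real-coded genetic algorithm, not the authors' implementation: the fitness function is a hypothetical stand-in for the trained ANN's input-to-ROP mapping, and the selection, crossover, and mutation operators are deliberately simple.

```python
import random

def genetic_maximize(f, bounds, pop=30, gens=40, seed=0):
    """Minimal real-coded GA: keep the fitter half of the population each
    generation, breed children by averaging two elites (crossover) and
    perturbing one coordinate (mutation). `f` stands in for the ANN
    surrogate mapping operating parameters to predicted ROP."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f, reverse=True)            # fitness = predicted ROP
        elite = P[:pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            d = rng.randrange(dim)                        # mutation
            lo, hi = bounds[d]
            child[d] = min(hi, max(lo, child[d] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        P = elite + children
    return max(P, key=f)

# toy surrogate with optimum at (12, 90); real use would plug in the ANN
best = genetic_maximize(lambda x: -(x[0] - 12) ** 2 - (x[1] - 90) ** 2,
                        bounds=[(0, 30), (60, 120)])
```

Because elites are carried over unchanged, the best-so-far solution is never lost, which is why even this crude variant converges near the surrogate's optimum.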
Wang, Han (College of Safety and Ocean Engineering, China University of Petroleum-Beijing) | Chen, Dong (College of Petroleum Engineering, China University of Petroleum Beijing) | Ye, Zhihui (College of Safety and Ocean Engineering, China University of Petroleum-Beijing) | Li, Jun (College of Petroleum Engineering, China University of Petroleum Beijing)
Conventional well trajectory design has mainly developed from spatial mathematical models, e.g., the arcs or cylindrical spiral lines methods, focusing on seeking the optimal kickoff point and curvature. However, complex well trajectory design requires a more sophisticated and efficient method that considers greater contact with the reservoir pay zone, which means time-consuming computation under the conventional approach. Automated well trajectory planning can meet such requirements, based on reservoir area identification, reservoir matrix evaluation, and wellbore trajectory path generation. The present study aims to develop an algorithm for automated drilling trajectory design without assistance from human experts. An intelligent planning algorithm for drilling trajectories is proposed on the basis of computer vision. Based on the stratigraphic figure of a specific reservoir, the algorithm provides an optimal trajectory with a larger contact zone while still meeting the curvature requirement. By converting the reservoir profile (pay zone) to a digital matrix and evaluating the pay-zone matrix, the optimal well path can be generated automatically. The research findings can potentially be applied in geosteering or automated drilling.
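To make the matrix-based idea concrete, here is a toy greedy path planner over a binary pay-zone matrix. It is a stand-in sketch only: the real algorithm uses computer vision on stratigraphic figures and a proper curvature constraint, whereas this version limits the per-column row change as a crude proxy for curvature.

```python
def plan_path(pay, max_step=1):
    """Greedy left-to-right path through a binary pay-zone matrix: in each
    column, move at most `max_step` rows (a crude curvature limit) toward
    the neighboring cell with the most pay. Toy stand-in for the
    matrix-evaluation-based trajectory generation described above."""
    rows, cols = len(pay), len(pay[0])
    r = max(range(rows), key=lambda i: pay[i][0])  # start at the best row
    path = [r]
    for c in range(1, cols):
        choices = range(max(0, r - max_step), min(rows, r + max_step + 1))
        r = max(choices, key=lambda i: pay[i][c])
        path.append(r)
    return path

# tiny reservoir profile: pay zone steps down between columns 1 and 2
pay = [
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]
path = plan_path(pay)
```

A greedy scan is myopic; a dynamic-programming or graph-search formulation over the same matrix would find globally optimal contact under the curvature limit.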
As stuck pipe continues to be a major contributor to nonproductive time (NPT) in oil and gas drilling operations, efforts to mitigate its incidence cannot be overemphasized. A machine learning approach is presented in this paper to identify warning signals and give early indication of an impending stuck pipe event during drilling, so that proactive measures can be taken to mitigate its occurrence. The model uses a moving-window approach to capture trends in key drilling parameters and applies an unsupervised machine learning algorithm to detect abnormalities in the parameters' rate of change. It utilizes the most commonly available real-time drilling data and is therefore deployable in all types of wells. No pre-drill model is required, as the approach is self-learning and self-adjusting. The methodology uses change-point detection to identify rig activity and the associated drilling parameters, so as to capture the relevant parametric trends for analysis. Inherent in the parameter trends are the different factors that affect their readings, such as wellbore geometry, bottom-hole assembly (BHA), dogleg severity (DLS), formation characteristics, pump flow rate, and pipe rotation. The algorithm has been tested on historical data from wells with stuck pipe incidents, wells with near-miss stuck pipe events, and incident-free wells to prove the concept. The results of the model's performance are presented along with an accuracy measure.
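Change-point detection, used above to segment rig activity, can be illustrated with a one-sided CUSUM detector. This is a generic textbook sketch, not the paper's method: the channel name `hookload`, the drift and threshold values, and the slow-moving baseline are all illustrative assumptions.

```python
def cusum_changepoint(x, drift=0.0, threshold=5.0):
    """One-sided CUSUM detector: accumulate deviations from a slow-moving
    baseline and flag the index where the cumulative sum exceeds a
    threshold. Real implementations track multiple channels and both
    upward and downward shifts."""
    baseline = x[0]
    s = 0.0
    for i, v in enumerate(x):
        s = max(0.0, s + (v - baseline) - drift)
        if s > threshold:
            return i
        baseline = baseline + 0.1 * (v - baseline)  # slow-moving baseline
    return None

# toy hookload channel with a step change starting at index 20
hookload = [50.0] * 20 + [53.0] * 10
idx = cusum_changepoint(hookload, drift=0.5, threshold=4.0)
```

The `drift` term suppresses alarms on slow trends, while `threshold` trades detection delay against false alarms, mirroring the sensitivity tuning any such detector needs.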
Dropped objects pose the number-one risk of serious injuries, fatalities, and equipment damage in several industries across the globe, and the risk is more prominent in offshore drilling, where more personnel perform their duties within a confined space. A Dropped Object Prevention Scheme (DROPS) is intended to be applied to operations in which dropped objects could cause harm to people, equipment, and/or the environment. Although companies develop documented DROPS at the relevant levels and departments within the organization, per objectives and targets developed as part of their existing health, safety, and environment and/or operational goal-setting processes, third-party DROPS survey and reporting is a catalyst to integrate and guide the scheme to meet its objectives. This paper emphasizes the importance of a robust DROPS, reinforced by repeatable and reliable DROPS survey and reporting in compliance with present industry standards, and finally extends its SMARTER possibilities of expansion with the use of RFID tags and artificial intelligence. Our cloud-based asset inspection management system "SOSINSPECTIONEERING" provides the following: (1) each individually identified DROPS item has a unique asset ID; (2) an individual DROPS report is generated for each DROPS asset; (3) a DROPS master asset register is generated, capturing the main features of the DROPS reports and linking to the latest DROPS report; (4) the client can download an Excel picture book with all three levels of securing and a detailed item-specific checklist; (5) the checklist, downloaded to a tablet or equivalent device, allows the work-site supervisor to use it in regular verification inspections and to report compliance or noncompliance for further analysis.
We are also developing the system to the next level of application: (a) tagging each DROPS asset with an RFID tag; (b) loading the RFID tag with the latest DROPS report for that asset ID; (c) linking each DROPS report of the asset to its other documentation, such as (i) the manufacturer certificate, (ii) OEM and other data sheets, and (iii) the latest lifting certificate and NDT inspection; (d) providing the work-site supervisor with an RFID scanner to download the DROPS report, or a drone to scan the RFID tag and download the latest DROPS report to the remote system for verification; (e) enabling the work-site supervisor or third-party DROPS surveyor to take real-time photos with a camera, edit the DROPS report, and modify the photo book in real time; (f) similarly, a camera fitted in a drone can go one step beyond the checklist, using artificial intelligence to recognize patterns and take a probabilistic approach to verifying compliance using more historical data.
Antipova, Ksenia (Skolkovo Institute of Science and Technology, Digital Petroleum) | Klyuchnikov, Nikita (Skolkovo Institute of Science and Technology, Digital Petroleum) | Zaytsev, Alexey (Skolkovo Institute of Science and Technology, Digital Petroleum) | Gurina, Ekaterina (Skolkovo Institute of Science and Technology, Digital Petroleum) | Romanenkova, Evgenia (Skolkovo Institute of Science and Technology, Digital Petroleum) | Koroteev, Dmitry (Skolkovo Institute of Science and Technology, Digital Petroleum)
The majority of drilling accidents have a number of premonitory symptoms noticeable during continuous drilling support. Experts can usually recognize such symptoms; however, we are not aware of any system that can do this job automatically. We have developed a machine learning algorithm that detects anomalies using the drilling support data (drilling telemetry). The algorithm automatically extracts patterns of premonitory symptoms and then recognizes them during drilling.
The machine learning model is based on gradient-boosted decision trees. The model analyzes real-time drilling parameters within a sliding 4-hour window. For each measurement, the model calculates the probability of an accident and warns about an anomaly of a particular type if the probability exceeds the selected threshold.
Our training sample comes from more than 20 oilfields and consists of sections related to more than 80 accidents of the following types: stuck pipe, mud loss, gas-oil-water show, washout of the pipe string, failure of the drilling tool, and pack-off, occurring during drilling, tripping in, tripping out, and reaming; the sample also includes more than 700 sections without accidents.
We have designed the prediction model to work while drilling new wells and to distinguish the normal drilling process from a faulty one. One can configure the anomaly threshold to balance the number of false alarms against the number of missed accidents.
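The threshold trade-off just described can be shown with a small sweep over predicted probabilities. The probabilities and labels below are placeholder values; in the system above they would come from the gradient-boosting model's per-window accident probability.

```python
def alarm_tradeoff(probs, labels, thresholds):
    """For each threshold, count false alarms (high probability, no
    accident) and misses (low probability, real accident), illustrating
    the balance a configurable anomaly threshold controls."""
    out = []
    for t in thresholds:
        fa = sum(1 for p, y in zip(probs, labels) if p >= t and y == 0)
        miss = sum(1 for p, y in zip(probs, labels) if p < t and y == 1)
        out.append((t, fa, miss))
    return out

# placeholder per-window probabilities and ground-truth labels
probs  = [0.1, 0.8, 0.4, 0.95, 0.3, 0.7]
labels = [0,   1,   0,   1,    0,   1]    # 1 = window preceded an accident
table = alarm_tradeoff(probs, labels, [0.25, 0.5, 0.75])
```

Raising the threshold removes false alarms but starts missing accidents, which is exactly the operating-point choice the abstract leaves to the user.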
To evaluate the quality of the model, we measure data-science metrics and check an industry-driven criterion. The model can identify 40 of about 80 accidents with high confidence, whereas for the others there is still room for improvement. Our findings suggest that including more accidents of underrepresented types will improve quality. Other data-science metrics also support the model's aptitude. Finally, having data from multiple heterogeneous oilfields, we expect the model to generalize well to new ones.
This paper presents a good practice for the development and implementation of a data-driven model for automatic supervision of continuous drilling. In particular, the model described in the paper will assist specialists with drilling accident prediction, optimize their work with data, and reduce the nonproductive time associated with accidents by up to 20%.
The paper provides a technical overview of an operator's Real-Time Drilling (RTD) ecosystem developed and deployed to all US onshore and deepwater Gulf of Mexico rigs. It also shares best practices with the industry through the journey of building the RTD solution: first designing and building the initial analytics system, then addressing the significant challenges the system faced (challenges that are common in the drilling industry, especially for operators), next enhancing the system from lessons learned, and lastly finalizing a fully integrated and functional ecosystem that provides a one-stop solution to end users.
The RTD ecosystem consists of four subsystems, as shown in the architecture figure.

[Figure: RTD ecosystem architecture]
All of these subsystems are fully integrated and interact with each other to function as one system, providing a one-stop solution for real-time drilling optimization and monitoring. The RTD ecosystem has become a powerful decision-support tool for the drilling operations team. While it took significant effort, the long-term operational and engineering benefits of designing such a real-time drilling analytics ecosystem far outweigh the cost and provide a solid foundation for continuing to push the historical limitations of drilling workflow and operational efficiency during this period of rapid digital transformation in the industry.
Joshi, Deep (Colorado School of Mines) | Eustes, Alfred (Colorado School of Mines) | Rostami, Jamal (Colorado School of Mines) | Gottschalk, Colby (Colorado School of Mines) | Dreyer, Christopher (Colorado School of Mines) | Liu, Wenpeng (Colorado School of Mines) | Zody, Zachary (Colorado School of Mines) | Bottini, Claire (Colorado School of Mines)
Water is considered the 'oil of space', with applications ranging from fuel production to colony consumption. Recent findings suggest the presence of water ice in the permanently shadowed craters at the Lunar poles. Water present on the Moon and other planetary bodies could significantly bring down the cost of space exploration, fueling the colonization of the solar system. With low-resolution orbital data available, the next step is to drill and analyze samples from the Moon.
An extensive review of drilling systems designed by NASA was conducted, focusing on the effect of different planetary environments on drill design. Inspired by this review and by drilling systems developed in the petroleum industry, an auger-based rotary drilling rig was designed and fabricated with an extensive high-frequency data acquisition system measuring all essential drilling parameters. Several analog rocks were cast with regolith-simulant grout to replicate different subsurface geotechnical properties in the Lunar polar craters. The drill was tested on samples with different geotechnical properties to account for the variation expected at the Lunar poles.
Application of drilling engineering concepts resulted in a robust drilling system capable of replicating the drilling process for different planetary environments such as the Moon and Mars. Using the data acquisition system on the rig, an advanced machine learning algorithm was created that processes and analyzes real-time high-frequency drilling data to estimate a sample's geotechnical properties and water content. The evolving algorithm was developed from initial drilling tests on homogeneous and heterogeneous analogs and was then tested on samples with varying heterogeneity to estimate the geotechnical properties and water content accurately. With some modifications, this algorithm can be applied in Lunar and Martian missions to estimate geotechnical properties in real time, without the need to return subsurface samples to the surface for analysis. This can result in cost-effective exploration of water-ice resources on the Moon and Mars, kickstarting the space resources industry and human colonization of those planetary bodies. The expertise of drilling engineers in designing and executing wells in extreme terrestrial environments can help create significantly more effective drilling systems for extraterrestrial environments.
This work details the design considerations for drilling on the Moon and other planetary bodies, focusing specifically on the application of drilling data to evaluate geotechnical properties and water content under Lunar polar conditions. The techniques developed here might play a vital role in understanding the extent and composition of water ice on the Moon, leading to efficient colonization of the solar system.
Intelligent multilateral well completions provide downhole flow rate, pressure, and temperature measurements at multiple well segments, allowing a continuous spatiotemporal data stream. Such an extensive data input poses the challenging task of deciding on the optimal strategy for manipulating the inflow control valve (ICV) settings over time for best performance. This study investigated the use of machine learning to analyze and predict well performance under different ICV settings, ultimately to maximize the well output.
A commercial reservoir simulator was used to generate two synthetic reservoir models: homogeneous (Case A) and heterogeneous (Case B). These synthetic data were used to train, validate, and test machine learning models. The reservoir cases were generated based on a segmented, trilateral producer completed with three ICV devices installed at tie-in segments. The data used were measurements of wellhead and downhole flow rates across ICV segments over a period of 4,000 days. A total of 1,330 experiments were conducted with an eight-day timestep, generating a total of 667,660 sample data points for each of Case A and Case B. Fully connected neural networks were used to fit the data, while model generalizability was enhanced using regularization techniques, namely L2 regularization and early stopping.
Both random sampling and Latin Hypercube Sampling (LHS) methods were evaluated in constructing the training, validation, and testing splits. Trained with different sample sizes drawn from the 1,330 simulated data histories for the two reservoir models, the proposed neural network showed excellent results. Given only ten simulated choices of ICV settings for training, the network proved capable of predicting oil and water production profiles at surface for both homogeneous and heterogeneous reservoir models with over 0.95 coefficient of determination (R2) when evaluated at unseen, test ICV settings. Extending the problem to downhole flow performance prediction, about 40 simulated training settings were necessary to achieve 0.95 R2. We observed that LHS was superior to random sampling in both average R2 and confidence interval. We also found that increasing the training and validation sample sizes increased the test R2 when testing against unseen cases. The results suggest the applicability of machine learning to estimate the well output at different ICV settings, where the neural network model depends fully on the real-time well feedback and production measurements.
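The LHS advantage reported above comes from stratified coverage: each dimension is split into n equal strata, and each stratum is sampled exactly once. The sketch below implements this in the unit cube; it is illustrative only, and production code would typically use an established routine such as scipy.stats.qmc.LatinHypercube.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Latin Hypercube Sampling in the unit cube: for every dimension,
    draw one point from each of n equal strata and shuffle the strata
    independently per dimension, so marginal coverage is guaranteed even
    for small n (e.g. the ten training ICV settings mentioned above)."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        strata = [(i + rng.random()) / n for i in range(n)]  # one per stratum
        rng.shuffle(strata)
        for i in range(n):
            samples[i][d] = strata[i]
    return samples

# ten samples in three dimensions: each dimension hits every decile once
pts = latin_hypercube(n=10, dims=3)
```

Plain random sampling can leave whole strata empty at small sample sizes, which is consistent with LHS yielding a higher average R2 and tighter confidence intervals in the study.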
By using a machine learning approach during the operation of a well with multiple ICV settings, it becomes feasible to estimate the lateral-by-lateral output for unseen scenarios. Hence, it is possible to maximize the well output by using an optimization algorithm to determine the optimal ICV settings.