Schlumberger and IBM announced the industry's first commercial hybrid cloud data management solution for the OSDU data platform. The hybrid cloud offering is designed to expand access to customers globally, including those in locations where data residency requirements and local regulations may affect the use of a global public cloud. "This collaboration is a game changer for energy operators to drive higher performance and greater efficiencies by now enabling integrated workflows and innovation using AI. The hybrid cloud solution allows clients to maintain the sovereignty of their data and also gives them options as to how they choose to leverage the solution, with the freedom to deploy on a range of infrastructures or a regional cloud provider," said Manish Chawla, global industry managing director for energy, resources, and manufacturing at IBM. The platform will provide energy operators with full interoperability, making their data accessible to any application within their exploration and production (E&P) environment through the OSDU common data standard to enable easy sharing of information between teams.
Abstract There has been a discrepancy between pre-calculated and actual torque and drag (T&D) values because the model's predictability depends on assumed inputs. Therefore, to have a reliable model, users must adjust the model inputs, mainly the friction coefficient, in order to match the actual T&D. This, however, can mask downhole conditions such as cuttings beds, tight holes, and sticking tendencies. This paper introduces a machine learning model that predicts the continuous profile of the surface drilling torque in order to detect operational issues in advance. Actual data from Well-1, starting from the time of drilling a 5-7/8-inch horizontal section until one day prior to the stuck pipe event, was used to train and test a random forest (RF) model with an 80/20 split ratio to predict the surface drilling torque. The input variables for the model are the surface drilling parameters, namely: flow rate, hook load, rate of penetration, rotary speed, standpipe pressure, and weight-on-bit. The developed model was used to predict the surface drilling torque, which represents the normal trend, for the last day leading up to the stuck pipe incident in Well-1. The model was then integrated with a multivariate distance metric, the Mahalanobis distance, used as a classifier to measure how close an actual observation is to the predicted normal trend. Based on a pre-determined threshold, each actual observation was labeled as "NORMAL" or "ANOMAL".
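A minimal sketch of the two-stage approach described in this abstract, assuming standard scikit-learn and SciPy APIs; the file names, column names, and threshold value are illustrative placeholders, not the authors' actual configuration.

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import mahalanobis
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Surface drilling parameters used as inputs (column names are illustrative).
FEATURES = ["flow_rate", "hook_load", "rop", "rpm", "spp", "wob"]
TARGET = "surface_torque"

df = pd.read_csv("well1_training_period.csv")  # hypothetical Well-1 export

# 80/20 train/test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df[TARGET], test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)

# Reference distribution of (actual, predicted) torque pairs under normal drilling.
train_pairs = np.column_stack([y_train.to_numpy(), rf.predict(X_train)])
mu = train_pairs.mean(axis=0)
VI = np.linalg.inv(np.cov(train_pairs, rowvar=False))

# Last day before the stuck pipe event: compare each actual observation with
# the model's predicted normal trend via the Mahalanobis distance.
last_day = pd.read_csv("well1_last_day.csv")  # hypothetical
pairs = np.column_stack([last_day[TARGET].to_numpy(),
                         rf.predict(last_day[FEATURES])])
dist = np.array([mahalanobis(p, mu, VI) for p in pairs])

THRESHOLD = 3.0  # pre-determined threshold; value is a placeholder
last_day["label"] = np.where(dist > THRESHOLD, "ANOMAL", "NORMAL")
```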
AlSaadi, Hamdan (ADNOC Offshore) | Rashid, Faisal (ADNOC Offshore) | Bimastianto, Paulinus (ADNOC Offshore) | Khambete, Shreepad (ADNOC Offshore) | Toader, Lucian (ADNOC Offshore) | Landaeta Rivas, Fernando (ADNOC Offshore) | Couzigou, Erwan (ADNOC Offshore) | Al-Marzouqi, Adel (ADNOC Offshore) | El-Masri, Hassan (Schlumberger) | Pausin, Wiliem (Schlumberger)
Abstract Big data analytics is the often complex process of examining large and varied data sets to uncover information. The aim of this paper is to describe how the Real Time Operation Center structures drilling data in an informative and systematic manner through a digital solution that can help organizations make informed business decisions and leverage business value to deliver wells efficiently and effectively. The Real Time Operation Center process consists of collecting large chunks of structured and unstructured data, segregating and analyzing it, and discovering patterns and other useful business insights from it. The methods were based on structuring a detailed workflow, a RACI matrix, and a quality checklist for every single process in the provision of real-time drilling data, which is digitally transformed into valuable information through a robust auditable process, quality standards, and sophisticated software. The paper explains the RTOC Data Management System and how it helped the organization determine which data is relevant and can be analyzed to drive better business decisions in the future. The big data platform, in-house software, and automated dashboards have helped the company build links between different assets, analyze technical gaps, create opportunities, and move away from manual data entry (e.g., Excel), which was causing data errors, disconnected information, and worker hours wasted through inefficiency. These solutions leverage analytics and unlock the value of data to enhance operational efficiency, drive performance, and maximize profitability. As a result, the company successfully delivered 160 wells in 2019 (6% above the 2019 Business Plan and 10% more than the number of wells delivered in 2018) more efficiently, at 28.2 days per 10 kft for new wells (10% better than 2018), without compromising the well objectives or well quality. Moreover, despite increasing complexity, a high level of confidence in data analytics has allowed the company to go beyond its normal operating envelope and set a major record by drilling the world's fifth-longest well as a milestone in 2019.
Cheng, Zhong (CNOOC Ener Tech-Drilling & Production Co.) | Xu, Rongqiang (CNOOC Ener Tech-Drilling & Production Co.) | Chen, Jianbing (CNOOC Ener Tech-Drilling & Production Co.) | Li, Ning (CNOOC Ener Tech-Drilling & Production Co.) | Yu, Xiaolong (CNOOC Ener Tech-Drilling & Production Co.) | Ding, Xiangxiang (CNOOC Ener Tech-Drilling & Production Co.) | Cao, Jie (Xi'an Shiyou University)
Abstract The digital oil and gas field is a highly complex integrated information system, and with the continuous expansion of business scale and needs, oil companies constantly raise new and higher requirements for digital transformation. In previous system construction, we adopted multiple phases, vendors, technologies, and methods, resulting in data silos and fragmentation. The result of these data management problems is that decisions are often made using incomplete information. Even when the desired data is accessible, the requirements for gathering and formatting it may limit the amount of analysis that can be performed before a timely decision must be made. Therefore, through the use of advanced computer technologies such as big data, cloud computing, and IoT (Internet of Things), it has become our current goal to build an integrated data integration platform and provide unified data services to improve the company's bottom line. As part of the digital oilfield, offshore drilling operations are one of the areas where data processing and advanced analytics technology can be used to increase revenue, lower costs, and reduce risks. Building a data mining and analytics engine that uses multiple drilling data sources is a difficult challenge. The workflow of data processing and the timeliness of the analysis are major considerations when developing a data service solution. Most current analytical engines require more than one tool to form a complete system. Therefore, adopting an integrated system that combines all required tools will significantly help an organization address the above challenges in a timely manner. This paper provides a technical overview of the offshore drilling data service system currently developed and deployed. The data service system consists of four subsystems: the static data management system, covering structured data (job reports) and unstructured data (design documentation and research reports); the real-time data management system; the third-party software data management system, integrating major industry software databases; and the cloud-based data visualization application system, providing dynamic analysis results to achieve timely optimization of operations. Through a unified logical data model, the system realizes quick access to third-party software data and application support. These subsystems are fully integrated and interact with each other as microservices, providing a one-stop solution for real-time drilling optimization and monitoring. This data service system has become a powerful decision support tool for the drilling operations team. The lessons learned and experience gained from the system services presented here provide valuable guidance for future E&P demands and the industrial revolution.
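As a hedged illustration of the microservice pattern the abstract describes, the sketch below exposes one subsystem (structured job-report data) behind a small HTTP service keyed on a unified well identifier. The framework choice (FastAPI), endpoint path, and data model are assumptions for illustration only, not the system's actual design.

```python
# Illustrative microservice for one subsystem of the data service system.
# Framework, endpoint names, and fields are hypothetical.
from typing import List

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Drilling data service (sketch)")

class JobReport(BaseModel):
    well_id: str
    report_date: str
    operation: str
    depth_m: float

# Stand-in for the structured (job report) store behind the unified data model.
_REPORTS = {
    "WELL-001": [JobReport(well_id="WELL-001", report_date="2020-01-15",
                           operation="drilling", depth_m=3120.5)],
}

@app.get("/wells/{well_id}/job-reports", response_model=List[JobReport])
def get_job_reports(well_id: str):
    """Return daily job reports for a well via the unified well identifier."""
    if well_id not in _REPORTS:
        raise HTTPException(status_code=404, detail="unknown well id")
    return _REPORTS[well_id]
```

In this style, the real-time, third-party, and visualization subsystems would each sit behind their own endpoints while sharing the unified logical data model, which is what lets the subsystems interact as microservices.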
Abstract Decades of subsurface exploration and characterisation have led to the collation and storage of large volumes of well related data. The amount of data gathered daily continues to grow rapidly as technology and recording methods improve. With the increasing adoption of machine learning techniques in the subsurface domain, it is essential that the quality of the input data is carefully considered when working with these tools. If the input data is of poor quality, the impact on the precision and accuracy of the prediction can be significant. Consequently, this can impact key decisions about the future of a well or a field. This study focuses on well log data, which can be highly multi-dimensional, diverse and stored in a variety of file formats. Well log data exhibits key characteristics of Big Data: Volume, Variety, Velocity, Veracity and Value. Well data can include numeric values, text values, waveform data, image arrays, maps, volumes, etc., all of which can be indexed by time or depth in a regular or irregular way. A significant portion of time can be spent gathering data and quality checking it prior to carrying out petrophysical interpretations and applying machine learning models. Well log data can be affected by numerous issues causing a degradation in data quality. These include missing data, ranging from single data points to entire curves; noisy data from tool-related issues; borehole washout; processing issues; incorrect environmental corrections; and mislabelled data. Having vast quantities of data does not mean it can all be passed into a machine learning algorithm with the expectation that the resultant prediction is fit for purpose. It is essential that the most important and relevant data is passed into the model through appropriate feature selection techniques. Not only does this improve the quality of the prediction, it also reduces computational time and can provide a better understanding of how the models reach their conclusions. This paper reviews data quality issues typically faced by petrophysicists when working with well log data and deploying machine learning models. First, an overview of machine learning and Big Data is covered in relation to petrophysical applications. Secondly, data quality issues commonly faced with well log data are discussed. Thirdly, methods are suggested for dealing with data issues prior to modelling. Finally, multiple case studies are discussed covering the impacts of data quality on predictive capability.
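A brief sketch of the kind of pre-modelling quality screening and feature selection the paper describes, assuming pandas and scikit-learn; the curve mnemonics (RHOB, DT), completeness threshold, and density cut-offs are illustrative placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

logs = pd.read_csv("well_logs.csv")  # hypothetical export; all curves numeric

# Quality screening: drop curves that are mostly missing, then remove samples
# with physically implausible bulk density readings (cut-offs illustrative).
logs = logs.dropna(axis="columns", thresh=int(0.7 * len(logs)))
logs = logs[(logs["RHOB"] > 1.0) & (logs["RHOB"] < 3.2)]

# Model-based feature selection: keep only the curves that carry real signal
# for the prediction target (here, sonic DT as an example target).
clean = logs.dropna(subset=["DT"])
X = clean.drop(columns=["DT"]).fillna(clean.median(numeric_only=True))
y = clean["DT"]
selector = SelectFromModel(RandomForestRegressor(n_estimators=100, random_state=0))
selector.fit(X, y)
print("Curves retained for modelling:", list(X.columns[selector.get_support()]))
```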
Alipour K, Mehdi (Halliburton) | Dai, Bin (Halliburton) | Price, Jimmy (Halliburton) | Jones, Christopher Michael (Halliburton) | Gascooke, Darren (Halliburton) | VanZuilekom, Anthony (Halliburton) | Tahani, Hoda (Halliburton) | Ahmed, Fahad (Halliburton) | Hamza, Farrukh (Halliburton) | Sungkorn, Radompon (Halliburton) | Rekully, Cameron (Halliburton)
Abstract Measuring formation pressure and collecting representative samples are the essential tasks of formation testing operations. Where, when, and how to measure pressure or collect samples are critical questions which must be addressed in order to complete any job successfully. Formation testing data plays a crucial role in reserve estimation, especially at the field exploration and appraisal stage, but can be time consuming and expensive to acquire. The choice of location has a major impact on both the time spent on pressure testing and sampling and their success. Optimizing rig time therefore paradoxically requires pre-job planning that is careful and extensive, yet also quick. The current practice of finding optimum locations for testing relies heavily on expert knowledge. With the nearly complete digitization of data collection, the oil industry is now dealing with a massive data flow, raising questions about how the data should be applied and what needs to be collected. Some data may be so-called "dark data," of which only a very tiny portion is used for decision making. For instance, a variety of petrophysical logs may be collected in a single well to provide measures of formation properties. The logs may include conventional gamma ray, neutron, density, caliper, and resistivity logs, or more advanced tools such as high-resolution image logs, acoustic, or NMR. These data can be integrated to help decide where to pressure test and sample; however, this effort is almost exclusively driven by experts and is manpower intensive. In this paper we present a workflow to gather, process, and analyze conventional log data in order to optimize formation testing operations. The data come from an enormous geographic distribution of wells. Tremendous effort has been invested to extract, transform, and load (ETL) the data into a usable format. Stored files contain millions to billions of rows of data, creating technology challenges in reading, processing, and analyzing them in a timely manner for pre-job planning. We address these challenges by deploying cutting-edge data technology. Upon completion of the workflow, we have been able to build a scalable petrophysical log interpretation platform which can be easily utilized for machine learning and application deployment. This type of database is an invaluable asset, especially where knowledge of analogous wells is needed. Exploratory data analysis of worldwide mobility data and of key features influencing pressure test and sampling quality is performed and presented. We further show how these data are integrated and analyzed in order to automate the selection of formation testing locations.
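A compact sketch of the chunked ETL pattern the abstract outlines, which keeps multi-million-row files from ever having to fit in memory at once; the file name, column names, chunk size, and SQLite back end are illustrative assumptions, not the authors' actual platform.

```python
import sqlite3

import pandas as pd

# Extract-transform-load in chunks (chunk size is illustrative).
conn = sqlite3.connect("petrophysics.db")  # stand-in for the real data store
for chunk in pd.read_csv("worldwide_logs.csv", chunksize=500_000):
    # Transform: normalize column names and drop rows without a depth index.
    chunk.columns = [c.strip().upper() for c in chunk.columns]
    chunk = chunk.dropna(subset=["DEPTH"])
    # Load: append into a table that interpretation tools can query.
    chunk.to_sql("logs", conn, if_exists="append", index=False)

# Index by depth so pre-job queries over billions of rows stay responsive.
conn.execute("CREATE INDEX IF NOT EXISTS idx_logs_depth ON logs(DEPTH)")
conn.commit()
conn.close()
```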
Abstract In carbonate reservoirs, the estimation of a reliable permeability log is a long-standing problem, mainly because of the inherent multi-scale heterogeneities. The conventional approach relies on core-calibrated algorithms applied to open-hole (OH) logs. In general, this static log-based prediction tends to underestimate the actual dynamic performance of the wells, and an ad-hoc integration with production logging tool (PLT) and well test (WT) analyses is a required step to correct the initial estimation. However, it is critical, and at the same time challenging, to define the relation between dynamic-based corrections and OH characterization outcomes. An elegant solution is proposed here that makes use of predictive analytics applied to special core analyses (SCAL), nuclear magnetic resonance (NMR) log modeling, and multi-rate PLT/WT interpretations. The methodology is presented for a complex oil-bearing carbonate reservoir, and it starts with an advanced NMR characterization performed downhole for more than 100 wells, after a rigorous calibration with SCAL. The main outputs are a robust porosity partition (in terms of micropore, mesopore, and macropore contributions) and a physics-based permeability formula. Although the match with core data demonstrates the reliability of the applied NMR rock characterization, the log permeability underestimates the actual dynamic performance obtained from WT, as expected. At the same time, multi-rate PLT data from more than 150 wells are used to compute an apparent permeability value for each perforated interval, automatically consistent with the associated WT interpretation. Finally, both static and dynamic characterization outputs are used as inputs for a dual random forest (RF) template. In detail, the first RF algorithm learns through experience how the NMR porosity partition and core-calibrated permeability are related to PLT/WT apparent permeability values, after considering the proper change of scale. Next, the second RF is utilized to estimate the uncertainty associated with the previous step, still in a completely data-driven way. Hence, the so-defined dual model provides a continuous, automatic, flow-calibrated permeability log, together with its confidence interval, directly from static NMR responses. The presented methodology allows dynamic data to be incorporated efficiently into a static workflow by means of a purely data-driven analytics approach. The latter is able to shed light on the statistical relationships hidden in the available datasets, thus leading to a more accurate permeability estimation. It is also shown how this provides fundamental information for perforation strategy optimization and reservoir modeling purposes in such carbonate rocks.
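A minimal sketch of the dual random forest idea, assuming scikit-learn: one forest maps the static NMR-derived inputs to PLT/WT apparent permeability, and a second forest is trained on the first forest's absolute residuals to provide a data-driven uncertainty band. Column names, forest sizes, and the log-scale choice are illustrative assumptions, not the authors' exact template.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("nmr_plt_training.csv")  # hypothetical per-interval table
X = data[["micro_frac", "meso_frac", "macro_frac", "k_nmr"]]  # static inputs
y = np.log10(data["k_apparent"])  # PLT/WT apparent permeability (log scale)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

# First RF: flow-calibrated permeability from static NMR characterization.
rf_perm = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Second RF: learns the magnitude of the first model's error as a function of
# the same inputs, giving an uncertainty estimate for each prediction.
resid = np.abs(y_val - rf_perm.predict(X_val))
rf_unc = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_val, resid)

def predict_with_band(X_new):
    """Return log10-permeability and a +/- uncertainty band (illustrative)."""
    k = rf_perm.predict(X_new)
    u = rf_unc.predict(X_new)
    return k, k - u, k + u
```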
Big data analytics is a big deal right now in the oil and gas industry. This emerging trend is on track to become an industry best practice for good reason: it improves exploration and production efficiency. With the help of sensors, massive amounts of data already are being extracted from exploration, drilling, and production operations, as well as being leveraged to shed light on sophisticated engineering problems. So, why shouldn't a similar approach be applied to worker health and safety, especially when it is the norm across a wide variety of other industries? While the International Association of Oil and Gas Producers published a safety performance report showing that industry fatalities and injuries were down in 2019, the US Occupational Safety and Health Administration (OSHA) says that the oil and gas industry's fatality rate is seven times higher than that of all other US industries.
Soares, Ricardo Vasconcellos (NORCE Norwegian Research Centre and University of Bergen; corresponding author) | Luo, Xiaodong (email: firstname.lastname@example.org) | Evensen, Geir (NORCE Norwegian Research Centre) | Bhakta, Tuhin (NORCE Norwegian Research Centre and Nansen Environmental and Remote Sensing Center (NERSC))
Summary In applications of ensemble-based history matching, it is common to apply Kalman gain or covariance localization to mitigate the spurious correlations and excessive variability reduction that result from the use of relatively small ensembles. An alternative strategy, not well explored in reservoir applications, is to apply a local analysis scheme, which consists of defining smaller groups of local model variables and observed data (observations) and performing history matching within each group individually. This work aims to demonstrate the practical advantages of a new local analysis scheme over Kalman gain localization in a 4D seismic history-matching problem that involves big seismic data sets. In the proposed local analysis scheme, we use a correlation-based adaptive data-selection strategy to choose the observations used in the update of each group of local model variables. Compared with the Kalman gain localization scheme, the proposed local analysis scheme has an improved capacity to handle big models and big data sets, especially in terms of the computer memory required to store the relevant matrices involved in ensemble-based history-matching algorithms. In addition, we show that despite the higher computational cost of performing the model update at each iteration step, the proposed local analysis scheme makes the ensemble-based history-matching algorithm converge faster, reaching the same level of data mismatch at a faster pace. Meanwhile, for the same number of iteration steps, the ensemble-based history-matching algorithm equipped with the proposed local analysis scheme tends to yield better-quality estimated reservoir models than one with a Kalman gain localization scheme. As such, the proposed adaptive local analysis scheme has the potential to facilitate wider applications of ensemble-based algorithms to practical large-scale history-matching problems.
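An illustrative NumPy sketch of the correlation-based local analysis idea: each group of model variables is updated with a standard perturbed-observation ensemble smoother step, but using only the observations whose ensemble correlation with the group exceeds a threshold. The grouping, threshold, and ES form here are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def local_analysis_update(M, D, d_obs, var_d, groups, corr_thresh=0.3, rng=None):
    """One ensemble smoother update using a local analysis scheme (sketch).

    M: (Nm, Ne) ensemble of model variables; D: (Nd, Ne) simulated data;
    d_obs: (Nd,) observed data; var_d: (Nd,) observation error variances;
    groups: list of index arrays defining the groups of local model variables.
    """
    rng = rng or np.random.default_rng(0)
    Ne = M.shape[1]
    Ma = M - M.mean(axis=1, keepdims=True)  # model anomalies
    Da = D - D.mean(axis=1, keepdims=True)  # data anomalies
    M_upd = M.copy()
    for idx in groups:
        # Correlation-based adaptive selection of observations for this group.
        corr = np.corrcoef(np.vstack([Ma[idx], Da]))[:len(idx), len(idx):]
        sel = np.flatnonzero(np.max(np.abs(corr), axis=0) > corr_thresh)
        if sel.size == 0:
            continue  # no informative observations for this group
        Ds, Cds = Da[sel], var_d[sel]
        # Standard ES update restricted to the selected observations; only the
        # small matrices for this group ever need to be held in memory.
        Cdd = Ds @ Ds.T / (Ne - 1) + np.diag(Cds)
        Cmd = Ma[idx] @ Ds.T / (Ne - 1)
        d_pert = d_obs[sel, None] + np.sqrt(Cds)[:, None] * \
            rng.standard_normal((sel.size, Ne))
        M_upd[idx] += Cmd @ np.linalg.solve(Cdd, d_pert - D[sel])
    return M_upd
```

Because each group solves only against its selected observations, the full (Nd x Nd) and (Nm x Nd) matrices are never formed, which is the memory advantage over global Kalman gain localization that the summary highlights.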