JJ De Paep is a graduate of Texas A&M University and the director of strategy and marketing at Astra Innovations, a company focused on delivering next-generation remote collaboration and data analytics tools for upstream oil and gas. Before joining Astra Innovations, he spent nine years with National Oilwell Varco, then pivoted to the technology industry to manage software development before a passion for innovation and collaborative tools prompted a return to the energy industry.
Major industries, including oil and gas, tend to operate the same way they have for decades; that is, until circumstances arise that catalyze change. In the wake of a year-long pandemic, an oil market crash, and the economic recovery now under way, much uncertainty still exists in our industry. In addition, lofty new climate goals from the Biden Administration, growing global financial pressures and regulatory mandates, and stakeholder advocacy have added another layer of complexity in the march toward net zero and cleaner energy. As a result, technology innovation within traditional oil and gas is burgeoning. For example, two supermajors, Saudi Aramco and Equinor, were named to the Forbes Blockchain 50 list this year, spotlighting the companies' unique, innovative applications of the technology.
AlSaadi, Hamdan (ADNOC Offshore) | Rashid, Faisal (ADNOC Offshore) | Bimastianto, Paulinus (ADNOC Offshore) | Khambete, Shreepad (ADNOC Offshore) | Toader, Lucian (ADNOC Offshore) | Landaeta Rivas, Fernando (ADNOC Offshore) | Couzigou, Erwan (ADNOC Offshore) | Al-Marzouqi, Adel (ADNOC Offshore) | El-Masri, Hassan (Schlumberger) | Pausin, Wiliem (Schlumberger)
Abstract Big data analytics is the often complex process of examining large and varied data sets to uncover information. The aim of this paper is to describe how a Real Time Operation Center structures drilling data in an informative and systematic manner through a digital solution that can help organizations make informed business decisions and leverage business value to deliver wells efficiently and effectively. The Real Time Operation Center process consists of collecting large chunks of structured and unstructured data, segregating and analyzing them, and discovering patterns and other useful business insights. The methods were based on structuring a detailed workflow, RACI, and quality checklist for every single process of the provision of real-time drilling data, which is digitally transformed into valuable information through a robust auditable process, quality standards, and sophisticated software. The paper explains the RTOC Data Management System and how it helped the organization determine which data are relevant and can be analyzed to drive better business decisions in the future. The big data platform, in-house software, and automated dashboards have helped the company build links between different assets, analyze technical gaps, create opportunities, and move away from manual data entry (e.g., Excel), which was causing data errors, disconnects between information, and worker hours wasted through inefficiency. These solutions leverage analytics and unlock the value of data to enhance operational efficiency, drive performance, and maximize profitability. As a result, the company successfully delivered 160 wells in 2019 (6% higher than the 2019 business plan and 10% higher than the number of wells delivered in 2018) more efficiently, at 28.2 days per 10 kft for new wells (10% better than 2018), without compromising the well objectives or the quality of the wells.
Moreover, despite increasing complexity, the highest level of confidence in data analytics has permitted the company to go beyond its normal operating envelope and set a major record in 2019 by drilling the world's fifth-longest well as a milestone.
Cheng, Zhong (CNOOC Ener Tech-Drilling & Production Co.) | Xu, Rongqiang (CNOOC Ener Tech-Drilling & Production Co.) | Chen, Jianbing (CNOOC Ener Tech-Drilling & Production Co.) | Li, Ning (CNOOC Ener Tech-Drilling & Production Co.) | Yu, Xiaolong (CNOOC Ener Tech-Drilling & Production Co.) | Ding, Xiangxiang (CNOOC Ener Tech-Drilling & Production Co.) | Cao, Jie (Xi'an Shiyou University)
Abstract A digital oil and gas field is an overly complex integrated information system, and with the continuous expansion of business scale and needs, oil companies will constantly raise new and higher requirements for digital transformation. In previous system construction, we adopted multiple phases, vendors, technologies, and methods, resulting in the problem of data silos and fragmentation. The result of these data management problems is that decisions are often made using incomplete information. Even when the desired data is accessible, requirements for gathering and formatting it may limit the amount of analysis performed before a timely decision must be made. Therefore, through the use of advanced computer technologies such as big data, cloud computing, and IoT (internet of things), it has become our current goal to build an integrated data platform and provide unified data services to improve the company's bottom line. As part of the digital oilfield, offshore drilling operations are one of the potential areas where data processing and advanced analytics technology can be used to increase revenue, lower costs, and reduce risks. Building a data mining and analytics engine that uses multiple drilling data sources is a difficult challenge. The workflow of data processing and the timeliness of the analysis are major considerations for developing a data service solution. Most current analytical engines require more than one tool to form a complete system. Therefore, adopting an integrated system that combines all required tools will significantly help an organization address the above challenges in a timely manner. This paper provides a technical overview of the offshore drilling data service system currently developed and deployed. The data service system consists of four subsystems.
They are the static data management system, including structured data (job reports) and unstructured data (design documentation and research reports); the real-time data management system; the third-party software data management system, integrating major industry software databases; and the cloud-based data visual application system, providing dynamic analysis results to achieve timely optimization of operations. Through a unified logical data model, the system realizes quick access to third-party software data and application support. These subsystems are fully integrated and interact with each other as microservices, providing a one-stop solution for real-time drilling optimization and monitoring. This data service system has become a powerful decision support tool for the drilling operations team. The lessons learned and experience gained from the system services presented here provide valuable guidance for the future demands of E&P and the industrial revolution.
Abstract Risk-mitigation strategies are most effective when the major sources of uncertainty are determined through dedicated and in-depth studies. In the context of reservoir characterization and modeling, petrophysical uncertainty plays a significant role in the risk assessment phase, for instance in the computation of volumetrics. The conventional workflow for the propagation of petrophysical uncertainty consists of a physics-based model embedded into a Monte Carlo (MC) framework. In detail, open-hole logs and their inherent uncertainties are used to estimate the important petrophysical properties (e.g., shale volume, porosity, water saturation) with uncertainty through the mechanistic model and MC simulations. In turn, model parameter uncertainties can also be considered. This standard approach can be highly time-consuming when the physics-based model is complex, unknown, or difficult to reproduce (e.g., old/legacy wells) and/or the number of wells to be processed is very high. In this respect, the aim of this paper is to show how a data-driven methodology can be used to propagate petrophysical uncertainty in a fast and efficient way, speeding up the complete process while still remaining consistent with the main outcomes. In detail, a fit-for-purpose Random Forest (RF) algorithm learns through experience how log measurements are related to the important petrophysical parameters. Then, an MC framework is used to infer the petrophysical uncertainty starting from the uncertainty of the input logs, still with the RF model as a driver. The complete methodology, first validated with ad-hoc synthetic case studies, has then been applied to two real cases, where the petrophysical uncertainty was required for reservoir modeling purposes. The first includes legacy wells intercepting a very complex lithological environment. The second comprises a sandstone reservoir with a very large number of wells.
For both scenarios, the standard approach would have taken too long (several months) to complete, with no possibility of integrating the results into the reservoir models in time. Hence, for each well the RF regressor has been trained and tested on the whole available dataset, obtaining a valid data-driven analytics model for formation evaluation. Next, 1,000 scenarios of input logs have been generated via MC simulations using multivariate normal distributions. Finally, the RF regressor predicts the associated 1,000 petrophysical characterization scenarios. As final outcomes of the workflow, ad-hoc statistics (e.g., P10, P50, P90 quantiles) have been used to wrap up the main findings. The complete data-driven approach took a few days for both scenarios, with a critical impact on the subsequent reservoir modeling activities. This study opens the possibility to quickly process a high number of wells and, in particular, can also be used to effectively propagate petrophysical uncertainty to legacy well data for which conventional approaches are not an option in terms of time efficiency.
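The train-then-propagate workflow this abstract describes can be sketched in a few lines. This is a minimal illustration only: the logs, the target property, the uncertainty covariance, and the realization count are all synthetic stand-ins, not values from the paper (which used 1,000 realizations; 200 are used here for brevity).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for open-hole logs (e.g., GR, RHOB, NPHI) and a
# target petrophysical property (e.g., porosity); illustrative only.
n = 500
logs = rng.normal(size=(n, 3))
porosity = (0.25 - 0.05 * logs[:, 0] + 0.03 * logs[:, 1]
            + rng.normal(scale=0.01, size=n))

# 1. Train the RF regressor as a fast surrogate for the physics model.
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(logs, porosity)

# 2. MC step: perturb each depth sample's logs with a multivariate normal
#    representing measurement uncertainty, then predict each realization.
n_real = 200                                     # paper used 1,000
cov = np.diag([0.05, 0.03, 0.04]) ** 2           # assumed log uncertainties
realizations = np.stack([
    rf.predict(logs + rng.multivariate_normal(np.zeros(3), cov, size=n))
    for _ in range(n_real)
])                                               # shape: (n_real, n)

# 3. Summarize per depth sample with the quantiles the paper reports.
p10, p50, p90 = np.percentile(realizations, [10, 50, 90], axis=0)
```

The key design point is that the RF surrogate makes each MC realization a cheap prediction rather than a full physics-model run, which is what collapses months of processing into days.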
Big data analytics is a big deal right now in the oil and gas industry. This emerging trend is on track to become an industry best practice for good reason: It improves exploration and production efficiency. With the help of sensors, massive amounts of data already are being extracted from exploration, drilling, and production operations, as well as being leveraged to shed light on sophisticated engineering problems. So, why shouldn't a similar approach be applied to worker health and safety, especially when it's the norm across a wide variety of other industries? While the International Association of Oil and Gas Producers released a safety performance report showing that fatalities and injuries for the industry were down in 2019, the US Occupational Safety and Health Administration (OSHA) says that the oil and gas industry's fatality rate is seven times higher than that of all other US industries.
Abstract In multi-stage plug-and-perf horizontal well completions, there are a multitude of moving parts and variables to consider when evaluating performance drivers. Properly identifying performance drivers allows an operator to focus their efforts to maximize the rate of return of resource development. Typically, well-to-well comparisons are made to help identify performance drivers, but in many cases the differences are not clear. Identifying these drivers may require a better understanding of performance variability along a single lateral. Data analytics can help to identify performance drivers using existing data from development activities. In the case study below, multiple diagnostics are utilized to identify performance drivers. A combination of completion diagnostics including oil and water tracers, stimulation data, reservoir data, 3D seismic, and borehole image logs were collected on a set of wells in the early appraisal phase of a field. Using oil tracers as the best indication of stage level performance along the laterals, data analytics is applied to uncover the relationships between the tracers and the numerous diagnostics. After smoothing was applied to the dataset, trends between oil tracer recovery, several independent variables and features seen in image logs and 3D seismic were identified. All the analyses pointed to decreasing tracer recovery, and likely decreased oil production, near faulted areas along each lateral. A random forest model showed a moderate prediction power, where the model's predicted tracer recovery on blind stages was able to explain 54% of the variance seen in the tracer response (r=0.54). This analysis suggests the identification of certain faulted areas along the wellbore could lead to ways of improving individual well economics by adjusting completion design in these areas.
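The blind-stage evaluation described above, an RF model trained on stage-level diagnostics and scored on held-out stages, can be sketched as below. The feature names (fault distance from 3D seismic, proppant mass, image-log fracture count) and the toy relationship between them and tracer recovery are hypothetical, chosen only to mirror the abstract's finding that recovery falls near faults.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Hypothetical stage-level features: distance to nearest fault (from 3D
# seismic), proppant per stage, and fracture count from image logs.
n = 300
X = np.column_stack([
    rng.uniform(0, 500, n),      # fault distance, m
    rng.uniform(100, 300, n),    # proppant, t
    rng.poisson(5, n),           # image-log fracture count
])
# Toy relationship: tracer recovery decreases near faults (small distance).
y = 0.5 + 0.001 * X[:, 0] - 0.01 * X[:, 2] + rng.normal(scale=0.05, size=n)

# Hold out "blind" stages, mirroring the paper's evaluation approach.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
score = r2_score(y_te, model.predict(X_te))   # variance explained on blind stages
```

A variance-explained score on truly blind stages, rather than training fit, is what supports the paper's claim of moderate predictive power.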
Abstract Leaks and ruptures are the most important possible risks for operational oil and gas pipelines. Due to their hazardous effects on the environment, much research has been conducted to prevent and detect possible ruptures on a pipeline to protect people and the environment by enhancing safe operation. Any improvement in leak detection technologies that increases accuracy and sensitivity while eliminating false leak and rupture alerts will protect the environment and assure hazard-free operation. Data mining algorithms are widely used in many industries, including the energy industry. They have already been implemented as computational leak detection methodologies. To increase confidence and improve accuracy and sensitivity, different algorithms may be introduced to detect ruptures. In our study, a 36-in. crude oil pipeline with two pump stations was configured in a pipeline simulator. The pipeline parameters of flow, pressure, and temperature were computed for several leak and rupture cases, and data science algorithms such as Logistic Regression, Neural Network, and Multivariate Adaptive Regression Splines were used as classifiers to detect the leaks and ruptures. Multivariate Adaptive Regression Splines (MARS) is an important statistical learning tool for both classification and regression. MARS is nonparametric, adaptive, and effective in high-dimensional problems, with a proven record for fitting nonlinear multivariate functions. The contribution from the basis functions, together with interaction effects between the predictors, is used to determine the response variable: MARS produces a resultant model as an explicit formula. MARS proves itself to be a classifier competitive with the already-known logistic regression and neural network methods, offering pipeline operators a new data science technique for leak and rupture detection.
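The classifier comparison in this abstract can be sketched with the two standard methods it names; the data here are a synthetic stand-in for the simulator output (flow, pressure, and temperature at the two stations), and the leak/no-leak labeling rule is invented for illustration. MARS itself is not in scikit-learn and would require a third-party package, so it is omitted from this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Toy stand-in for simulator output: flow, pressure, temperature at the
# two pump stations; label 1 = leak/rupture case, 0 = normal operation.
n = 400
X = rng.normal(size=(n, 6))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 3]
     + rng.normal(scale=0.3, size=n) > 0).astype(int)

# Compare classifiers on the same cross-validation splits, as the paper
# compares methods on the same simulated leak/rupture cases.
results = {}
for name, clf in [
    ("logistic", make_pipeline(StandardScaler(), LogisticRegression())),
    ("neural_net", make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))),
]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
```

Scoring every candidate on identical splits is what makes "competitive with" a defensible claim; a single train/test split could favor either method by chance.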
Alzahabi, A. (University of Texas-Permian Basin) | Alexandre Trindade, A. (Texas Tech University) | Kamel, A. A. (University of Texas-Permian Basin) | Harouaka, A. (University of Texas-Permian Basin) | Baustian, W. (Camino Natural Resources) | Campbell, C. (Camino Natural Resources)
Summary One of the enduring pieces of the jigsaw puzzle for all unconventional plays is drawdown (DD), a technique for attaining optimal return on investment. Assessment of the DD from producing wells in unconventional resources poses unique challenges to operators; among them the fact that many operators are reluctant to reveal the production, pressure, and completion data required. In addition to multiple factors, various completion and spacing parameters add to the complexity of the problem. This work aims to determine the optimum DD strategy. Several DD trials were implemented within the Anadarko Basin in combination with various completion strategies. Privately obtained production and completion data were analyzed and combined with well log analysis in conjunction with data analytics tools. A case study is presented that explores a new strategy for DD producing wells within the Anadarko Basin to optimize a return on investment. We use scatter-plot smoothing to develop a predictive relationship between DD and two dependent variables—estimated ultimate recovery (EUR) and initial production (IP) for 180 days of oil—and introduce a model that evaluates horizontal well production variables based on DD. Key data were estimated using reservoir and production variables. The data analytics suggested the optimal DD value of 53 psi/D for different reservoirs within the Anadarko Basin. This result may give professionals additional insight into more fully understanding the Anadarko Basin. Through these optimal ranges, we hope to gain a more complete understanding of the best way to DD wells when they are drilled simultaneously. Our discoveries and workflow within the Woodford and Mayes Formations may be applied to various plays and formations across the unconventional play spectrum. 
Optimal DD techniques in unconventional reservoirs could add billions of dollars in revenue to a company’s portfolio and dramatically increase the rate of return, as well as offer a new understanding of the respective producing reservoirs.
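The scatter-plot smoothing step in the summary above can be sketched with a simple numpy-only moving-average smoother (the authors do not specify which smoother they used). The drawdown and EUR values are fabricated, with a peak placed near the paper's reported 53 psi/D optimum purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical well data: drawdown (psi/D) vs. a production response (EUR);
# the peak near 53 psi/D mimics the paper's reported optimum, for illustration.
dd = rng.uniform(10, 100, 200)
eur = -((dd - 53.0) ** 2) / 500.0 + 10.0 + rng.normal(scale=0.5, size=200)

def smooth(x, y, width=10.0):
    """Moving-average smoother: mean of y within +/- width of each grid point."""
    grid = np.linspace(x.min(), x.max(), 100)
    return grid, np.array([y[np.abs(x - g) < width].mean() for g in grid])

# Smoothing suppresses well-to-well noise so the underlying DD-response
# trend, and its maximizing DD value, becomes visible.
grid, eur_smooth = smooth(dd, eur)
dd_opt = grid[np.argmax(eur_smooth)]   # estimated optimal drawdown
```

In practice the same smoothing would be run for each response (EUR and 180-day IP) and the resulting optima compared, since a DD that maximizes early IP need not maximize ultimate recovery.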
Abstract Leveraging publicly available data is a crucial step for decision making around investing in the development of any new unconventional asset. Published reports of production performance, along with accurate petrophysical and geological characterization of the areas, help operators to evaluate the economics and risk profiles of new opportunities. A data-driven workflow can facilitate this process and make it less biased by enabling agnostic analysis of the data as the first step. In this work, several machine learning algorithms are briefly explained and compared in terms of their application in the development of a production evaluation tool for a target reservoir. Random forest, selected after evaluating several models, is deployed as a predictive model that incorporates geological characterization and petrophysical data along with production metrics into the production performance assessment workflow. Considering the influence of completion design parameters on well production performance, this workflow also facilitates evaluation of several completion strategies to improve decision making around the best-performing completion size. Data used in this study include petrophysical parameters collected from publicly available core data, completion and production metrics, and the geological characteristics of the Niobrara formation in the Powder River Basin. Historical periodic production data are used as indicators of productivity in a certain area in the data-driven model. This model, after training and evaluation, is deployed to predict the productivity of non-producing regions within the area of interest to help with selecting the most prolific sections for drilling future wells. Tornado plots are provided to demonstrate the key performance drivers in each focused area. A supervised fuzzy clustering model is also utilized to automate the rock quality analyses for identifying the "sweet spots" in a reservoir.
The output of this model is a sweet-spot map generated by evaluating multiple reservoir rock properties spatially. This map combines all the different reservoir rock properties into a single display that indicates the average "reservoir quality" of the formation in different areas. The Niobrara shale is used as a case study in this work to demonstrate how the proposed workflow is applied to a selected reservoir formation with enough historical production data available.
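The fuzzy clustering step behind such a sweet-spot map can be sketched with a minimal fuzzy c-means implementation. Everything here is illustrative: the property set (porosity, TOC, thickness), the two-cluster split, and the use of porosity to label the "better rock" cluster are assumptions, not the paper's actual supervised model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy map cells of reservoir rock properties (porosity, TOC wt%, thickness m),
# drawn from two synthetic rock classes; illustrative only.
n = 300
props = np.vstack([
    rng.normal([0.06, 2.0, 30.0], [0.01, 0.3, 5.0], size=(n // 2, 3)),  # poorer rock
    rng.normal([0.12, 5.0, 60.0], [0.01, 0.3, 5.0], size=(n // 2, 3)),  # better rock
])

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Minimal fuzzy c-means: returns (cluster centers, soft membership matrix)."""
    u = rng.dirichlet(np.ones(c), size=len(X))          # random soft memberships
    e = 2.0 / (m - 1.0)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]    # weighted cluster means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^e
        u = 1.0 / (d ** e * (1.0 / d ** e).sum(axis=1, keepdims=True))
    return centers, u

centers, u = fuzzy_cmeans(props)
# "Sweet spot" score per map cell = membership in the higher-porosity cluster.
sweet = u[:, np.argmax(centers[:, 0])]
```

Unlike hard clustering, the soft memberships give each map cell a continuous quality score, which is what lets the final map show graded "average reservoir quality" rather than a binary good/bad split.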