With today's technologies it is possible to answer the question, "What is my most profitable mode of operation for the next few hours, for the rest of today, tomorrow and beyond?" With 'lower for longer' oil prices, the need for enterprise-wide optimization in the upstream and midstream oil and gas industry is greater than ever. The terms Digital Oil Field, Digital Gas Field and Digital Twin are used to extol the virtue and value of big data analytics, model-based asset optimization and supply chain optimization. These provide optimization solutions in ways not previously possible with multiple unintegrated systems, processes and procedures. Ad-hoc deployment of applications across multiple sites becomes difficult to integrate horizontally, for management of safe and optimized operations, and vertically, up to the business ERP (enterprise resource planning) level, to give useful and timely business insights.
Ahmed, Syed (Saudi Arabian Oil Company, Saudi Aramco) | Al-Zubail, Ahmad (Saudi Arabian Oil Company, Saudi Aramco) | Al-Jeshi, Majed (Saudi Arabian Oil Company, Saudi Aramco) | Yousef, Khaled (Saudi Arabian Oil Company, Saudi Aramco) | Musabbeh, Alya (Saudi Arabian Oil Company, Saudi Aramco) | Mousa, Saad (Saudi Arabian Oil Company, Saudi Aramco) | Bukhari, Adeeb (Saudi Arabian Oil Company, Saudi Aramco) | Seraihi, Emad (Saudi Arabian Oil Company, Saudi Aramco) | Alamri, Sultan H. (Saudi Arabian Oil Company, Saudi Aramco)
This paper describes an integrated solution that leverages Industry 4.0 technologies to sustain crude quality specifications across the Saudi Aramco supply chain, covering more than 50 GOSPs (Gas Oil Separation Plants), pipelines, and terminals. Sustaining crude quality specifications such as water content (BS&W) and salt content for the Arabian crudes (Arab Light, Arab Extra Light, etc.) requires big data analysis across the supply chain. To address this challenge, Saudi Aramco developed a customized solution called the Crude Quality Monitoring Solution (CQMS), which ingests 800 critical data streams every minute (PI values) and classifies the data according to the risk level impacting crude quality specifications. The three risk levels are leading proactive, lagging proactive, and lagging reactive, each of which has a defined acceptable risk matrix. Each risk matrix initiates automated notifications for proactive corrective actions. Moreover, patterns and operational events can be easily recognized visually in the solution. The paper also describes several examples where the solution notifications have proactively reduced process disturbances by up to 20% at upstream and downstream facilities while ensuring asset integrity. The solution deployment has also substantially improved operational efficiency across the network by benchmarking critical data streams. Saudi Aramco is continuing to enhance the solution's capabilities to maximize the performance of the crude network.
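As a purely illustrative sketch (not the actual CQMS implementation), the minute-by-minute classification of a PI stream value into the three risk levels and the triggering of an automated notification could look like the following Python snippet; the tag name, thresholds and notify() helper are hypothetical.

# Hypothetical sketch of CQMS-style risk classification; thresholds, tag names
# and the notify() helper are illustrative, not the actual CQMS logic.
from dataclasses import dataclass

@dataclass
class RiskThresholds:
    leading: float      # early-warning limit (leading proactive)
    lagging: float      # limit already breached but recoverable (lagging proactive)
    reactive: float     # off-spec limit requiring immediate action (lagging reactive)

def classify(value: float, t: RiskThresholds) -> str:
    """Map a minute-level PI value (e.g. BS&W %) to a risk level."""
    if value >= t.reactive:
        return "lagging reactive"
    if value >= t.lagging:
        return "lagging proactive"
    if value >= t.leading:
        return "leading proactive"
    return "normal"

def notify(tag: str, level: str, value: float) -> None:
    # Placeholder for the automated corrective-action notification.
    print(f"[{level}] {tag} = {value:.2f} -> corrective action requested")

# Example: a few minute-level readings for a hypothetical GOSP export BS&W tag.
bsw_thresholds = RiskThresholds(leading=0.15, lagging=0.20, reactive=0.30)
for value in (0.12, 0.18, 0.33):
    level = classify(value, bsw_thresholds)
    if level != "normal":
        notify("GOSP-01.EXPORT.BSW", level, value)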
Cadei, Luca (Eni SpA) | Corneo, Andrea (Eni SpA) | Milana, Diletta (Eni SpA) | Loffreno, Danilo (Eni SpA) | Lancia, Lorenzo (Eni SpA) | Montini, Marco (Eni SpA) | Rossi, Gianmarco (Eni SpA) | Purlalli, Elisabetta (Eni SpA) | Fier, Piero (Eni SpA) | Carducci, Francesco (The Boston Consulting Group GAMMA)
The current oil and gas market is characterized by low prices, high uncertainty and a consequent reduction in new investments. This leads to ever-increasing attention towards more efficient asset management. Fouling is considered one of the main problems drastically affecting asset integrity and efficiency and the heat-exchange performance of critical machinery in upstream production plants. This paper illustrates the application of advanced big data analytics and innovative machine learning techniques to address this challenge.
Optimal maintenance scheduling and early identification of workflow-blocking events strongly impact overall production, as they contribute heavily to the reduction of downtime. While machine learning techniques have proved to bring significant advantages to these problems, they are fundamentally data-driven. In industry scenarios, where dealing with a limited amount of data is standard practice, this means forcing the use of simpler models that are often unable to disentangle the real dynamics of the phenomenon. The lack of data is generally caused by frequent changes in operating conditions or field layout, or by an insufficient instrumentation system. Moreover, the intrinsically long duration of many physical phenomena and the ordinary asset maintenance lifecycle cause a critical reduction in the number of relevant events that can be learned from.
In this work, the fouling problem has been explored using only limited data. Attention is focused on two types of equipment: heat exchangers and re-boilers. While the former involve slower dynamics, the latter are characterized by a steady phase followed by an abrupt deterioration. Moreover, heat exchangers allow cleaning interventions to be scheduled well in advance, whereas re-boilers force a much quicker plant stop. Finally, heat exchangers are characterized by a few episodes of comparable deterioration, while re-boilers present only a single episode. For heat exchangers, a dual approach has been followed, merging a short-term, time-series-based model with a long-term one based on linear regression. After isolating a number of training regions related to fouling episodes that showed a characteristic behavior, it is possible to obtain accurate results in the short term and to capture the general trend in the long term. For re-boilers, a novelty detection approach has been adopted: first, the model learns the equipment's normal behavior, then it uses the learned features to detect anomalies. This continuous training-predicting iteration also leverages user feedback to adapt to new operating conditions.
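To make the re-boiler novelty-detection idea concrete, the Python sketch below learns a normal-behavior baseline from an early operating window and flags abrupt departures from it; the statistic used (a simple z-score), the window sizes and the threshold are illustrative assumptions, not the authors' actual model.

# Illustrative novelty-detection sketch for re-boiler fouling: learn normal
# behaviour from an early window, then flag abrupt departures. The statistic
# (z-score on a performance indicator) and the threshold are assumptions.
import numpy as np

def fit_normal_behaviour(signal: np.ndarray) -> tuple[float, float]:
    """Estimate mean/std of a performance indicator during healthy operation."""
    return float(np.mean(signal)), float(np.std(signal) + 1e-9)

def detect_anomalies(signal: np.ndarray, mean: float, std: float,
                     z_limit: float = 4.0) -> np.ndarray:
    """Return indices where the indicator deviates strongly from normal."""
    z = np.abs(signal - mean) / std
    return np.where(z > z_limit)[0]

# Synthetic example: steady phase followed by an abrupt deterioration.
rng = np.random.default_rng(0)
healthy = rng.normal(100.0, 1.0, size=500)                  # e.g. heat-transfer proxy
degraded = 100.0 - np.linspace(0, 20, 100) + rng.normal(0, 1.0, 100)
signal = np.concatenate([healthy, degraded])

mean, std = fit_normal_behaviour(signal[:300])              # train on early data only
alarms = detect_anomalies(signal, mean, std)
print("first anomaly at sample", alarms[0] if alarms.size else None)
# Operator feedback could be used to re-fit mean/std when operating
# conditions legitimately change (the training-predicting iteration).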
Results show that in a "young digital" industry, the use of limited data together with simpler machine learning techniques can successfully become an automatic diagnostics tool supporting operators to improve traditional maintenance activities, optimize the production rate and, ultimately, asset efficiency.
Cadei, Luca (Eni SpA) | Corneo, Andrea (Eni SpA) | Milana, Diletta (Eni SpA) | Loffreno, Danilo (Eni SpA) | Lancia, Lorenzo (Eni SpA) | Montini, Marco (Eni SpA) | Rossi, Gianmarco (Eni SpA) | Purlalli, Elisabetta (Eni SpA) | Fier, Piero (Eni SpA) | Carducci, Francesco (The Boston Consulting Group GAMMA) | Nizzolo, Riccardo (The Boston Consulting Group GAMMA)
The use of advanced analytics techniques has become pivotal for the digital transformation of the oil and gas industry. Most of these models are used to predict and avoid off-spec behavior of both equipment and functional units of the plant; predicting overshooting events in advance allows plant operators to avoid production downtime.
From a machine learning perspective, predicting off-spec situations and peaks in a time signal is a complex task due to the great rarity of such events. For the very same reason, using standard data science measures, such as Area Under the Curve (AUC), Recall and Precision, can lead to misleading performance indicators. In fact, a model that predicts no off-spec events at all can still show a high AUC simply because of the unbalanced classes, while models that do flag events may generate many false alarms. In this paper we present a business-oriented validation framework for big data analytics and machine learning models applied to an upstream production plant. This allows evaluation of both the effort required from operators and the expected benefit that could be achieved.
The validation metrics defined take the classical data science measures into the business domain. This allows the model to be tailored to the specific use case and end user, addressing the specific constraints of upstream plants. The framework makes it possible to define the optimal trade-off between the effort required and the events prevented, providing statistics and KPIs to evaluate it. Normalized Recall (NR) takes into account both the percentage of events intercepted and the effort required, expressed as Attention Time (AT), i.e., the time during which the operator should pay attention to the equipment involved. Plant operators can thus estimate the results they can achieve with respect to the maximum effort required. Moreover, to assess the quality of the model, we define the lift in NR as the ratio of the model's NR to the NR that would be obtained by randomly distributing the same number of alarms.
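Since the exact NR formula is not reproduced in this abstract, the following Python sketch illustrates only the lift idea, under stated assumptions: the recall achieved by the model's alarms is compared with the recall that the same number of equally long attention windows would achieve if placed at random. All numbers, the window length and the helper functions are hypothetical.

# Illustrative sketch of the "lift over random alarms" idea. The exact
# Normalized Recall formula is not reproduced; this is an assumption-laden
# stand-in that compares model alarms against randomly placed alarms.
import numpy as np

def recall_from_alarms(event_times: np.ndarray, alarm_starts: np.ndarray,
                       attention_window: int) -> float:
    """Fraction of events falling inside any [start, start + window) interval."""
    caught = 0
    for t in event_times:
        if np.any((alarm_starts <= t) & (t < alarm_starts + attention_window)):
            caught += 1
    return caught / len(event_times)

def random_baseline(event_times, n_alarms, attention_window, horizon,
                    n_trials=1000, seed=0):
    """Expected recall of the same number of alarms placed uniformly at random."""
    rng = np.random.default_rng(seed)
    recalls = [recall_from_alarms(event_times,
                                  rng.integers(0, horizon, n_alarms),
                                  attention_window)
               for _ in range(n_trials)]
    return float(np.mean(recalls))

# Hypothetical numbers: six months of minutes, a handful of rare events.
horizon = 6 * 30 * 24 * 60
events = np.array([40_000, 110_000, 180_000, 230_000])
model_alarms = np.array([39_500, 109_800, 200_000])   # assumed model output
window = 720                                          # 12 h of attention each

model_recall = recall_from_alarms(events, model_alarms, window)
rand_recall = random_baseline(events, len(model_alarms), window, horizon)
lift = model_recall / max(rand_recall, 1e-9)
print(f"model recall {model_recall:.2f}, lift over random alarms {lift:.1f}x")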
We applied this framework to specific use cases, obtaining an expected recall of 40-50% with an expected effort of 5-10% of the time (over a period of more than 6 months). The actual effort is lower, since the operator is not required to be fully dedicated to each alarm. The innovative framework developed is able to demonstrate the real operating capability of the analytics implemented in the field, highlighting both the effort required from operators and the accuracy of the machine learning tools.
Various factors contribute to well planning: cost, reservoir complexity, safety measures, isolation of problem areas, engineering constraints, and more. The whole process of well analysis takes considerable time and effort, as the method used is mostly manual. This paper explains the Big Data approach used to perform well planning, which significantly reduces drilling engineer time. Possible future enhancements to the existing system are also discussed.
Over the past few years, the concept of Big Data has gradually matured. Drilling engineers, data management and the solution development team have jointly discussed the potential and functionality of this concept to optimize drilling engineer performance and the timeframe for preparing and delivering drilling programs for new development wells. EDM (Enterprise Drilling Management system), the drilling suite of applications, an in-house drilling data extraction tool with a web-based interface, and many different types of Excel sheets were considered as the input sources. The well planning process was automated for offset and related data acquisition, and results and knowledge were captured in a central repository to be used in future endeavors.
Organizations today, especially in the oil and gas sector, are swimming in data, but most of them manage to analyze only a fraction of what they collect. To help build a stronger data-driven culture, many organizations, starting with technology companies, are adopting a new approach called self-service analytics. In this paper, we present our big data and analytics architecture, which has enabled our in-house and outsourced engineering and data science teams to develop self-serving AI models and data pipelines. The architecture is presented in such a way that readers can apply it with the cloud provider of their choice, translating the concepts presented here either to native cloud PaaS services or to open-source products.
The oil and gas industry recognizes that there are significant gains to be had by the implementation of new digital technologies. For well construction planning, the goal is to bring together all domains, all data, and all engineering requirements in a seamlessly interconnected solution. As part of this transformation process, an innovative digital solution has been designed and implemented to cover all different aspects of the well planning and engineering workflows, delivering a step change in terms of capabilities and efficiency.
The use of traditional workflows and solutions often requires a lengthy, disconnected, and iterative process, which can introduce a lack of coherency between the different teams and engineering processes involved, increasing planning risks. Given the complexity and the number of disciplines involved in the well engineering process, a high level of collaboration is needed in new planning systems. Taking this as the basis for change, a novel approach to connecting all the well planning workflows demonstrates how the process can be optimized by combining automatic engineering and validation, big data analytics, concurrent engineering, and project orchestration.
The main benefit of a natively cloud-designed and cloud-deployed solution is simplicity for the user, because the architecture handles the interconnection of different workflows, data, and people, resulting in a more effective process. In addition, the solution always accounts for the latest data versions and automatically adjusts when any design change is made to the plan. Moreover, cloud deployment enables complex big data analytics for offset well data analyses, increasing the accuracy of the results and democratizing company knowledge. Automatic engineering validations are immediately re-confirmed against changes to wellbore geometries, trajectory design, drilling parameters, bottomhole assemblies, bits, and rig specifications, among others, delivering a more accurate and coherent result when compared to a disconnected model application workflow. As a result of this innovative solution, a significant reduction in the number of individual software packages required to plan a well can be achieved. The system delivers a step change in well engineering performance, with case study examples delivering both high-quality results and efficiency gains of 50% and more.
This paper presents a full description of a new industry-standard digital well construction planning solution that has the potential to transform the well planning process by providing a step change in collaboration, concurrent engineering, automation, and big data analytics. Furthermore, the challenges of the cloud-deployed solution are briefly discussed.
The next time you are tempted to scold your son or daughter for spending too many hours playing videogames, think twice: they may be training to be the best workers of the 21st century and even replace your position…
Collaborative Work Environments (CWE) combined with telepresence and mixed reality technologies are revolutionizing the design, engineering and construction of large petrochemical projects. This paper provides an overview of the technologies and describes how the design, implementation and control processes in these projects can be performed more safely and accurately, at lower cost.
Over three decades ago, businesses experienced a leap in performance, code reusability and maintainability when their information technologies moved from numbered-line programming to object-oriented programming (OOP). We are now poised at the cusp of another quantum change in efficiency as a result of technology. In this new era, data travels from "cradle to grave": from design, construction or assembly, through use and service, to the final dismantling of refineries and industrial facilities, the physical world of discrete elements will have an accurate digital equivalent. Thanks to powerful computing and Big Data warehousing, complex structures with millions of individual parts can now be tracked and displayed like intelligent LEGO® structures.
The vision is that by adopting an open, agreed-upon model and common language of data communication, and by extending it to include various industries, such as engineering, construction, aviation and military operations, economies of scale can be achieved through a global and flexible platform for use by all, without restricting innovation or compromising security.
Seismic data is among the earliest data acquired in a prospect evaluation, and it is utilized throughout the exploration and production stages of a prospect. With recent advances in the handling of big data, it is essential to re-evaluate best practices in the seismic data ecosystem. This paper presents the idea of leveraging advances in big data and cloud computing for the seismic data ecosystem, with the aim of providing an improved user experience.
This new seismic platform would be capable of handling, managing and delivering the full spectrum of seismic data varieties, from acquired field data to interpretation-ready processed data. The system should have the following capabilities:
Capability to entitle the right portion of data to each user according to their interest
Organization of seismic data according to business units
Data security by sharing data only with legitimate users/groups.
Direct or indirect integration with all data sources and applications that consume and/or generate data
Sharing of and collaboration on data within the company and/or across organizations, including shareholding partners, prospective seismic buyers for trading and relinquishment, regulatory agencies, resource-certifying agencies, service providers, etc., even over limited network connectivity
Integration with, and data delivery to, end-user applications where the seismic data will be utilized
Implementation of the seismic ecosystem will enable the following:
Sharing of seismic data among the acquisition, quality control, data processing and interpretation user communities from one centralized store
Collaboration of stakeholders in real time over an encrypted network
Leveraging advances in cloud and mobility technology for agility and interaction; the system will be connected and interactive yet backed by the power of complex high-performance computing infrastructure in the background
Data delivery and auditing for a wider and more diverse user community that consumes data from different platforms
Secure data access based on organizational business units to make sure data does not fall into unauthorized hands
Reduction in seismic data turnaround time by reading and ingesting large volumes of data through parallel input/output operations
Improved data delivery and a map interface with contextual information from the centralized data store
Augmentation of traditional workflows with machine learning and artificial intelligence, for example automated fault detection
The proposed best practice aims to bring all of the different disciplines working with seismic data to one centralized seismic data repository and enable them to consume and share seismic data from a big data lake. This is live and interactive compared with traditional approaches of using archive-and-restore systems in standalone applications.
Alkandari, Dana K. (Australian College of Kuwait) | AlTheferi, Ghaneima M. (Australian College of Kuwait) | Almutawaa, Hawra'a M. (Australian College of Kuwait) | Almutairi, Maryam (Australian College of Kuwait) | Alhindi, Nora (Australian College of Kuwait) | Al-Rashid, Sherifa M. (Australian College of Kuwait) | Al-Bazzaz, Waleed H. A. (Kuwait Institute For Scientific Research)
Formation damage is the impairment of the permeability of rocks inside a petroleum reservoir. It occurs during drilling, production, stimulation and enhanced oil recovery operations, through various mechanisms such as chemical, mechanical, biological and thermal. Near-wellbore formation damage has a great impact on the productivity of the damaged formation. Acidizing is a stimulation method that removes the effect of near-wellbore damage by reacting with the damaging materials or the formation rocks (carbonate or sandstone) to restore or improve permeability around the wellbore. Several experiments were conducted to study the combined effect of temperature and acid concentration on the efficiency of matrix acidizing. Three hydrochloric acid concentration scenarios (3%, 15%, and 28%) and four temperature scenarios (25 °C, 35 °C, 70 °C, and 100 °C) were tested to investigate the effect of pore-enlargement success on permeability. The purpose of these experiments is to introduce the concept of an optimized temperature combined with an optimized acid concentration in carbonate matrix acidizing. Software for measuring the morphology of pore geometry and pore area is used. New advances in imaging capture pore-area enlargement as the big data necessary for artificial intelligence modeling. Pores captured before treatment and pores captured after thermal HCl acid treatment demonstrate that image processing of the actual acidized rock data can select the optimized acid concentration recipe that will increase permeability, and hence recovery. The results show that matrix acidizing is an effective method to improve permeability and enhance production, demonstrating that using a lower acid concentration with the optimized temperature can result in a favorable and satisfying outcome.
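As a hedged illustration of the image-processing step described above, the Python sketch below thresholds a grayscale thin-section image into pore and grain pixels and compares the pore-area fraction before and after treatment; the threshold value and the synthetic images are assumptions used only to show the calculation, not the authors' actual imaging workflow.

# Illustrative sketch of the pore-area measurement step: classify pixels as
# pore or grain and compare pore-area fraction before and after acid treatment.
import numpy as np

def pore_area_fraction(image: np.ndarray, pore_threshold: float = 0.5) -> float:
    """Fraction of pixels classified as pore space (dark pixels = pores here)."""
    pores = image < pore_threshold
    return float(pores.mean())

# Synthetic stand-ins for pre- and post-treatment images (0 = pore, 1 = grain).
rng = np.random.default_rng(1)
before = (rng.random((512, 512)) > 0.15).astype(float)   # roughly 15% pore space
after = (rng.random((512, 512)) > 0.22).astype(float)    # roughly 22% after acidizing

enlargement = pore_area_fraction(after) - pore_area_fraction(before)
print(f"pore-area fraction increased by {enlargement:.3f}")
# Repeating this for each (acid concentration, temperature) pair yields the
# dataset from which an optimized treatment recipe could be selected.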