Within information technology (IT) today, a significant trend is the empowerment of artificial intelligence (AI) and the Internet of Things (IoT) through edge computing, which expedites time to value for digital transformation initiatives. Santhosh Rao, a senior research director at Gartner, notes that approximately 10% of enterprise-generated data is currently created and processed outside a traditional centralized data center or cloud; Gartner anticipates this figure will surge to 75% by 2025. Building on this, edge computing is propelling computer vision into a new era, catalyzing the development of smart devices, intelligent systems, and immersive experiences. Its inherent benefits, including expedited processing, increased security, and real-time insights, have positioned edge computing as a pivotal tool across a range of computer vision applications.
AI Augmented Engineering Intelligence for Industrial Equipment
Santos, P. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | Aldren, L. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | Melvin, E. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | Lim, J. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | McMillan, G. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | Yang, J. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | Fraser, C. (TotalEnergies E&P UK Ltd, Aberdeen, Scotland) | ODonnell, J. (Genesis Energies, Aberdeen, Scotland)
Abstract This paper presents an Artificial Intelligence (AI) based implementation for anomaly detection in critical industrial equipment at TotalEnergies. The approach was initially inspired by the work of Sipple, J. (2020) on AI anomaly detection in smart buildings. It has since been adapted, applied, and extended to handle multidimensional time series data effectively, while preserving model interpretability by design. The AI models have been successfully deployed on various North Sea assets and have detected anomalous behaviour in industrial rotating equipment, leading to improved visibility, efficiency, and reliability. The paper also discusses the motivations for the business case and the change management processes associated with the implementation of the AI models.
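As a rough illustration of interpretable multidimensional anomaly scoring in the spirit described above (a much simpler stand-in, not TotalEnergies' implementation or Sipple's full method), a per-sensor z-score decomposition keeps the alert explainable: the largest contribution names the offending sensor. The function names and the diagonal-covariance simplification are assumptions:

```python
import math

def fit_baseline(history):
    """Fit per-sensor mean and standard deviation from a window of
    normal-operation sensor vectors (e.g. [[temp, press, vib], ...]).
    Illustrative only; a zero std is replaced by 1.0 to avoid division
    by zero."""
    n = len(history)
    dims = len(history[0])
    means = [sum(row[d] for row in history) / n for d in range(dims)]
    stds = [
        math.sqrt(sum((row[d] - means[d]) ** 2 for row in history) / n) or 1.0
        for d in range(dims)
    ]
    return means, stds

def anomaly_score(sample, means, stds):
    """Return (score, per-sensor contributions). The contributions keep
    the alert interpretable: the largest one identifies which sensor
    drove the anomaly."""
    contrib = [((x - m) / s) ** 2 for x, m, s in zip(sample, means, stds)]
    return math.sqrt(sum(contrib)), contrib
```

A reading far outside one sensor's baseline produces a high score whose contribution vector points at that sensor, mirroring the "interpretability by design" goal.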
The Deployment of Deep Learning Models for Performance Optimization and Failure Prevention of Electric Submersible Pumps
Dachanuwattana, Silpakorn (Mubadala Petroleum) | Ratanatanyong, Suwitcha (Mubadala Petroleum) | Wasanapradit, Tawan (Mubadala Petroleum) | Vimolsubsin, Pojana (Mubadala Petroleum) | Kulchanyavivat, Sawin (Mubadala Petroleum)
Abstract Real-time sensors are crucial for monitoring electrical submersible pump (ESP) operation. However, manually analyzing all the data from these sensors is virtually impossible due to its overwhelming volume. Artificial intelligence (AI) is a game-changing tool that can leverage the big data from ESP sensors more efficiently. Coupled with ESP knowledge, AI can reveal insights into ESP behaviour, well performance, and reservoir dynamics, leading to ESP life extension and better production optimization. In this paper, we present the development and deployment of an AI workflow to enhance ESP surveillance. The workflow is developed in-house using the Python programming language and consists of the following four main modules:
- Data ingestion – to ingest all ESP-relevant databases
- Data preprocessing – to transform the databases into a format ready for AI modelling
- AI modelling – to experiment with several AI models, e.g., to detect ESP critical events and predict ESP run life
- Deployment – to automatically notify users of ESP critical events and visualize insights from the AI models
The application of a hierarchical clustering algorithm reveals that ESP run life in our fields is most influenced by gas production. Then, after more than 1,000 experimental runs, we achieve a deep learning model that predicts whether an ESP will fail within the next 90 days. We also develop a module to automate nodal analysis as part of the AI workflow. Combining this physics-based model with the data-driven approach, the resulting AI models can accurately detect ESP critical events such as ESP degradation, imminent gas lock, and sand production. To deploy the AI workflow, we build a dashboard on our local server to effectively visualize actionable insights from the AI models. The workflow sends notifications of ESP critical events to users for prompt troubleshooting and collects user feedback to improve the AI models in the next model development cycle.
This paper demonstrates a holistic approach to developing a closed-loop ESP surveillance workflow that integrates the power of AI, automation, and ESP knowledge, including nodal analysis. The AI workflow potentially creates value of several million dollars or more per year by extending ESP run lives and optimizing production. The lessons learnt from this development are shared to assist the development and deployment of similar AI methods throughout the oil and gas industry.
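One concrete piece of such a workflow is turning ESP run-life records into supervised labels for the 90-day failure predictor described above. The sketch below is a hypothetical illustration of that labelling step, not the authors' in-house code; the function signature and data shapes are invented:

```python
from datetime import date, timedelta

def label_windows(readings, failure_date, horizon_days=90):
    """Label each daily sensor snapshot 1 if the ESP failed within the
    next `horizon_days`, else 0 -- the supervision target for a
    "will this ESP fail within 90 days?" classifier.

    readings: dict mapping date -> feature vector for that day.
    failure_date: date of the recorded failure, or None if the run
    ended without failure.
    Returns a chronologically sorted list of (day, features, label).
    """
    horizon = timedelta(days=horizon_days)
    labeled = []
    for day, features in sorted(readings.items()):
        label = 1 if (failure_date is not None
                      and day <= failure_date <= day + horizon) else 0
        labeled.append((day, features, label))
    return labeled
```

Snapshots from runs that never failed all receive label 0, which is why class imbalance handling typically matters in this kind of training set.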
- Asia > Middle East > UAE (0.29)
- North America > United States > Texas (0.28)
Abstract In heterogeneous tight sand formations, horizontal wells encounter intervals deposited under varying depositional environments along the lateral portion of the wellbore between landing point and total depth. The horizontal wells in this study were drilled in tight sands deposited in a marine environment where lateral depositional facies changes are common, and hydraulic fracture stimulation is necessary to achieve economic hydrocarbon extraction due to the relatively low permeability of the formation. Without geomechanical logs, currently derived from wireline logging, it is not possible to optimize cluster spacing and placement. This step provides necessary information used to optimize completion design, which is crucial to the ultimate productivity of a well. Due to formation heterogeneity, expensive wireline logs must be collected to optimize fracture stimulation, or new methods to estimate these logs must be employed. This paper presents a technique to optimize cluster selection for hydraulic fracturing in unconventional tight gas development horizontal wells without wireline logging, by leveraging Measurement While Drilling (MWD) gamma ray logs and surface drilling parameters together with Artificial Intelligence (AI) algorithms to predict density, compressional, and shear slowness logs for use in geomechanical evaluation.
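The abstract does not name the AI algorithm, so as a hedged illustration, a simple k-nearest-neighbour regressor shows the shape of the task: predicting a geomechanical log value at a depth station from MWD gamma ray and surface drilling parameters. All names and inputs are hypothetical:

```python
def knn_predict_log(train_X, train_y, query, k=3):
    """Predict a geomechanical log value (e.g. compressional slowness)
    from feature vectors such as [gamma_ray, rate_of_penetration].

    A k-nearest-neighbour average is a stand-in for the paper's
    (unspecified) AI algorithm; a production workflow would normalize
    features and tune k. Distances are squared Euclidean, which
    preserves neighbour ordering.
    """
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_X, train_y)
    )
    nearest = dists[:k]
    return sum(y for _, y in nearest) / len(nearest)
```

Training pairs would come from offset wells that do have wireline-derived slowness logs; queries come from the unlogged lateral.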
- North America (0.48)
- Asia > Middle East > Saudi Arabia (0.47)
- Research Report > New Finding (0.50)
- Overview (0.34)
- Geology > Geological Subdiscipline > Geomechanics (1.00)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock > Sandstone (0.50)
Estimation of Far-Field Fiber Optics Distributed Acoustic Sensing DAS Response Using Spatio-Temporal Machine Learning Schemes and Improvement of Hydraulic Fracture Geometric Characterization
Ramos Gurjao, Kildare George (Texas A&M University) | Gildin, Eduardo (Texas A&M University) | Gibson, Richard (NanoSeis) | Everett, Mark (Texas A&M University)
Abstract Distributed Acoustic Sensing (DAS) is a fiber optics method that is revolutionizing unconventional reservoir monitoring, offering substantial spatial coverage, high-frequency data acquisition, and broad cable deployment options, including hazardous/harsh environments, compared to traditional geophysical methods such as point sensors (i.e., geophones). However, a single well equipped with fiber cannot acquire the far-field strain response, since the sensitivity of this technique is restricted to a region near the monitor well. In this paper, we develop an Artificial Intelligence (AI) algorithm to estimate the magnitude of the far-field DAS response for any spatio-temporal input. Moreover, we identify a discontinuity in displacement results following a fracture hit, which is interpreted as an effect of rock plastic deformation, and for the first time we demonstrate that it may be related to fracture width. The output of our algorithm is therefore used to estimate this geometric property over time at multiple locations. We generate the tangent displacement component (uy) (parallel to the monitor well) using an in-house code based on the Displacement Discontinuity Method (DDM). Several monitor wells are incorporated in the simulation of physical scenarios characterized by single and multiple hydraulic fractures. For each scenario we train and test an Artificial Neural Network (ANN) with position and time as input variables and axial displacement as output. The Machine Learning (ML) model is designed with 7 hidden layers, a maximum of 100 neurons per layer, and the hyperbolic tangent as activation function. Finally, the predicted uy is used to: (1) obtain Distributed Acoustic Sensing (DAS) data by differentiating it sequentially in space and time; and (2) estimate fracture width based on the discontinuity magnitude. The training stage is performed while avoiding overfitting and minimizing the ANN loss function.
In the testing phase, the error between true and predicted variables is negligible over the entire waterfall plot region, except at the initial time steps, where the fracture treatment starts at the operation well and the magnitude of the axial displacement collected at the monitor well is very small, on the order of 10 or even lower. In this case, we suspect that these tiny supervision values have minimal impact on the loss function, so the weights and biases of the regression model are barely updated to account for such outputs. Regarding fracture width estimation, the error reduces consistently over time at all locations, reaching values near 0%. To the best of our knowledge, this is the first work that creates an ML algorithm able to estimate strain fields generated during hydraulic fracturing treatments based merely on position and time inputs. The model developed with synthetic data is an incentive for the deployment of multiple monitor wells in the field to enhance the geometric characterization of created fracture systems beyond the near-wellbore region, and possibly to identify critical patterns associated with fracture propagation that can ultimately lead to production optimization.
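The stated post-processing step, deriving DAS-like data from predicted displacement by differentiating sequentially in space and time, can be sketched with finite differences. This is a minimal illustration under assumed uniform grid spacing, not the authors' code:

```python
def das_from_displacement(uy, dx, dt):
    """Convert a grid of axial displacement uy[t][z] into a DAS-like
    strain-rate proxy.

    Step 1: forward difference in space (along the fiber) gives strain.
    Step 2: forward difference in time gives strain rate, which is what
    DAS effectively measures. Illustrative only; real processing would
    also handle gauge length and noise.
    """
    nt, nz = len(uy), len(uy[0])
    strain = [[(uy[t][z + 1] - uy[t][z]) / dx for z in range(nz - 1)]
              for t in range(nt)]
    return [[(strain[t + 1][z] - strain[t][z]) / dt
             for z in range(nz - 1)] for t in range(nt - 1)]
```

Feeding the ANN's predicted uy(z, t) grid through this function yields the waterfall-plot quantity compared against measured DAS.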
- Well Completion > Hydraulic Fracturing (1.00)
- Production and Well Operations > Well & Reservoir Surveillance and Monitoring > Production logging (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Neural networks (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Artificial intelligence (1.00)
By mid-2022, Meta will control what it believes will be the world's fastest artificial intelligence (AI) supercomputer. Dubbed the AI Research SuperCluster (RSC), the system is already running and is among the world's fastest AI supercomputers, the company said in a blog post on 24 January. Development of the RSC is ongoing, but once the second phase is completed by the second half of this year, the system will deliver nearly 5 exaflops of mixed-precision computing. Meta, formerly known as Facebook, is already using the supercomputer to train large models in natural language processing (NLP) and computer vision for research. The company uses large-scale AI models for ongoing priorities, such as detecting harmful content on its social platforms.
Abstract This investigation presents a powerful predictive model to determine crude oil formation volume factor (FVF) using state-of-the-art artificial intelligence (AI) techniques. FVF is a vital pressure-volume-temperature (PVT) parameter used to characterize hydrocarbon systems and is pivotal to reserves calculation and reservoir engineering studies. Ideally, FVF is measured at the laboratory scale; however, prognostic tools to evaluate this parameter can optimize time and cost estimates. The database utilized in this study is obtained from the open literature and covers statistics of crude oils of the Middle East region. Multiple AI algorithms are considered, including Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS). Models are developed using an optimization strategy for the various parameters/hyper-parameters of the respective algorithms. Unique permutations and combinations of the number of perceptrons and their resident layers are investigated to reach the solution providing the most optimal output. These intelligent models are produced as a function of the parameters intrinsically affecting FVF: reservoir temperature, solution GOR, gas specific gravity, bubble point pressure, and crude oil API gravity. A comparative analysis of the developed AI models is performed using visualization and statistical analysis, and the best model is identified. Finally, a mathematical equation to determine FVF is extracted, with the respective weights and biases of the presented model. Graphical analysis is used to evaluate the performance of the developed AI models; the scatter plots show that most points lie on the 45-degree line. Moreover, during this study, an error metric is developed comprising multiple analysis parameters: average absolute percentage error (AAPE), root mean squared error (RMSE), and the coefficient of determination (R²).
All models investigated are tested on an unseen dataset to prevent biased model development. The performance of the established AI models is gauged with this error metric, demonstrating that the ANN outperforms ANFIS, with errors within 1% of the measured PVT values. A computationally derived intelligent model provides strong predictive capabilities, as it maps the complex non-linear interactions between the various input parameters and FVF.
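The constituent parameters of the error metric, AAPE, RMSE, and the coefficient of determination, follow standard definitions and can be sketched directly. How the paper combines them into a single metric is not specified, so only the individual quantities are shown:

```python
import math

def aape(actual, pred):
    """Average absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, pred)) / len(actual))

def r2(actual, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

A perfect predictor scores AAPE = 0, RMSE = 0, and R² = 1; model ranking then compares candidates on the unseen test set.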
Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of the reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full physics model by dynamically building and updating an artificial intelligence (AI) based model. The AI model can be used to quickly mimic the full physics (FP) model. The methodology we propose starts with running the FP model; an associated AI model is systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on at the end of the exercise either to confirm the AI model's decision and stop the study, or to reject this decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model, where the objective is to match the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) is necessary to improve the accuracy of the reservoir simulation results. Reservoir simulation using the FP model and 1,024 CPUs requires approximately 14 hours. During this history matching exercise, six parameters were selected to be part of the optimization loop. A Latin Hypercube Sampling (LHS) of seven FP runs is therefore used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed either to confirm convergence for the FP model, or to reiterate the same approach starting from an LHS around the converged solution. The subsequent AI model is updated using all the FP simulations performed in the study.
This approach achieves the history match with very acceptable match quality, yet with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly utilized in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations. New concepts and techniques are genuinely needed to complete such studies successfully, and the hybrid approach we propose shows very promising results in handling this challenge.
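The LHS initialization step can be illustrated with a minimal Latin Hypercube sampler over the selected parameters' bounds, guaranteeing one sample per stratum in each dimension. This is a generic sketch of stratified sampling, not the authors' tooling; the seed and bounds are placeholders:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: for each parameter, partition its
    range into n_samples equal strata and draw exactly one value per
    stratum, shuffling stratum assignment across samples.

    bounds: list of (low, high) tuples, one per parameter.
    Returns n_samples points, each a list with one value per parameter.
    """
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        slots = list(range(n_samples))
        rng.shuffle(slots)  # decouple strata across dimensions
        for i, slot in enumerate(slots):
            u = (slot + rng.random()) / n_samples  # point within stratum
            samples[i][d] = lo + u * (hi - lo)
    return samples
```

In the paper's setting, seven such points over six history-matching parameters would seed the seven initial FP runs that train the first proxy model.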
Unleashing the Potential of Relative Permeability Using Artificial Intelligence
Shah, Abdur Rahman (Schlumberger) | Ghorayeb, Kassem (American University of Beirut) | Mustapha, Hussein (Schlumberger) | Ramatullayev, Samat (Schlumberger) | Droubi, Nour El (Schlumberger) | Kloucha, Chakib Kada (ADNOC)
Abstract One of the most important aspects of any dynamic model is relative permeability. To unlock the potential of large relative permeability databases, the proposed workflow integrates data analysis, machine learning, and artificial intelligence (AI). The workflow allows for the automated generation of a clean database and a digital twin of the relative permeability data. It employs AI to identify analogue data from nearby fields by extending the rock typing scheme across multiple fields for the same formation. We created a fully integrated and intelligent tool for extracting SCAL data from laboratory reports, then processing and modeling the data using AI and automation. After the endpoints and Corey coefficients have been extracted, the quality of the relative permeability samples is checked using an automated history match and simulation of core flood experiments. A trained AI model is used to identify analogues for various rock types from other fields producing from the same formations. Finally, based on the output of the AI model, the relative permeabilities are calculated using data from the same and analogue fields. The workflow solution offers a solid, well-integrated methodology for creating a clean relative permeability database. It made it possible to create a digital twin of the relative permeability data using the Corey and LET methods in a systematic manner. The simulation runs were designed so that the pressure measurements are history matched through the adjustment and refinement of the relative permeability curves. The AI workflow enabled us to realize the full potential of the massive database of relative permeability samples from various fields. To ensure utilization in the dynamic model, high, mid, and low cases were created in a robust manner. The workflow solution employs AI models to identify rock typing analogues from the same formation across multiple fields.
The AI-generated analogues, combined with a robust workflow for quickly QC’ing the relative permeability data, allow for the creation of a fully integrated relative permeability database. The proposed solution is agile and scalable, and it can adapt to any data and be applied to any field.
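Since the workflow fits Corey (and LET) parameterizations to the cleaned SCAL data, a minimal Corey water relative permeability curve illustrates the parametric form involved. Parameter names follow common convention and are not taken from the paper:

```python
def corey_krw(sw, swc, sorw, krw_max, nw):
    """Corey-form water relative permeability.

    sw:      water saturation at which to evaluate the curve
    swc:     connate water saturation (krw = 0 at sw = swc)
    sorw:    residual oil saturation to water (krw = krw_max at
             sw = 1 - sorw)
    krw_max: endpoint relative permeability
    nw:      Corey exponent controlling curvature
    """
    swn = (sw - swc) / (1.0 - swc - sorw)  # normalized saturation
    swn = min(max(swn, 0.0), 1.0)          # clamp outside the mobile range
    return krw_max * swn ** nw
```

Fitting (swc, sorw, krw_max, nw) to each cleaned experiment, and sweeping those parameters across samples, is one systematic way to build the high/mid/low cases the abstract describes.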
- Geophysics > Time-Lapse Surveying > Time-Lapse Seismic Surveying (0.47)
- Geophysics > Seismic Surveying (0.47)
- North America > United States > Texas > Permian Basin > Midland Basin > Good Field (0.89)
- North America > United States > Texas > Fort Worth Basin > Fair Field (0.89)
Abstract Wintershall Dea, together with partners, is developing a digital system to monitor and optimize electrical submersible pump (ESP) performance based on data from the Mittelplate oil field. The tool uses machine learning (ML) models fed by historic data and notifies engineers and operators when operating conditions are trending beyond the operating envelope, enabling the operator to mitigate upcoming performance problems. In addition to traditional engineering methods, such a system captures knowledge through continuous improvement based on ML. With this approach, the engineer has a system at hand to support day-to-day work: manual monitoring and on-demand investigations are now backed by an intelligent system that permanently monitors the equipment. To create such a system, a proof of concept (PoC) study was initiated with industry partners and data scientists to evaluate historic events, which are used to train the ML systems. This phase aims to better understand the capabilities of machine learning and data science in the subsurface domain, as well as to build the engineers' trust in such systems. The concept evaluation has shown that intensive collaboration between engineers and data scientists is essential. A continuous and structured exchange between engineering and data science resulted in a mutually developed product that fits the engineers' needs within the technical capabilities and limits set by the ML models. To organize this development, new project management elements such as agile working methods, sprints, and scrum were utilized. During the development, Wintershall Dea partnered with two organizations: one with a pure data science background, and the data science team of the ESP manufacturer.
After the PoC period, the following conclusions can be drawn: (1) data quality and format are key to success; (2) detailed knowledge of the equipment speeds up the development and improves the quality of the results; (3) high model accuracy requires a high number of events in the training dataset. The overall conclusion of this PoC is that collaboration between engineers and data scientists, fostered by the agile project management toolkit and suitable datasets, leads to a successful development. Even when the limits of the ML algorithms are hit, the model forecast, in combination with traditional engineering methods, adds significant value to ESP performance. The novelty of such a system is that the production engineer is supported by trusted ML models and digital systems. In combination with traditional engineering tools, the system improves equipment monitoring and decision making, leading to increased equipment performance.
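The notification trigger described above, flagging conditions that drift beyond the operating envelope, can be illustrated with a minimal threshold check. The channel names and bounds are invented for illustration; the Mittelplate system's ML-based trending is considerably richer than a static envelope:

```python
def check_envelope(reading, envelope):
    """Return the list of sensor channels whose current value lies
    outside its configured operating envelope.

    reading:  dict of channel name -> current value
    envelope: dict of channel name -> (low, high) acceptable bounds
    An empty result means no notification is needed.
    """
    return [ch for ch, value in reading.items()
            if not envelope[ch][0] <= value <= envelope[ch][1]]
```

In practice an ML model would forecast each channel forward and run this kind of check on the forecast, so the notification arrives before the excursion rather than after it.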