The SPE has split the former "Management & Information" technical discipline into two new technical disciplines:
- Management
- Data Science & Engineering Analytics
Li, Chen (Chengdu University of Technology, China) | Yan, Bicheng (King Abdullah University of Science and Technology, Saudi Arabia) | Kou, Rui (Texas A&M University, United States) | Gao, Shunhua (Texas A&M University, United States)
Abstract The Fast Marching Method (FMM) is a highly efficient numerical algorithm frequently used to solve the Eikonal equation for the travel time from a source point to spatial locations, generating a geometric description of a monotonically advancing front in anisotropic and heterogeneous media. In modeling fluid flow in heterogeneous subsurface porous media, the FMM makes the characterization of pressure front propagation straightforward, using the diffusive time of flight (DTOF) as the Eikonal solution from an asymptotic approximation to the diffusivity equation. For infinite-acting flow in smoothly varying heterogeneous media, the travel time of the pressure front from an active production or injection well to an observation well can be estimated directly from the DTOF using the concept of radius of investigation (ROI). Based on the ROI definition, the travel time to a given location in space can be determined from the maximum magnitude of the partial derivative of pressure with respect to time. Treating the travel time computed at the observation well as the objective function, we propose an FMM-based deep learning (DL) framework, the Inversion Neural Network (INN), to inversely estimate heterogeneous reservoir permeability fields by training a deep neural network (DNN) with travel time data generated directly by the FMM. A convolutional neural network (CNN) is adopted to establish the mapping between the heterogeneous permeability field and the sparse observational data. Because of the quasi-linear relationship between travel time and reservoir properties, the FMM-inspired CNN provides a rapid inverse estimate of heterogeneous reservoir properties with sufficient accuracy relative to the true reference model, using a limited number of observation wells.
Inverse modeling results of the permeability fields are validated by the asymptotic pressure approximation through history matching of the reservoir models with the multi-well pressure transient data.
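The Eikonal travel-time computation at the core of this workflow can be sketched with a first-order Fast Marching Method on a 2-D grid. This is a minimal illustration of the general algorithm, not the authors' implementation; the grid size, spacing, and slowness field below are assumptions for the example.

```python
# Minimal first-order Fast Marching Method (FMM) for the 2-D Eikonal
# equation |grad T| = s(x): travel time T from a source cell to all cells.
import heapq
import numpy as np

def fmm_travel_time(slowness, src, h=1.0):
    """Return travel time T from the source cell to every grid cell."""
    ny, nx = slowness.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < ny and 0 <= nj < nx) or accepted[ni, nj]:
                continue
            # Upwind neighbour values in the y and x directions.
            a = min(T[ni - 1, nj] if ni > 0 else np.inf,
                    T[ni + 1, nj] if ni < ny - 1 else np.inf)
            b = min(T[ni, nj - 1] if nj > 0 else np.inf,
                    T[ni, nj + 1] if nj < nx - 1 else np.inf)
            hs = h * slowness[ni, nj]
            if np.isfinite(a) and np.isfinite(b) and abs(a - b) < hs:
                # Two-sided quadratic update: (T-a)^2 + (T-b)^2 = (h*s)^2
                t_new = 0.5 * (a + b + np.sqrt(2 * hs**2 - (a - b) ** 2))
            else:
                t_new = min(a, b) + hs  # one-sided update
            if t_new < T[ni, nj]:
                T[ni, nj] = t_new
                heapq.heappush(heap, (t_new, (ni, nj)))
    return T
```

For the DTOF application, the slowness field would be derived from the diffusivity of the heterogeneous medium rather than set to a constant.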
Abstract Unconventional reservoirs such as shale oil and gas are expected to play a major role in many unexplored regions globally. Shale resource evaluation involves estimating Total Organic Carbon (TOC), which correlates with the prospective capability of generating and containing hydrocarbons. Direct measurement of TOC through geochemical analysis is often not feasible, so researchers have focused on indirect methods that estimate TOC using analytical and statistical techniques. Accordingly, this work applies artificial intelligence (AI) techniques to leverage routinely available well logs for the prediction of TOC. Multiple algorithms are developed and compared to rank the optimum solution based on efficiency analysis. Support Vector Regression (SVR), Random Forest (RF), and XGBoost algorithms are utilized to analyze the well-log data and develop intelligent models for shale TOC. A process-based approach is followed, starting with systematic data analysis, which includes selection of the most relevant input parameters, data cleaning, filtering, and data dressing, to ensure optimized inputs into the AI models. The data utilized in this work come from major shale basins in Asia and North America. The AI models are then used to develop a TOC predictor as a function of fundamental open-hole logs, including sonic, gamma-ray, resistivity, and density. Furthermore, to strengthen the AI input-output correlation mapping, a k-fold cross-validation methodology integrated with an exhaustive grid-search approach is adopted, ensuring that optimized hyperparameters are selected for the intelligent algorithms developed in this work. Finally, the developed models are compared to geochemically derived TOC using a comprehensive error-analysis schema. The proposed models are tested for veracity by applying them to a blind dataset. An error-metrics schema composed of root-mean-square error and coefficient of determination is developed.
This analysis ranks the respective AI models by performance efficiency and prediction error. It is concluded that the XGBoost- and SVR-based TOC predictions are inaccurate, yielding high deviations from the actual measured values in predictive mode. In contrast, the Random Forest TOC predictor optimized using k-fold validation produces R values above 0.85 and reasonably low errors when compared to true values. The RF method outperforms the other models by mapping the complex nonlinear interactions between TOC and the various well logs.
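The cross-validated grid-search step described above can be sketched as follows. The well-log value ranges and the TOC relation are illustrative assumptions standing in for the basins' data, not the study's dataset.

```python
# Sketch: Random Forest TOC predictor tuned by exhaustive grid search
# inside k-fold cross-validation, scored with RMSE and R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 400
logs = np.column_stack([
    rng.uniform(20, 150, n),   # gamma ray (API) - assumed range
    rng.uniform(50, 120, n),   # sonic (us/ft) - assumed range
    rng.uniform(1, 1000, n),   # resistivity (ohm.m) - assumed range
    rng.uniform(2.2, 2.7, n),  # bulk density (g/cc) - assumed range
])
# Hypothetical nonlinear TOC response, used only to generate example data.
toc = (0.02 * logs[:, 0] + 0.03 * logs[:, 1]
       + 0.4 * np.log10(logs[:, 2]) - 2.0 * logs[:, 3]
       + rng.normal(0, 0.2, n))

X_tr, X_te, y_tr, y_te = train_test_split(logs, toc, random_state=0)
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [5, None]},
    cv=5, scoring="r2",          # 5-fold cross-validation
)
grid.fit(X_tr, y_tr)
pred = grid.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
r2 = r2_score(y_te, pred)
```

The same scaffold accommodates SVR or XGBoost estimators for the model ranking the paper describes; only the estimator and its parameter grid change.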
Tabib, Mandar Vasudeo (SINTEF Digital) | Nivlet, Philippe (SINTEF Industry) | Steinar, Knut (SINTEF Industry) | Skogestad, Jan Ole (SINTEF Industry) | Nybø, Roar (SINTEF Industry) | Rasheed, Adil (NTNU)
This paper demonstrates a novel intrusive hybrid-analytics and modelling (HAM) approach that combines a physics-based model with machine learning (ML) to predict key variables in the monitoring of hole cleaning during drilling, specifically pressure/equivalent circulating density (ECD) and cuttings volume fraction. Currently, to predict the spatial-temporal evolution of circulating mud in the annulus in real time during drilling, and potentially anticipate hole-cleaning issues, a low-resolution physics-based 1D model that solves multi-phase flow equations is utilized. This model is computationally efficient but susceptible to discrepancies with actual observations. These errors can result from numerical issues, unmodelled physics, or inaccurate input to the model. Here, machine learning is used to learn the pattern in the residuals between the low-resolution model and both a higher-fidelity calculation and measurements. The results show that including machine learning models to correct the low-fidelity cuttings transport model improves its accuracy in predicting pressure and cuttings volume fraction, the key variables for monitoring hole cleaning. The machine learning models (ANN and LSTM) perform well in learning and correcting various errors associated with the 1D model: (a) numerical errors, i.e. those resulting from coarser and finer time scales for the cuttings volume fraction along the well; (b) physics errors, i.e. the difference in pressure predictions between the high-fidelity and low-fidelity models; and (c) the error between measurements and the low-fidelity model's predictions.
The conclusion of the work is that the intrusive HAM approach, combining deep learning with a physics-based approach, has the potential to provide a robust and efficient replacement for unknown parts of the complex physics in mathematical models for drilling. Future work may involve using this HAM-in-drilling approach in conjunction with an anomaly detection algorithm to enable real-time decisions when an anomaly occurs.
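The residual-correction idea above can be illustrated compactly. As a minimal stand-in for the paper's ANN/LSTM correctors, this sketch fits the residual between a hypothetical low-fidelity and high-fidelity pressure profile with polynomial least squares; the profiles and noise level are invented for the example.

```python
# Hybrid analytics sketch: correct a cheap low-fidelity model with a
# learned residual, so corrected = low-fidelity + ML(residual pattern).
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 1.0, 200)                    # normalised depth

p_hi = 1.0 + 0.8 * depth + 0.15 * np.sin(6 * depth)   # "high-fidelity" truth
p_lo = 1.0 + 0.8 * depth                              # coarse 1D model

# Learn the residual pattern from noisy "training" observations.
residual_obs = p_hi - p_lo + rng.normal(0, 0.01, depth.size)
A = np.vander(depth, 6)                    # degree-5 polynomial features
coef, *_ = np.linalg.lstsq(A, residual_obs, rcond=None)

p_corrected = p_lo + A @ coef
err_before = np.max(np.abs(p_hi - p_lo))
err_after = np.max(np.abs(p_hi - p_corrected))
```

The paper's models play the role of the least-squares fit here, learning richer spatio-temporal residual patterns than a polynomial can.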
Abstract Underground gas storage (UGS) facilities are essential components of energy security. However, UGS wells present a complicated and delicate combination of elements in which ensuring safe and secure functionality over long periods is paramount. Today, with the advancement of continuous remote monitoring and digitalization, evaluating the integrity of UGS wells has become quicker and more efficient. This paper showcases how a digital twin is used to evaluate and forecast the link between leaks and temperature and pressure trends in a UGS well, allowing the identification and quantification of defects and, subsequently, well barrier integrity. UGS wells present additional criticalities compared with normal production wells because of their longer life spans and repetitive production and injection cycles. This makes early and accurate leak detection essential for safe management of the well barriers. The proposed digital twin has been developed using material and energy balances, treating each annulus as a separate control volume. Each control volume can exchange heat and mass through predesigned barriers. Simulating the evolution of pressure and temperature in the control volumes over time, and comparing the results with field data, allows the position and size of leaks to be identified. A genetic algorithm is applied to optimize the placement of leaks on their specific barriers. The system aims to identify the position and dimension of possible leaks by matching historical pressure, temperature, and flow data. Once a leak is identified, a risk assessment is conducted to evaluate the overall integrity of the well. If the status of the well is found to be sufficiently critical, an intervention may be planned. The system has been in use for a little over a year and has shown great potential for accurate and efficient identification of leaks. This has accelerated the process of well integrity evaluation and allowed timely interventions on wells that required them.
The process has also highlighted cases where previous assumptions about leak location and size were corrected using the digital twin, thereby reducing intervention costs. Finally, the model showed clear readiness for predictive capabilities aimed at selecting, planning, and designing fit-for-purpose mitigating actions. This paper highlights the power a digital twin can offer by leveraging field data with advanced algorithms. It also showcases workflows that allow convenient, efficient, and timely evaluation of well integrity, leading to safer operating conditions and lower operational costs.
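The genetic-algorithm matching step can be sketched with a toy forward model. The annulus response function, parameter ranges, and GA settings below are invented for illustration; the real twin solves coupled material and energy balances per control volume.

```python
# Sketch: a genetic algorithm recovers leak parameters (size, position)
# by matching a simulated pressure history to "observed" data.
import random

def simulate(size, pos, times):
    # Hypothetical annulus response: leak size sets the decline rate and
    # leak position (fraction along the barrier) adds curvature.
    return [100.0 - size * t * (1.0 + pos * 0.05 * t) for t in times]

times = list(range(30))
observed = simulate(0.8, 0.5, times)      # "field data" from a known leak

def mismatch(ind):
    sim = simulate(ind[0], ind[1], times)
    return sum((a - b) ** 2 for a, b in zip(sim, observed))

random.seed(0)
pop = [[random.uniform(0, 2), random.uniform(0, 1)] for _ in range(40)]
for _ in range(80):
    pop.sort(key=mismatch)
    survivors = pop[:10]                  # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(survivors, 2)
        child = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]   # crossover
        child[0] = min(max(child[0] + random.gauss(0, 0.05), 0.0), 2.0)
        child[1] = min(max(child[1] + random.gauss(0, 0.05), 0.0), 1.0)
        children.append(child)            # mutated offspring
    pop = survivors + children
best = min(pop, key=mismatch)             # estimated (size, position)
```

Matching temperature and flow histories alongside pressure, as the paper describes, simply adds terms to the mismatch function.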
Abstract Reservoir simulators based on physics provide the most accurate method for predicting oil and gas recovery, in particular from waterflood and EOR processes. However, detailed full-field simulation can be computationally demanding. In recent years, there have been attempts to accelerate reservoir simulation by combining simplified gridding requirements with data-driven approaches while maintaining the full physics. One such approach is the physics-based, data-driven flow network model, in which 1D or 2D grids connecting the wells are configured and simulated. The parameters of the flow network model are then tuned to match full 3D simulation or field data. Even though the grid has been simplified, a large number of parameters are needed to reproduce the 3D simulation results. In this paper, an approach similar to the flow network model is presented. The main contribution of this paper is the parameterization of the gridding process between the wells such that a minimal number of parameters are needed. Essentially, the grids between the wells are configured to accurately model the flow behavior. The corner-point grid geometry is kept so that current simulators can be used with the proposed method. In this paper, the grid geometry is determined with AI methods for one waterflood run. The grid can subsequently be used for waterfloods with widely different injection/production scenarios, and even for chemical floods. The ability of the approach to derive the grid from a single waterflood run is another significant contribution of this paper.
Contreras Perez, David Rafael (ADNOC Offshore – Seconded from OMV) | Sellar, Christopher (ADNOC Offshore – Seconded from OMV) | Valente, Alexander (ADNOC Offshore – Seconded from OMV)
Abstract The implementation of artificial intelligence and machine learning algorithms has introduced alternative techniques for estimating permeability within uncored intervals and wells. Different combinations of input data and different approaches to training values were evaluated to select a consistent predictive model able to produce permeability logs while honoring the geological concept and the available core permeability measurements. The approach presented here is deployed in a low-porosity, low-permeability, hydrocarbon-bearing carbonate reservoir with a limited dataset, aiming to estimate a permeability log from available wireline log and core data. The geological description from core in Reservoir 3 indicates the presence of oil saturation in mud-dominated carbonate rocks with a moderate degree of calcite cementation. After a rigorous quality check of the available permeability measurements from conventional core analysis, a high-confidence database of porosity and permeability measurements was combined with existing wireline logs (e.g. GR, resistivity, neutron porosity, and Archie water saturation). The resulting structured dataset was used for permeability prediction using Random Forest regression (from scikit-learn in Python). Three cases of permeability logs were generated from this methodology at well-log resolution for use in static reservoir modeling and saturation-height modeling with J-functions. The permeability and saturation-height models are key inputs for dynamic modelling to generate production forecasts in this undeveloped reservoir. Three different permeability models were trained using 144 high-quality core plug measurements from six cored wells. Even though the number of available samples can be considered low for a machine learning workflow, an oversampling approach with point repetition was adopted to overcome data insufficiency for this dataset.
From an E & P point of view, accurately predicting reservoir permeability in hydrocarbon reservoirs has been one of the major challenges facing the industry for decades. The approach outlined here reduces uncertainty in permeability prediction in the uncored interval. Furthermore, since permeability is part of the estimation of water saturation, this approach reduces uncertainty in water saturation interpretation, the potential deliverability of flow units and the volumetrics.
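The core of the workflow, oversampling a small core-plug dataset by point repetition and training the scikit-learn Random Forest regressor the paper names, can be sketched as follows. The log responses and the poro-perm trend are synthetic illustrations, not the reservoir's data.

```python
# Sketch: oversample a small core dataset by repetition, then train a
# Random Forest regressor on wireline-log features to predict log10(k).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.utils import resample

rng = np.random.default_rng(3)
n = 144                                   # mirrors the 144 core plugs
gr = rng.uniform(10, 80, n)               # gamma ray - assumed range
phi = rng.uniform(0.02, 0.15, n)          # neutron porosity (v/v) - assumed
sw = rng.uniform(0.2, 1.0, n)             # Archie water saturation - assumed
X = np.column_stack([gr, phi, sw])
# Hypothetical poro-perm trend; permeability spans decades, so work in log10.
log_k = -3.0 + 25.0 * phi - 0.01 * gr + rng.normal(0, 0.1, n)

# Oversampling by point repetition to stabilise training on the small set.
X_os, y_os = resample(X, log_k, n_samples=5 * n, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_os, y_os)
k_pred_md = 10.0 ** rf.predict(X)         # back-transform to millidarcies
```

Predicting in log space keeps the regression error roughly uniform across the decades of permeability typical of tight carbonates.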
Mariotti, Pamela (Eni S.p.A.) | Toscano, Claudio (Eni S.p.A.) | Vecera, Carmela (Eni S.p.A.) | Da Marinis, Annunziata (Eni S.p.A.) | Frau, Simone (Eni S.p.A.) | Poggio, Franco (Eni S.p.A.) | Pangestu, Imam (Eni Muara Bakau BV) | Praja, Kurna (Eni Muara Bakau BV)
Abstract The oil and gas industry is becoming increasingly digitalized. The abundance and variety of recorded data have driven the industry to move from conventional data management toward more modern data acquisition. The field under study (Field A) is a deep-water gas asset characterized by a complex internal architecture of many separate, discrete, gas-charged stacked sand bodies. The objective of this paper is to show the key role of the reservoir monitoring strategy, fully integrated into a multidisciplinary workflow, in refining the reservoir conceptual model and identifying valuable production optimization opportunities. Field A produces through smart wells with selective completions, equipped with permanent downhole gauges (one for each open layer) that allow real-time monitoring of the key dynamic parameters (e.g., rate, flowing bottomhole pressure) and implementation of surveillance actions such as selective pressure transient analysis. A workflow is implemented to describe the performance of each open layer, integrating all available data and starting from well back-allocation verification through virtual metering. An inflow performance relationship per layer is then used to back-allocate well production to each unit. Robust, continuous updating of the material balance analysis for each layer verifies alignment between the geological gas volume in place and the dynamically connected volume, leading to coherent updates of the dynamic model as well. The comparison between the geological gas volume in place and the dynamically connected volume triggered a revision of the geological model, reviewing seismic uncertainty and facies modelling to embed the dynamic evidence. Among the parameters considered, internal layer connectivity proved the most impactful. The revised model allowed residual opportunities in the developed layers, and possible additional exploration targets, to be identified and ranked.
The result of this screening led to the strategic business decision to plan an infill well, with its primary target being the best unexploited sub-portion identified within one of the analyzed layers, together with other stacked minor targets. The expectation for the primary target was confirmed by the data acquired in the newly drilled well. Moreover, the real-time monitoring workflow has been implemented in a digital environment for continuous automated updating, resulting in continuous reservoir monitoring and management. The successful experience on Field A proved the key role of a structured reservoir monitoring strategy as a "drive mechanism" for a decision-making process with a major impact on the core business. The automation of data extraction will lead the way to an increasingly efficient use of the large volumes of data coming from real-time monitoring, further improving the overall process of identifying asset maximization opportunities.
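The per-layer material balance check described above can be made concrete for a volumetric dry-gas layer, where p/Z declines linearly with cumulative production Gp and the connected gas in place G is the Gp-intercept at p/Z = 0. The numbers below are illustrative, not Field A data.

```python
# Sketch: dynamically connected gas in place from a p/Z material balance,
# compared against the static (geological) GIIP for one layer.
import numpy as np

gp = np.array([0.0, 2.0, 4.0, 6.0, 8.0])                  # cum. prod., Bscf
p_over_z = np.array([400.0, 360.0, 320.0, 280.0, 240.0])  # p/Z, bar

slope, intercept = np.polyfit(gp, p_over_z, 1)   # linear p/Z vs Gp trend
g_connected = -intercept / slope                 # Gp where p/Z reaches zero

giip_geological = 25.0   # Bscf, hypothetical value from the static model
# A connected volume well below the geological GIIP flags poor internal
# layer connectivity and triggers the geomodel revision described above.
```

In this example the dynamic data see only 20 Bscf of the assumed 25 Bscf in place, the kind of mismatch that drove the facies-model review on Field A.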
Nguyen, Quang Minh (The University of Tulsa) | Onur, Mustafa (The University of Tulsa) | Alpak, Faruk Omer (Shell International Exploration & Production Inc.)
Abstract This study focuses on carbon capture, utilization, and sequestration (CCUS) through a nonlinearly constrained production optimization workflow for a CO2-EOR process, in which the net present value (NPV) and the net present carbon tax credits (NPCTC) are maximized bi-objectively. Emphasis is placed on injection bottomhole pressure (IBHP) constraints on the injectors, in addition to field liquid production rate (FLPR) and field water production rate (FLWR) constraints, to ensure the integrity of the formation and to prevent potential damage during the life-cycle injection/production process. The main optimization framework used in this work is a lexicographic method based on line-search sequential quadratic programming (LS-SQP) coupled with stochastic simplex approximate gradients (StoSAG). We demonstrate the performance of the optimization algorithm on a realistic field-scale problem simulated with a commercial compositional reservoir simulator. Results show that the workflow solves the single-objective and bi-objective optimization problems computationally efficiently and effectively, especially in handling and honoring the nonlinear state constraints imposed on the problem. Various numerical settings were used to estimate the Pareto front for the bi-objective optimization problem, showing the trade-off between the two objectives, NPV and NPCTC. We also perform a single-objective optimization on the total life-cycle cash flow, the aggregate of NPV and NPCTC, and quantify the results to further emphasize the necessity of bi-objective production optimization, especially in conjunction with commercial flow simulators that lack the capability of computing adjoint-based gradients.
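The StoSAG gradient at the heart of this workflow can be sketched on a toy objective. A concave quadratic stands in for the simulator-evaluated NPV, and simple projected gradient ascent stands in for the paper's LS-SQP outer loop; all values are illustrative.

```python
# Sketch: stochastic simplex approximate gradient (StoSAG) estimate of the
# objective gradient, used to drive a bound-constrained ascent on controls.
import numpy as np

rng = np.random.default_rng(4)

def npv(u):
    # Concave quadratic stand-in for NPV of a control vector u (e.g.
    # injection rates); the real objective requires a reservoir simulator.
    opt = np.array([0.6, 0.3, 0.8])
    return 10.0 - np.sum((u - opt) ** 2)

def stosag_gradient(u, sigma=0.05, n_perturb=20):
    # Average of objective differences weighted by Gaussian perturbations.
    z = rng.normal(0.0, sigma, (n_perturb, u.size))
    dj = np.array([npv(u + zi) - npv(u) for zi in z])
    return (dj[:, None] * z).mean(axis=0) / sigma**2

u = np.array([0.2, 0.9, 0.1])        # initial controls
for _ in range(200):
    u = u + 0.05 * stosag_gradient(u)
    u = np.clip(u, 0.0, 1.0)         # simple bound (e.g. rate) constraints
```

StoSAG needs only objective evaluations, which is exactly why it pairs well with commercial simulators that cannot supply adjoint gradients; the nonlinear state constraints (IBHP, FLPR) would enter through the SQP machinery rather than the clipping shown here.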
Abstract Representation learning is a technique for transforming high-dimensional data into lower-dimensional representations that capture meaningful patterns or structures in the data. Uniform manifold approximation and projection (UMAP) enables representation learning by combining nearest neighbor search with stochastic gradient descent on a low-dimensional, graph-based representation, preserving the local structure and global distances present in the high-dimensional data. In this paper, we apply unsupervised representation learning to accelerometer and hydrophone signals recorded during a fracture propagation experiment at the Sanford Underground Research Facility in South Dakota. Our UMAP-based representation follows a five-step process, including distance formulation, connection probability calculation, and low-dimensional projection using force-directed optimization. Our analysis shows that the short-time Fourier transform of signals recorded by a single channel of the 3D accelerometer is the best feature extraction technique for representation learning. For the first time, we have successfully identified the distinct fracture planes corresponding to each micro-earthquake location using accelerometer and hydrophone data from an intermediate-scale hydraulic stimulation experiment. Our results from the EGS Collab project show the accuracy of this method in identifying fracture planes and hypocenter locations using signals from both accelerometers and hydrophones.
Our findings demonstrate that UMAP is a powerful tool for understanding the underlying structure of seismic signals in hydraulic fracturing.
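The feature-extraction step the paper identifies as most effective, short-time Fourier transform (STFT) magnitudes of a single channel, can be sketched as below. A plain SVD projection stands in for UMAP here purely for a self-contained illustration; the real workflow would pass `features` to umap-learn instead, and the synthetic two-class signals are assumptions.

```python
# Sketch: STFT magnitude features from single-channel waveforms, then a
# 2-D embedding (SVD here as a stand-in for UMAP) that separates events.
import numpy as np

rng = np.random.default_rng(5)

def stft_features(signal, win=64, hop=32):
    """Magnitude STFT with a Hann window, flattened to one feature vector."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).ravel()

# Two synthetic "event" classes with different dominant frequencies.
t = np.arange(512) / 512.0
signals = [np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=t.size)
           for f in [20] * 10 + [60] * 10]
features = np.array([stft_features(s) for s in signals])

# Centre and project to 2-D (UMAP would replace these two lines).
X = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
embedding = X @ vt[:2].T
```

Unlike this linear projection, UMAP's graph-based embedding can also separate clusters that differ only in local neighborhood structure, which is what makes it suited to distinguishing fracture planes.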
Abstract This paper demonstrates how supervised machine learning (ML) aids the planning and acquisition of wireline formation testing (WFT) in thin laminated sands. Available well data were used to train a set of algorithms to identify intervals where tests are likely to fail. The trained model aims to prevent WFT failures, which in turn reduces rig downtime, increases the efficiency of logging contracts, and improves reservoir characterization. Wireline formation testing is essential for acquiring rock and fluid characteristics in multilayered reservoirs, forming the basis for subsequent decisions. This becomes significantly more complicated in thin laminated sands with a thick hydrocarbon column, which can require hundreds of test points to meet given objectives. The percentage of failed tests can be much higher than that of successful ones. In this context, an automated advisory system based on the abundant historical WFT dataset can mitigate the personal biases of the subsurface team and boost the share of successful tests in future wells. A combined set of wireline logs served as features to predict WFT outcomes in two classification approaches. Binary classification predicts the likelihood of a good test or a failure, whereas multi-class classification further details failure types into five categories. The overall dataset comprised more than 500 points (testing attempts) within the concession, used to train various ML models with a variety of preprocessing steps and hyperparameters. Accuracy and area under the curve (AUC) were used as the ranking criteria. Performance depends mostly on the number of classes to be predicted, the number of input features, and the number of data points available for training; fewer classes and more input features generally yield better model metrics. The final selected model attained a maximum accuracy of 0.75 in two exploration wells in the adjacent concessions, i.e. correctly predicting 75 outcomes out of 100 wireline formation tests.
A log interpretation suite accesses the deployed model via a cloud endpoint for the upcoming infill wells. The approach could improve wireline formation testing in other reservoirs or regions prone to WFT failures, where the accumulated data is sufficient for machine learning applications. This could result in tangible savings during well operations.
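The binary-classification and ranking step can be sketched as follows. The features, the hypothetical success rule, and the logistic-regression estimator are illustrative assumptions; the paper compares several model families on real log data.

```python
# Sketch: binary WFT-outcome classifier ranked by accuracy and AUC,
# trained on synthetic stand-ins for wireline-log features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(7)
n = 500                                   # mirrors the ~500 test attempts
X = np.column_stack([
    rng.uniform(0, 1, n),                 # e.g. normalised porosity log
    rng.uniform(0, 1, n),                 # e.g. normalised mobility index
    rng.uniform(0, 1, n),                 # e.g. shale-volume indicator
])
# Hypothetical rule: tight, shaly points tend to fail (class 0).
p_success = 1 / (1 + np.exp(-(4 * X[:, 1] - 3 * X[:, 2] - 0.5)))
y = (rng.uniform(0, 1, n) < p_success).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Swapping the estimator and repeating the accuracy/AUC computation reproduces the model-ranking loop the paper describes; the multi-class variant replaces `roc_auc_score` with its one-vs-rest form.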