The rapid development of machine learning algorithms and the massive accumulation of well data from continuous monitoring have enabled new applications in the oil and gas industry. Data gathered from well sensors are a foundation of oilfield digitization and data-driven analysis. Here, we describe a deep learning approach to predict long-term well performance from a moderate duration of well monitoring data.
In this study, we first developed data processing procedures for oilfield time-series data and determined the proper selection of data sampling frequency, parameter combinations, and data structures for deep learning models. We then explored how deep learning (DL) models can be employed for well data analysis and how physics can be combined with DL models. A Recurrent Neural Network (RNN) is a type of sequential DL model that can be utilized for time-series data analysis. This approach preserves preceding information and yields the current response with memory of prior well behavior. Two candidate RNN models, the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM), were evaluated to determine how well they improve the accuracy and stability of well performance estimates. In addition, a novel combination of RNNs with Convolutional Neural Networks (CNNs), the Long- and Short-term Time-series network (LSTNet), was also investigated.
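The gating mechanism that lets a GRU carry memory of prior well behavior can be sketched in a few lines. The minimal NumPy forward pass below uses random, untrained weights and is purely illustrative: the update and reset gates blend the previous hidden state with the new sensor reading.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update: gates decide how much past behaviour to keep."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # blend memory and new input

def run_sequence(xs, hidden=4, seed=0):
    """Roll a GRU cell over a (time, features) array; weights are random."""
    rng = np.random.default_rng(seed)
    d = xs.shape[1]
    shapes = [(hidden, d), (hidden, hidden), (hidden,)] * 3
    params = [rng.standard_normal(s) * 0.1 for s in shapes]
    h = np.zeros(hidden)
    for x in xs:
        h = gru_step(x, h, params)
    return h
```

Because the new state is a gated convex blend of the old state and a tanh candidate, the hidden values stay bounded however long the sequence runs, which is what lets the cell hold information over long monitoring windows.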
These models were tested and compared on the public production datasets from the Volve Field. Both GRU and LSTM achieved higher accuracy in performance prediction than the simple RNN. In cases of frequent well shut-in and opening, the simple RNN's failure to capture fast pressure responses and extreme fluctuations ultimately leads to high error. In contrast, LSTNet is more robust to frequent or significant well variations. With advanced deep learning structures, engineers can interpret long-term reservoir performance from responses estimated by deep learning models, instead of performing costly well tests or shut-ins.
Accurate estimation of mud weight (MW) helps to preserve wellbore stability in real-time drilling operations. Determination of the proper MW requires a correct understanding of the stress field, natural fractures, pore pressure, rock strength, borehole trajectories, etc. It is a problematic task, especially in highly inclined wells, deviated wells, and near salt formations, due to uneven variations in wellbore stresses. Proper MWs are difficult to apply at target depths of unstable formations because of uncertainties inside the wellbore, and no reliable tools or techniques are available that can precisely determine the optimum MW. This paper proposes a novel and more convenient approach to estimate the safe MW for deviated wells using surface-measured data. In this study, Bagging and Random Forest ensembles have been utilized to model the relationship between sensor-measured variables and MW. The proposed framework has been trained and tested on real-time Norwegian post-drilling data. Artificial neural networks (ANNs) and support vector regression (SVR) have also been utilized for comparison. The analysis of prediction results clearly reveals that the Random Forest ensemble achieved the highest coefficient of correlation and the minimum estimation errors, and the performance of the ensemble methods was superior to the ANN and SVR models. The proposed approach can be useful for determining the MW required at different depths of a reservoir formation and for maintaining wellbore stability during real-time operations.
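A rough illustration of the modelling setup described above (not the paper's actual dataset): a bagging and a random forest ensemble fit to synthetic sensor-style features with scikit-learn. The feature stand-ins and the simple linear MW relation below are assumptions made for the sketch.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
# Synthetic surface-measured variables (stand-ins for real sensor channels,
# e.g. WOB, RPM, torque, standpipe pressure) and a toy MW relation in ppg.
X = rng.normal(size=(500, 4))
mw = 9.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, mw, random_state=0)
models = {
    "bagging": BaggingRegressor(n_estimators=100, random_state=0),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
scores = {name: r2_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

The same fit/score loop extends naturally to ANN and SVR baselines for the comparison the abstract describes; only the dictionary entries change.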
Well integrity engineers are commonly challenged with using limited resources, and even more limited data, when trying to identify which wells among their diverse inventory may be prone to damage and failure, the mechanisms and influential factors responsible for that potential damage, and the reasons why certain wells may pose the greatest risk. Furthermore, these engineers are often uncertain about which parameters should be tracked; which inspection methods should be conducted, in which wells, and at what frequency; and how asset risks can be adequately determined and relayed to management to prioritize near-term and future financial investments in well integrity and decommissioning cost centres. In this paper, an approach and workflow are described for how a combination of reliability and risk methods, parameter-based damage models, and available field data can be used to develop a tool that asset integrity and operations personnel use to risk-rank wells by probability of failure and associated consequences. Additionally, this paper illustrates how the approach and models developed are adaptable both to the damage mechanisms specific to the application and to the data and parameters currently being measured or readily obtained, or to related variables that can be used as suitable proxies. As experience and history build, adding to the understanding and prioritization of damage mechanisms and key parameters and improving the estimated probabilities of failure due to these mechanisms, the knowledge is fed back into the model to improve its predictive capabilities. This paper also describes how the methodology was applied by a commercial SAGD operator to develop a subsurface isolation risk assessment tool tailored to their wells, their application conditions, and the parameters that they measure.
The types of static and dynamic parameters that this tool considers, including geologic, well design, construction, and operational data, are also illustrated, as is how the tool is being used to prioritize injection and production wells by relative risk. Illustrative examples are presented of how well, pad, and asset risks are identified, rolled up across the asset, and summarized, and of how well integrity and risk metrics are communicated within the company. Ongoing activities to update and advance the risk-ranking model are also noted; in particular, potential opportunities to develop improved mechanistic and data-driven models and predictions of damage and failure likelihoods, based on pooled reliability data and information across the broader thermal recovery sector. (SPE-196081-MS)
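The core risk-ranking arithmetic such a tool applies, risk as probability of failure times consequence, then rolled up per pad, can be sketched with hypothetical wells. The well names, PoF values, and the 1-10 consequence scale below are invented for illustration.

```python
# Hypothetical parameter-based risk ranking: each well gets a probability
# of failure (PoF) from a damage model and a consequence score; the well
# risk is their product, and pad risk is the sum over the pad's wells.
wells = [
    {"well": "P-01", "pad": "A", "pof": 0.02, "consequence": 8},
    {"well": "I-02", "pad": "A", "pof": 0.10, "consequence": 5},
    {"well": "P-03", "pad": "B", "pof": 0.01, "consequence": 9},
]
for w in wells:
    w["risk"] = w["pof"] * w["consequence"]

# Rank wells by risk, highest first
ranked = sorted(wells, key=lambda w: w["risk"], reverse=True)

# Roll up well risks to the pad level
pad_risk = {}
for w in wells:
    pad_risk[w["pad"]] = pad_risk.get(w["pad"], 0.0) + w["risk"]
```

In practice the PoF would come from the parameter-based damage models the paper describes rather than a fixed number, but the ranking and roll-up structure is the same.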
Partnerships with big tech, tech startups, and innovative service companies--and the merging of their data, cloud, and software applications--are proving essential for operators in the scaling phase of digital deployment. Equinor has been among the first international oil companies to actively seek out and form such alliances. The Norwegian operator is in the process of leveraging its massive collection of data by making it accessible both inside and outside the company to improve its next generation of upstream projects--a task so big that the company certainly cannot go it alone. "The challenge is not what data to share but to define the rules of the game for how to share [the data]," said Anders Opedal, Equinor executive vice president of technology, projects, and drilling, during the recent Halliburton Landmark Innovation Forum and Expo (LIFE 2019) in Houston. To overcome the industry's inclination toward data protectionism, Equinor became a founding member of the Open Subsurface Data Universe (OSDU) initiative, a global collaboration between most of the world's largest operators and service firms to define standards for an open-data architecture for subsurface data.
A hydrocarbon find has always been an exploration geologist’s adventure and has remained at the forefront of the E&P cycle for the survival of the oil and gas industry. Big and easy finds are a thing of the distant past; therefore, the quest has shifted beyond conventional sandstones and carbonates to more complex unconventional areas: low porosity, low permeability, low resistivity, tight and ultra-tight, HPHT, shale, CBM, gas hydrates, and any other possible regime, including deeper, geologically complex, and seismically opaque features such as salt, basalt, sub-basalt, and even basement.
This programme takes place 8 July 2019 and will be hosted alongside the SPE Workshop: Digitalisation—Disruption or Evolution? The Volve field, located 200 km west of Stavanger at the southern end of the Norwegian sector, was decommissioned in September 2016 after 8.5 years in operation, more than twice as long as originally planned. Volve reached a recovery rate of 54%, and in March 2016 the licensee decided to shut down its production permanently. The field was originally scheduled for 3–5 years of operation. At plateau, Volve produced some 56,000 barrels per day and delivered a total of 63 million barrels.
Copyright 2019 held jointly by the Society of Petrophysicists and Well Log Analysts (SPWLA) and the submitting authors. Today, many machine learning techniques are regularly employed in petrophysical modelling, such as cluster analysis, neural networks, fuzzy logic, self-organising maps, genetic algorithms, and principal component analysis. While each of these methods has its strengths and weaknesses, one of the challenges for most existing techniques is how best to handle the variety of dynamic ranges present in petrophysical input data. Mixing input data with logarithmic variation (such as resistivity) and linear variation (such as gamma ray) while effectively balancing the weight of each variable can be particularly difficult to manage. DTA was conceived based on extensive research conducted in the field of Computational Fluid Dynamics (CFD). This paper focuses on the application of DTA to petrophysics and its fundamental distinction from other statistical methods adopted in the industry. Case studies are shown predicting porosity and permeability for a variety of scenarios using the DTA method and other techniques; the results from the various methods are compared, and the robustness of DTA is illustrated. The example datasets are drawn from public databases within the Norwegian and Dutch sectors of the North Sea and Western Australia. Some have a rich set of input data, including logs, core, and reservoir characterisation, from which to build a model, while others have relatively sparse data, allowing an analysis of the effectiveness of the method with both rich and poor training data. The paper concludes with recommendations on the best way to use DTA in real time to predict porosity and permeability.
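DTA itself is not detailed in the abstract, but the dynamic-range problem it describes, mixing a log-varying input such as resistivity with a linear-varying one such as gamma ray, is commonly handled by transforming before scaling. A minimal sketch, with made-up log values, of one standard way to put both on comparable footing:

```python
import numpy as np

def balance_inputs(resistivity, gamma_ray):
    """Put a log-varying and a linear-varying curve on comparable scales:
    log10 first compresses the decades of resistivity variation, then both
    inputs are z-scored so neither dominates a distance- or weight-based model."""
    log_res = np.log10(resistivity)
    z = lambda a: (a - a.mean()) / a.std()
    return np.column_stack([z(log_res), z(gamma_ray)])

# Illustrative values only: resistivity spans ~4 decades, GR is roughly linear
res = np.array([0.5, 2.0, 20.0, 200.0, 2000.0])   # ohm-m
gr = np.array([25.0, 40.0, 80.0, 110.0, 140.0])   # API units
X = balance_inputs(res, gr)
```

Without the log transform, the high-resistivity samples would dwarf everything else in any Euclidean-distance or gradient-based method; after it, both columns contribute on the same order.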
The seismic shift in the data analytics landscape after the Macondo disaster has produced an intensive focus on the accuracy and precision of pore pressure and petrophysical parameter prediction.
Oil price forecasting has been shown to be challenging, if not impossible, over the long term. However, the oil price has a major impact on Exploration and Production projects.
The historical Project Realized Oil Price (PROP) can be calculated for example projects by summing the total project revenue using actual oil prices and dividing by the total amount of oil produced. For different starting dates of example projects, the PROP changes. By determining the PROP for different starting times, a Cumulative Distribution Function (CDF) can be derived. Adjusting this CDF for expected "half cycle breakeven costs" at the low limit and for demand considerations at the high limit leads to a PROP range that can be used for future project evaluation.
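The PROP definition and its empirical CDF follow directly from the description above. The monthly price path and the 24-month production profile in this sketch are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def prop(prices, volumes):
    """Project Realized Oil Price: total revenue / total oil produced."""
    prices, volumes = np.asarray(prices), np.asarray(volumes)
    return (prices * volumes).sum() / volumes.sum()

# Hypothetical monthly oil prices ($/bbl) over 5 years and a declining
# 24-month production profile (volumes per month, arbitrary units)
prices = 40 + 30 * np.sin(np.linspace(0, 6, 60))
profile = np.linspace(2.0, 0.5, 24)

# PROP for every possible project start month, then the empirical CDF
props = np.sort([prop(prices[s:s + 24], profile)
                 for s in range(len(prices) - 24)])
cdf = np.arange(1, len(props) + 1) / len(props)
```

Each start month yields one realized price; sorting those values and pairing them with their rank fractions gives exactly the start-date CDF the text derives, which can then be clipped at breakeven-cost and demand-based limits.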
Including PROP ranges in project evaluation allows for selection of the most attractive development option, Value of Information analysis, and calculation of the project Probability of Economic Success (PES) including oil price uncertainty.
Furthermore, using PROP ranges rather than oil price scenarios enables a distinction between short-term budget planning and long-term project development. For budget planning, a scenario approach is suggested, while for long-term planning PROP ranges should be used. Basing long-term planning on PROP ranges leads to less fluctuation in staff planning and only small annual adjustments in PROP range forecasting. Also, using PROP ranges results in increasing PES project hurdles at low oil prices and lower PES hurdles at high oil prices; hence, at low oil prices the risk averseness of the company is increased. Another effect of using PROP ranges is that, at high oil prices, the robustness of projects to low oil prices is included in the assessment.
To investigate the effect of PROP ranges on portfolio and project PES hurdles, a simplified linear-fit model was developed. The results showed that the project PES hurdles in a Value at Risk assessment can be determined by applying the linear-fit model to quantify the oil price dependency, and the required individual project PES hurdles can be adjusted with the linear-fit model to account for oil price uncertainty.
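A linear-fit model of required PES hurdle versus oil price, in the spirit of the one described, can be as simple as a first-order polynomial fit. The sample points below are invented; only the direction (hurdles fall as prices rise) mirrors the behaviour the text reports.

```python
import numpy as np

# Hypothetical (oil price, required project PES hurdle) pairs from a
# portfolio assessment -- illustrative numbers only
price = np.array([30.0, 45.0, 60.0, 75.0, 90.0])      # $/bbl
pes_hurdle = np.array([0.85, 0.75, 0.65, 0.58, 0.50])

# First-order fit quantifies the oil price dependency of the hurdle
slope, intercept = np.polyfit(price, pes_hurdle, 1)

def required_hurdle(p):
    """Adjust the project PES hurdle for an assumed oil price level."""
    return slope * p + intercept
```

A negative slope encodes the stated behaviour: higher hurdles (more risk averseness) at low prices, lower hurdles at high prices.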
A high rate of penetration (ROP) is considered one of the most sought-after targets when drilling a well. While physics-based models determine the importance of drilling parameters, they fail to capture the extent or degree of influence of the interplay of the different dynamic drilling features. Parameters such as WOB, RPM, flow rate, MSE, bit run distance, and gamma ray were examined for each rock formation in the Volve field in the North Sea. Ensuring an adequate ROP while controlling the toolface orientation is quite challenging; nevertheless, it helps follow the planned well trajectory and eliminates excessive doglegs that lead to wellbore deviations. Five different machine learning algorithms were initially implemented to optimize ROP and create a less tortuous borehole. The collected data were cleaned and preprocessed and used to structure and train Random Forest, Support Vector Regression, Ridge Regression, LASSO, Gradient Boosting, and XGBoost models, among others, and the appropriate hyperparameters were selected. A successful model was chosen based on maximized ROP, minimized deviation from the planned trajectory, and lower CPF. A mean absolute percentage error (MAPE) of 15% was achieved with Gradient Boosting, followed by AdaBoost. The algorithms have demonstrated competence on the historical dataset; accordingly, they will be further tested on blind data to serve as a real-time system for drilling optimization and to enable a fully automated system.
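The compare-ensembles-by-error workflow can be sketched as follows. The features are synthetic stand-ins for WOB, RPM, and flow rate, and the toy ROP relation is an assumption made for the sketch, not the Volve data or the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Synthetic stand-ins for the drilling features named in the abstract
wob = rng.uniform(5, 30, 800)      # weight on bit, klbf (illustrative)
rpm = rng.uniform(60, 180, 800)    # rotary speed
flow = rng.uniform(300, 900, 800)  # flow rate, gpm
# Toy linear ROP response with noise, m/hr -- an assumption, not a real model
rop = 0.6 * wob + 0.15 * rpm + 0.02 * flow + rng.normal(0, 2, 800)

X = np.column_stack([wob, rpm, flow])
X_tr, X_te, y_tr, y_te = train_test_split(X, rop, random_state=0)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

models = {"gbm": GradientBoostingRegressor(random_state=0),
          "rf": RandomForestRegressor(random_state=0)}
errors = {name: mape(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Extending the dictionary with SVR, Ridge, LASSO, and XGBoost entries reproduces the multi-algorithm comparison the abstract describes; the model with the lowest held-out MAPE would then be carried forward to blind-data testing.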