Abstract Accurate prediction of machinery failure is a challenging and important task for the offshore industry. Early diagnosis and prognosis of machinery failure have become a necessity to drive high levels of safety and performance in oil and gas operations. Prognostics enabled by data-driven machine learning techniques offer new insights into the health and performance of machinery and thereby improve operational efficiency. Advances in this topic are important because of the challenging nature of prognostics and the large degree of uncertainty associated with it. In this work, we demonstrate a practical approach to building robust predictive machine learning models capable of detecting critical machinery failure early. In addition, a review of recent state-of-the-art machine learning approaches employed in modeling machinery failure prediction is presented. The predictive models discussed here are based on various supervised machine learning techniques as well as on different input features. These newer algorithms include bagging, boosting, support vector machines and random forests, all of which have been widely applied in predictive models. Although it is evident that machine learning methods can improve our understanding of failure progression, appropriate validation schemes are necessary to evaluate machine learning models so that they can assist in effective and accurate decision making. Therefore, we illustrate different levels of evaluation methodology that must be satisfied before these methods can be trusted in everyday operational practice. The machine learning models discussed in this manuscript are then applied to a case of bearing failure in a wind turbine gearbox. A machine learning model utilizing XGBoost is proposed for prediction of remaining useful life with improved accuracy. This paper can also serve as a guideline for assessing machine learning data analytics methods for prognostics relevant to common machinery types on offshore assets.
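As a minimal illustration of the kind of XGBoost model the abstract proposes for remaining-useful-life (RUL) prediction, the Python sketch below trains a gradient-boosted regressor on synthetic condition-monitoring features. The feature set, data and hyperparameters are illustrative assumptions, not the configuration used in the paper.

```python
# Hedged sketch: XGBoost regressor for remaining-useful-life prediction.
# Features, data and hyperparameters are illustrative assumptions only.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical condition-monitoring features per bearing snapshot:
# RMS vibration, kurtosis, peak-to-peak amplitude, oil temperature.
n = 2000
X = rng.normal(size=(n, 4))
# Synthetic RUL target (hours), for demonstration only.
y = 500 - 80 * X[:, 0] + 30 * X[:, 1] + rng.normal(scale=20, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBRegressor(
    n_estimators=300,
    max_depth=4,
    learning_rate=0.05,
    objective="reg:squarederror",
)
model.fit(X_train, y_train)

print("MAE (hours):", mean_absolute_error(y_test, model.predict(X_test)))
```

In practice the features would be derived from processed vibration and temperature signals of the gearbox bearing, and the target from run-to-failure histories rather than a synthetic formula.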
Abstract Decline Curve Analysis (DCA) is one of the most common tools for predicting oil and gas well performance and determining an estimated ultimate recovery from historical production data. Probabilistic approaches have evolved to provide a measure of uncertainty in such estimates. However, engineers have held onto the belief that such quantification of uncertainty is largely subjective. Consequently, there is an innate reluctance to adopt probabilistic methodologies owing to their assumption of prior knowledge of the distributions of relevant parameters and reservoir properties. The objective of this paper is to explicate the development of an improved probabilistic approach to estimating reserves and well performance from historical production data. The methodology precludes the assumption of prior distributions by adopting a bootstrap workflow that constructs probabilistic estimates with specified confidence intervals from historical production data. It is a statistical approach that assesses the uncertainty of estimates objectively, removing the subjective nature of prior assumptions. We discuss an automated selection workflow for time-series and forecast periods, resulting in more robust and accurate DCA. The methodology follows a "more rigorous model-based bootstrap algorithm" that encapsulates the appropriate steps to preserve the inherent characteristics of a time-series data set exhibiting an overall decline trend. The paper also explores other advanced analytical techniques for extracting knowledge from historical upstream data so as to identify wells within an asset that require remediation based on real-time performance observations. Cluster analysis can make it easier for engineers to analyze wells by separating them into groups based on decline-curve shapes (patterns) and other reservoir properties. We define processes that implement soft computing techniques to aggregate the DCA results with disparate upstream data via a data-driven methodology. Pattern recognition and classification are important steps that underpin data mining, and we explicate a suite of workflows to enable a well optimization solution.
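The sketch below is a hedged illustration of a model-based bootstrap for DCA, under the common assumption of an Arps hyperbolic decline: fit the model, resample the fitted residuals, refit each replicate and read percentile bounds off the forecast distribution. The simple i.i.d. residual resampling shown here ignores autocorrelation, which the paper's "more rigorous model-based bootstrap algorithm" is designed to preserve; all rates and parameters are synthetic.

```python
# Hedged sketch of a model-based bootstrap for decline-curve analysis.
# Production data and decline parameters are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic rate decline: q(t) = qi / (1 + b*di*t)**(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

rng = np.random.default_rng(1)
t = np.arange(1, 61, dtype=float)                    # 60 months of history
q_true = arps_hyperbolic(t, qi=1000.0, di=0.10, b=0.8)
q_obs = q_true * (1.0 + rng.normal(scale=0.05, size=t.size))

# Base fit on the observed production history.
popt, _ = curve_fit(arps_hyperbolic, t, q_obs,
                    p0=(q_obs[0], 0.1, 0.5), maxfev=10000)
residuals = q_obs - arps_hyperbolic(t, *popt)

# Model-based bootstrap: add resampled residuals to the fitted curve,
# refit, and forecast the rate at month 120 for each replicate.
t_fc = 120.0
forecasts = []
for _ in range(500):
    q_boot = arps_hyperbolic(t, *popt) + rng.choice(residuals, size=t.size)
    try:
        p_b, _ = curve_fit(arps_hyperbolic, t, q_boot, p0=popt, maxfev=10000)
        forecasts.append(arps_hyperbolic(t_fc, *p_b))
    except RuntimeError:
        continue  # skip replicates that fail to converge

p10, p50, p90 = np.percentile(forecasts, [10, 50, 90])
print(f"Rate at month {t_fc:.0f}: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")
```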
Modern drilling systems have a significant number of sensors already integrated into their design. However, these data are generally underutilized with respect to maintaining class notations or periodic survey activities. Analyzing existing data can yield reductions in non-productive time (NPT) in an era where efficiency is the key to success in offshore exploration and production. A Joint Development Project (JDP) with class, the rig operator and equipment vendors was established to understand how data analytics may support class requirements while also increasing availability and reliability through condition-based monitoring. The JDP utilized data from 15,000 sensors onboard a drillship, which generated 36 billion data points. The sensor data were combined with unstructured data from operational logs and maintenance records to find ways to demonstrate compliance with class and regulatory requirements via digital techniques. Ultimately this approach could remove the need for calendar-based inspections while optimizing preventative maintenance tasks and frequencies, both of which increase the equipment availability of drilling systems. The JDP produced several significant deliverables. The first is a draft assurance framework describing how data streams can be used by drilling units to support classification requirements without the need for intrusive inspections. The second is a data quality assessment report demonstrating how data quality can be measured to ensure high confidence in alternative compliance approaches. The third is an optimized preventative maintenance program based on analytics of the existing data streams. The fourth involved creating "Big Data" analytic models (i.e., machine learning) for anomaly detection and failure mitigation. The JDP demonstrated that data analytics on existing data streams can be used to demonstrate compliance with classification survey requirements and to increase availability. An assurance process for data quality creates trust in the analytics before important decisions about inspections and maintenance are made. This is especially true for safety-critical systems, where machine learning models may be used to mitigate and address reliability issues. This paper describes the approach taken in the JDP and the main findings from this real-world example of data analytics applied to current sensor and logging data. The project's major novel developments were the assurance framework, the data quality assessment framework and an approach for using sensor-based information in this classification context. The use and treatment of sensor data in general is at the exploratory, leading edge of such applications in the industry.
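The JDP's anomaly detection models are not public; as one plausible baseline for the kind of "Big Data" analytic model the abstract describes, the sketch below runs an isolation forest over synthetic multichannel equipment readings. The channel names, value ranges and contamination setting are assumptions.

```python
# Hedged sketch: unsupervised anomaly detection on multichannel sensor
# data with an isolation forest. Channels and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Hypothetical drilling-equipment channels: motor current (A), bearing
# temperature (deg C), vibration RMS (g), sampled once per minute.
normal = rng.normal(loc=[40.0, 65.0, 0.5], scale=[2.0, 1.5, 0.05],
                    size=(5000, 3))
faulty = rng.normal(loc=[55.0, 80.0, 1.2], scale=[3.0, 2.0, 0.10],
                    size=(20, 3))
X = np.vstack([normal, faulty])

detector = IsolationForest(contamination=0.005, random_state=0)
labels = detector.fit_predict(X)        # -1 flags an anomalous sample

print("flagged samples:", int((labels == -1).sum()))
```

Flagged windows would then feed the maintenance workflow, so that preventative tasks are triggered by evidence of degradation rather than by the calendar.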
Bello, Oladele (Baker Hughes Incorporated) | Yang, Don (Baker Hughes Incorporated) | Lazarus, Sony (Baker Hughes Incorporated) | Wang, Xiaowei Shawn (Baker Hughes Incorporated) | Denney, Tommy (Baker Hughes Incorporated)
Abstract The instrumentation of reservoirs and wells using distributed downhole sensor and information-communication systems has enabled significant advances in their management. Examples include monitoring of well integrity and reservoir compaction; production monitoring of artificial lift wells; data integration for short-term history matching, reservoir characterization and geologic model updating; flow rate allocation, inflow profiling, probabilistic production forecasting and downhole set point optimization in intelligent well completions; matrix acidizing and hydraulic fracturing characterization; dynamic estimation of petrophysical properties; dynamic geomechanical properties estimation; joint inversion of distributed downhole fiber sensing and time-lapse seismic data for anisotropic permeability estimation; skin analysis; reservoir and well performance diagnosis; reservoir analysis and parameter estimation; multiphase flow assurance; and many more. Expanding the benefits of distributed downhole sensors is currently driving the need for big data infrastructures and associated dynamic data-driven application systems for reservoir characterization, simulation and management. However, the significant cost of setting up and managing the infrastructure to handle distributed downhole sensing data, such as distributed temperature sensors (DTS), discrete distributed temperature sensors (DDTS), discrete distributed strain sensors (DDSS) and distributed acoustic sensors (DAS), is a major challenge. These distributed downhole data sources are characterized by high volume, variety, velocity, veracity, variability and visualization demands. Currently, the systems for transferring, storing, processing, archiving, retrieving and interpreting distributed downhole sensing data in the petroleum industry still face substantial challenges. Some examples are the high cost of hardware and software, ongoing system support and maintenance, a complicated implementation and deployment framework that is difficult to sustain, scale and upgrade, and the need for compatibility among data provided by different vendors. The objective of this paper is to present a platform which offers an automated one-stop shop for distributed downhole sensing data transmission, management and interpretation. This platform employs a big data infrastructure and allows for joint inversion of production and distributed downhole sensing data in a wide range of online real-time reservoir and well monitoring applications. This paper describes a vendor-neutral, scalable, web-based enterprise distributed downhole sensing infrastructure for data exchange, management and visualization. The system also allows for calibration of DTS interrogators and integration with PI systems. The platform applies a multi-tier client-server architecture, scalable distributed databases, the Production Markup Language (PRODML) and web services technologies to provide a reliable mechanism for bringing distributed downhole sensing data from the field site to the corporate network in real time, enabling users to visualize the data anywhere, at any time. A framework for cleaning distributed downhole sensing data streams in real time is developed to render the data produced by sensors usable for analysis (removing problems due to noise, outliers, measurement drifts, incorrect calibration and other issues).
Using the distributed downhole sensing data management platform, we combine information from physics-based models with cleaned distributed downhole sensing live data to analyze anisotropy in permeability and skin in multilayer formations, estimate inflow profiles, determine multilayer formation petrophysical properties, and estimate geomechanical and reservoir compaction properties. This paper demonstrates the capability of the distributed downhole sensor data infrastructure and information integration platform through the use of different sets of distributed downhole sensing data in various applications.
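As an illustration of one step such a cleaning framework might perform on a DTS trace, the sketch below despikes a synthetic temperature-versus-depth profile using a rolling median and a MAD-based outlier test. The window size and threshold are assumptions, not the paper's actual pipeline.

```python
# Hedged sketch: despiking a distributed-temperature (DTS) trace with a
# rolling median and a MAD-based outlier test. Parameters are assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

depth = np.arange(0.0, 2000.0, 1.0)                 # metres along the fibre
temp = 20.0 + 0.03 * depth + rng.normal(scale=0.2, size=depth.size)
temp[[250, 900, 1500]] += 8.0                       # injected spikes

trace = pd.Series(temp, index=depth)
med = trace.rolling(window=21, center=True, min_periods=1).median()
mad = (trace - med).abs().rolling(window=21, center=True,
                                  min_periods=1).median()
# 1.4826 * MAD approximates a robust standard deviation; flag > 5 sigma.
outlier = (trace - med).abs() > 5.0 * (1.4826 * mad + 1e-9)

cleaned = trace.where(~outlier, med)                # replace spikes
print("points repaired:", int(outlier.sum()))
```

A production framework would chain several such steps (drift correction, recalibration, gap handling) before the data reach the interpretation applications listed above.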
Abstract Quantitative evaluation of steam-assisted gravity drainage (SAGD) performance in heterogeneous reservoirs is important for reservoir management and optimization of development strategies in oil sands operations. Although conventional commercial simulators are capable of detailed appraisal of SAGD recovery performance, they are usually deterministic and computationally demanding. Artificial intelligence approaches can be employed as a complementary tool for production forecasting and for pattern recognition of highly non-linear relationships between system variables. In this paper, a comprehensive dataset consisting of petrophysical log measurements and production and injection profiles is assembled from various publicly available sources, encompassing ten different SAGD operating fields with approximately two hundred well pairs. Only fields with complete data records are selected. An artificial neural network (ANN) is employed to facilitate the production performance analysis. Predicting (input) variables descriptive of reservoir heterogeneities and operating constraints, including log-derived petrophysical parameters, a dimensionless shale index, the effective numbers of producers and injectors for a given well pair, total production time and cumulative steam injection, are formulated, while parameters pertaining to cumulative production and steam-to-oil ratio are treated as prediction (output) variables. Principal component analysis (PCA) is performed to reduce the dimensionality of the input variables, improve prediction quality and limit over-fitting. Clustering analysis is integrated to identify internal groupings among the data. Finally, statistical analysis is conducted to study the influence of data uncertainty, arising from the limited size of the field dataset and imprecise log-interpretation criteria, together with model parameter uncertainty due to the learning algorithm and initialization, on the final ANN predictions. Workflows involving Monte Carlo and bootstrapping methods are applied successfully. A comprehensive uncertainty analysis using an actual SAGD dataset is a novel contribution. The modeling results are demonstrated to be both reliable and acceptable. This paper demonstrates that the combination of artificial-intelligence approaches and data-mining analysis can be implemented in a practical manner to analyze large amounts of field data, which are often prone to uncertainties and errors, with high reliability and feasibility. Considering that many important variables such as bottom-hole pressures, PVT properties, permeability measurements, multi-phase flow functions and thermal conductivities are typically unavailable in the public domain and, hence, are missing from the dataset, this work demonstrates how practical data-driven analysis approaches can be tailored to construct models capable of predicting SAGD recovery performance from only log-derived and operational variables. Another advantage of the proposed approach is that it can be updated when new information is obtained.
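The sketch below mirrors the general shape of the workflow the abstract describes: PCA to reduce input dimensionality, a feed-forward ANN to predict a SAGD performance variable, and bootstrap resampling (with varied network initialization) to quantify prediction uncertainty. The synthetic data, component count and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: PCA + feed-forward ANN with bootstrap uncertainty, in the
# spirit of the described SAGD workflow. All data and sizes are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

rng = np.random.default_rng(4)

# Synthetic stand-in for ~200 well pairs with log-derived and operational
# inputs (e.g., porosity, shale index, cumulative steam injection, ...).
n, d = 200, 8
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=n)  # cum.-oil proxy

def build_model(seed):
    """Pipeline: scale inputs, reduce with PCA, fit a small ANN."""
    return make_pipeline(
        StandardScaler(),
        PCA(n_components=4),                  # dimensionality reduction
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=seed),      # varied initialization
    )

# Bootstrap the training set to capture data uncertainty; vary the seed
# to capture initialization uncertainty, as the abstract discusses.
x_new = rng.normal(size=(1, d))
preds = []
for seed in range(50):
    Xb, yb = resample(X, y, random_state=seed)
    preds.append(build_model(seed).fit(Xb, yb).predict(x_new)[0])

print(f"prediction: {np.mean(preds):.2f} +/- {np.std(preds):.2f}")
```

The spread of the bootstrap predictions gives the kind of uncertainty band the paper's Monte Carlo and bootstrapping workflows report for the field-scale ANN.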