Cheng, Zhong (CNOOC Ener Tech-Drilling & Production Co.) | Xu, Rongqiang (CNOOC Ener Tech-Drilling & Production Co.) | Chen, Jianbing (CNOOC Ener Tech-Drilling & Production Co.) | Li, Ning (CNOOC Ener Tech-Drilling & Production Co.) | Yu, Xiaolong (CNOOC Ener Tech-Drilling & Production Co.) | Ding, Xiangxiang (CNOOC Ener Tech-Drilling & Production Co.) | Cao, Jie (Xi'an Shiyou University)
Abstract The digital oil and gas field is a highly complex integrated information system, and as business scale and needs continue to expand, oil companies constantly raise new and higher requirements for digital transformation. Previous system construction proceeded in multiple phases, with multiple vendors, technologies, and methods, resulting in data silos and fragmentation. The consequence of these data management problems is that decisions are often made using incomplete information. Even when the desired data is accessible, the effort required to gather and format it may limit the amount of analysis that can be performed before a timely decision must be made. Therefore, building an integrated data platform that provides unified data services through advanced computer technologies such as big data, cloud computing, and the IoT (Internet of Things) has become our current goal for improving the company's bottom line. As part of the digital oilfield, offshore drilling operations are one of the areas where data processing and advanced analytics can be used to increase revenue, lower costs, and reduce risks. Building a data mining and analytics engine that uses multiple drilling data sources is a difficult challenge. The data processing workflow and the timeliness of the analysis are major considerations in developing a data service solution. Most current analytical engines require more than one tool to form a complete system. Therefore, adopting an integrated system that combines all required tools significantly helps an organization address these challenges in a timely manner. This paper provides a technical overview of the offshore drilling data service system currently developed and deployed. The data service system consists of four subsystems.
They are the static data management system, covering structured data (job reports) and unstructured data (design documentation and research reports); the real-time data management system; the third-party software data management system, which integrates major industry software databases; and the cloud-based visual application system, which delivers dynamic analysis results for timely optimization of operations. Through a unified logical data model, the system provides quick access to third-party software data and application support. These subsystems are fully integrated and interact with each other as microservices, providing a one-stop solution for real-time drilling optimization and monitoring. The data service system has become a powerful decision support tool for the drilling operations team. The lessons learned and experience gained from the system services presented here provide valuable guidance for future E&P demands and the industry's digital transformation.
Abstract Casing deformation and tubing eccentricity are concerns in the oil and gas industry for safety and operational reasons. Casing deformation and tubing eccentricity originate from various sources such as well completion, corrosion, formation swelling, collapse, and salt dome creep. It is important to implement a well-integrity surveillance program covering all casing and tubing strings over the full well life cycle, from initial completion to abandonment. However, there has been no effective logging method to evaluate, through tubing, the condition of the casing string for deformation and eccentricity. This paper describes a new Deformation-and-Eccentricity (DEC) tool, based on electromagnetic technology, designed to measure casing deformation and tubing eccentricity while logging inside completion tubing. The DEC tool generates a unique compressed-and-focused magnetic field that provides an increased signal-to-noise ratio (SNR), and employs an array of magnetic sensors to measure the magnetic flux density distribution azimuthally around the tool. The compressed-and-focused magnetic field is designed to (1) saturate the magnetic flux of the tubing and (2) inject more magnetic flux into the first casing behind the tubing, thereby increasing measurement sensitivity and SNR. The sensor matrix measures flux density changes that correspond to variations in the distance between tubing and casing. The high-resolution azimuthal magnetic sensor matrix delivers high-accuracy measurements, which are used to image the flux density changes. Finite-element-based forward modeling and an optimized Gaussian process regression method have been developed to process the raw logging data. DEC has a built-in orientation measurement based on a gyro and accelerometers, which is used to align the deformation and eccentricity images and index curves, as well as the tubing thickness image.
The tool is specified to an accuracy of 1% eccentricity ratio and 5% deformation ratio for casing ODs up to 13-3/8 in. DEC technology provides an advanced answer product for through-tubing casing deformation and eccentricity measurements in downhole well-integrity and plug-and-abandonment applications. When combined with other well-integrity measurements, such as multi-finger caliper and multi-pipe thickness logs, a complete well-integrity evaluation can be achieved throughout the life cycle of a well. For example, significant casing deformation can often indicate potentially damaged cement behind the casing. Other applications of the technology include locating tubing clamps for fiber-optic cables and control lines and determining the orientation of multi-string tubing completions. The tool's performance has been validated through research simulations, lab tests, and field trials. The paper includes a field case study of a deviated gas production well with tubing buckling and a casing micro-dogleg.
Abstract Lithological facies classification using well logs is essential in reservoir characterization. Facies are conventionally classified manually from characteristic log responses, which is challenging and time-consuming for geologically complex reservoirs because log responses vary widely within each facies. To overcome this challenge, machine learning (ML) is helpful for determining characteristic log responses. In this study, we classified lithofacies by applying ML to conventional well logs from a volcanic formation onshore northeast Japan. The volcanic formation of the Yurihara oil field is petrologically classified into five lithofacies: mudstone, hyaloclastite, pillow lava, sheet lava, and dolerite, with pillow lava being the predominant reservoir. The first four lithofacies are members of the Miocene volcanic system, into which dolerite later intruded at random. Understanding the distribution of omnidirectional tight dykes at the well location is important for estimating the distribution of potential near-lateral seals compartmentalizing the reservoir. Facies are best classified from core data, which are unfortunately available in only a limited number of wells. Conventional logs, with the help of borehole image logs, have been used for facies classification in most wells. However, distinguishing dolerite from sheet lava by manual classification is very ambiguous, as they appear similar on these logs. Therefore, automated clustering of well logs with ML was attempted for facies classification. All available log data in the target well were audited prior to applying ML. A total of 10 well logs are available in the reservoir depth interval. To prioritize the logs for clustering, the information content of each log was first analyzed by principal component analysis (PCA). The dimension of the variable space was reduced from 10 to 5 using PCA.
The final set of five variables (gamma ray, density, formation photoelectric factor, neutron porosity, and laterolog resistivity) was used in the subsequent clustering process. ML was applied to the five selected logs for automated clustering. Cross-entropy clustering (CEC) was first initialized using the k-means++ algorithm. Multiple random initializations were run to find the global minimum of the cost function, which automatically yielded the optimized number of classes. The resulting classes were further refined by a Gaussian mixture model (GMM) and subsequently by a hidden Markov model (HMM), which takes into account the serial dependency of classes between successive depths. The resulting 14 classes were manually merged into 5 classes with reference to the lithofacies defined by the borehole image log analysis. The difference in log responses between basaltic sheet lava and dolerite was too subtle to be captured with confidence by the conventional manual workflow, while the ML technique captured it successfully. The result was verified by petrological analyses of sidewall cores (SWCs) and cuttings. In this study, automated clustering with a combination of several ML algorithms was demonstrated to provide more efficient and reliable facies classification. The unsupervised learning approach, when applied in other wells, would provide supporting information for revealing the regional facies distribution and for understanding the dynamic behavior of fluids in the reservoir.
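The PCA-then-cluster workflow described above can be sketched in Python. This is an illustrative sketch on synthetic data, not the authors' implementation: the CEC initialization and the GMM/HMM refinement stages are replaced here by plain k-means with random restarts, and all variable names and values are assumptions.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project standardized logs onto the leading principal components."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Xs, rowvar=False))
    order = np.argsort(eigval)[::-1]              # largest variance first
    return Xs @ eigvec[:, order[:n_components]]

def kmeans(X, k, n_init=5, iters=50, seed=0):
    """Plain k-means with random restarts (a stand-in for CEC + GMM + HMM)."""
    rng = np.random.default_rng(seed)
    best_labels, best_cost = None, np.inf
    for _ in range(n_init):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            centers = np.array([X[labels == j].mean(axis=0)
                                if (labels == j).any() else centers[j]
                                for j in range(k)])
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels, cost = d.argmin(axis=1), d.min(axis=1).sum()
        if cost < best_cost:
            best_labels, best_cost = labels, cost
    return best_labels

# Synthetic stand-in for 10 well logs over one depth interval
rng = np.random.default_rng(1)
logs = rng.normal(size=(200, 10))
reduced = pca_reduce(logs, n_components=5)        # 10 variables -> 5
facies = kmeans(reduced, k=5)                     # class labels 0..4
```

In practice the number of clusters would come from the CEC cost function rather than being fixed, and the depth-sequential HMM smoothing would then be applied to the labels.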
Abstract In the modern oilfield, borehole images can be considered the minimal representative element of any well-planned geological model or interpretation. In the same borehole, it is common to acquire multiple images using different physics and/or resolutions. The challenge for any petrotechnical expert is to extract detailed information from several images simultaneously without losing the petrophysical information of the formation. This work shows an innovative approach to combining several borehole images into one new multi-dimensional, fused, high-resolution image that allows a qualitative petrophysical and geological interpretation at a glance while maintaining quantitative measurement properties. The new image is created by applying color mathematics and advanced image fusion techniques. In the first stage, low-resolution LWD nuclear images are merged into one multichannel, or multiphysics, image that integrates the petrophysical measurement information of each input image. A specific transfer function was developed that normalizes the input measurements into color intensities which, combined in an RGB (red-green-blue) color space, are visualized as a full-color image. The strong, bilateral connection between measurements and colors enables processing that can be used to produce ad-hoc secondary images. In the second stage, the resolution of the multiphysics image is increased by applying a specific type of image fusion: pansharpening. The goal is to inject the details and texture present in a high-resolution image into the low-resolution multiphysics image without compromising the petrophysical measurements. The pansharpening algorithm was developed especially for the borehole image application and compared with other established sharpening methods. The resulting high-resolution multiphysics image integrates all input measurements in the form of RGB colors together with the texture from the high-resolution image.
The image fusion workflow has been tested using LWD gamma ray, density, and photoelectric factor images and a high-resolution resistivity image. Image fusion is an innovative method that extends beyond the physical constraints of single sensors: the result is a unique image dataset that simultaneously contains geological and petrophysical information at the highest resolution. This work also gives examples of applications of the new fused image.
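The first fusion stage, a transfer function mapping three measurements into an RGB image, can be sketched as follows. The min-max normalization used here is an assumed stand-in for the paper's purpose-built transfer function, and the data are synthetic:

```python
import numpy as np

def to_rgb(gr, rhob, pef):
    """Map three azimuthal log images onto the R, G, B channels.
    Per-channel min-max scaling is an assumed stand-in for the
    paper's transfer function."""
    def norm(img):
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return np.stack([norm(gr), norm(rhob), norm(pef)], axis=-1)

# Synthetic low-resolution azimuthal images (depth x azimuthal sectors)
rng = np.random.default_rng(0)
gr   = rng.uniform(20, 150, size=(64, 16))   # gamma ray, gAPI
rhob = rng.uniform(2.0, 2.9, size=(64, 16))  # bulk density, g/cc
pef  = rng.uniform(1.5, 6.0, size=(64, 16))  # photoelectric factor, b/e
multiphysics = to_rgb(gr, rhob, pef)         # shape (64, 16, 3) in [0, 1]
```

Because each channel remains an invertible function of a single measurement, the color image retains the bilateral measurement-to-color connection the abstract describes.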
Degenhardt, John J. (W. D. Von Gonten Laboratories) | Ali, Safdar (W. D. Von Gonten Laboratories) | Ali, Mansoor (W. D. Von Gonten Laboratories) | Chin, Brian (W. D. Von Gonten Laboratories) | Von Gonten, W. D. (W. D. Von Gonten Laboratories) | Peavey, Eric (Shell Fellow-UROC / Texas A&M University)
Abstract Many unconventional reservoirs exhibit a high level of vertical heterogeneity in petrophysical and geomechanical properties. These properties often change on the scale of centimeters across rock types or bedding and thus cannot be accurately measured by low-resolution petrophysical logs. Nonetheless, the distribution of these properties within a flow unit can significantly impact targeting, stimulation, and production. In unconventional resource plays such as the Austin Chalk and Eagle Ford shale in south Texas, ash layers are the primary source of vertical heterogeneity throughout the reservoir. The ash layers vary considerably in distribution, thickness, and composition, but they generally have the potential to significantly impact the economic recovery of hydrocarbons by closing hydraulic fracture conduits through viscous creep and pinch-off. The identification and characterization of ash layers can be a time-consuming process that leads to wide variations in interpretations of their presence and potential impact. We seek to use machine learning (ML) techniques to facilitate rapid and more consistent identification of ash layers and other pertinent geologic lithofacies. This work involves high-resolution laboratory measurements of geophysical properties over whole core and analysis of those data with machine learning techniques to build novel high-resolution facies models that can make statistically meaningful predictions of facies characteristics in proximally remote wells where core or other physical data are not available. Multiple core wells in the Austin Chalk/Eagle Ford shale play in Dimmitt County, Texas, USA were evaluated.
Drill core was scanned at high sample rates (1 mm to 1 inch) using specialized equipment to acquire continuous high-resolution petrophysical logs. The general modeling workflow involved pre-processing of the high-sample-rate data and classification training using feature selection and hyperparameter estimation. Evaluation of the resulting trained classifiers using receiver operating characteristic (ROC) analysis determined that the blind-test ROC result for ash layers was lower than those of the better-constrained carbonate and high-organic mudstone/wackestone data sets. From this it can be concluded that additional consideration must be given to the set of variables governing the petrophysical and mechanical properties of ash layers before developing them as a classifier. Variability among ash layers is controlled by geologic factors that change their compositional makeup and, consequently, their fundamental rock properties. As such, some proportion of them are likely to be misclassified as high-clay mudstone/wackestone. Further refinement of the ash layer compositional variables is expected to improve ROC results for ash layers significantly.
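The ROC evaluation mentioned above reduces, for a binary classifier, to the area under the ROC curve, which equals the Mann-Whitney rank statistic: the probability that a randomly chosen positive sample outranks a randomly chosen negative one. A minimal, dependency-free sketch (not the authors' code; labels and scores here are illustrative):

```python
def roc_auc(labels, scores):
    """ROC area via the Mann-Whitney U statistic.  labels are 0/1,
    scores are the classifier's confidence for the positive class."""
    pairs = sorted(zip(scores, labels))
    # Assign average 1-based ranks, handling tied scores
    ranks = {}
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0
        for k in range(i, j):
            ranks[k] = avg
        i = j
    pos = [ranks[k] for k, (_, y) in enumerate(pairs) if y == 1]
    n_pos, n_neg = len(pos), len(pairs) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Perfectly separated scores give an AUC of 1.0
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # -> 1.0
```

An AUC near 1.0 indicates a well-constrained class, such as the carbonate facies above, while values closer to 0.5 reflect the weaker separability reported for the ash layers.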
Song, Lianteng (China National Petroleum Corporation) | Liu, Zhonghua (China National Petroleum Corporation) | Li, Chaoliu (China National Petroleum Corporation) | Ning, Congqian (China National Petroleum Corporation) | Hu, Yating (University of Electronic Science and Technology of China) | Wang, Yan (University of Electronic Science and Technology of China) | Hong, Feng (University of Electronic Science and Technology of China) | Tang, Wei (University of Electronic Science and Technology of China) | Zhuang, Yan (University of Electronic Science and Technology of China) | Zhang, Ruichang (University of Electronic Science and Technology of China) | Zhang, Yanru (University of Electronic Science and Technology of China) | Zhang, Qiong (University of Electronic Science and Technology of China)
Abstract Geomechanical properties are essential for safe drilling, successful completion, and exploration of both conventional and unconventional reservoirs, e.g., deep shale gas and shale oil. Typically, these properties can be calculated from sonic logs. However, in shale reservoirs it is time-consuming and challenging to obtain reliable logging data because of borehole complexity and lack of information, which often results in log deficiency and a high cost of recovering incomplete datasets. In this work, we propose using the bidirectional long short-term memory (BiLSTM) network, a supervised neural network algorithm widely used in sequential-data prediction, to estimate geomechanical parameters. Prediction from log data can be conducted in two ways: (1) single-well prediction, in which the log data from a single well are divided into training and testing data for cross-validation; and (2) cross-well prediction, in which a group of wells from the same geographical region is likewise divided into training and testing sets. The logs used in this work were collected from 11 wells in the Jimusaer Shale and include gamma ray, bulk density, and resistivity, among others. We compared five machine learning algorithms, among which BiLSTM showed the best performance, with an R-squared of more than 90% and an RMSE of less than 10. The predicted results can be used directly to calculate geomechanical properties, whose accuracy is also improved relative to conventional methods.
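The BiLSTM itself requires a deep learning framework (e.g., PyTorch or Keras) and is omitted here, but the single-well evaluation protocol and the reported metrics (R-squared and RMSE) can be sketched on synthetic data. The depth-ordered split, the linear stand-in regressor, and all values below are assumptions, not the authors' setup:

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def single_well_split(X, y, train_frac=0.7):
    """Depth-ordered split for single-well prediction: train on the
    upper interval, test on the lower one, so deeper samples never
    leak into training."""
    n = int(len(X) * train_frac)
    return X[:n], y[:n], X[n:], y[n:]

# Synthetic stand-in: three input logs and a sonic-like target
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # e.g. GR, RHOB, RT
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=500)
X_tr, y_tr, X_te, y_te = single_well_split(X, y)

# Linear least squares as a stand-in for the BiLSTM regressor
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred = X_te @ w
```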
Abstract Log-facies classification aims to predict a vertical profile of facies at the well location, taking as input log readings or rock properties calculated in formation evaluation and/or rock-physics modeling. Various classification approaches are described in the literature, and new ones continue to appear based on emerging machine learning techniques. However, most available classification methods assume that the inputs are accurate, and their inherent uncertainty, related to measurement errors and interpretation steps, is usually neglected. Accounting for facies uncertainty is not a mere exercise in style; rather, it is fundamental for understanding the reliability of the classification results, and it also represents critical information for 3D reservoir modeling and/or seismic characterization. This is particularly true in wells characterized by high vertical heterogeneity of rock properties or thinly bedded stratigraphy. Among classification methods, probabilistic classifiers, which rely on Bayes decision theory, offer an intuitive way to model and propagate measurement and rock-property uncertainty through the classification process. In this work, the Bayesian classifier is enhanced so that the most likely facies classification is obtained by maximizing the integral product of three probability functions. These describe: (1) the a priori information on facies proportions; (2) the likelihood that a set of measurements/rock properties belongs to a given facies class; and (3) the uncertainty of the classifier inputs (log data or rock properties derived from them). The reliability of the classification outcome is therefore improved by accounting for both the global uncertainty, related to the overlap of facies classes in the classification model, and the depth-dependent uncertainty related to the log data.
As derived in this work, the most interesting feature of the proposed formulation, although it is generally valid for any type of probability function, is that it can be solved analytically by representing the input distributions as a Gaussian mixture model and their related uncertainty as additive white Gaussian noise. This gives a robust, straightforward, and fast approach that can be effortlessly integrated into existing classification workflows. The proposed classifier is tested in various well-log characterization studies in clastic depositional environments, where Monte Carlo realizations of rock-property curves, the output of a statistical formation evaluation analysis, are used to infer rock-property distributions. The uncertainty of the rock properties, modeled as additive white Gaussian noise, is then statistically estimated (independently at each depth along the well profile) from the ensemble of Monte Carlo realizations. At the same time, a classifier based on a Gaussian mixture model is parametrically inferred from the pointwise mean of the Monte Carlo realizations, given an a priori reference profile of facies. Classification results, given by the a posteriori facies proportions and the maximum a posteriori prediction profiles, are finally computed. The classification outcomes clearly show that neglecting uncertainty leads to an erroneous final interpretation, especially at the transition zones between facies. As mentioned, this becomes particularly pronounced in complex environments and highly heterogeneous scenarios.
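The analytic core of the approach, that additive white Gaussian noise on the input simply inflates each Gaussian class variance, can be sketched for a one-dimensional, two-class case. The class priors, means, and spreads below are hypothetical values for illustration, not taken from the paper:

```python
import numpy as np

def classify(x, sigma_e, priors, means, sigmas):
    """Maximum a posteriori facies under a 1-D Gaussian mixture
    classifier.  Additive white Gaussian measurement noise (std
    sigma_e) is absorbed analytically by inflating each class
    variance: N(mu, s^2) convolved with N(0, sigma_e^2) is
    N(mu, s^2 + sigma_e^2)."""
    post = []
    for p, mu, s in zip(priors, means, sigmas):
        var = s ** 2 + sigma_e ** 2
        like = np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post.append(p * like)
    post = np.array(post)
    return int(post.argmax()), post / post.sum()

# Two hypothetical facies classes on a single rock property (e.g. porosity)
priors = [0.6, 0.4]
means  = [0.08, 0.20]     # class means
sigmas = [0.03, 0.04]     # class spreads
label, posterior = classify(x=0.15, sigma_e=0.02,
                            priors=priors, means=means, sigmas=sigmas)
```

Setting `sigma_e = 0` recovers the classical noise-free Bayesian classifier, so the depth-dependent uncertainty enters the workflow with no extra machinery.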
Abstract Subsurface analysis-driven field development requires quality data as input to analysis, modeling, and planning. In many conventional reservoirs, pay intervals are well consolidated and maintain integrity under drilling and geological stresses, providing an ideal logging environment. Consequently, editing well logs is often overlooked or dismissed entirely. Petrophysical analysis, however, is not always constrained to conventional pay intervals. When developing an unconventional reservoir, pay sections may consist of shales, and edited, quality-checked logs become crucial to accurately assessing storage volumes in place. Edited curves can also serve as inputs to engineering studies, geological and geophysical models, reservoir evaluation, and many machine learning models employed today. For example, hydraulic fracturing model inputs may span adjacent shale beds around a target reservoir, which are frequently washed out. These washed-out sections may seriously degrade logging measurements of interest, such as bulk density and acoustic compressional slowness, which are used to generate elastic properties and compute geomechanical curves. Two classes of machine learning algorithms for identifying outliers and poor-quality data due to bad hole conditions are discussed: supervised and unsupervised learning. The first allows the expert to train a model from existing, categorized data, whereas unsupervised learning algorithms learn from a collection of unlabeled data. Each class has distinct advantages and disadvantages. Identifying outliers and conditioning well logs prior to a petrophysical analysis or machine learning model can be a time-consuming and laborious process, especially when large multi-well datasets are considered.
In this study, a new supervised learning algorithm is presented that uses multiple linear regression analysis to repair well log data in an iterative, automated routine. The technique identifies and repairs outliers while improving the efficiency of the log-editing process without compromising accuracy. The algorithm uses sophisticated logic and curve predictions derived via multiple linear regression to systematically repair various well logs. A clear improvement in efficiency is observed when the algorithm is compared with currently used methods, including manual processing by a petrophysicist and unsupervised outlier-detection methods. The algorithm can also be applied across multiple wells to produce more generalized predictions. Through a platform created to quickly identify and repair invalid log data, the results remain controlled through user input and supervision. The methodology is not a direct replacement for an expert interpreter but a complement, allowing the petrophysicist to leverage computing power, improve consistency, reduce error, and shorten turnaround time.
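The iterative repair loop described above can be sketched as a minimal assumed version: fit a multiple linear regression of the target log on companion logs, flag residual outliers, substitute the regression predictions, and refit. The thresholds, iteration count, and synthetic data are all illustrative, not the authors' parameters:

```python
import numpy as np

def repair_log(target, predictors, k=3.0, max_iter=5):
    """Iteratively repair a log: regress the target on companion logs,
    flag samples whose residual exceeds k standard deviations, replace
    them with the regression prediction, and refit until clean."""
    y = target.copy()
    A = np.column_stack([predictors, np.ones(len(y))])  # add intercept
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        pred = A @ coef
        resid = y - pred
        bad = np.abs(resid) > k * resid.std()
        if not bad.any():
            break
        y[bad] = pred[bad]            # replace flagged samples
    return y, bad

# Synthetic density log corrupted by washout spikes
rng = np.random.default_rng(0)
predictors = rng.normal(size=(300, 2))          # e.g. GR and sonic
clean = predictors @ np.array([0.3, -0.2]) + 2.5
density = clean + rng.normal(scale=0.01, size=300)
density[[50, 120, 200]] = 1.2                   # washout artifacts
repaired, _ = repair_log(density, predictors)
```

In a production workflow the flagged samples would also be cross-checked against a bad-hole indicator (e.g., caliper) and presented to the petrophysicist for supervision, as the abstract emphasizes.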
Craddock, Paul (Schlumberger-Doll Research Center) | Srivastava, Prakhar (Schlumberger-Doll Research Center) | Datir, Harish (Schlumberger) | Rose, David (Schlumberger) | Zhou, Tong (Schlumberger) | Mosse, Laurent (Schlumberger) | Venkataramanan, Lalitha (Schlumberger)
Abstract This paper describes an innovative machine learning application, based on the variational autoencoder framework, to quantify the concentrations, and associated uncertainties, of common minerals in sedimentary formations, using as input the atomic element concentrations measured by geochemical spectroscopy logs. The algorithm comprises inputs, an encoder, a decoder, outputs, and a novel cost function used to optimize the model coefficients during training. The input to the algorithm is a set of dry-weight concentrations of atomic elements with their associated uncertainties. The first output is a set of dry-weight fractions of fourteen minerals, and the second output is a set of reconstructed dry-weight concentrations of the original elements. Both outputs include estimates of uncertainty on their predictions. The encoder and decoder are multilayer feed-forward artificial neural networks (ANNs) whose coefficients (weights) are optimized during calibration (training). The cost function simultaneously minimizes error (the accuracy metric) and variance (the precision, or robustness, metric) on the mineral and reconstructed elemental outputs. The weights are trained on a set of several thousand core samples with independent, high-fidelity elemental and mineral (quartz, potassium feldspar, plagioclase feldspar, illite, smectite, kaolinite, chlorite, mica, calcite, dolomite, ankerite, siderite, pyrite, and anhydrite) data. The algorithm provides notable advantages over existing methods for estimating formation lithology or mineralogy that rely on simple linear, empirical, or nearest-neighbor functions. The ANNs numerically capture the multi-dimensional, nonlinear geochemical relationship (mapping) between elements and minerals that is insufficiently described by prior methods.
Training is iterative via backpropagation and, at each iteration (epoch), samples every elemental input from a Gaussian distribution rather than using a single fixed value. These Gaussian distributions are chosen specifically to represent the statistical uncertainty of the dry-weight elements in the logging measurements. Sampling from Gaussian distributions during training reduces the potential for overfitting, provides robustness for log interpretation, and further enables a calibrated estimate of uncertainty on the mineral and reconstructed elemental outputs, all of which are lacking in prior methods. The framework is purposefully generalizable so that it can be adapted across geochemical spectroscopy tools. The algorithm reasonably approximates a "global-average" model that requires neither separate calibrations nor expert parameterization or intervention when interpreting common oilfield sedimentary formations, although the framework can also be optimized for local environments where desirable. The paper showcases field applications of the method for estimating mineral type and abundance in oilfield formations from wellbore logging measurements.
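The key training trick, resampling the elemental inputs from their measurement-uncertainty Gaussians at every epoch while minimizing a combined mineral-plus-reconstruction cost, can be sketched with a deliberately tiny linear encoder/decoder in place of the paper's multilayer ANNs. Everything below (network size, learning rate, synthetic chemistry matrix) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_elem, n_min, n_samp = 6, 4, 400

# Synthetic "core calibration" set: mineral fractions map to elemental
# concentrations through a fixed chemistry matrix (stand-in for real data)
M_true = rng.uniform(0.0, 1.0, size=(n_min, n_elem))
minerals = rng.dirichlet(np.ones(n_min), size=n_samp)   # fractions sum to 1
elements = minerals @ M_true
sigma = 0.01 * np.ones(n_elem)     # per-element measurement uncertainty

# Linear encoder/decoder (tiny stand-ins for the paper's multilayer ANNs)
W_enc = rng.normal(scale=0.1, size=(n_elem, n_min))
W_dec = rng.normal(scale=0.1, size=(n_min, n_elem))
lr = 0.05

first_loss = last_loss = None
for epoch in range(500):
    # Key idea: resample the inputs from their uncertainty Gaussians
    # at every epoch instead of training on fixed values
    x = elements + rng.normal(size=elements.shape) * sigma
    h = x @ W_enc                   # predicted mineral fractions
    xr = h @ W_dec                  # reconstructed elements
    loss = np.mean((h - minerals) ** 2) + np.mean((xr - x) ** 2)
    if first_loss is None:
        first_loss = loss
    last_loss = loss
    # Gradients of the combined mineral + reconstruction cost
    g_xr = 2.0 * (xr - x) / xr.size
    g_h = 2.0 * (h - minerals) / h.size + g_xr @ W_dec.T
    W_dec -= lr * (h.T @ g_xr)
    W_enc -= lr * (x.T @ g_h)
```

The same resampling idea carries over unchanged to a real multilayer network trained by backpropagation; only the gradient computation grows.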