Much has been written about methods for estimating and interpreting log measurements. These methods depend heavily on the quality of the originally acquired data sets. Wireline and logging-while-drilling (LWD) technologies have advanced to a level where today's analysts frequently assume the acquired measurements are correct unless problems are encountered when integrating the data. The assumption is generally valid, but it begins to fail when conditions in the borehole being surveyed degrade beyond the physical measurement limits of the instruments.
When wellbore conditions reach a point where data degradation occurs, the information must be corrected for environmental and borehole effects or, in extreme cases, reconstructed. Data reconstruction/estimation can take many forms: applying regional trends, transforming one type of measurement into another, extrapolating offset well data to the well of interest, combining offset openhole data with cased-hole data to predict measurements on adjoining wells where only cased-hole data are available, extracting measurements from seismic, and so on. The methods involved range from empirical algorithms and regional trend analysis to statistical inference and neural networks. Successful application of any method requires data that are representative of the formations as acquired under optimal conditions.
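To make one of these reconstruction forms concrete, the sketch below (in Python, with NumPy) transforms one measurement type into another using Gardner's empirical density-velocity relation. The relation and its default coefficients are our illustrative choice, not a method prescribed above; in practice the coefficients would be refit to regional data.

    import numpy as np

    # Gardner's relation rho = a * Vp**b, an empirical transform from
    # compressional velocity to bulk density. The coefficients below are
    # the commonly quoted defaults for Vp in ft/s and rho in g/cc.
    A, B = 0.23, 0.25

    def density_from_sonic(dt_us_per_ft):
        """Estimate bulk density (g/cc) from compressional slowness (us/ft)."""
        vp_ft_per_s = 1.0e6 / dt_us_per_ft   # slowness -> velocity
        return A * vp_ft_per_s**B

    dt = np.array([70.0, 90.0, 110.0])       # hypothetical slowness samples
    print(density_from_sonic(dt))            # reconstructed density estimates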
Interpretation algorithms applied to the data are no different: the analysis models assume good data quality in order to function correctly. Both proprietary internal and third-party interpretation packages run into problems when data quality suffers.
Proper data reconstruction requires an understanding of the quality of the acquired data (calibrations, accuracy, etc.), the instrument configuration in the tool string, the acquisition methodology, and the condition of the wellbore environment when the data were acquired.
We examine the application of several methods used in data preconditioning and data reconstruction, along with some novel statistical methods of our own, with examples from various environments. Validation of the reconstructed data sets is also demonstrated.
This paper describes a well-based analysis program, written in the C programming language, called "Quantitative Petrophysical and Seismic Evaluation Technique" (QPSET). The program is designed to evaluate reservoir parameters in shaly-sand sedimentary sections in marginal hydrocarbon zones. The program flow completes an analysis in a single pass and uses two modified approaches for the evaluation of water saturation and acoustic impedance. Output values are stored in ASCII format and are therefore available for plotting in any graphics package. The user selects the precise analysis algorithms employed, or the choice may be restricted by the data available. This paper reports results obtained using the program on data from seven wells located in the northern portion of the Gulf of Suez Rift Basin, spanning the Lower-Middle Miocene Rudeis sedimentary section (Rudeis thickness ranges from 500 ft to 5,000 ft within the basin). The program has successfully characterized the essential features of the Rudeis and permits some speculation as to the depositional environment of the sediments and the present-day tectonic setting of the basin. In this latter regard, it is stressed that other information is required before adequate interpretations can be established.
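The abstract does not disclose QPSET's two modified algorithms, and the program itself is written in C. Purely as a hypothetical stand-in, shown in Python for brevity, the sketch below pairs the classical Simandoux shaly-sand relation for water saturation with the textbook definition of acoustic impedance (density times velocity), writing the results as ASCII columns in the spirit of the program's output format. All parameter values and sample data are invented for illustration.

    import numpy as np

    def simandoux_sw(rt, phi, vsh, rw=0.03, rsh=2.0, a=1.0, m=2.0):
        """Solve 1/Rt = (phi^m/(a*Rw))*Sw^2 + (Vsh/Rsh)*Sw for Sw."""
        c = phi**m / (a * rw)
        d = vsh / rsh
        return (-d + np.sqrt(d**2 + 4.0 * c / rt)) / (2.0 * c)

    def acoustic_impedance(rhob_gcc, dt_us_per_ft):
        """AI = density * velocity; units here are g/cc * ft/s."""
        return rhob_gcc * (1.0e6 / dt_us_per_ft)

    depth = np.array([8000.0, 8000.5])       # hypothetical samples
    rt   = np.array([8.0, 5.0])              # deep resistivity, ohm-m
    phi  = np.array([0.22, 0.18])            # porosity, v/v
    vsh  = np.array([0.15, 0.35])            # shale volume, v/v
    rhob = np.array([2.35, 2.42])            # bulk density, g/cc
    dt   = np.array([92.0, 85.0])            # slowness, us/ft

    sw = simandoux_sw(rt, phi, vsh)
    ai = acoustic_impedance(rhob, dt)
    np.savetxt("qpset_out.txt", np.column_stack([depth, sw, ai]),
               header="DEPTH SW AI", fmt="%.4f")   # plain ASCII output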
Previous studies of resistivity anisotropy have neglected crossbedding effects. This article analyzes induction response in crossbedded reservoirs using a new computer modeling code. The code computes the response of an induction logging tool as it orthogonally traverses many beds, each of which possesses different crossbedding characteristics. The crossbedding in each medium is described by a uniaxial conductivity tensor whose principal axes have strike and dip angles oriented arbitrarily with respect to the bedding planes. The code is numerically efficient; the response for a tool logging through several beds can be generated in less than 15 minutes on a modern workstation. Results show that, for anisotropy coefficients less than 5, computed responses for both two-coil and multicoil devices vary in a continuous manner as the sondes cross a single bed boundary separating two infinitely thick beds. Furthermore, after correction for skin effect, the limiting log values far from the bed boundary are entirely predictable from a previously published formula. However, in vertical wells, when the crossbedding dip angle is 75° or greater and the anisotropy coefficient is greater than or equal to 5, anomalously large readings appear in the vicinity of the bed boundaries. These large readings are similar to the polarization horns that occur in dipping beds at high-contrast isotropic interfaces. In the case of a thin bed (e.g., < 5 ft) located between two massive shoulder beds, the large anomalies from the bed boundaries merge into a single anomaly at the center of the bed. This behavior was not expected and can be quantified only by modeling. Modeled results are also used to analyze the Schlumberger AIT Array Induction Imager instrument response in a crossbedded reservoir in the Nugget formation, where we expect different values of Rv and Rh.
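For readers unfamiliar with the uniaxial tensor description, the sketch below constructs such a conductivity tensor in Python. The rotation convention (dip about y, then strike about z) is one common choice and an assumption on our part, not necessarily the convention used in the paper's code.

    import numpy as np

    # Principal frame: sigma_h along the two crossbed-parallel directions,
    # sigma_v along the crossbed normal. Rotating by dip and strike expresses
    # the tensor in bedding (or tool) coordinates.
    def conductivity_tensor(sigma_h, sigma_v, dip_deg, strike_deg):
        a, b = np.radians(dip_deg), np.radians(strike_deg)
        ry = np.array([[ np.cos(a), 0.0, np.sin(a)],
                       [ 0.0,       1.0, 0.0      ],
                       [-np.sin(a), 0.0, np.cos(a)]])
        rz = np.array([[np.cos(b), -np.sin(b), 0.0],
                       [np.sin(b),  np.cos(b), 0.0],
                       [0.0,        0.0,       1.0]])
        r = rz @ ry                                  # dip, then strike
        principal = np.diag([sigma_h, sigma_h, sigma_v])
        return r @ principal @ r.T                   # full symmetric tensor

    # Anisotropy coefficient lambda = sqrt(sigma_h/sigma_v); lambda = 5 with
    # sigma_h = 1 S/m implies sigma_v = 0.04 S/m, matching the regime above.
    print(conductivity_tensor(1.0, 0.04, dip_deg=75.0, strike_deg=0.0))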
Recent advances in data science and machine learning (ML) have brought the benefits of these technologies closer to the mainstream of Petrophysics. ML systems, in which decisions and self-checks are made by carefully designed algorithms in addition to the execution of typical tasks such as classification and regression, offer efficient and liberating solutions to the modern Petrophysicist. The outline of such a system and its application, in the form of a multi-level workflow, to a 59-well multi-field study are presented in this paper.
The main objective of the workflow is to identify outliers in bulk-density and compressional slowness logs, and to reconstruct them using data-driven predictive models. A secondary objective of the project is to predict shear slowness in zones where such data do not exist.
The system is fully automated, designed to optimize the use of all available data, and provides uncertainty estimates. It integrates modern concepts for novelty detection, predictive classification and regression, and multi-dimensional scaling based on inter-well similarity.
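The abstract names these concepts without specifying algorithms. The sketch below shows one plausible, entirely assumed realization in Python with scikit-learn: an isolation forest flags outliers, a random forest reconstructs the flagged samples from companion logs, and the spread across trees provides a simple uncertainty estimate. The same regression machinery could be trained to predict shear slowness from companion logs in zones where it was never recorded.

    import numpy as np
    from sklearn.ensemble import IsolationForest, RandomForestRegressor

    # Stand-in data: X holds companion logs (e.g., GR, NPHI, RT, caliper);
    # y is the target log (e.g., bulk density). Real logs replace these.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = X @ np.array([0.1, -0.3, 0.2, 0.05]) + rng.normal(0.02, 0.05, 500)

    # Novelty detection: fit_predict returns +1 for inliers, -1 for outliers.
    flags = IsolationForest(random_state=0).fit_predict(np.column_stack([X, y]))
    good = flags == 1

    # Predictive regression trained on inliers, used to repair the rest.
    model = RandomForestRegressor(random_state=0).fit(X[good], y[good])
    y_fixed = np.where(good, y, model.predict(X))

    # Per-sample spread across the forest's trees as an uncertainty proxy.
    tree_preds = np.stack([t.predict(X) for t in model.estimators_])
    uncertainty = tree_preds.std(axis=0)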
Benchmarking of the ML results against answers created by human petrophysical experts shows that the ML workflow can provide high-quality answers that compare favorably with expert work. A second validation exercise, comparing acoustic impedance logs computed from the ML answers to actual seismic data, provides further evidence of the accuracy of the ML-generated results.
The ML system supports the Petrophysicist by easing the load of repetitive quality control tasks. The resulting efficiency gains and time savings can be redirected toward more effective cross-discipline integration, collaboration, and further innovation.
Seismic reservoir characterization (SRC) is a valuable tool for de-risking exploration prospects and guiding appraisal and development decisions. Standard techniques exploit inverted seismic amplitude versus reflection angle information to extract quantitative rock properties from pre-stack seismic datasets. Well logs play an important role in this process as they provide a means to calibrate petrophysical properties against geophysical rock properties inverted from seismic data, enabling quantitative mapping of reservoir properties such as facies, porosity and fluid type.
Subsurface analysis-driven field development requires quality data as input to analysis, modelling, and planning. In many conventional reservoirs, pay intervals are well consolidated and maintain integrity under drilling and geological stresses, providing an ideal logging environment; consequently, editing well logs is often overlooked or dismissed entirely. Petrophysical analysis, however, is not always constrained to conventional pay intervals. When developing an unconventional reservoir, pay sections may be comprised of shales, and edited, quality-checked logs become crucial for accurately assessing storage volumes in place. Edited curves can also serve as inputs to engineering studies, geological and geophysical models, reservoir evaluation, and many machine learning models employed today. As an example, hydraulic fracturing model inputs may span adjacent shale beds around a target reservoir, which are frequently washed out. These washed-out sections may seriously impact logging measurements of interest, such as bulk density and acoustic compressional slowness, which are used to generate elastic properties and compute geomechanical curves.

Two classifications of machine learning algorithms for identifying outliers and poor-quality data due to bad hole conditions are discussed: supervised and unsupervised learning. The first allows the expert to train a model from existing, categorized data, whereas unsupervised learning algorithms learn from a collection of unlabeled data. Each classification type has distinct advantages and disadvantages. Identifying outliers and conditioning well logs prior to a petrophysical analysis or machine learning model can be a time-consuming and laborious process, especially when large multi-well datasets are considered.

In this study, a new supervised learning algorithm is presented that uses multiple linear regression analysis to repair well log data in an iterative, automated routine; a sketch of the idea follows this abstract. This technique allows outliers to be identified and repaired while improving the efficiency of the log data editing process without compromising accuracy. The algorithm uses sophisticated logic and curve predictions derived via multiple linear regression to systematically repair various well logs. A clear improvement in efficiency is observed when the algorithm is compared to other currently used methods, including manual processing by a petrophysicist and unsupervised outlier-detection methods. The algorithm can also be leveraged over multiple wells to produce more generalized predictions. Through a platform created to quickly identify and repair invalid log data, the results are controlled through input and supervision by the user. This methodology is not a direct replacement of an expert interpreter but is complementary, allowing the petrophysicist to leverage computing power, improve consistency, reduce error, and improve turnaround time.
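The paper's exact logic is not given; the Python sketch below captures our minimal reading of the iterative routine: fit a multiple linear regression of the target log on companion logs over the good samples, flag samples with large residuals, repeat until no new outliers appear, then replace flagged values with predictions. The three-standard-deviation threshold and the helper name are assumptions; the actual algorithm is described as more sophisticated.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def iterative_mlr_repair(X, y, n_sigma=3.0, max_iter=10):
        """Iteratively flag residual outliers in y and repair them from X."""
        y = y.copy()
        good = np.ones(len(y), dtype=bool)
        for _ in range(max_iter):
            model = LinearRegression().fit(X[good], y[good])
            resid = y - model.predict(X)
            bad = np.abs(resid) > n_sigma * resid[good].std()
            if not bad[good].any():              # converged: no new outliers
                break
            good &= ~bad                         # shrink the training set
        y[~good] = model.predict(X[~good])       # repair flagged samples
        return y, good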