ABSTRACT Formation pressure testing provides important information for exploration and production activities. Accurate reservoir pressure measurements are necessary to help ensure that a well is drilled safely and to identify and evaluate the potential and value of the discovery. The interpretation of pressure gradients reveals the reservoir compartmentalization structure of a well and the oil-gas-water fluid contacts, and can indicate compositional grading, as evidenced by second-order density changes. However, this assumes that the pressure testing quality is sufficient for high-resolution analysis. Unfortunately, obtaining quality data from formation testing can be difficult and time consuming. Locations initially selected for formation pretesting along the wellbore are often not optimal, and the time spent conducting pressure tests at those locations is wasted. Specifically, in this 23-well study, only 57% of the locations selected were high quality, meaning that 43% of the pressure test locations were of suboptimal quality. Further, pressure testing at suboptimal locations requires twice as much time as at high-quality locations. Considerable operational time savings can therefore be realized if low-quality locations are avoided. If only optimal locations are chosen for pressure testing, data quality can be significantly improved, reducing uncertainty during formation evaluation. A multivariate machine learning method is presented that builds a statistical correlation between the formation pressure test quality index and conventional wireline logging data. Primarily, the model maps triple-combo log data to a pressure test quality index. The procedure begins with logging data extraction and preparation. Both conventional wireline logging data and corresponding pretest data from 20 wells in one region are obtained.
Data preprocessing and missing data estimation were conducted to help ensure that the sample count of the conventional wireline logs matches that of the pretests. Each well contains multiple pretest measurements, resulting in a total of 935 samples (conventional log and pretest pairs) for machine learning model development and validation. After the dataset preparation was completed, various learning algorithms were explored, and the best-performing algorithm was selected to create the final model. The model was then applied to three additional case studies for independent validation: an easy pressure testing job, a typical pressure testing job, and a difficult pressure testing job. The time savings and quality improvement are shown for each. The novelty of the new machine learning model lies in its ability to predict the quality index log for the formation pretest from previously acquired conventional wireline logging data and to guide the wireline engineer in selecting the locations along the wellbore at which to conduct the pretest. This method reduces the fraction of unsuccessful pretest locations along the wellbore from 43% on average to between 5 and 15% for the three case studies, which results in significant time savings.
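As a hedged illustration of this kind of model (not the authors' implementation), the sketch below trains a classifier on synthetic stand-ins for triple-combo log responses to label pretest locations as high quality or suboptimal. The feature set, the label rule, and the data are all hypothetical; only the sample count (935) is taken from the study.

```python
# Illustrative sketch: classify pretest-location quality from log responses.
# Features and labels are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 935  # sample count reported in the study
# Synthetic stand-ins for GR, resistivity, bulk density, neutron porosity
X = rng.normal(size=(n, 4))
# Hypothetical quality label: 1 = high-quality location, 0 = suboptimal
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)
```

In practice the predicted quality index would be evaluated along the wellbore and only the high-scoring depths retained as pretest candidates.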
Abstract Downhole fluid analysis has the potential to resolve ambiguity in very complex reservoirs. Downhole fluid spectra contain a wealth of information to fingerprint a fluid and help assess continuity. Commonly, a narrowband spectrometer with a limited number of channels is used to acquire optical spectra of downhole fluid, and its spectral resolution is low because of that limited channel count. In this paper, we demonstrate a new type of instrument, a compressive sensing (CS) based broadband spectrometer, that provides accurate, high-resolution spectral measurements. Several specially designed broadband filters simplify the mechanical, electrical, optical, and computational construction of the spectrometer, thereby providing fluid spectrum measurements with high signal-to-noise ratio, robustness, and a broader spectral range. The CS spectrometer relies on a reconstruction technique to compute the optical spectrum. From a large spectral database, containing more than 10,000 spectra of various fluids at different temperature and pressure conditions collected with a conventional high-resolution laboratory spectrometer, the basis functions of the optical spectra of three fluid types (water, oil, and gas/condensate) can be extracted. The reconstruction algorithm first classifies the fluid into one of the three fluid types based on the multichannel CS spectrometer measurements; the optical spectrum is then reconstructed as a linear combination of the basis functions of the corresponding fluid type, with weighting coefficients determined by minimizing the difference between calculated and measured detector responses across multiple optical channels. The reconstructed data may then be used for purposes such as contamination measurement, fluid property trends for reservoir continuity assessment, and digital sampling.
Digital sampling is the process of extrapolating clean fluid properties from formation fluids not physically sampled. The reconstructed spectrum covers wavelengths from 500 nm to 3300 nm, a wider spectral region than has historically been accessible to formation testers. The expanded wavelength range gives access to the mid-infrared spectral region, in which synthetic drilling-fluid components typically have higher optical absorbance, so the reconstructed spectra may allow contamination to be determined directly. This paper discusses the CS optical spectrometer design and the fluid classification and spectral reconstruction algorithm. The applicability of the technique to fluid continuity assessment, sample contamination assessment, and digital sampling is also discussed.
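A minimal numerical sketch of the reconstruction step, under the simplifying assumptions that the fluid type has already been classified and that the basis spectra and filter response matrix are known (all arrays here are synthetic): the weighting coefficients are found by least squares so that the calculated detector responses match the measured ones.

```python
# Hedged sketch of CS-style reconstruction: recover a spectrum as a linear
# combination of fluid-type basis functions from a few broadband-filter readings.
import numpy as np

rng = np.random.default_rng(1)
n_wl, n_basis, n_filters = 200, 5, 8     # wavelengths, basis functions, filters
B = rng.normal(size=(n_wl, n_basis))     # basis spectra for one fluid type (assumed known)
A = rng.uniform(size=(n_filters, n_wl))  # broadband filter response matrix (assumed known)

c_true = rng.normal(size=n_basis)
spectrum_true = B @ c_true
d = A @ spectrum_true                    # measured detector responses

# Least-squares fit of the weighting coefficients to the detector responses:
# minimize || A @ (B @ c) - d ||
M = A @ B
c_hat, *_ = np.linalg.lstsq(M, d, rcond=None)
spectrum_rec = B @ c_hat
```

Because the number of filters exceeds the number of basis functions, the coefficient fit is overdetermined and well constrained even though the filters are broadband rather than narrowband.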
ABSTRACT The hydrostatic pressure of the mud column in the wellbore is usually greater than the formation pressure, causing mud filtrate to invade the formation in the vicinity of the wellbore. Unlike water-based mud (WBM), oil-based mud (OBM) is miscible with the formation fluid and alters its fluid properties and phase behavior. To correct sufficiently for contamination in a fluid sample, the contamination level must be sufficiently low, which means the engineer continues to pump contaminated fluid from around the probe area until it is sufficiently clean. It is important to measure the contamination level of the reservoir fluid as accurately as possible in real time before taking the sample, because taking additional downhole samples after the well is completed may be difficult if previous samples prove unusable. Cleanup time depends on multiple parameters, including formation permeability, fluid viscosity, depth of invasion, and wellbore mud column overbalance pressure. Most current methods for predicting contamination rely on curve fitting to a single property such as fluid density, gas content, or color. Curve fitting assumes that when the monitored property stops changing significantly as pumping continues, the contamination level is low; however, this plateau can also result from a steady-state effect even at high contamination levels. In addition, the contamination value from the curve-fitting method is sensitive to data selection and depends on the endmember filtrate and formation fluid properties, which cannot be measured directly either downhole or in the laboratory. In this work, we present a technique for predicting contamination using pumpout density and volume together with formation properties such as drawdown mobility, overbalance, formation pressure, and drawdown pressure.
By combining multiple parameters such as fluid density, drawdown mobility, formation pressure, overbalance, and drawdown pressure, the predicted contamination values are better constrained. Moreover, this technique does not depend on endmember properties. It is based on constraining pumpout data with formation properties from pressure testing data. A large dataset of pumpout volume, density, and formation properties acquired from wells in different regions of the world is used to develop a predictive model with a machine learning approach. The pumpout density is represented by the optimized parameters of an inverse power law model. In the field case examples presented in this paper, the contamination estimated at the endpoint is close to the laboratory-reported value for the sample taken at the end of each pumpout.
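An inverse power law fit to pumpout density can be sketched as follows; the functional form shown, the parameter values, and the data are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative sketch (not the paper's model): fit pumpout density to an
# inverse power law and extrapolate the clean-fluid endpoint.
import numpy as np
from scipy.optimize import curve_fit

def density_model(v, rho_clean, a, gamma):
    # rho(v) approaches rho_clean as pumped volume v grows
    return rho_clean - a * v ** (-gamma)

v = np.linspace(5.0, 100.0, 40)          # pumped volume (synthetic units)
rho = density_model(v, 0.80, 0.15, 0.6)  # synthetic data, clean density 0.80 g/cc
rho += np.random.default_rng(2).normal(scale=1e-4, size=v.size)

popt, _ = curve_fit(density_model, v, rho, p0=(0.7, 0.1, 0.5))
rho_clean_est = popt[0]
# The fitted a * v**(-gamma) term describes how far the pumped density still
# sits from the clean endpoint at a given volume, which is what a
# contamination estimate is built from.
```

In the paper's approach, this fit is further constrained by formation properties (drawdown mobility, overbalance, formation and drawdown pressures) rather than relying on the density trend alone.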
ABSTRACT Acquiring physical samples from an open hole is usually a one-opportunity event in which a formation tester is sent downhole with a limited number of sample chambers, on either a logging-while-drilling (LWD) or wireline conveyance system. The samples are acquired, retrieved, and sent to a laboratory for analysis, which takes place weeks to months later. By the time the laboratory has performed an analysis, the section has been cemented, and perhaps the rig has finished operations and moved on to the next phase. Success of the sampling operation depends on the samples being acquired from the right locations (where to sample?), at the right time to minimize drilling fluid-filtrate contamination (when to sample?), and in a manner that preserves the integrity of the sample and is representative of the formation fluid (how to sample?). Digital sampling is a technique that can be used both to optimize the when, where, and how of the physical samples taken and to augment the information collected with sensor analysis from locations that are not physically sampled. This work presents a new workflow for extrapolating clean fluid properties from a rapid pumpout with moderately high contamination levels. Based on the extrapolated clean fluid properties, an operator can decide whether to continue the pumpout to obtain physical samples or to abort it if the properties extrapolated at that location (digital sampling) are sufficient for operational decision making. The workflow starts by applying principal component analysis (PCA) to multichannel sensor measurements of fluid pumped out of the formation during a formation test sampling operation.
Because the fluid pumped out contains only two endmembers (clean formation fluid and mud filtrate), the PCA scores of the sensor measurements form a line in PCA space, and solution bands for the endmembers can be estimated from the physical constraints on the sensor measurements (non-negativity, etc.). A trend-fitting method is then used to predict the asymptote of the first principal component score. The asymptote value can be inverted back to a sensor signal using PCA inversion, and this sensor signal represents the clean formation-fluid measurement. Lastly, machine-learning-based composition models can be used to predict the clean fluid compositions from the sensor signal. The composition data are then used to predict fluid physical properties, such as bubblepoint, viscosity, and compressibility, using an equation of state (EOS) model. A series of rapid pumpouts at different depths can be used to map a formation for selecting where to sample, constrain contamination models to improve contamination estimation, determine when to sample, and optimize the pumpout parameters to obtain a representative sample in the shortest period of time. We have applied this workflow to a number of formation sampling jobs in multiple wells; the real-time results match the laboratory analysis results in terms of contamination level and clean fluid properties (compositions, GOR, bubblepoint, density, etc.).
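The PCA, trend-fit, and inversion steps above can be sketched on synthetic two-endmember mixing data; the endmember signals and the contamination decay law below are assumptions chosen only to make the mechanics visible.

```python
# Hedged sketch of digital sampling: PCA on multichannel sensor data, a
# power-law trend fit of the first PC score, and PCA inversion of its asymptote.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
n_ch = 10
clean = rng.uniform(0.2, 1.0, size=n_ch)     # clean-fluid endmember (synthetic)
filtrate = rng.uniform(0.0, 0.6, size=n_ch)  # mud-filtrate endmember (synthetic)

v = np.linspace(5.0, 80.0, 60)               # pumped volume
eta = 0.9 * v ** (-0.6)                      # assumed contamination decay with volume
sensors = np.outer(1 - eta, clean) + np.outer(eta, filtrate)

# PCA via SVD of the mean-centered data; two-endmember mixing puts the
# scores on a line, so the first component captures the cleanup trend
mean = sensors.mean(axis=0)
U, S, Vt = np.linalg.svd(sensors - mean, full_matrices=False)
pc1_dir = Vt[0]
pc1 = (sensors - mean) @ pc1_dir
if pc1[0] < pc1[-1]:                         # fix SVD sign so pc1 decays with volume
    pc1_dir, pc1 = -pc1_dir, -pc1

# Trend-fit the first PC score; the asymptote b is the clean-fluid score
trend = lambda v, a, b, g: b + a * v ** (-g)
(a, b, g), _ = curve_fit(trend, v, pc1, p0=(1.0, 0.0, 0.5))

clean_est = mean + b * pc1_dir               # PCA inversion of the asymptote
```

In the real workflow the recovered sensor signal `clean_est` would then feed the composition models and the EOS property predictions described above.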
Acquiring reservoir fluid samples through formation testers is critical to asset evaluation in most oil and gas drilling operations. Since this technology was introduced to the industry, the key challenges have been planning the job, estimating contamination during the operation, and obtaining clean fluid samples in the shortest time possible. The objective of this paper is to create a new data-driven model that proactively simulates the cleaning process, providing a practical job-planning tool that optimizes fluid sampling. After detailed analysis of formation pumpout cleaning behavior and oil-well sampling, a parametric study with nearly one hundred thousand scenarios was designed to model fluid behavior during sampling. Each simulation scenario is a multi-component model with radial geometry, capable of handling complex reservoir rock, fluid composition, probe geometry, and sampling conditions. The compositional simulation output is then used to generate a comprehensive database of the fluid sampling and cleaning processes, from which the parameters that sampling and contamination are sensitive to are determined. A full factorial experimental design was used to build nearly one hundred thousand scenarios from more than 10 relevant parameters. Outputs were analyzed through a variety of visualization and statistical techniques to understand cleaning behavior under different initial and operating conditions. One-factor analysis and statistical tests, including analysis of variance (ANOVA), were used to determine the significance of the different parameters. The most influential parameters were selected and used as inputs to the representative model to predict pumpout volume and the corresponding contamination. In this work, multiple data-driven models, such as neural networks, random forests, and gradient boosting, are presented.
Furthermore, multiple mathematical equations were compared for fitting the contamination trend, and methods for estimating their best-fit parameters are presented. Blind testing was performed to evaluate the performance of the developed models, with promising results. The workflow, database, and developed models can be used to forward-model sampling jobs in different reservoirs, drilling muds, and operating conditions for both wireline and logging-while-drilling (LWD) operations. This enables implementation of an effective and practical job-planning tool, whereby a tool string can be optimized to reduce sampling time while improving sample quality. The workflow, deployed in a commercial reservoir simulator, combines physics, programming, statistical analysis, and machine learning techniques to tackle the challenging problem of sampling. The workflow and data can be used during operations with various wireline formation testing (WFT) and LWD testing tools to optimize cleanup and sampling of formation fluids. Simulations of different realizations of reservoir properties, drilling mud invasion profiles, and cleanup operations also helped develop a useful and diverse pumpout database.
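The full-factorial-plus-ANOVA screening step can be sketched at toy scale; the three parameters, their levels, and the response function below are hypothetical stand-ins (a real study would use the 10+ parameters and the simulator output described above).

```python
# Illustrative sketch of parameter screening: a small full-factorial design
# ranked by one-way ANOVA F-statistics. Parameters and response are synthetic.
import itertools
import numpy as np
from scipy.stats import f_oneway

levels = {
    "permeability_md": [1.0, 10.0, 100.0],
    "viscosity_cp": [0.5, 2.0, 8.0],
    "invasion_depth_in": [2.0, 6.0, 12.0],
}
design = np.array(list(itertools.product(*levels.values())))  # 27 scenarios

rng = np.random.default_rng(4)
# Toy response standing in for pumpout volume to a target contamination:
# strongly driven by permeability, weakly by viscosity, not by invasion depth
response = 100.0 / design[:, 0] + 2.0 * design[:, 1] \
    + rng.normal(scale=1.0, size=len(design))

f_stats = {}
for j, name in enumerate(levels):
    groups = [response[design[:, j] == lv] for lv in levels[name]]
    f_stats[name] = f_oneway(*groups).statistic

most_influential = max(f_stats, key=f_stats.get)
```

Factors with large F-statistics would be kept as inputs to the predictive models; the rest could be dropped to shrink the scenario space.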