Both the Rawlins and Schellhardt and Houpeurt analysis techniques are presented in terms of pseudopressures. Flow-after-flow tests, sometimes called gas backpressure or four-point tests, are conducted by producing the well at a series of different stabilized flow rates and measuring the stabilized bottomhole flowing pressure (BHFP) at the sandface. Each flow rate is established in succession, either with or without a very short intermediate shut-in period. Conventional flow-after-flow tests are often conducted with a sequence of increasing flow rates; however, if stabilized flow rates are attained, the rate sequence does not affect the test. Fig. 1 illustrates a flow-after-flow test.
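The deliverability analysis of such a test can be sketched in a few lines. The sketch below uses the classic Rawlins and Schellhardt pressure-squared form q = C(p̄² − pwf²)ⁿ for brevity (the text recommends the pseudopressure form); the reservoir pressure and the four rate/pressure pairs are hypothetical illustration values, not data from any figure here:

```python
import numpy as np

# Hypothetical four-point flow-after-flow data (illustrative only):
# stabilized rates q (Mscf/D) and bottomhole flowing pressures pwf (psia)
p_r = 3500.0                                   # average reservoir pressure, psia
q   = np.array([2600., 4200., 5800., 7400.])   # stabilized flow rates
pwf = np.array([3300., 3100., 2900., 2700.])   # stabilized BHFPs

dp2 = p_r**2 - pwf**2                          # pressure-squared drawdown

# Rawlins-Schellhardt: q = C*(p_r^2 - pwf^2)^n  ->  log q = log C + n*log(dp2)
n, logC = np.polyfit(np.log10(dp2), np.log10(q), 1)
C = 10**logC

# AOF: rate against a theoretical atmospheric backpressure (~14.7 psia)
aof = C * (p_r**2 - 14.7**2)**n
print(f"n = {n:.3f}, C = {C:.4g}, AOF = {aof:.0f} Mscf/D")
```

The slope of the log-log fit gives the deliverability exponent n (typically between 0.5 and 1.0), and extrapolating the fitted line to atmospheric backpressure yields the AOF.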
Volkov, Nikita Alekseevich (MIPT Engineering Center) | Dakhova, Elizaveta Yuryevna (MIPT Engineering Center) | Adrianova, Alla Mikhailovna (Gazpromneft Science & Technology Centre) | Budennyy, C???? Andeevich (MIPT Engineering Center)
Abstract The importance of consistent, high-quality data is difficult to overestimate. The better the field data describe the real system, the higher the predictive ability of models at all levels based on them, and the higher the accuracy of production decisions. This issue is particularly relevant for data from the mechanized well stock. The paper presents both methods of real-time data analysis and an approach for retrospective analysis (analysis of historical data), applied to data from wells operated with electric submersible pumps (ESPs). The key advantage of the model presented in the paper is that it considers a complex set of time dependencies while taking into account their mutual influence. To account for the dependencies between physical quantities over time, a model using probabilistic neural networks has been developed that allows both retrospective filtering and filtering of data in streaming mode. The model is based on the principle of the conditional variational autoencoder. This type of neural network is notable because it can establish the main dependencies in the data with acceptable quality under given conditions. A special feature of the model is its probabilistic nature: it is able to calculate data distributions, as well as distributions of values at certain layers. The distribution at the model output is used to estimate the degree of abnormality of objects. Special data transformations, the introduction of weights, and the addition of key features make it possible to deal with missing values (often observed in field data due to sensor malfunction, peculiarities of measurement procedures, inconsistent time scales across measurements, etc.) and to capture important patterns in the data.
The key point in working with features is a special scheme for preprocessing time series: the introduction of weights that depend on the frequency of measurements, and the calculation of weighted quantiles over different time intervals.
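The weighted-quantile preprocessing can be illustrated with a minimal sketch. The weighting rule (each sample weighted by the local time gap it represents) and the pressure series below are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def weighted_quantile(values, weights, q):
    """Weighted quantile via the weighted empirical CDF.

    values  : 1-D array of measurements (NaNs dropped with their weights)
    weights : nonnegative weights, e.g. proportional to the time gap each
              measurement represents, so sparsely sampled intervals are
              not drowned out by densely sampled bursts
    q       : quantile level in [0, 1]
    """
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    keep = ~np.isnan(values)
    values, weights = values[keep], weights[keep]
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cdf = np.cumsum(weights) - 0.5 * weights      # midpoint rule
    cdf /= weights.sum()
    return np.interp(q, cdf, values)

# Hypothetical pump-intake pressure series with uneven sampling (hours):
t = np.array([0., 1., 2., 2.1, 2.2, 2.3, 10.])
p = np.array([50., 52., 51., 90., 91., 89., 53.])
w = np.gradient(t)                                # weight ~ local time gap
print(weighted_quantile(p, w, 0.5))               # weighted median
```

With uniform weights the function reduces to the ordinary median; the time-gap weighting gives the short burst of densely sampled values less total influence on the statistic.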
Sabinin, Grigory (Lomonosov Moscow State University, Russia) | Chichinina, Tatiana (Mexican Institute of Petroleum, Mexico) | Tulchinsky, Vadim (V.M. Glushkov Institute of Cybernetics, Kiev, Ukraine) | Romero-Salcedo, Manuel (Mexican Institute of Petroleum, Mexico)
Abstract Nowadays, Machine Learning (ML) is actively used in geophysical prospecting, including seismic exploration. This study focuses on the applicability and feasibility of Deep Learning for the inverse problem in seismic exploration, that is, the estimation of rock-physics parameters for a fractured reservoir from seismic data. The main goal of this paper is to prove the efficiency of a neural network in estimating fractured-medium parameters, represented as the anisotropy parameters of the HTI (horizontal transverse isotropy) model. As fracture parameters, we consider the normal and tangential weaknesses of fractures ΔN and ΔT, the Thomsen anisotropy parameters ε, δ, γ, as well as the crack density e and the crack aspect ratio α (fracture opening). In addition, we consider a fractured medium with two fracture networks, characterized by two pairs of weaknesses (ΔN1, ΔT1) and (ΔN2, ΔT2); this is the so-called orthorhombic model. We validate the accuracy of our neural network by comparing the predicted parameter values with those given a priori. We use mathematical formulae that relate the considered parameters to different effective-medium anisotropy models of a fractured medium, such as Schoenberg's Linear Slip model, Hudson's model for penny-shaped cracks, and Thomsen's model for aligned cracks in porous rock. In our study, seismic signatures (seismograms of the reflected PP and PS waves) of both the vertical component UZ and the horizontal component UX are the inputs to the neural network. At the output, the network predicts fracture parameters and anisotropy parameters. The neural network is trained on synthetic seismograms of reflected waves, which were generated using 2D elastic numerical finite-difference modelling. Thus we demonstrate the applicability of Deep Learning for estimating fractured-medium parameters by training the neural network on synthetic seismograms.
The normal and tangential weaknesses of fractures ΔN and ΔT, the crack density, and the crack aspect ratio (crack opening) are successfully estimated, as are the anisotropy parameters ε, δ and γ. In the prediction of ΔN and ΔT, the relative error does not exceed 1.7% and 1.4%, respectively, and in the prediction of crack density e it ranges from 0.9% to 1.4%. In predicting the anisotropy parameters ε, δ and γ, the error does not exceed 1.6%, 1.7%, and 1.8%, respectively. However, in estimating the crack opening α, the result is an order of magnitude worse, with an error of 14.2%. For the orthorhombic model, the prediction results are slightly worse than for the HTI model, but still within acceptable accuracy. In predicting the fracture parameters for the first fracture network (ΔN1, ΔT1 and e1) the error does not exceed 2.3%, 4.2%, and 2.3%, respectively, and for the second fracture network (ΔN2, ΔT2 and e2) it does not exceed 4.3%, 5.7%, and 3.7%. This slight deterioration in the results (in comparison with HTI) is explained by the more complicated formulation of the orthorhombic task, in which various deviations in the inclination of the cracks were introduced (the angles β1 and β2). In general, we have successfully developed a neural network to solve the problem of fractured-reservoir characterization. It produces fairly accurate results that prove the effectiveness of Deep Learning in inversion for fracture parameters from seismic data.
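For readers who want the forward relations connecting these quantities, the weaknesses and Thomsen-style HTI parameters can be sketched from crack density using the commonly quoted weak-anisotropy linearizations for dry penny-shaped cracks. These are standard first-order textbook forms, not necessarily the exact formulae used in the paper:

```python
import numpy as np

def hti_from_crack_density(e, vp, vs):
    """Dry penny-shaped cracks (Hudson / linear-slip equivalence):
    fracture weaknesses and weak-anisotropy Thomsen-style HTI parameters.
    First-order linearizations only; treat as a sketch."""
    g = (vs / vp) ** 2                               # squared Vs/Vp ratio
    dn = 4.0 * e / (3.0 * g * (1.0 - g))             # normal weakness  dN
    dt = 16.0 * e / (3.0 * (3.0 - 2.0 * g))          # tangential weakness dT
    eps   = -2.0 * g * (1.0 - g) * dn                # epsilon(V)
    delta = -2.0 * g * ((1.0 - 2.0 * g) * dn + dt)   # delta(V)
    gamma = -dt / 2.0                                # gamma(V)
    return dn, dt, eps, delta, gamma

# Illustrative background velocities and crack density (not from the paper)
dn, dt, eps, delta, gamma = hti_from_crack_density(e=0.05, vp=4000.0, vs=2400.0)
print(dn, dt, eps, delta, gamma)
```

Note that in this linearization ε(V) collapses to −8e/3 regardless of the velocity ratio, which is why crack density and ε carry similar information in the inversion.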
Yang, Huihui (Shell International Exploration and Production, Inc.) | Lu, Ligang (Shell International Exploration and Production, Inc.) | Tsai, Kuochen (Shell International Exploration and Production, Inc.)
Summary Identification of vuggy intervals and understanding their connectivity are critical for predicting carbonate reservoir performance. Although core samples and conventional well logs have traditionally been used to classify vuggy facies, this process is labor intensive and often suffers from data inadequacies. Recently, convolutional neural network (CNN) algorithms have approached human-level performance on image classification and identification tasks. In this study, CNNs were trained to identify vuggy facies from a well in the Arbuckle Group in Kansas, USA. Borehole-resistivity images were preprocessed into half-foot intervals; this complete data set was culled by removing poor-quality images to generate a cleaned data set for comparison. Core descriptions along with conventional gamma ray, neutron/density porosity, photoelectric factor (PEF), and nuclear magnetic resonance (NMR) T2 data were used to label these data sets for supervised learning. Hyperparameters defining the CNN size (numbers of convolutional layers/filters and numbers of fully connected layers/neurons) and controlling overfitting (dropout rates, patience, and minimum delta) were optimized. The median losses and accuracies from five Monte Carlo realizations of each hyperparameter combination were the metrics defining CNN performance. After hyperparameter optimization, median accuracy for vuggy/nonvuggy facies classification was 0.847 for the cleaned data set (0.813 for the complete data set). This study demonstrated the effectiveness of using microresistivity image logs in a CNN to classify facies as either vuggy or nonvuggy, while highlighting the importance of data quality control. This effort lays the foundation for developing CNNs that segment images to estimate vuggy porosity.
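The overfitting-control hyperparameters named above, patience and minimum delta, drive a standard early-stopping rule. A generic plain-Python sketch of that rule (this is an illustration of the technique, not the authors' code) is:

```python
class EarlyStopping:
    """Minimal early-stopping tracker: stop training when validation loss
    has not improved by at least `min_delta` for `patience` epochs."""
    def __init__(self, patience=5, min_delta=1e-3):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.wait = float("inf"), 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:   # real improvement
            self.best, self.wait = val_loss, 0
            return False
        self.wait += 1                               # no improvement
        return self.wait >= self.patience

# Hypothetical validation-loss trace: improves, then plateaus
stopper = EarlyStopping(patience=3, min_delta=0.01)
trace = [1.0, 0.8, 0.7, 0.695, 0.693, 0.694, 0.696]
for epoch, loss in enumerate(trace):
    if stopper.step(loss):
        print(f"stopped at epoch {epoch}")   # stops once 3 flat epochs accrue
        break
```

Tightening `min_delta` or shortening `patience` stops training earlier, trading a little accuracy for less overfitting; the study searched over exactly this kind of trade-off.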
Early estimates of gas well performance were obtained by opening the well to the atmosphere and measuring the flow rate. Such "open flow" practices were wasteful of gas, sometimes dangerous to personnel and equipment, and possibly damaging to the reservoir. They also provided limited information for estimating productive capacity under varying flow conditions. The practice did, however, leave the industry with the concept of absolute open flow (AOF). AOF is a common indicator of well productivity and refers to the maximum rate at which a well could flow against a theoretical atmospheric backpressure at the reservoir. The productivity of a gas well is determined with deliverability testing.
Figure 1.1—Production system and associated pressure losses.

Mathematical models describing the flow of fluids through porous and permeable media can be developed by combining physical relationships for the conservation of mass with an equation of motion and an equation of state. This leads to the diffusivity equations, which are used in the petroleum industry to describe the flow of fluids through porous media. The diffusivity equation can be written for any geometry, but radial flow geometry is the one of most interest to the petroleum engineer dealing with single-well issues. The radial diffusivity equation for a slightly compressible liquid with a constant viscosity (an undersaturated oil or water) is

∂²p/∂r² + (1/r)(∂p/∂r) = (φμc_t/k)(∂p/∂t) ....................(1.1)

The solution for a real gas is often presented in two forms: the traditional pressure-squared form and the general pseudopressure form. The pressure-squared form is

∂²(p²)/∂r² + (1/r)(∂(p²)/∂r) = (φμc_t/k)(∂(p²)/∂t) ....................(1.2)

and the pseudopressure form is

∂²p_p/∂r² + (1/r)(∂p_p/∂r) = (φμc_t/k)(∂p_p/∂t), where p_p = 2∫ p/(μz) dp ....................(1.3)

The pseudopressure relationship is suitable for all pressure ranges, but the pressure-squared relationship has a limited range of applicability because of the compressible nature of the fluid.
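The real-gas pseudopressure, defined as p_p(p) = 2∫ p′/(μz) dp′ from a base pressure, is usually evaluated numerically over a table of gas properties. In the sketch below the μ(p) and z(p) trends are smooth placeholders for illustration; in practice they come from lab data or published correlations:

```python
import numpy as np

def pseudopressure(p_grid, mu, z):
    """Real-gas pseudopressure p_p(p) = 2 * integral of p'/(mu*z) dp'
    from p_grid[0], by trapezoidal integration over a pressure table."""
    integrand = p_grid / (mu * z)
    pp = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p_grid))))
    return 2.0 * pp

p  = np.linspace(14.7, 5000.0, 200)          # psia
mu = 0.012 + 2.0e-6 * p                      # cp, illustrative trend only
z  = 1.0 - 1.0e-4 * p + 2.0e-8 * p**2        # illustrative z-factor trend
pp = pseudopressure(p, mu, z)
print(pp[-1])                                # p_p at 5000 psia, psia^2/cp
```

Because the integrand p/(μz) is positive, p_p increases monotonically with pressure, which is what makes it usable over the full pressure range where the p² approximation breaks down.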
In the petroleum industry, accurately predicting the production potential of undrilled unconventional horizontal wells requires highly complex models and remains an active area of research. Deterministic approaches such as decline curve analysis (DCA) are used in most cases to estimate production potential because they are easy to implement. The main disadvantage of using DCA by itself is that forecasting production requires an existing well, which is expensive to drill. This paper shows the procedure for building a flexible, machine-learning-based decline-curve/spatial method that can easily be used to predict the estimated ultimate recovery (EUR) of newly proposed wells without requiring costly data or other time-consuming methods. A type of artificial neural network (ANN) called a feed-forward neural network (FFNN) was used as the machine learning method. To achieve this goal, production and well data were collected from public-domain sources. The power law exponential (PLE) DCA method was applied to a portion of the existing wells with sufficient production history. The data were then divided into training and test sets. The training set was fed into the ANN model, and the results were compared with those obtained from inherently spatial methods such as universal kriging, geographically weighted regression (GWR), and a generalized additive model (GAM). Finally, the EURs of the new wells were compared with the original training and test data to identify discrepancies in the predictions, and the model hyperparameters were adjusted where discrepancies appeared. Analyzing the results from the various combinations of methods indicates that an ANN without any spatial correlation is a less reliable method for estimating the production potential of new wells in the Marcellus shale gas reservoir than the inherently spatial methods.
This study shows that it is necessary to include spatial correlations between wells when predicting the EUR of new wells with ANN models. The predictive strength of the other spatial methods was within a reasonable range. Completion parameters are among the main inputs that could increase the accuracy of the model. However, most of the completion parameters that affect production are very difficult and expensive to obtain; where available, they can increase the accuracy of these models. This experimental study indicates that there are many possible variations of this method beyond those discussed in this paper. It also shows that the spatial correlation of wells is highly important when predicting new-well production in unconventional reservoirs. The method is flexible and can easily be modified to find better predictive methods for other areas of interest.
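A PLE decline and the EUR integral it implies can be sketched as follows. The parameterization q(t) = qi·exp(−D∞t − Di·tⁿ) is one common form of the PLE model, and the parameter values are illustrative only, not fitted to Marcellus data:

```python
import numpy as np

def ple_rate(t, qi, Di, n, Dinf=0.0):
    """Power-law exponential (PLE) decline: q(t) = qi*exp(-Dinf*t - Di*t**n)."""
    return qi * np.exp(-Dinf * t - Di * t**n)

t = np.linspace(1e-3, 30.0 * 365.25, 20000)    # days over a ~30-year horizon
q = ple_rate(t, qi=8000.0, Di=0.25, n=0.4)     # rate in Mscf/D

# EUR as the trapezoidal integral of rate over time, converted to Bscf
eur = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t)) / 1e6
print(f"EUR = {eur:.2f} Bscf")
```

In the workflow described above, the fitted EURs from curves like this become the training targets for the ANN and the spatial regression methods.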
ABSTRACT The interpretation of pressure data recorded during a well test has been used for many years to evaluate reservoir characteristics. The pressure derivative has been applied as a powerful diagnostic tool in the interpretation of pressure tests for a single well with wellbore storage and skin in a homogeneous reservoir. The objective of this study is to illustrate the applications of the pressure derivative plot a) as a powerful diagnostic tool for reservoir model identification; b) for performing type-curve analysis; and c) as a stand-alone specialized plot for evaluating basic reservoir parameters from single-well tests. The pressure derivative has made well-test interpretation significantly easier to perform. It also allows greater accuracy in determining reservoir flow parameters, often eliminating the need for complementary specialized analyses. Field cases from southern Iraq reservoirs, compared against theoretical and published examples, highlight the practical application of the approach in this area. This paper illustrates the advantages of the derivative approach in obtaining a better solution to a problem, in comparison with the more traditional dimensionless pressure-change type-curve and semi-log methods. 1. INTRODUCTION The analysis of pressure data recorded during a well test has traditionally been based on the determination of straight lines drawn on plots with specific scales (Al-Rbeawi, 2018). The type-curve analysis approach was introduced to the petroleum industry by Agarwal et al. (1970) as a valuable tool when used in conjunction with conventional semi-log plots. A type curve is a graphic representation of the theoretical response, during a test, of an interpretation model that represents the well and the reservoir being tested. For a constant-pressure test, the response is the change in production rate; for a constant-rate test, the response is the change in pressure at the bottom of the well.
Type curves are derived from solutions to the flow equations under specific initial and boundary conditions. For the sake of generality, type curves are usually presented in dimensionless terms, such as a dimensionless pressure vs. a dimensionless time. A given interpretation model may yield a single type curve or one or more families of type curves, depending on the complexity of the model. Type-curve analysis consists of finding a type curve that "matches" the actual response of the well and the reservoir during the test. The reservoir and well parameters, such as permeability and skin, can then be calculated from the dimensionless parameters defining that type curve. Type curves are advantageous because they may allow test interpretation even when wellbore storage distorts most or all of the test data; in that case, conventional methods fail.
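The pressure derivative behind this diagnostic is typically computed with Bourdet's weighted difference of slopes in log time. A minimal sketch (adjacent points, no smoothing window, and a synthetic radial-flow check rather than field data) is:

```python
import numpy as np

def bourdet_derivative(t, dp):
    """Pressure (log) derivative t*d(dp)/dt via the Bourdet three-point
    weighted scheme on adjacent points; a minimal sketch of the standard
    diagnostic, not production well-test code."""
    x = np.log(t)
    der = np.empty_like(dp)
    dx1 = x[1:-1] - x[:-2]                      # left log-time steps
    dx2 = x[2:] - x[1:-1]                       # right log-time steps
    s1 = (dp[1:-1] - dp[:-2]) / dx1             # left slopes
    s2 = (dp[2:] - dp[1:-1]) / dx2              # right slopes
    der[1:-1] = (s1 * dx2 + s2 * dx1) / (dx1 + dx2)
    der[0], der[-1] = der[1], der[-2]           # pad the ends
    return der

# Infinite-acting radial flow check: dp = m*log10(t) + b, so the derivative
# should plateau at m/ln(10) (illustrative numbers, not a field test)
t = np.logspace(-2, 2, 60)                      # hours
m, b = 70.0, 120.0
dp = m * np.log10(t) + b
print(bourdet_derivative(t, dp)[30])
```

The flat derivative during radial flow, and its characteristic shapes during wellbore storage and boundary effects, are what make the derivative plot a model-identification tool.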
ABSTRACT It has long been known that significantly different estimates of the Hoek-Brown mi constant are obtained depending on whether curve-fitting regressions include or ignore tensile strength data. These differences in regression estimates can become even more pronounced if inappropriate corrections are made to adjust indirect (Brazilian) tensile test (BTS) results to equivalence with direct tensile strength (DTS) values. Equations for properly correcting BTS results to calculate theoretically equivalent DTS and Hoek-Brown (H-B) pseudo tensile strength values are presented, along with a suggested iterative approach for using the derived H-B strength estimates to better refine a representative failure envelope. The concept of this envelope being valid only in the compressional range and transitioning to a hybrid sigmoidal tensile envelope at lower stress ranges is discussed. Suggestions are then provided on how to prescriptively address the problems created by incorrectly using tensile strength test data to define the compressional envelope, and methods and guidelines for resolving these problems are presented. 1. INTRODUCTION There has been much debate in the literature and throughout the rock mechanics community regarding whether direct or indirect tensile strength data should be incorporated in the regression methodology for correct derivation of the Hoek-Brown (H-B) constant mi for a specific data set. In the latest update of the H-B criterion, Hoek and Brown (2018) recommend including only compressional data as the basis for establishing the regression constant, while others (e.g., Richards and Read, 2011 et seq.) suggest that tensile strength data should be incorporated as part of the curve-fitting procedure in order to properly condition the fit in the low-stress range.
The principal dilemma is that the tensile intercept of the Hoek-Brown envelope does not generally match actual uniaxial tensile strengths for most rock types, especially those with low mi values, as is clear from Figure 1. Hence Hoek and Martin (2014) and Hoek and Brown (2018) recommend adopting a tensile cutoff rather than including tensile data in the overall regression fit.
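The envelope and its tensile intercept can be written down directly from the a = 0.5 intact-rock form of the criterion; the σci and mi values below are illustrative, not from the paper:

```python
import numpy as np

def hoek_brown_sigma1(sigma3, sigci, mi, s=1.0):
    """Intact-rock Hoek-Brown envelope (a = 0.5):
    sigma1 = sigma3 + sigci*sqrt(mi*sigma3/sigci + s)."""
    return sigma3 + sigci * np.sqrt(mi * sigma3 / sigci + s)

def hb_tensile_intercept(sigci, mi, s=1.0):
    """H-B tensile intercept, from setting sigma1 = 0 on the envelope:
    sigma_t = (sigci/2)*(mi - sqrt(mi**2 + 4*s)).
    For large mi this approaches -s*sigci/mi; for low-mi rocks the
    intercept typically disagrees with measured DTS, which is the
    mismatch discussed above."""
    return 0.5 * sigci * (mi - np.sqrt(mi**2 + 4.0 * s))

# Illustrative values: sigci = 100 MPa, mi = 10
sigci, mi = 100.0, 10.0
st = hb_tensile_intercept(sigci, mi)
print(st)        # close to -sigci/mi = -10 MPa for this mi
```

Because the quadratic-root intercept sits very near −σci/mi while measured tensile strengths often do not, using tensile data to condition the compressional fit pulls mi away from the value the compressional data alone would give.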