Summary. We propose that reservoir permeability may be statistically distributed in a variety of ways. Two hypothetical cases of reservoir layering are statistically analyzed. This analysis suggests that a restricted family of functions, all related to the normal distribution, can be used to represent permeability distributions. The log-normal distribution is one member of the family. Several sets of field data are analyzed. The analyses show that (1) permeability data are not necessarily log-normally distributed, (2) all the permeability distributions considered are closely approximated by members of the proposed family of functions, and (3) improved porosity/permeability relationships result when the permeability distribution is known.
For many years, reservoir permeability has been recognized as a random-valued property of the formation. Law was among the first to analyze reservoir permeability statistically. He studied three horizons from a sandstone reservoir and concluded that permeability has a log-normal probability density function (PDF). Law also showed that, by knowing the mean and variance of the PDF for core-plug data from a new well, he could successfully predict the well's productivity index. Since Law's study, many investigators have analyzed permeability distributions. Bennion analyzed 60,000 sandstone and 24,000 limestone permeabilities from Canadian reservoirs. He concluded that the logarithm of permeability gave a PDF that is skewed to the right and leptokurtic in comparison to a normal PDF. Lambert studied the data from 689 wells in 22 fields and compared the permeability distribution of each well with three possible PDF's: normal, log-normal, and exponential. She concluded that 285 wells have approximately normal PDF's, 297 have distributions best described as log-normal, and 102 wells have approximately exponential PDF's. Freeze, by considering such empirical studies as Bennion's and Law's and indirect evidence, concludes that permeability is log-normally distributed. These analyses have lacked a statistical framework within which the studies could be conducted. No definitive assurance can be given that another study will not yield substantially different conclusions. Furthermore, no appropriate alternatives to the log-normal distribution have been proposed to accommodate inconclusive results such as Bennion's. This paper presents analyses of six data sets for permeability distributions in light of a theoretical framework. The theory indicates that a continuous range of PDF's is possible and that the log-normal and normal distributions are two members in the range.
How to select the most appropriate PDF for the data set being analyzed is discussed. Correlations involving permeability improve when the PDF is chosen appropriately. Also, the behavior of estimators of the average reservoir permeability varies as the PDF is changed.

Theoretical Analysis of Permeability Distributions
The central limit theorem (CLT) is first reviewed because of its essential role in the analysis. Then, two cases of reservoir heterogeneity are studied to obtain the permeability PDF for each. From these cases, we develop a general proposition for the distribution of permeability.
CLT. Consider a set of n + 1 random variables, x_i, i = 1, 2, ..., n, and s_n, where s_n = x_1 + x_2 + ... + x_n.
The CLT states that, under certain conditions, s_n becomes normally distributed as n increases without bound. The conditions require that moments up through the third exist and that the x_i be mutually independent. Not all versions of the CLT require that the variables x_i be independent, but it is unlikely that the conditions required by those versions apply to our situations. In the limit, however, s_n will be normally distributed regardless of the distribution of the x_i, provided that the moment and independence conditions are satisfied. It has been observed that the sum s_n may attain near normality for quite small n. In practice, the requirement of independence is difficult to verify, but we assume that it holds in the following analysis.
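The convergence the CLT describes can be checked numerically. The sketch below is an illustration we add here, not part of the original analysis: it sums draws from an exponential distribution, which is strongly skewed on its own, and shows that the skewness of the sums is far smaller than the skewness of 2 that a single exponential variable has.

```python
import random
import statistics

def clt_sums(n_terms, n_samples, draw=random.expovariate):
    # Sum n_terms i.i.d. exponential draws (a heavily skewed PDF)
    # and repeat n_samples times; by the CLT the sums tend to normality.
    return [sum(draw(1.0) for _ in range(n_terms)) for _ in range(n_samples)]

random.seed(0)
sums = clt_sums(n_terms=30, n_samples=5000)
mean = statistics.mean(sums)
std = statistics.stdev(sums)
# Sample skewness of the sums; a single exponential draw has skewness 2,
# while the theoretical skewness of the sum is 2/sqrt(30), about 0.37.
skew = statistics.mean([((s - mean) / std) ** 3 for s in sums])
```

Already at n = 30 the skewness of the sum is a small fraction of that of a single term, consistent with the observation that near normality can be attained for quite small n.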
Analysis of Layered Models. The two layered models to be investigated are shown in Figs. 1 and 2. Each block of permeable material has n layers of equal size, and the permeabilities of all the layers, k_i, i = 1, 2, ..., n, are assumed independent and identically distributed. The total permeability of each model to flow as indicated in the figures is denoted k_t. For the case of layering parallel to the flow direction (Fig. 1),
Layering in series with the flow path (Fig. 2) yields
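Although Eqs. 1 through 3 are not reproduced in this excerpt, the two layered configurations correspond, for equal-thickness layers, to the standard arithmetic and harmonic averages of the layer permeabilities. A minimal sketch under that assumption:

```python
def k_parallel(layers):
    # Flow parallel to the layering: total permeability is the
    # arithmetic average of the layer permeabilities.
    return sum(layers) / len(layers)

def k_series(layers):
    # Flow in series across the layers: total permeability is the
    # harmonic average of the layer permeabilities.
    return len(layers) / sum(1.0 / k for k in layers)

# Hypothetical layer permeabilities in millidarcies:
layers = [10.0, 100.0, 1000.0]
kp = k_parallel(layers)  # 370.0 md
ks = k_series(layers)    # about 27.0 md; series flow is controlled by the tight layer
```

The parallel case is a sum of the k_i (scaled by 1/n), while the series case is a sum of the reciprocals 1/k_i, which is what motivates applying the CLT to k_t in one configuration and to (k_t)^-1 in the other.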
Comparison of Eq. 1 with Eqs. 2 and 3 suggests that either k_t is approximately normally distributed (AND) or that (k_t)^-1 is AND, depending on the layering configuration. We say "approximately" because neither k_t nor (k_t)^-1 can be precisely normally distributed: both quantities are nonnegative by definition, whereas a strictly normally distributed variable always has a nonzero probability of taking a negative value. If the ratio of the standard deviation to the mean of the distribution is small (i.e., less than 0.43), however, the approximation is quite good.
Summary. Conventional multiple regression for permeability estimation from well logs requires a functional relationship to be presumed. Because of the inexact nature of the relationship between petrophysical variables, it is not always possible to identify the underlying functional form between dependent and independent variables in advance. When large variations in petrological properties are exhibited, parametric regression often fails or leads to unstable and erroneous results, especially for multivariate cases. In this paper, we describe a nonparametric approach for estimating optimal transformations of petrophysical data to obtain the maximum correlation between observed variables. The approach does not require a priori assumptions of a functional form, and the optimal transformations are derived solely from the data set. Unlike neural networks, such transformations can facilitate physically based function identification. An iterative procedure involving the alternating conditional expectation (ACE) forms the basis of our approach. The power of ACE is illustrated using synthetic as well as field examples. The results clearly demonstrate improved permeability estimation by ACE compared to conventional parametric-regression methods.

Introduction

A critical aspect of reservoir description involves estimating permeability in uncored wells based on well logs and other known petrophysical attributes. A common approach is to develop a permeability-porosity relationship by regressing on data from cored wells and then to predict permeability in uncored wells from well logs. Multiple regression is used when large variations in petrological properties exist (e.g., a wide range in grain sizes, a high degree of cementation, diagenetic alteration, etc.) and a simple permeability-porosity relationship no longer holds. There are several limitations, however, to such an approach.
Many of these arise from the inexact nature of the relationship between petrophysical variables and a priori assumptions regarding functional forms used to model the data - all leading to biased estimates. When prediction of permeability extremes is a major concern, the high and low values are enhanced through a weighting scheme in the regression. Besides being subjective in nature, such weighting can cause the prediction to become unstable, which leads to erroneous results. Most importantly, conventional regression assumes independent variables to be free of error, which is highly optimistic for geologic and petrophysical data. Jensen and Lake introduced power transformations for optimization of regression-based permeability-porosity predictions. The underlying theory is that if the joint probability distribution function (JPDF) of two variables is binormal, the relationship will be linear. Several methods exist to estimate the exponents for power transformation. One method, described by Emerson and Stoto and adopted by Jensen and Lake, is based on symmetrizing the probability distribution function (PDF). Another method is a trial-and-error approach based on a normal probability plot of the data. By power transforming permeability and porosity separately, the authors are able to improve permeability-porosity correlations. Using a trial-and-error method for selecting exponents for power transformation is time consuming, however, and symmetrizing the PDF does not necessarily guarantee a binormal distribution of transformed variables. In addition, there are no indications as to whether power transformations will work for multivariate cases. Nonparametric regression techniques based on statistical and optimization theory have been developed to offer much more flexible data analysis tools for exploring the underlying relationships between dependent and independent variables. 
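The effect Jensen and Lake exploit can be illustrated with synthetic data. In the sketch below (our illustration, not the authors' data), log-permeability is linear in porosity, so the log transform, the limiting member of the power-transformation family, raises the linear correlation markedly:

```python
import math
import random

def pearson(x, y):
    # Sample linear (Pearson) correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

random.seed(1)
phi = [random.uniform(0.05, 0.30) for _ in range(200)]
# Hypothetical core data: log k linear in porosity plus noise.
k = [math.exp(20.0 * p - 2.0 + random.gauss(0.0, 0.3)) for p in phi]

r_raw = pearson(phi, k)                          # correlation with raw k
r_log = pearson(phi, [math.log(v) for v in k])   # correlation after log transform
```

Because the raw permeabilities span orders of magnitude, a few extreme values dominate the untransformed correlation; transforming to a roughly binormal joint distribution recovers the underlying linear relationship.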
In this paper, we describe the application of a very general and computationally efficient nonparametric regression algorithm called ACE to permeability estimation from well logs and other petrophysical data. The algorithm provides a method for estimating transformations in multiple regression without prior assumptions of a functional relationship. The ACE transformations are shown to be optimal and yield maximum correlation between the variables in the transformed space. We also propose a new approach that allows for data correction/equilibration for the dependent as well as the independent variables. The power of the ACE method lies in its ease of use, particularly for multivariate regression, and its ability to identify and correct for outliers without subjective assumptions. The organization of this paper is as follows. First, we discuss the underlying theory of the ACE method and its implementation. Second, we apply ACE to synthetic examples to illustrate function identification during multiple regression and also to show data-correction features. Finally, we present field examples involving petrophysical data from two formations: the Admire sand in the El Dorado Field in Kansas (a shallow delta sand) and the Schneider Buda Field in Texas (a Cretaceous reefal limestone). In the Appendix, we describe a data-smoothing technique called Supersmoother that is used to replace conditional-expectation calculations for finite data sets.

ACE Technique

The ACE algorithm, originally proposed by Breiman and Friedman, provides a method for estimating optimal transformations for multiple regression that result in a maximum correlation between a dependent (response) random variable and multiple independent (predictor) random variables. A brief description of the theory of ACE and its implementation as applied to continuous random variables is given in the following sections. See Breiman and Friedman for further details.
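The alternating structure of ACE can be sketched compactly for the bivariate case. The code below is our simplified illustration, not Breiman and Friedman's implementation: it replaces the Supersmoother with a crude bin-average estimate of the conditional expectation and alternates phi(x) = E[theta(y)|x] with theta(y) = E[phi(x)|y], renormalizing theta on each pass.

```python
import math
import random

def bin_smooth(x, z, nbins=10):
    # Crude conditional-expectation estimate: average z within
    # equal-width bins of x (stands in for the Supersmoother).
    lo, hi = min(x), max(x)
    width = (hi - lo) / nbins or 1.0
    sums, counts = [0.0] * nbins, [0] * nbins
    idx = [min(int((v - lo) / width), nbins - 1) for v in x]
    for i, zv in zip(idx, z):
        sums[i] += zv
        counts[i] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return [means[i] for i in idx]

def normalize(t):
    # Zero mean, unit variance (keeps theta from collapsing to zero).
    m = sum(t) / len(t)
    s = math.sqrt(sum((v - m) ** 2 for v in t) / len(t)) or 1.0
    return [(v - m) / s for v in t]

def ace(x, y, iters=20):
    theta = normalize(list(y))
    phi = [0.0] * len(y)
    for _ in range(iters):
        phi = bin_smooth(x, theta)             # phi(x) = E[theta(y) | x]
        theta = normalize(bin_smooth(y, phi))  # theta(y) = E[phi(x) | y]
    return theta, phi

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(va * vb)

random.seed(3)
x = [random.uniform(0.0, 1.0) for _ in range(400)]
y = [math.exp(3.0 * v) + random.gauss(0.0, 0.2) for v in x]  # nonlinear relation
theta, phi = ace(x, y)
r_transformed = pearson(theta, phi)  # transforms linearize the relation
```

Even with this rough smoother, the transformed variables theta(y) and phi(x) are almost linearly related for a strongly nonlinear x-y relationship, which is the property the full ACE algorithm optimizes.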
Mochizuki, Masahito (Division of Materials and Manufacturing Science, Graduate School of Engineering, Osaka University Suita, Osaka, Japan) | Mikami, Yoshiki (Division of Materials and Manufacturing Science, Graduate School of Engineering, Osaka University Suita, Osaka, Japan) | Iyota, Muneyoshi (Division of Materials and Manufacturing Science, Graduate School of Engineering, Osaka University Suita, Osaka, Japan) | Inoue, Hiroshige (Steel Research Laboratories, Nippon Steel Corporation Futtsu, Chiba, Japan) | Kasuya, Tadashi (Steel Research Laboratories, Nippon Steel Corporation Futtsu, Chiba, Japan)
The effect of the transformation expansion of high-strength weld metals on the residual tensile stress in y-groove cold cracking test welds was discussed. A series of numerical simulations considering the effects of phase transformation, such as volumetric change and differences in mechanical properties, was performed to evaluate the residual stresses. First, the effect of the transformation start temperature on the residual stress in the weld metal of the y-groove cracking test specimen was investigated. The lower the martensitic transformation start temperature becomes, the lower the restraint stress σw becomes. However, the local stress σlocal at the weld root, where cold cracks occur, was much higher due to stress concentration. In addition, the local stress does not always decrease with decreasing martensitic transformation start temperature. Second, the restraint condition of the numerical simulation model of the y-groove cracking test was varied to decrease the restraint intensity. As a result, the local stress σlocal increased with decreasing martensitic transformation temperature under the low-restraint condition. This means that a low-transformation-temperature welding wire is not necessarily effective for the prevention of cold cracking when the restraint intensity varies.
Recently, in order to make practical use of 980-MPa-class high-strength steels, a method for the prevention of cold cracking in welds using welding wires with low transformation temperatures has been proposed (Nakamura, 2003). The mechanism for the prevention of cold cracking with the welding wire is summarized in the following two major points: (1) reduction of tensile stress in weld metals by transformation expansion, and (2) entrapment of diffusible hydrogen in retained austenite. In this study, the former point, the effect of the transformation expansion in weld metals on the reduction of tensile stress, is discussed. Yamamoto et al. (2008) showed that cold cracking in HT780 steels is reduced when a low-temperature-transformation welding wire is applied.
Summary. Pulse testing is an effective way of understanding fluid reservoirs. Several noise components bias high-sensitivity pressure measurements, so that a considerable part of the tests cannot be interpreted conventionally, thus impeding pulse testing's wide-range application. To improve noise filtering, we developed a computerized method of evaluation and noise suppression, which can enhance interpretability effectively. Only 17% of 617 tests performed in Hungary can be interpreted in the usual way, whereas 78% can be interpreted by the method reviewed in this paper. After an enumeration of the pressure changes (noises) disturbing pulse tests, we illustrate our cycle-sum-up method of noise suppression and the related interpretation procedure. We also report results of the reproducibility investigations, and we give a brief outline of the basic principles of interpreting reservoirs with multiple faults.

Introduction

Pulse tests (interference tests) are well tests where the pressure response to an active (producing or injecting) well is recorded in one or more distant observation wells. From the time lag of the response and from the trend of the pressure change, one can calculate transmissibility and storage parameters characteristic of the reservoir space between the wells. When performing a pulse test, we have found, in contrast to a single-well test, the following benefits: large-scale average parameters valid for a reservoir volume commensurable to the distance from the active to the observation well are obtained, and the storage of the reservoir domain in question can be evaluated. "Single flow period" pulse (interference) tests have a drawback: it is generally troublesome (often impossible) to eliminate those evaluation errors that are imposed by irrelevant changes of the reservoir pressure. The pulse-test method was developed within the sphere of interference investigations to solve this problem by noise suppression.
The essence of this method is that alternating pulses of flow and shut-in are generated in the active well. Under the influence of pulsing production (or injection), a pulsed pressure change is recorded in the observation well. Fig. 1 shows a field example of the pulse-test recordings of pressure and flow rate. The periodic changes are easy to separate from the monotonous pressure changes, and the Brigham read-offs are independent of the monotonous components. The interpretation of the test by the conventional Brigham method consists in picking the time lag, tL, and amplitude, Δp, of the pressure change from the record (Fig. 1), then calculating the transmissibility, T, and the storage, S, of the influenced volume of the reservoir from these read-offs. When evaluating the pulse tests by this conventional method, only the monotonous component can be separated from the pressure noise of the record. With the majority of the tests, however, a nonmonotonous noise component appears, the value of which is commensurable to that of the test-related pressure change. An example of such a result is given in Fig. 2, where the time lag tL and amplitude Δp needed for the Brigham evaluation cannot be fixed. (The pulsation-related pressure and flow data of Fig. 2 are listed in Table 1.) To filter out noise, we apply a cycle-sum-up method of suppression. This method separates periodic pressure changes from nonperiodic (monotonous and random) pressure noise components, as well as from periodic noises having frequencies other than that of the test operation (such as the tide of the earth's crust and the effect of the daily fluctuations of temperature).

Character and Origin of Pressure Changes Disturbing the Test Results

Changes on the pressure records that do not originate in the flow-rate changes of the active well are treated as pressure noise. One kind of pressure noise is a result of actual pressure change in the reservoir and in the wellbore.
Another kind is noise of the pressure gauge (e.g., the temperature dependence of electric circuit parts) not induced by physical pressure changes, although we record it as a pressure change. Noise components, by their character, can be classified as follows (for some types, we indicate the degree of pressure change from our own test experience). First, a monotonous component of noise (e.g., the interfering effect of other producing and/or injecting wells of the same field). Second, a noise component of expressible periodicity, the tidal effect of the earth's crust; depending on the porosity and pore-size distribution of the reservoir tested, it may cause pressure changes of 0.1 to 5 kPa (0.015 to 0.7 psi) in the reservoir. Third, the effect of daily fluctuations of the surface temperature, which is of quasi-periodic character. Because of the varying fluid temperature in the head equipment of the observation wells, a flow arises between the well and the layer. Depending on the permeability of the layer and of the well's environment and on variations of surface temperature, pressure changes having amplitudes of 0 to 30 kPa (0 to 4.3 psi) are detected, with minima and maxima each returning after about 24 hours. This is imperfect periodicity; its actual value varies with the effective temperature. Also, the temperature dependence of some instruments located on the surface may result in disturbing noise components. Finally, fully random noise components of dispersion character, such as fluctuations of the atmospheric pressure, instrument/gauge noises, and other effects of unknown origin. In wells open toward the surface, the barometric changes may cause anomalies as high as ±4 kPa (0.6 psi). Though one may try to separate the effects of crustal tide, temperature, and interference of other producing/injecting wells, we handle these together with the random noises.
Noise-Filtering Transformations

When selecting the noise-filtering method to be applied, we started from the fact that a pulse test is a periodically repeating process. Hence, if we regard the individual cycles as independent measurements and add up their recorded values, the useful signal will show up against the random (and therefore partly counterbalancing) noises. This procedure is also widely applied in other measurements with periodic exertion of some influence. For example, in the seismic method with weight droppings, a single dropping of the weight gives geophone signals of no use, but the sum of, for example, 50 droppings may already be evaluated. The basis of our cycle-sum-up method is about the same, but mathematically we adapted it for time-variant physical processes. In Fig. 3, we illustrate the individual pulses and their average function values g1 taken from the recorded pressure data series of Fig. 2, which is unfit for evaluation in the traditional way. The x-axis indicates the time within the individual pulsed cycles (0 ≤ t ≤ Δt). The cycle-sum-up noise suppression can be realized by means of the following mathematical transformation (Eq. 1), where N is the total number of pulsed cycles.
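The point-by-point cycle averaging described above can be illustrated on synthetic data. Eq. 1 itself is not reproduced in this excerpt, and the paper adapts the method for time-variant processes; the stacking below is only our simplified sketch of the basic idea:

```python
import math
import random

def cycle_sum_up(record, samples_per_cycle):
    # Average the record point-by-point across its pulse cycles:
    # g1[t] is the mean of record[j*samples_per_cycle + t] over all cycles j,
    # so random noise partly cancels while the periodic useful signal survives.
    n_cycles = len(record) // samples_per_cycle
    return [
        sum(record[j * samples_per_cycle + t] for j in range(n_cycles)) / n_cycles
        for t in range(samples_per_cycle)
    ]

random.seed(2)
spc, n_cycles = 40, 25
pulse = [math.sin(2.0 * math.pi * t / spc) for t in range(spc)]  # hypothetical pulse shape
# Synthetic record: the periodic pulse buried in random noise of comparable size.
record = [pulse[t % spc] + random.gauss(0.0, 1.0) for t in range(spc * n_cycles)]
stacked = cycle_sum_up(record, spc)
# Residual noise shrinks by roughly 1/sqrt(n_cycles) after stacking.
err = max(abs(s - g) for s, g in zip(pulse, stacked))
```

This is the same principle as summing repeated weight droppings in seismic acquisition: each individual cycle is unusable, but the average over N cycles suppresses the random components by about a factor of sqrt(N).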
Abstract

We propose a two-stage approach to integrating seismic data into reservoir characterization. First, we use a nonparametric approach to calibrate the seismic and well data through an optimal transformation to obtain the maximal correlation between the two data sets. These optimal transformations are totally data-driven and do not assume any a priori functional relationship. Next, cokriging or stochastic cosimulation is carried out in the transformed space to generate conditional realizations of reservoir properties. The proposed approach allows for nonlinearity between reservoir properties and seismic attributes and exploits the secondary data to their fullest potential. Furthermore, cokriging or cosimulation is considerably simplified when carried out in conjunction with the optimal transformations because of a significant reduction in the variance-function calculations, particularly when multiple seismic attributes are involved. The proposed approach has been applied to synthetic as well as field examples. The synthetic examples involve reproducing a pre-generated primary data set using sparse primary and multiple dense secondary data sets. A comparison with traditional kriging and cokriging is also presented to illustrate the superiority of our proposed approach. The field example uses 3D seismic and well-log data from a 2-sq-mile area of the Stratton gas field in South Texas, a fluvial reservoir system. Using multiple seismic attributes in conjunction with well data, we estimate the pore-footage distribution for a selected zone in the middle Frio formation.

Introduction

It is well recognized that integration of seismic data into reservoir characterization can play a significant role in reducing uncertainties in interwell reservoir properties. However, the use of seismic data in reservoir characterization still remains rather limited, primarily because of the inexact nature of the relationship between seismic and reservoir properties.
Many seismic characteristics exhibit complicated effects of reservoir parameters such as lithology, petrophysics, and fluid content. Hence, the link between seismic and reservoir properties is often non-unique, multivariate, and nonlinear. Currently, there are two common approaches for integrating seismic data during reservoir characterization. The first approach involves inversion of seismic data to obtain the seismic velocity distribution and then generating reservoir properties either using empirical models or through data calibration with existing wells. This approach is rather straightforward and easy to implement. However, when applied to reservoirs complicated by large variations in lithology, fluid saturation, and other petrological factors, the inverted seismic velocity alone may not be sufficient to characterize reservoir properties with confidence. The second approach is more statistical in nature. It involves extraction of various seismic attributes from the formation under consideration and then estimation of reservoir properties using multivariate statistical correlation or pattern-recognition algorithms. This approach, although not as direct as the first, can incorporate as many seismic attributes as might be available and thus is able to deal with more complicated reservoirs. However, traditional stochastic cosimulation techniques are not suitable for cases where multiple seismic attributes are involved.