Nordtvedt, J.E. (RF - Rogaland Research) | Ebeltoft, E. (RF - Rogaland Research) | Iversen, J.E. (RF - Rogaland Research) | Sylte, A. (RF - Rogaland Research) | Urkedal, H. (RF - Rogaland Research) | Vatne, K.O. (RF - Rogaland Research) | Watson, A.T. (Texas A&M University)
We describe a novel methodology for the determination of three-phase relative permeability functions at reservoir conditions. Two- and three-phase displacement experiments are conducted on a low-permeability chalk sample, and estimates of the three-phase relative permeability and capillary pressure functions are obtained. Three-phase relative permeabilities are also calculated using the Stone correlation, and they are evaluated by simulating the experimental data.
Determination of relative permeability functions from displacement experiments has received considerable attention over the past 50 years. The vast majority of the work has been directed towards determination of two-phase properties from laboratory experiments. Most applications involving three-phase relative permeabilities have utilized correlations in which two-phase relative permeability data are extrapolated into three-phase regions. Although this approach would be convenient if accurate, these correlations have largely remained untested due to the lack of sufficient measurements of three-phase relative permeability functions.
Analysis of three-phase experimental data has often been based on several generally unsupported simplifications (e.g., the neglect of capillary pressure, the assumption of incompressible fluids, uniform saturation profiles, and each relative permeability being a function of its own saturation only). In an effort to satisfy such simplifications, experiments have frequently been conducted under flowing conditions that are unrepresentative of those encountered within the reservoir. Consequently, the estimated three-phase properties may not be suitable for describing reservoir flow.
We report here the application of a methodology to overcome these problems. We have constructed an experimental apparatus whereby two- and three-phase displacement experiments may be performed at reservoir conditions. The experimental process is modeled by a general purpose three-phase simulator which includes all the pertinent physical effects. We then choose the appropriate relative permeability and capillary pressure functions, through solution of a series of optimization problems, so that the quantities calculated with the simulator are consistent with the measured values. This methodology is demonstrated with a low-permeability chalk sample.
We also calculate three-phase relative permeabilities from the two-phase data using the Stone model, and evaluate the effectiveness of that model by simulating the data collected during the experiment. The estimates of the three-phase relative permeabilities obtained with the Stone model do not accurately reconcile the experimental data.
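The parameter-estimation step described above amounts to a least-squares match between simulated and measured quantities. The sketch below illustrates the idea on a deliberately simple problem; the power-law relative permeability form, the golden-section optimizer, and the synthetic data are assumptions for illustration, not the parameterization or optimizer used in the study.

```python
# Sketch of the history-matching step: choose relative-permeability
# parameters so that model predictions match measurements (least squares).
# The power-law form kr(S) = S**n and the data below are hypothetical.

def kr_model(s, n):
    """Power-law relative permeability (illustrative parameterization)."""
    return s ** n

def objective(n, data):
    """Sum of squared residuals between model and 'measured' kr."""
    return sum((kr_model(s, n) - kr_obs) ** 2 for s, kr_obs in data)

def fit_exponent(data, lo=1.0, hi=6.0, iters=60):
    """Golden-section search for the exponent minimizing the misfit."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if objective(c, data) < objective(d, data):
            b = d
        else:
            a = c
    return (a + b) / 2

# Synthetic noise-free measurements generated with exponent n = 3.
data = [(s / 10, (s / 10) ** 3) for s in range(1, 10)]
n_hat = fit_exponent(data)
print(round(n_hat, 3))  # recovers the true exponent, ~3.0
```

In practice each evaluation of the objective requires a full run of the three-phase simulator, and many parameters (relative permeability and capillary pressure functions for all phases) are estimated simultaneously.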
Three-Phase Flow Apparatus
The schematic of the flow apparatus is shown in Fig. 1. It consists of a pumping system, a three-phase separator, a core holder, and an X-ray scanner system for in-situ saturation measurements. Each of these components is described below. For further details on the experimental setup, see Ebeltoft et al.
Pumping system. The pumping system consists of eight computer-controlled cylinders that pump reservoir fluids into the core sample at reservoir conditions. The cylinders are paired to act as pumps.
The development of a strategy for the detailed three-dimensional description of permeability was a key ingredient of the recent reservoir characterization study of the Ekofisk Field. Since the ultimate objective of this characterization effort was the construction of a new full-field 3-D reservoir flow model, permeability and its heterogeneity received special focus. Permeability has a tremendous influence on history matching of reservoir fluid-flow models and, in turn, on reservoir management decisions. This is particularly true in a mature, waterflooded field such as Ekofisk.
The Ekofisk Field is a high-porosity, low-matrix-permeability naturally fractured chalk. Fluid flow characteristics of the reservoir are largely governed by the distribution, orientation, and interconnectivity of the natural fracture system. To honor this mechanism, an algorithm was developed based on the log-linear relationship between fracture spacing (intensity) data from core and well test effective permeability. To capture the intrinsic heterogeneity and complex nature of the Ekofisk Field, the basic relationship between fracture intensity and permeability was modified to incorporate variations associated with: 1) chalk facies; 2) fracture type; 3) porosity; 4) structural location; 5) structural curvature; and 6) silica content.
To calibrate the algorithm, permeability determined from distributing total well test flow capacity (kh) based on production log contribution was used as a tuning parameter. As a final step, geostatistical techniques were used to ensure that permeabilities derived from the algorithm matched those obtained from well test analysis.
The algorithm developed distributes permeability three-dimensionally within the geologic model while capturing the heterogeneity of the naturally fractured chalk network.
Ekofisk is a very prolific field located in the Norwegian Sector of the North Sea. The field was discovered in 1969, with first production in 1971. Production during 1995 averaged 214,000 BOPD and 655,000 MSCFD from 70 wells. Water injection commenced in 1987 and presently averages 790,000 BWPD into 32 active injection wells. The high productivity sustained at Ekofisk can be attributed to the natural fracture system intrinsic to the chalk. In certain highly fractured areas of the field, effective permeability can be 60 times greater than matrix permeability.
Recently, water breakthrough has been observed in areas of the field not consistent with flow model predictions. Permeability heterogeneity is the obvious and likely explanation; thinner zones of higher permeability are allowing water to travel faster between water injectors and producers than in a strictly homogeneous system. In order for our fluid flow model to predict future performance and optimum well placement - the two most important objectives of building and history matching a model - we must be able to determine in which directions water or gas will preferentially flow from the injector site. Permeability is the reservoir property that has the most significant and direct impact on the preferred flow path of the injected fluids.
This work investigates the possibility of characterizing the permeability distribution in areally heterogeneous reservoirs through the interpretation of pressure response to cyclic flow rate variations. A mathematical model is formulated for the pressure behavior due to sinusoidal flow rates in reservoirs with continuously and smoothly varying radial permeability distributions. The solution is obtained by the application of a regular perturbation analysis, and a correlation between the radius of cyclic influence and the frequency of a sinusoidal flow rate is developed. The analytical perturbation solution also allows a qualitative assessment of the effects of areal reservoir heterogeneities on the pressure response, either at the active well or at an observation well. Because the perturbation method results in only an approximate solution, a quantitative method is proposed to describe the permeability distribution based on the fully correct multicomposite radial reservoir model. The permeability in each zone of the equivalent reservoir is estimated by employing a nonlinear regression procedure. The correlation mentioned above may be used to design pulse test flow rate schedules such that an optimum result can be achieved in terms of parameter estimation. Several factors affecting the quality of, and the uncertainty in, the estimated permeability distribution are investigated, along with examples including a variety of geometric and permeability distribution configurations. These factors include the type of test (constant rate drawdown or pulse test), flow rate, total testing time, reservoir zonation and configuration, and permeability distribution.
In particular, the frequency of flow rate changes and the point in the system at which to measure and interpret the pressure data were found to be important.
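The link between frequency and depth of investigation can be illustrated with the standard pressure-diffusion scaling: a sinusoidal rate of angular frequency w penetrates to roughly sqrt(2*eta/w), where eta is the hydraulic diffusivity. This is a textbook order-of-magnitude estimate, not the specific correlation developed in the paper; all parameter values below are hypothetical SI numbers.

```python
import math

# Illustrative scaling (not the paper's correlation): a sinusoidal
# flow rate of angular frequency w produces a pressure oscillation that
# penetrates to roughly r ~ sqrt(2*eta/w), where eta = k/(phi*mu*ct)
# is the hydraulic diffusivity. Doubling the period doubles r**2.

def diffusivity(k, phi, mu, ct):
    """Hydraulic diffusivity eta = k/(phi*mu*ct), SI units (m^2/s)."""
    return k / (phi * mu * ct)

def cyclic_radius(eta, omega):
    """Characteristic penetration radius of a sinusoidal pressure wave."""
    return math.sqrt(2.0 * eta / omega)

eta = diffusivity(k=1e-13, phi=0.2, mu=1e-3, ct=1e-9)  # 0.5 m^2/s
r_slow = cyclic_radius(eta, omega=2 * math.pi / 3600)  # 1-hour period
r_fast = cyclic_radius(eta, omega=2 * math.pi / 900)   # 15-min period
print(round(r_slow, 1), round(r_fast, 1))  # slower cycles probe deeper
```

This is why the frequency of the rate changes matters: low frequencies probe far from the well but average over large volumes, while high frequencies resolve only the near-well region.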
Interpretation of pressure tests in wells has been one of the most important sources of information for obtaining reservoir properties such as transmissivity, storativity, and the presence of boundaries and formation heterogeneities. Both constant and variable flow rates have been employed in the execution of well tests. One of the difficulties in constant rate tests is maintaining the flow at a fixed value during the entire test. If the rate varies considerably, multiple-rate analyses have to be applied.
In some instances, it is desirable to use a multiple-well test with variable rate. One that is commonly employed is the pulse test. This kind of test can give both qualitative and quantitative information about the reservoir. The presence of an oscillatory response at the observation point will indicate that there is communication between the two wells being tested. In addition, due to the nature of the pressure response, any existing pressure trends in the reservoir, generated by other producing or injecting wells, can be removed from the recorded pressure data. The quantitative information obtained from pulse tests, or from any other test based on oscillatory flow rates, is usually the average transmissivity and storativity. Much work has been done in the area of design and analysis of such tests in the last thirty years. The pressure response to a sinusoidal flow rate in homogeneous systems was studied by Businov and Umrichin and by Kuo. The works of Businov and Umrichin are discussed in detail by Streltsova. A method for testing a well using sinusoidal flow rates was patented by Kuo and Vanmeurs. Although this kind of test poses a serious difficulty for field implementation, it is the basis for understanding the principles underlying pulse testing interpretation. The use of harmonic flow rates has also been proposed to determine fracture properties in field situations or permeability in laboratory experiments. The pulse testing technique is discussed by Johnson et al. They assumed an infinite homogeneous reservoir and equal periods of flow and shut-in. Field application of pulse tests for characterizing a reservoir is presented by McKinley et al. Their data indicated that pulse test pressure behavior was influenced by between-well formation parameters.
Values of transmissivity and storativity showed considerable variation with direction from a central pulsing well. Brigham provided equations and correlation charts for the design and analysis of pulse tests. He also assumed equal production and shut-in times. One important conclusion, contrary to the statements of Johnson et al., was that the time required by a pulse test should be the same as for an interference test, since the basic equations are the same. Kamal and Brigham developed a method to interpret pulse tests with unequal production and shut-in periods. The effects of areal heterogeneities on pulse test results were studied by Vela and McKinley using a numerical reservoir simulator. A rectangle of influence was defined as the region where heterogeneities could affect the parameters estimated by the pulse test analysis.
This work presents a qualitative analysis of the uncertainties in performance prediction for a reservoir characterization process in which static and dynamic data are combined. Using primary, single-phase production data as the dynamic constraint in the inversion, the quality of the models thereby obtained is evaluated when they are used in alternative or subsequent production scenarios. The conclusions are that, provided the well configuration remains the same as that used for the dynamic constraint in the inversion, the models perform well when the production stage remains single-phase, primary depletion, even at the individual well level and also for future prediction. For multi-phase primary production, there is also a reasonable match in performance, although not as good as for the single-phase case. This is because such depletion processes are controlled by the permeability in the near-wellbore region, which is adequately reconstituted by the characterization process used, while the multi-phase density variations with time were observed to have little impact. However, the fine-scale, inter-well permeability patterns of the different realizations obtained by the integration of the dynamic and static data are not sufficiently resolved, and so are unable to adequately capture the inter-well permeability heterogeneity patterns which are necessary for accurate waterflood performance matching.
It is important, and becoming more popular, to integrate static and dynamic data for reservoir characterization. By integrating different types of data, the final reservoir characterization will be more precise and less non-unique; in other words, the uncertainty of the reservoir description will be reduced. However, for optimal reservoir management, it is more critical to understand the uncertainty of the future performance prediction. In geostatistics, the norm is to generate multiple realizations of the reservoir description and conduct reservoir simulations on each. The multiple realizations of the future performance prediction form the prediction error bound. This is a more flexible approach than the traditional serial modeling approach for performance prediction. Conventionally, geostatistical modeling incorporates the static data -- such as seismic and geological data -- quite well, but may not reproduce the dynamic performance. The incorporation of dynamic data into the modeling may yield better history matching. However, the prediction characteristics need to be understood.
Recently, inverse theory and other approaches have been applied to integrate the static and dynamic data. In 1992, Deutsch applied simulated annealing to integrate the engineering data with variogram data for geostatistical modeling. In 1994, Sagar et al. applied simulated annealing to incorporate well test data and the spatial relationship for modeling the permeability field with more efficient forward modeling. In 1994, Oliver applied inversion methods to incorporate the spatial relationship and well-test data to generate multiple realizations of the permeability field. Chu et al. implemented a sensitivity coefficient approach in a more efficient way to perform the inversion. In 1995, Reynolds et al. extended that work and achieved better efficiency by using reparameterization. All these methods use the spatial relationship and dynamic data as constraints for the inversion of the permeability field. This works well if the spatial relationship is available and the correlation range is long. However, if the spatial relationship is not easy to obtain, this approach may not work. Also, honoring the spatial relationship does not necessarily mean honoring other data such as geological description or seismic modeling results. Recently, Huang and Kelkar proposed a method to integrate the dynamic data with static data -- especially seismic data -- which obviates the need to honor the spatial relationship such as variograms.
We examine induction log responses to layered, dipping, and anisotropic formations analytically. The analytical model is especially helpful in understanding induction log responses to thinly laminated binary formations, such as sand/shale sequences, that exhibit macroscopically anisotropic resistivity. Two applications of the analytical model are discussed.
In one application we examine special induction log shoulder-bed corrections for use when thin anisotropic beds are encountered. It is known that thinly laminated sand/shale sequences act as macroscopically anisotropic formations. Hydrocarbon-bearing formations also act as macroscopically anisotropic formations when they consist of alternating layers of different grain-size distributions. When such formations are thick, induction logs accurately read the macroscopic conductivity, from which the hydrocarbon saturation in the formations can be computed. When the laminated formations are not thick, proper shoulder-bed corrections (or thin-bed corrections) should be applied to obtain the true macroscopic formation conductivity and to estimate the hydrocarbon saturation more accurately.
The analytical model is used to calculate the thin-bed effect and to evaluate the shoulder-bed corrections. We will show that the formation resistivity, and hence the hydrocarbon saturation, are greatly overestimated when the anisotropy effect is not accounted for and conventional shoulder-bed corrections are applied to the log responses from such laminated formations.
In another application, we examine the effect of shale anisotropy in thinly laminated sand/shale sequences. It is known that the macroscopic conductivity of a laminated formation is determined uniquely by the sand and shale laminae conductivity and the sand/shale ratio. Conversely, the sand-laminae conductivity is estimated from the macroscopic formation conductivity, and the hydrocarbon saturation in the sand laminae is computed from the sand conductivity.
The shale-laminae conductivity itself may be anisotropic in such laminated sequences. How do induction logs respond to such laminated formations when shale laminae are anisotropic? How accurate are the estimates of the sand-laminae resistivity and of the hydrocarbon saturation in these sand laminae? To answer these questions we used the analytical model to examine the effect of shale lamina anisotropy on induction log responses to thinly laminated formations. We learned that the macroscopic formation resistivity and hence the sand-lamina resistivity can be greatly overestimated if the shale anisotropy is not accounted for in interpreting induction log data from laminated formations. On the other hand, the estimate of the net-to-gross ratio is insensitive to shale anisotropy except for low sand-laminae resistivity (Rsd/Rsh < 5).
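The statement that the macroscopic conductivity is determined uniquely by the laminae conductivities and the sand/shale ratio follows from the standard volume-average results for a thinly laminated stack: laminae conduct in parallel along the bedding and in series across it. The sketch below uses these textbook averages with hypothetical values; it is not the paper's full anisotropic induction model.

```python
# Macroscopic conductivity of a thinly laminated sand/shale stack
# (standard volume-average results; the input values are hypothetical).
# Parallel to bedding, laminae conduct in parallel (arithmetic mean);
# perpendicular to bedding, they are in series (harmonic mean).

def laminate_conductivity(v_sand, sigma_sand, sigma_shale):
    """Horizontal and vertical macroscopic conductivities (S/m)."""
    v_shale = 1.0 - v_sand
    sigma_h = v_sand * sigma_sand + v_shale * sigma_shale          # parallel
    sigma_v = 1.0 / (v_sand / sigma_sand + v_shale / sigma_shale)  # series
    return sigma_h, sigma_v

# 60% net-to-gross, resistive sand (0.1 S/m), conductive shale (1 S/m).
sh, sv = laminate_conductivity(0.6, 0.1, 1.0)
print(round(sh, 4), round(sv, 5), round(sh / sv, 2))  # anisotropy ratio > 1
```

Because sigma_h exceeds sigma_v whenever the laminae differ, the stack is macroscopically anisotropic even though each lamina may itself be isotropic; anisotropic shale laminae add a further contribution on top of this laminar effect.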
Tilted, multilayered formations are not uncommon, and the induction log response to such formations is complex, especially when the layers are only a few feet thick. Deviated boreholes are also common, and the log response from such a borehole is complicated when the bedding planes are intersected at an oblique angle. The induction log response in tilted and layered formations, or from a deviated borehole, can be modeled analytically. Such analytical models are very important in analyzing and interpreting induction tool responses.
Reservoir simulations are limited to large-scale grid blocks due to the prohibitive computational costs of fine-grid simulations. Rock properties, such as permeability, are measured on a smaller scale than the coarse-scale simulation grid blocks. Therefore, the properties defined on the smaller scale must be upscaled to the coarser scale. Few prior studies on permeability upscaling have paid special attention to the problem of radial flow in the vicinity of a wellbore.
This study presents an analytical method to calculate effective permeability for a coarse-grid well-block from fine-grid permeabilities. The method utilizes serial and parallel averaging procedures modified for radial flow. The method is validated with numerical simulations of primary and secondary recovery processes involving 2- and 3-D systems. The results of coarse grid simulations with the permeabilities upscaled through the new well-block approach agree well with the results of the fine grid simulations with initial permeability distributions.
Reservoir heterogeneity can be described on a fine scale by using current stochastic models. In spite of recent developments in computational speed, it is still not feasible to simulate flow directly on fine-scale stochastic grid blocks. A coarsened grid-block scheme, therefore, is required to ease the computational burden. Rock properties for coarse-scale grid blocks are obtained by utilizing a proper upscaling procedure. Permeability is by far the most important property that affects flow performance. Therefore, unlike porosity, permeability requires a robust upscaling procedure. Several authors have discussed methods to upscale permeability, but few have addressed the problem of upscaling in the vicinity of the wellbore, although well performance is affected more by the permeability of this region than by the permeability away from it. A large portion of the pressure drop in a reservoir is realized in the near-wellbore region.
White and Horne proposed a method to calculate well-block transmissibility. Their method was based on running simulations with fine-scale grid blocks for different boundary conditions. The pressures and fluxes from the fine-scale simulations were averaged and summed to obtain pressures and fluxes on a coarse scale. A least-squares method was then used to convert the coarse-scale pressures and fluxes into a block transmissibility.
Palagi et al. presented an upscaling procedure for the permeability at the interface of Voronoi grid blocks. They used power-law averaging to calculate the homogenized permeability. To find the optimum value of the power-law exponent, the simulation results of the initial permeability distribution were compared with the results of various upscaled distributions obtained with different exponent values. The exponent that minimizes the difference between the fine-grid and coarse-grid results is accepted as the optimum.
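The power-law average referred to above is a one-parameter family that interpolates between the classical permeability averages. A minimal sketch (the permeability values are hypothetical):

```python
# Power-law averaging of permeabilities, the form used in procedures
# like that of Palagi et al. (exponent values here are illustrative):
# exponent +1 gives the arithmetic mean (flow parallel to layers),
# exponent -1 the harmonic mean (flow across layers), and the limit
# exponent -> 0 the geometric mean.

def power_law_average(perms, omega):
    """Power-law (generalized) mean of a list of permeabilities."""
    if abs(omega) < 1e-12:          # geometric-mean limit
        prod = 1.0
        for k in perms:
            prod *= k
        return prod ** (1.0 / len(perms))
    return (sum(k ** omega for k in perms) / len(perms)) ** (1.0 / omega)

perms = [10.0, 100.0, 1000.0]       # md
print(power_law_average(perms, 1.0))    # arithmetic mean: 370.0
print(power_law_average(perms, -1.0))   # harmonic mean: ~27.03
print(power_law_average(perms, 0.0))    # geometric mean: 100.0
```

Tuning the exponent against fine-grid simulation results, as the procedure above describes, effectively selects where on this spectrum the actual flow geometry lies.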
Ding presented an upscaling procedure to calculate the equivalent coarse grid transmissibility based on the results of fine grid simulation. To account for a well in a coarse grid block, the numerical productivity index for a coarse grid block was defined.
The methods discussed above are based on fine-grid numerical simulations. Such simulations must be repeated for all coarse grid blocks and require a preprocessing task that usually takes several man-hours. To avoid lengthy numerical simulations, this study presents an analytical method to calculate upscaled well-block effective permeability. The method is based on the understanding that the effective permeability should preserve the ratio of the fluid flux to the potential drop realized across the fine grid blocks with heterogeneous permeabilities. The method can be applied to either 2- or 3-D flow simulations. It is based on the incomplete-layers concept and on radial-flow averaging laws applied to a Cartesian grid scheme.
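One of the radial-flow averaging laws underlying such a method can be sketched with the standard rings-in-series result for steady radial flow toward a well: concentric rings combine through a logarithmically weighted harmonic average, so that the effective permeability preserves the flux-to-drawdown ratio across the heterogeneous rings. This is a textbook building block, not the complete well-block procedure of the study; the ring permeabilities below are hypothetical.

```python
import math

# Serial (rings-in-series) averaging for steady radial flow toward a
# well. For concentric rings with radii r0 < r1 < ... < rn and
# permeabilities k_1..k_n, preserving flux/drawdown requires
#   ln(rn/r0)/k_eff = sum_i ln(r_i/r_{i-1})/k_i.

def radial_series_keff(radii, perms):
    """Effective permeability of concentric rings in series."""
    total = sum(math.log(radii[i + 1] / radii[i]) / perms[i]
                for i in range(len(perms)))
    return math.log(radii[-1] / radii[0]) / total

radii = [0.1, 1.0, 10.0, 100.0]          # equal logarithmic ring widths
keff = radial_series_keff(radii, [100.0, 10.0, 1.0])
print(round(keff, 4))  # with equal ln-widths this is the harmonic mean
```

Note the log weighting: because most of the drawdown occurs near the well, the innermost ring dominates the effective value, which is exactly why near-wellbore upscaling deserves special treatment.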
Discovered in 1955, the Lower Lagunillas Member reservoir of the Miocene Lagunillas Formation of Bloque IV of the Bachaquero field was originally estimated to contain 2 billion barrels of oil. This reservoir interval has traditionally been interpreted to have been deposited in a delta plain setting and to comprise three reservoir subdivisions that were developed as a single drainage unit. Sedimentological interpretation of four cored wells has led to the development of a new model of deposition in tidally influenced lower delta plain and delta front settings. This model is supported by Fourier Transform Infrared (FTIR) measurement of illite concentrations and prompt neutron capture boron measurements that are indicative of a brackish-water depositional setting.
The conceptual geological model has been used to guide correlation of wireline logs from 46 wells in the central part of Bloque IV and to provide a high-resolution sequence stratigraphic model of the Lower Lagunillas reservoir. Eleven genetic layers are identified that are separated by locally developed intraformational seals into up to eight drainage units. High permeability, tidally influenced channel-fill sands have acted as preferential conduits for gas influx, leaving bypassed oil in lower quality sands that were deposited as lagoonal deltas and bars.
This reservoir model has been supported by re-examination of production data and by openhole measurements in a recent infill well and cased-hole logging of two other wells in the study area. The new model will form the basis for redevelopment of the Lower Lagunillas reservoir to further increase recovery from this mature field.
This sedimentological investigation formed part of a wider study aimed at generating a detailed characterization of the Lower Lagunillas reservoir in a pilot study area located in the central part of Bloque IV of the Bachaquero field, Lake Maracaibo, Venezuela. A prerequisite to providing reliable estimates of log-derived reservoir properties in the interwell areas and ultimately predicting the distribution and movement of fluids within the pilot study area was to establish a robust geological framework. This was achieved by interpretation of core and wireline log data within a high-resolution sequence stratigraphic framework. In this account, we will illustrate how evaluation of dynamic reservoir data within this high-resolution sequence stratigraphic framework has been used to validate the model and to successfully identify targets for horizontal infill wells within this mature reservoir.
The Maracaibo basin covers an area of some 50,000 km2 in the northwestern part of Venezuela (Fig. 1). The Cretaceous to Middle Eocene sedimentary fill of the basin was deposited in an extensional tectonic regime created by the opening of the proto-Caribbean, as the North American and South American plates began to drift apart. In the Late Eocene, the tectonic regime became transpressional (strike-slip) as a result of the development of a convergent plate margin in northwest South America.
Late Eocene, Oligocene and Early Miocene sediments are not present in the area of Bloque IV and a prominent unconformity represents substantial erosion of the underlying Eocene deltaic sediments of the Misoa Formation, during the initial, Late Eocene, phase of basin inversion (Fig. 2). Subsequent deposition of Miocene sediments took place in a foreland basin setting under a transpressive tectonic regime. The oldest Miocene rocks of the area, the La Rosa Formation, overlie the base Miocene unconformity and were deposited during an Early Miocene (Burdigalian) transgressive-regressive cycle of sedimentation (Fig. 2). This marine transgression moved progressively from the northeast to the southwest of the basin.
Geostatistical techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques are now available which differ in their underlying assumptions, complexity, and applications. This paper introduces a novel geostatistical methodology to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir.
The proposed methodology integrates the reservoir description and surveillance data within the same geostatistical framework. The methodology transforms surveillance logs and shale data to indicator variables. These variables are then utilized to analyze the vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times, and to develop variogram models. Conditional simulation methods are used to generate three-dimensional distributions of gas and shales in the reservoir. These methods provide a measure of uncertainty in the resulting descriptions and capture the complex three-dimensional distribution of gas-oil contacts through time.
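The indicator transform mentioned above is simply a binary recoding of each depth sample: 1 if the facies or fluid of interest is present, 0 otherwise. A minimal sketch, with a hypothetical gamma-ray cutoff for shale (the actual cutoffs and logs used in the study are not specified here):

```python
# Sketch of the indicator transform: a continuous surveillance or
# openhole log becomes a 0/1 variable per depth sample (e.g. 1 = shale,
# or 1 = gas behind pipe). The threshold and log values are hypothetical.

def to_indicator(values, threshold, above_is_one=True):
    """Binary indicator coding of a continuous log."""
    if above_is_one:
        return [1 if v >= threshold else 0 for v in values]
    return [1 if v < threshold else 0 for v in values]

def indicator_proportion(indicators):
    """Proportion of ones, i.e. the marginal probability of the facies."""
    return sum(indicators) / len(indicators)

gamma_ray = [20, 25, 90, 110, 30, 95]     # API units (hypothetical)
shale_flag = to_indicator(gamma_ray, 75)  # GR >= 75 coded as shale
print(shale_flag, indicator_proportion(shale_flag))
```

Once the data are in this form, indicator variograms and cross-variograms can be computed directly on the 0/1 values, and conditional simulation reproduces both the proportions and the spatial correlation of the coded facies.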
The results of the geostatistical methodology are compared with conventional techniques as well as with the infill wells drilled after the study. The predicted gas-oil contacts and shale distributions are in close agreement with the gas-oil contacts observed at the infill wells.
Geostatistical techniques provide a framework to integrate and model several sources of reservoir data at different scales. With the recent development of high-speed and large-memory computer workstations, geostatistics has become a powerful tool for detailed reservoir analysis, description and evaluation. These technologies make it possible to integrate geological, geophysical and petrophysical data for building more realistic reservoir models.
In the Prudhoe Bay field, reservoir description and monitoring of fluids-in-place through time are the key elements for field development, reservoir management, and predicting performance under different reservoir mechanisms. The stakes include reduction of gas- and water-handling costs, selection of completion and recompletion intervals, selection of better infill well locations, development of better reservoir simulation models, and reduction of the effort required for fluid mapping.
Prudhoe Bay is the largest field in North America. During 16 years of operations, the field has produced more than 7 billion barrels of oil. The major producing mechanisms in Prudhoe Bay are gravity drainage, waterflood, and miscible gas flood. The interactions between these mechanisms, the reservoir architecture, and heterogeneities (shales, faults, and fractures of different shapes and sizes) result in complex gas and water movement through time. Fig. 1 illustrates the gas movement in a cross-section along the main dip direction in a gravity drainage region of the reservoir. Gas movement is affected significantly by shales of varying sizes which may not be continuous between wells. The gas tends to move underneath the shales, resulting in isolated gas tongues or fingers which break through at different times at the wells (gas underruns) and oil regions which are bypassed (oil lenses). For a given well, the cased-hole logs at different times show multiple gas-oil contacts (Fig. 1). Under these conditions, it is difficult to interpret and visualize the inter-well distribution of gas in three dimensions.
Often, pressure gauge systems for surface read-out (SRO) wireline work or for permanent installations do not perform according to their specifications, i.e., the pressure resolution obtained is lower than the gauge design values. This seems natural, because the borehole environment is far harsher than the quiescence of a laboratory calibration setup. Nevertheless, it is difficult to attribute the loss of resolution to a single problem.
This paper introduces the functional components of the pressure gauge system where loss of resolution may occur. Specifically, cable-related problems, crossover, signal transmission, signal processing, time stamping, and temperature compensation are addressed. Determination of pressure resolution from a processed signal is shown via example calculations. The role of transducer specifications in overall data quality is addressed. In other words, what causes a 0.01 psi rated transducer to yield a signal of only 0.75 psi quality?
Field data from prospects Tahoe and Bullwinkle are used to illustrate some of the gauge related problems and the solutions being proposed by the industry to overcome some of them.
The ability of a pressure gauge system to resolve to the gauge manufacturer's specifications requires a unique set of laboratory-type circumstances. In general, the end-user pressure resolution is lower. In certain situations, it may be necessary to spend more effort in designing the gauge system to meet the user requirements. Examples of such situations may be the high-permeability Gulf of Mexico (GOM) sands, a permanent downhole gauge (PDG) installation, or a surface read-out (SRO) wireline gauge in a deep/hot well. Each of these examples will be illustrated in the paper.
There are a number of reasons for keeping the system resolution close to the transducer resolution. One is the use of pressure-derivative methods in pressure transient analysis. In addition, rate deconvolution methods require precise data acquisition. A more physical reason is that in high-permeability sands, the analyzable part of the pressure buildup signal may be only 2-4 psi cumulative. If the system resolution is of the same order of magnitude, it is difficult to distinguish the signal. This is illustrated in Figure 1, which shows a few examples of the magnitude of pressure signals in GOM reservoirs. The graph shows the change in pressure for a shut-in well after 0.1 hours, assuming that wellbore storage and momentum effects in the well have subsided by then. If the wellbore effects last longer (e.g., thermal effects in injectors), the magnitude of the signals would be even smaller.
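The size of such a buildup signal can be bracketed with the standard field-unit semilog slope, m = 162.6 qBμ/(kh) psi per log cycle of shut-in time. The well and reservoir values below are assumptions chosen to represent a high-permeability GOM sand, not data from the paper.

```python
# Assumed well and reservoir properties (illustrative only).
q = 5000.0    # flow rate before shut-in, STB/D
B = 1.2       # formation volume factor, RB/STB
mu = 0.5      # oil viscosity, cp
k = 2000.0    # permeability, mD
h = 100.0     # net pay, ft

# Standard field-unit semilog slope for infinite-acting radial flow.
m = 162.6 * q * B * mu / (k * h)   # psi per log cycle of shut-in time
print(f"semilog slope ~ {m:.1f} psi/log-cycle")  # ~2.4 psi
```

Between 0.1 hr and 1 hr of shut-in the buildup climbs only about one slope m, so the entire analyzable signal here is a few psi, which is why a system resolution degraded to 0.75 psi can swamp the measurement.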
As reservoir/production engineers, we concern ourselves with the quality of the pressure gauge that is put in the hole but do not usually worry about the total system of measuring, transmitting, processing, and storing data. The system as a whole, rather than the gauge alone, determines the ultimate pressure resolution. Veneruso and Economides (1989) discuss some of the potential noise and non-signal components of the data.
Because the system components vary with the type of application, a permanent downhole gauge system and its data measurement and transmission techniques are described below, in order to cover the most comprehensive systems in use today.
We discuss and compare three different approaches for permeability determination from logs from a practical point of view. The three methods, empirical, statistical, and the recently introduced "virtual measurement," make use of empirically determined models, multiple variable regression, and artificial neural networks, respectively. We apply all three methods to well log data from a heterogeneous formation and compare the results with core permeability, which is considered to be the standard. Our comparison focuses on the predictive power of each method.
Reservoir management strategies are only as realistic as the "image" of the spatial distribution of rock properties. Permeability is the most difficult property to determine and predict. Many investigators1-10 have attempted to capture the complexity of the permeability function in models with general applicability. While these studies contribute to a better understanding of the factors controlling permeability, they also demonstrate that a "universal" relation between permeability and variables from wireline logs is illusory.
The regression approach, which uses statistics instead of a "stiff," deterministic formalism, tries to predict a conditional average, or expectation, of permeability.11-15 The newest method, called "virtual measurement,"16,20 makes use of artificial neural networks, which are model-free function estimators. Neural networks are flexible tools that can learn the patterns of permeability distribution in a particular field and then predict permeability from new data by generalization.
To compare the capabilities of these approaches, we apply all three methods to wireline log data from a heterogeneous oil-bearing formation and compare the results with core-determined permeability, which is considered to be the standard. We test these methods for their model development as well as their predictive capabilities.
Empirical models are based on the correlation between permeability, porosity, and irreducible water saturation. Table 1 presents the four most widely used empirical models: Tixier, Timur, Coates & Dumanoir, and Coates. All of these methods except Coates & Dumanoir assume certain values for the cementation factor and/or saturation exponent and are applicable to clean sand formations where conditions of residual water saturation exist. Coates & Dumanoir proposed an improved empirical permeability technique. With the support of core and log studies, they adopted a common exponent, w, for both the saturation exponent, n, and the cementation exponent, m. Coates & Dumanoir also presented a method for testing whether a formation is at irreducible water saturation; they noted, however, that a heterogeneous reservoir may fail that test and still be at irreducible water saturation. Their method is the first that satisfies the condition of zero permeability at zero porosity and at Swirr = 100%. Because of the corrections provided, this method can be applied to formations that are not at irreducible water saturation and to shaley formations. Values for the exponents m and n are not needed, because they are found as a result of the computation.
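The exact forms and coefficients of the paper's Table 1 are not reproduced here; as a hedged sketch, the snippet below implements three of the named models in their common textbook field-unit forms (porosity phi and irreducible water saturation swirr as fractions, permeability in mD). The paper's tabulated coefficients may differ.

```python
# Common textbook field-unit forms (assumed, not copied from Table 1).
def tixier(phi, swirr):
    # k^0.5 = 250 * phi^3 / Swirr
    return (250.0 * phi**3 / swirr) ** 2

def timur(phi, swirr):
    # k = 8581 * phi^4.4 / Swirr^2
    return 8581.0 * phi**4.4 / swirr**2

def coates(phi, swirr):
    # k^0.5 = 100 * phi^2 * (1 - Swirr) / Swirr
    return (100.0 * phi**2 * (1.0 - swirr) / swirr) ** 2

phi, swirr = 0.25, 0.30   # illustrative clean-sand values
for name, f in [("Tixier", tixier), ("Timur", timur), ("Coates", coates)]:
    print(f"{name:7s} k ~ {f(phi, swirr):8.1f} mD")
```

Note that all three forms give zero permeability at zero porosity, and the Coates form also vanishes as Swirr approaches 100%, the boundary conditions the text credits Coates & Dumanoir with satisfying first.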
The cementation factor, m, and the saturation exponent, n, are the biggest sources of uncertainty in permeability determination by means of empirical models.3 They can be obtained by laboratory measurements, which is seldom the case, or approximated according to general guidelines and/or experience. Methods for deducing the cementation factor have a long history. In this study, we use the method based on establishing a "water line" in a zone that is 100% water saturated.15
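The "water line" idea follows from Archie's equation: in a 100% water-saturated zone, Ro = a Rw / phi^m, so on a log-log (Pickett-style) crossplot of resistivity against porosity the points fall on a line whose slope is -m. The sketch below fits that line to synthetic, noise-free data generated for illustration; it is not the paper's data or workflow.

```python
import numpy as np

# Assumed Archie parameters for the synthetic 100%-water zone.
a, Rw, m_true = 1.0, 0.05, 2.0
phi = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # porosity, fraction
Ro = a * Rw / phi**m_true                        # water-zone resistivity, ohm-m

# log10(Ro) = log10(a*Rw) - m * log10(phi): a straight line of slope -m.
slope, intercept = np.polyfit(np.log10(phi), np.log10(Ro), 1)
m_est = -slope
print(f"estimated cementation exponent m ~ {m_est:.2f}")
```

With real log data the points scatter about the line, and the fitted slope gives the field-derived m used in place of a laboratory measurement.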
Multiple Variable Regression
Multiple regression is an extension of regression analysis that incorporates additional independent variables in the predictive equation. In this study, the dependent variable is the logarithm of permeability, because permeability appears to be log-normally distributed, and the independent variables are well log variables. Wendt and Sakurai12 established a general procedure for permeability prediction by multiple variable regression. They also pointed out the shortcomings of this technique: when the regression method is used for prediction, the distribution of predicted values is narrower than that of the original data set. Kendall and Stuart15 explained this, stating that the regression model "...either exhibits a property of a bivariate distribution or, when the regressor variables are not subject to error, gives the relation between the mean of the dependent variable and the value of the regressor variables." That is, the regression provides the best estimate of the average. The assumption that the error is related only to the dependent variable (permeability measurements) and not to the independent variables (log variables) can be verified by comparing repeat runs of properly calibrated instruments with the main runs of the logs, provided that there is no bias in the measurement. Logs of acceptable quality have errors with a relatively small unbiased scatter that is a function of the physics of the tool, its response characteristics, and the borehole environment. If the deviations are indeed random, they would be expected to be normally distributed with a mean of zero.
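A minimal sketch of this setup, with log10(k) as the dependent variable and synthetic stand-ins (assumed, not the paper's logs) for two log variables, also reproduces the narrowing of the predicted distribution that Wendt and Sakurai noted:

```python
import numpy as np

# Synthetic "well log" data: porosity and a deep-resistivity reading,
# with log10(k) generated from an assumed linear relation plus noise.
rng = np.random.default_rng(1)
n = 200
phi = rng.uniform(0.10, 0.30, n)          # porosity, fraction
rt = rng.uniform(1.0, 50.0, n)            # deep resistivity, ohm-m
log_k = 1.0 + 12.0 * phi - 0.4 * np.log10(rt) + rng.normal(0.0, 0.1, n)

# Ordinary least squares: intercept plus two regressors.
X = np.column_stack([np.ones(n), phi, np.log10(rt)])
beta, *_ = np.linalg.lstsq(X, log_k, rcond=None)
pred = X @ beta

# Regression predicts a conditional average, so the spread of the
# predictions is narrower than the spread of the measurements.
print("std of measured  log k:", log_k.std())
print("std of predicted log k:", pred.std())
```

The second printed value is always the smaller of the two, which is the "best estimate of the average" behavior described by Kendall and Stuart.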
The correlation matrix of all independent and dependent variables should be analyzed to establish whether there is a dominant independent variable or whether the independent variables are essentially uncorrelated with each other. This gives the analyst some guidelines for selecting the variables and the order in which they should enter the model. However, sensible judgment is still required in the initial selection of variables and also in the critical examination of the model through analysis of residuals.
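This screening step amounts to inspecting the correlation matrix of candidate regressors and the dependent variable. The sketch below uses synthetic stand-in variables (assumed, not the study's logs) purely to show the mechanics:

```python
import numpy as np

# Synthetic candidate variables: porosity and gamma ray, with log10(k)
# generated from an assumed relation plus noise.
rng = np.random.default_rng(2)
n = 300
phi = rng.uniform(0.1, 0.3, n)            # porosity, fraction
gr = rng.uniform(20.0, 120.0, n)          # gamma ray, API units
log_k = 15.0 * phi - 0.01 * gr + rng.normal(0.0, 0.2, n)

# Rows are samples, columns are variables [phi, gr, log_k].
corr = np.corrcoef(np.column_stack([phi, gr, log_k]), rowvar=False)
print(np.round(corr, 2))
```

The variable with the largest |r| against log_k is the natural first entry into the model, while strong correlations among the regressors themselves warn of redundancy; judgment and residual analysis still govern the final choice.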
Neural networks, unlike conventional programs, are general-purpose systems that attempt to achieve good performance through dense interconnection of simple computational elements; for this reason, they are also called connectionist models. The solution to a problem is not explicitly encoded in the program but is "learned" by supplying examples of previously solved problems to the network. After the network has learned how to solve the example problems, it is said to be "trained"; this is called "supervised training." New data from the same knowledge domain can then be fed to the trained network, which outputs a solution. There are also neural networks that learn unsupervised, such as Kohonen's self-organizing map. In addition, there are methods that allow scientists to extract fuzzy rules from a developed neural model. These rules relate the inputs (log responses) to the output (permeability) through a series of fuzzy rules obtained by dividing the domain of each variable into fuzzy subsets. Implementation of this technique in reservoir characterization is currently under investigation.
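As a hedged sketch of supervised training, the snippet below fits a tiny one-hidden-layer network by gradient descent to a nonlinear synthetic mapping from two normalized "log responses" to a permeability-like target. The architecture, data, and hyperparameters are all illustrative assumptions, not the paper's "virtual measurement" model.

```python
import numpy as np

# Synthetic training set: two normalized inputs, one nonlinear target.
rng = np.random.default_rng(3)
n = 400
X = rng.uniform(0.0, 1.0, (n, 2))               # normalized log responses
y = (np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

# One hidden layer of 8 tanh units, linear output (assumed architecture).
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(5000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y                    # gradient of MSE w.r.t. pred (up to 2x)
    # backward pass (full-batch gradient descent)
    gW2 = h.T @ err / n; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)  # tanh derivative
    gW1 = X.T @ dh / n;  gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE after 5000 epochs: {mse:.4f}")
```

Once trained, the same forward pass generalizes to new log responses from the field, which is the sense in which the network acts as a "virtual measurement" of permeability.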