Beliakova, N. (Shell International Exploration and Production) | van Berkel, J.T. (Shell International Exploration and Production) | Kulawski, G.J. (Shell International Exploration and Production) | Schulte, A.M. (Shell International Exploration and Production) | Weisenborn, A.J. (Shell International Exploration and Production)
The Hydrocarbon Field Planning Tool (HFPT), recently developed by Shell, provides capabilities for rigorous integrated subsurface-surface production forecasting in the medium to long term (1-30 years). HFPT can be used for gas, gas-condensate, oil and mixed gas-oil fields. HFPT models allow business optimisation by making more efficient use of existing assets and by reducing investment costs in new fields.
In the simulation, HFPT uses a pressure-balanced solution of the integrated system: from the reservoir(s), through wells and surface facilities, to the delivery point. A wide range of fluid models is available, from simple gas-condensate and black oil PVT models to multi-component models with EOS flash calculations. HFPT provides optimisation functionality for maximising the returns in oil and gas fields, while accommodating operational preferences for production allocation and network constraints. It can also model injection networks and optimal lift gas distribution.
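The pressure-balanced solution can be illustrated with a minimal single-well nodal-analysis sketch: the operating rate is the one at which the bottomhole pressure the reservoir can deliver (inflow performance, IPR) equals the pressure the tubing requires to lift that rate to the delivery point (VLP). All pressures, the productivity index and the friction coefficient below are assumed illustrative values, not HFPT's actual models.

```python
# Illustrative single-well nodal-analysis sketch (not HFPT's actual solver):
# the operating rate is where the bottomhole pressure the reservoir can
# deliver (IPR) equals the pressure the tubing requires (VLP).

P_RES = 300.0   # reservoir pressure, bar (assumed)
PI = 2.0        # productivity index, m3/d per bar (assumed)
P_DEL = 40.0    # delivery-point pressure, bar (assumed)

def ipr_pwf(q):
    """Bottomhole pressure the reservoir delivers at rate q (linear IPR)."""
    return P_RES - q / PI

def vlp_pwf(q):
    """Bottomhole pressure required to lift rate q to the delivery point
    (hydrostatic head plus a quadratic friction term; assumed coefficients)."""
    return P_DEL + 60.0 + 0.002 * q ** 2

def operating_rate(lo=0.0, hi=500.0, tol=1e-9):
    """Bisection on the pressure imbalance ipr_pwf(q) - vlp_pwf(q)."""
    imbalance = lambda q: ipr_pwf(q) - vlp_pwf(q)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0.0:
            lo = mid        # reservoir still over-delivers: rate can increase
        else:
            hi = mid
    return 0.5 * (lo + hi)

q_op = operating_rate()     # ~215 m3/d with the assumed parameters
```

In HFPT the analogous pressure balance is solved simultaneously over the whole network of reservoirs, wells and facilities rather than for a single branch.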
The need for an integrated approach to dynamic field modelling has now been accepted by many players in the oil industry.
Issues which can be analysed in an integrated model, and which cannot be adequately addressed in a stand-alone reservoir model (or multiple stand-alone models), include:
Pressure interaction between surface and subsurface.
Pressure interference between different reservoirs and wells connected to a shared surface facility. An example is a high-pressure well backing out a low-pressure well.
Mixing of dissimilar fluids from different reservoirs in the production network.
Influence of facility constraints, e.g. separator limits, on a set of reservoirs connected to a shared facility.
Production optimisation in the overall system against a set of common criteria.
A number of field studies performed using integrated subsurface-surface models have already been reported, showing the benefits of such models.
Over the past ten years, a wide range of applications from commercial vendors has appeared on the market that allows the subsurface and surface to be modelled in an integrated way. Most of the available tools, however, are designed either for a very simple reservoir description or for a simple surface description, or both.
The Hydrocarbon Field Planning Tool (HFPT) has been developed by Shell to fulfil the need for rigorous integrated subsurface-surface production modelling. It has been designed for accurate medium to long term forecasting, for optimising production from existing fields, for analysing near-field potential in mature fields and for developing new fields. Currently, HFPT focuses mainly on medium to long term forecasting, which is dominated by subsurface behaviour. However, surface and process facilities can also be modelled in great detail when needed.
Requirements for Integrated Modelling
An integrated subsurface-surface model consists of the following main data modules, illustrated in Figure 1:
Subsurface reservoir model.
Surface production system model.
Processing facilities model.
Overall integration and control, contracts, optimisation targets, development planning.
The paper presents a new methodology for building integrated risk models for reservoir and operational costs. Although the resulting models are fairly simple, they have proved adequate in several real-life applications in large field development projects as part of a total value chain risk analysis. By using these models it becomes possible to optimize intervention strategies with respect to the total net present value of the project or other success criteria such as break-even price.
When analyzing operational costs related to an oil or gas field, one usually develops a static cost profile based on the assumed expected yearly production throughout the lifetime of the field. However, when reservoir uncertainty is introduced, this approach breaks down, as different production scenarios must be matched with corresponding operational cost profiles. A typical risk analysis involves carrying out some sort of Monte Carlo simulation. To be able to do this, it is necessary to establish a more dynamic link between reservoir and operational costs. While some of these costs are more or less constant independent of the production, others depend heavily on how much the reservoir produces. In this paper we focus on the last category. It is natural to divide such costs into two subcategories: rare events with high costs, e.g., well interventions such as side-track operations, and frequent events with low costs, typically scale squeeze operations. Two different models will be presented dealing with these issues. For the rare events with high costs case, we present a lifetime model where the time is measured in cumulative production. For the frequent events with low costs, the relation between production and operational costs is developed based on curve fitting.
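The two cost models can be sketched in a small Monte Carlo routine (all parameters are assumed illustrative values, not taken from the paper): rare high-cost events are triggered by a sampled lifetime measured in cumulative production, and frequent low-cost events follow a fitted cost-per-volume relation.

```python
import random

# Hedged sketch of the two production-dependent cost models (assumed figures):
# (1) rare/high-cost events: component "lifetime" measured in cumulative
#     production, sampled here from an exponential distribution;
# (2) frequent/low-cost events: yearly cost as a fitted function of production.

random.seed(1)

INTERVENTION_COST = 5.0e6    # USD per side-track operation (assumed)
MEAN_LIFETIME = 4.0e6        # Sm3 of cumulative production per failure (assumed)

def squeeze_cost(q_year):
    """Frequent low-cost events: fitted cost vs yearly production (assumed fit)."""
    return 0.05 * q_year     # USD per Sm3 produced

def simulate_cost(production_profile):
    """One Monte Carlo realization of production-dependent operational cost."""
    cum = 0.0
    next_failure = random.expovariate(1.0 / MEAN_LIFETIME)
    total = 0.0
    for q in production_profile:
        total += squeeze_cost(q)
        cum += q
        while cum >= next_failure:   # rare event reached in cumulative production
            total += INTERVENTION_COST
            next_failure += random.expovariate(1.0 / MEAN_LIFETIME)
    return total

profile = [1.5e6] * 10               # flat 10-year profile, Sm3/year (assumed)
costs = [simulate_cost(profile) for _ in range(2000)]
mean_cost = sum(costs) / len(costs)
```

Feeding different sampled production profiles into `simulate_cost` gives the dynamic link between reservoir uncertainty and operational cost profiles that a static cost profile cannot provide.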
In recent years, several oil field development projects in the North Sea have experienced major cost overruns of a magnitude that threatened their profitability. The planned versus the actual costs, the time it takes to reach ‘first oil', the return on investment, etc. are now subject to much more thorough investigation (see Kaasen1) and public debate than earlier. Recently the NORSOK initiative (Matland et al.2) put the focus on economics in offshore projects. The interest in this issue became even stronger as the oil price dropped in 1998 and the beginning of 1999. If the cost overruns combined with low oil prices prevailed, several large oil fields would no longer be profitable, and few new fields would be developed.
Although the issue is rather complex, there is one question that deserves to be in focus of the present debate: Do the traditional economic analyses, carried out before a field is approved for development, properly reflect the risks involved? By calculating the risk premium as an addition to the ‘risk-free' cost of capital, and by using the resulting interest rate to discount the cash flows, one conceals the spread, i.e. the size of the up- and downside risk inherent in the investment. In order to describe all aspects of the risks involved, it is necessary to model the different uncertainties and dependencies explicitly.
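The concealment argument can be made concrete with a small sketch (all figures are assumptions for illustration): a risk-adjusted discount rate yields a single NPV number, whereas modelling the price uncertainty explicitly and discounting at the risk-free rate exposes the spread between downside and upside.

```python
import random

# Sketch contrasting a single risk-adjusted NPV with the explicit NPV
# distribution obtained by modelling price uncertainty directly.
# All figures (capex, cash flow, rates, volatility) are assumed.

random.seed(0)
CAPEX = 800.0                  # MUSD spent in year 0
CASHFLOW = 150.0               # MUSD/year expected, years 1..10
RISK_FREE, PREMIUM = 0.05, 0.05

def npv(cashflows, rate):
    """Net present value of a list of yearly cash flows starting in year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Conventional approach: one number, the risk hidden in the discount rate.
point_npv = npv([-CAPEX] + [CASHFLOW] * 10, RISK_FREE + PREMIUM)

# Explicit approach: discount at the risk-free rate, model the uncertainty.
def one_realization():
    factor = random.lognormvariate(0.0, 0.3)   # oil-price multiplier (assumed,
    return npv([-CAPEX] + [CASHFLOW * factor] * 10, RISK_FREE)  # fully correlated)

sims = sorted(one_realization() for _ in range(5000))
p10, p90 = sims[len(sims) // 10], sims[9 * len(sims) // 10]
```

The single `point_npv` says nothing about how wide the interval between `p10` and `p90` is, which is exactly the information the explicit model preserves.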
P/Z plots of tight gas reservoirs based on non-extrapolated pressures are generally curved and can deviate strongly from the theoretical straight line. Interpretation of these plots by the conventional straight-line method may significantly underestimate GIIP. The paper presents and explains the p/z behavior of tight gas reservoirs for a number of simple prototype reservoirs. It is shown that in many cases the curved p/z plots can still be interpreted quantitatively. The key to the interpretation is the concept of semi-steady-state time. Even if no quantitative interpretation is possible, a p/z plot is still a powerful diagnostic tool. When p/z plots cannot be analyzed quantitatively, one has to resort to history matching of shut-in bottomhole pressures and of continuously measured wellhead pressures by using a numerical reservoir simulator.
Material balance (p/z) analysis is a popular and powerful tool for the evaluation of the production performance of gas reservoirs. Application to tight gas reservoirs is not straightforward, however, and its results can be grossly misleading. Some even rule out the application of material balance analysis to tight gas reservoirs altogether because depletion of tight gas reservoirs would violate the basic assumptions of the material balance method (Ref. 1). The main problem is not so much that the material balance assumptions no longer hold (mass conservation also works in tight gas reservoirs) but that it becomes difficult to obtain representative reservoir pressures from observed shut-in pressures. This is because for tight gas reservoirs the shut-in periods are generally too short for a stable well pressure to develop. In addition, production from a tight gas reservoir may take place under transient conditions for a long time. As a result only a small part of the reservoir will be influenced by production.
The purpose of this paper is (1) to present and explain the main characteristics of the p/z behavior of tight gas reservoirs, and (2) to provide guidelines for the interpretation of p/z plots of tight gas reservoirs. To this end we have numerically simulated p/z behavior of the following prototype reservoirs: (1) a homogeneous reservoir depleted by a single vertical production well in the center, (2) a homogeneous reservoir depleted by a single vertical production well off-center, (3) a reservoir depleted by two production wells of different flow capacity, (4) a faulted reservoir depleted by a single well, and (5) a commingled stratified reservoir depleted by a single well.
We have generated the p/z data points by shutting in the production wells periodically and by taking the pressures at the end of the shut-in periods as the observed pressures for the p/z plots. As for the depletion process, we have restricted ourselves to volumetric pressure depletion. The true p/z curve for this kind of depletion is a straight line, no matter how complex the reservoir is. In the analyses of the observed p/z plots we have focused primarily on the estimation of gas-initially-in-place.
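The conventional straight-line interpretation referred to above amounts to fitting p/z = (p_i/z_i)(1 - Gp/G) to the observed points and reading GIIP from the extrapolated intercept on the cumulative-production axis. A minimal sketch on synthetic volumetric-depletion data (values assumed for illustration):

```python
# Straight-line p/z interpretation: least-squares fit of pz = a + b*Gp,
# with GIIP read off as the Gp-axis intercept (-a/b).

def fit_pz(gp, pz):
    """Fit pz = a + b*gp by least squares and return GIIP = -a/b."""
    n = len(gp)
    mg, mp = sum(gp) / n, sum(pz) / n
    b = sum((g - mg) * (p - mp) for g, p in zip(gp, pz)) / \
        sum((g - mg) ** 2 for g in gp)
    a = mp - b * mg
    return -a / b        # Gp at p/z = 0, i.e. gas initially in place

# Synthetic volumetric depletion: GIIP = 10e9 m3, p_i/z_i = 320 bar (assumed).
GIIP, PZI = 10e9, 320.0
gp = [1e9, 2e9, 3e9, 4e9]                    # cumulative production, m3
pz = [PZI * (1.0 - g / GIIP) for g in gp]    # exact straight-line response
estimate = fit_pz(gp, pz)
```

For the curved plots produced by tight gas reservoirs, applying this fit to early, non-extrapolated shut-in pressures is precisely what underestimates GIIP; the semi-steady-state time concept in the paper addresses when such a fit is meaningful.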
The properties of the prototype reservoirs have been selected to cover ranges that are typical of North Sea tight gas reservoirs. The initial pressure is 300×10² kPa. The porosity is 10 percent. The permeabilities range from 0.01 to 10 mD. The thickness of the reservoir is 50 m. The chosen gas properties represent a typical dry gas with a gas gravity of 0.65. For the production constraints we have assumed a maximum well rate of 1×10⁶ m³/d and a minimum wellhead pressure of 20×10² kPa throughout the production history. In all cases we have simulated a total production time of 15 years.
We have performed the simulations with the commercial IMEX black-oil reservoir simulator developed by the Computer Modelling Group. We have verified that IMEX accurately simulates the pressure distribution in the prototype gas reservoirs during depletion and during shut-in periods. Appendix A details the simulation input data for each of the prototype reservoirs.
In recent years, it has been shown that inverse problem theory based on Bayesian estimation provides a powerful methodology not only to generate rock property fields conditioned to both static and dynamic data, but also to assess the uncertainty in performance predictions. To date, standard applications of inverse problem theory given in the literature assume that rock property fields obey a multinormal distribution and are second-order stationary. In this work, we extend the inverse problem theory to cases where rock property fields (only porosity and permeability fields are considered) can be characterized by fractal distributions. Fractional Gaussian noise (fGn) and fractional Brownian motion (fBm) are considered. To the authors' knowledge, there exists no study in the literature considering generation of fractal rock property fields conditioned to dynamic data, particularly to well-test pressure data.
We show that available Bayesian estimation techniques based on the assumption of normal/second-order stationary distributions can be used directly to generate fGn rock property fields conditioned to both hard and pressure data, because fGn distributions are normal/second-order stationary. On the other hand, we show that because fBm is not second-order stationary, these Bayesian estimation techniques can only be used with the implementation of a pseudo-covariance (generalized covariance) approach to generate fBm fields conditioned to static and well-test pressure data.
Using synthetic examples generated from two- and three-dimensional single-phase flow simulators, we show that the inverse problem methodology can be applied to generate realizations of fBm/fGn porosity and permeability fields conditioned to well-test pressure data. We conclude by showing how one can then appropriately assess the uncertainty in reservoir performance predictions using these realizations.
Recent studies1,2 have presented procedures based on inverse problem theory of Tarantola3 to generate realizations of rock property fields conditioned to a prior geostatistical model (means and variograms), hard data, and multiwell pressure data for one-, two- and three-dimensional single-phase flow problems. These studies show that the inverse problem theory based on Bayesian estimation coupled with their procedures provides a powerful methodology not only to generate rock property fields conditioned to different data types, but also to assess the uncertainty in reservoir performance predictions. To date, standard applications of inverse problem theory given in the literature1-2,4-6 assume that rock property fields obey a multinormal (Gaussian) distribution and are second-order stationary.
Recent studies7-8 have shown that fractals like fractional Gaussian noise (fGn) and fractional Brownian motion (fBm) are promising approaches to characterize porosity and/or permeability and, in general, hydraulic property variations in the subsurface. Samples of geologic property variations observed in some horizontal and vertical well logs show the persistence of fGn with Hurst coefficient H > 0.7 (see Refs. 7 and 8).
In the literature, there exist stochastic interpolation methods7,9 that can be used to generate conditional fractal simulations honoring variograms and hard data (measurements of porosity and permeability at wells). However, there exists no study in the literature that considers generating fractal fields conditioned to dynamic data, in particular to well-test pressure data. Thus, the objective of this work is to generate fractal (fGn and fBm) porosity and permeability fields conditioned to variograms, hard data and well-test pressure data by using the inverse problem theory.
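As a minimal illustration of the fractal fields involved (unconditional simulation only; the conditioning to hard and well-test pressure data developed in this work is not shown), fGn with a given Hurst coefficient can be generated from a Cholesky factorization of its autocovariance, and fBm obtained as the cumulative sum of fGn increments:

```python
import numpy as np

# Hedged sketch: unconditional fGn via Cholesky factorization of its
# autocovariance matrix; fBm is the cumulative sum of fGn increments.

def fgn_covariance(n, H, sigma2=1.0):
    """Autocovariance matrix of fGn with Hurst coefficient H:
    gamma(k) = sigma2/2 * (|k+1|^2H - 2|k|^2H + |k-1|^2H)."""
    k = np.arange(n)
    gamma = 0.5 * sigma2 * (np.abs(k + 1) ** (2 * H)
                            - 2.0 * np.abs(k) ** (2 * H)
                            + np.abs(k - 1) ** (2 * H))
    i = np.arange(n)
    return gamma[np.abs(i[:, None] - i[None, :])]

def simulate_fgn(n, H, rng):
    """Draw one fGn realization: L @ z with C = L L^T and z ~ N(0, I)."""
    L = np.linalg.cholesky(fgn_covariance(n, H) + 1e-12 * np.eye(n))
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(42)
noise = simulate_fgn(256, H=0.8, rng=rng)   # persistent fGn (H > 0.7, cf. Refs. 7-8)
fbm = np.cumsum(noise)                      # fBm: cumulative sum of fGn
```

For H = 0.5 the autocovariance reduces to the identity (uncorrelated noise), which provides a convenient sanity check; H > 0.5 gives the long-range persistence reported in the well-log studies cited above.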
Jonkman, R.M. (International Oil & Gas Services) | Bos, C.F.M. (Netherlands Institute of Applied Geoscience TNO) | Breunese, J.N. (Netherlands Institute of Applied Geoscience TNO) | Morgan, D.T.K. (Uncertainty Management Ltd.) | Spencer, J.A. (Reserves Management Ltd.) | Søndenå, E. (Norwegian Petroleum Directorate)
On behalf of a group of sponsors consisting of the Norwegian Petroleum Directorate (NPD) and most E&P companies active in Norway, a workgroup was set up to author a report on the Best Practices and Methods in Hydrocarbon Resource Estimation, Production and Emissions Forecasting, Uncertainty Evaluation and Decision Making. The workgroup is part of Norway's forum for Forecasting and UNcertainty (FUN).
Following a detailed data acquisition and interviewing phase to make an inventory of the current practice of all sponsors involved, the workgroup postulated a relationship between a company's practices and its economic performance. A key distinguishing factor between companies is the degree to which probabilistic methods are adopted in integrated multi-disciplinary processes, aimed at supporting the decision making process throughout the asset life cycle and portfolio of assets.
Companies have been ranked in terms of this degree of integration and best practices are recommended. In many companies a gap seems to exist between available and applied technology. Data and (aggregated) information exchange between Governments and companies is also discussed. A best practice based on their respective decision making processes is recommended.
Norway's forum for Forecasting and UNcertainty evaluation (FUN, ref. 1) was established in 1997, and has 18 member companies plus the Norwegian Petroleum Directorate (NPD). The forum is a Norwegian Continental Shelf arena to determine best practice and methods for hydrocarbon resource and emissions estimation, forecasting uncertainty evaluation and decision making. It focuses on matters related to forecasting and uncertainty evaluation of future oil and gas production. Its main purpose is to optimize the interplay between the private industry and the national authorities wishing to regulate their national assets.
The basic question that kicked off the FUN Best Practices project was whether the accuracy of Norway's historical production forecasts has been disappointing because of erroneous contributions from the companies or because of wrong aggregation by NPD. The question was posed as to which "Best Practices" could improve this situation. Whereas reserves form the basis for production, capex, opex and emissions forecasting, the decision making process in the various companies and national authorities links the various components together. Using the latest WPC/SPE guidelines for reserves reporting (allowing the use of probabilistic methods), the project concentrated on assessing the potential advantages of probabilistic techniques when used in combination with fully integrated asset management workflow processes.
After a discussion of the current practices of the various companies and authorities visited, "Best Practices" are formulated in the fields of estimating reserves, production, costs and emissions forecasting, decision-making, planning and communications. The paper concludes with recommendations on how to move from the "current practices" to the desired "Best Practices".
Oil recovery operations are seeing increased use of integrated geomechanical and reservoir engineering to help manage fields. This trend is partly a result of newer, more sophisticated measurements that are demonstrating that variations in reservoir deliverability are related to interactions between changing fluid pressures, rock stresses and flow parameters such as permeability. Several recent studies, for example, have used finite-element models of the rock stress to complement the standard reservoir simulation.
We discuss current work on fully coupling the geomechanical elastic/plastic rock stress equations to a commercial reservoir simulator. This finite difference simulator has black-oil, compositional and thermal modes and all of these are available with the geomechanics option. In this paper, the focus is on the implementation of the stress equations into the code. Some work on benchmarking against an industry standard stress code is also shown as well as an example of the coupled stress/fluid flow. Our goal in developing this technology within the simulator is to provide a stable, comprehensive geomechanical option that is practical for large-scale reservoir simulation.
There are many recent reports of geomechanical modelling being used predictively for evaluation of alternative reservoir development plans. In the South Belridge field, Kern County, California, Hansen et al1 calibrated finite-element models of depletion-induced reservoir compaction and surface subsidence with observed measurements. The stress model was then used predictively to develop strategies to minimise additional subsidence and fissuring as well as to reduce axial compressive type casing damage. Berumen et al2 developed an overall geomechanical model of the Wilcox sands in the Arcabuz-Culebra field in the Burgos Basin, northern Mexico. This model, combined with hydraulic fracture mapping together with fracture and reservoir engineering studies, was used to optimise fracture treatment designs and improve the planning of well location and spacing.
The coupling of fluid flow equations with rock force balance equations has been discussed extensively in the literature. Kojic and Cheatham3,4 present a lucid treatment of the theory of plasticity of porous media with fluid flow. Both the elastic and plastic deformation of the porous medium containing a moving fluid are analyzed as the motion of a solid-fluid mixture. Corapcioglu and Bear5 present an early review of land subsidence modelling and then present a model of land subsidence as a result of pumping from an artesian aquifer.
Demirdzic et al6,7 have advocated the use of finite volume methods for numerical solution of the stress equations both in complex domains as well as for thermo-elastic-plastic problems.
A coupling of a conventional stress-analysis code with a standard finite difference reservoir simulator is outlined by Settari and Walters8. The term "partial coupling" is used because the rock stress and flow equations are solved separately for each time increment. Pressure and temperature changes as calculated in the reservoir simulator are passed to the geomechanical simulator. Updated strains and stresses are passed back to the reservoir simulator, which then computes porosity and permeability. Issues such as sand production, subsidence and compaction that influence rock mass conservation are handled in the stress-analysis code. This method will solve the problem as rigorously as a fully coupled (simultaneous) solution if iterated to full convergence. An explicit coupling, i.e. a single iteration of the stress model, is advocated for computational efficiency.
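The iterative exchange just described can be sketched schematically (the flow and stress "solvers" below are toy placeholder functions, not a real simulator): within each time increment the flow solve uses the current porosity, the stress solve returns an updated porosity, and the pair is iterated to convergence.

```python
# Schematic of partial coupling per Settari and Walters' description:
# flow and stress solved separately, exchanging pressure and porosity
# until convergence. Both "solvers" are toy placeholders (assumed forms).

def flow_step(porosity, dt):
    """Toy reservoir flow solve: pressure decline scaled by pore volume."""
    return 300.0 - 50.0 * dt / porosity            # pressure, bar

def stress_step(pressure):
    """Toy geomechanics solve: porosity responds to depletion-induced
    effective-stress increase (linear compaction law, assumed)."""
    return 0.20 * (1.0 - 1.0e-4 * (300.0 - pressure))

def coupled_timestep(porosity, dt, tol=1e-10, max_iter=50):
    """Iterate flow <-> stress to full convergence within one increment.
    A single pass of this loop would be the 'explicit' coupling variant."""
    for _ in range(max_iter):
        pressure = flow_step(porosity, dt)         # flow uses current porosity
        new_porosity = stress_step(pressure)       # stress returns new porosity
        if abs(new_porosity - porosity) < tol:
            break
        porosity = new_porosity
    return pressure, porosity

p, phi = coupled_timestep(porosity=0.20, dt=0.1)
```

Iterated to convergence, the loop reproduces the fully coupled solution of the two toy models; truncating it at one iteration trades that rigour for the computational efficiency the authors advocate.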
The use of a finite element stress simulator with a coupled fluid flow option is discussed by Heffer et al9 and by Gutierrez and Lewis10.
There is a potential for improving the reliability of standard core tests for seismic monitoring studies. A primary concern is to quantify and correct for core damage effects, which significantly enhance the stress dependency of wave velocities. Careful laboratory procedures and modeling efforts may reduce such effects. However, no simple procedure is currently available to eliminate this problem. The use of simplified laboratory test procedures, in particular application of an inappropriate effective stress principle, may lead to erroneous interpretations.
Time-lapse (4D) seismics provides a potentially powerful tool to identify changes in a reservoir induced by production. This is accomplished by running repeated seismic surveys throughout the production period and looking for changes in the seismic response. Such changes can in principle be ascribed to several parameters, the most obvious being fluid saturation, pore pressure and temperature1,2. Thus, by monitoring the reservoir at various time steps during an enhanced oil recovery operation such as water injection, one may identify non-flooded compartments within the reservoir. This information permits subsequent positioning of new production and injection wells or modification of the existing depletion strategy in a way that improves the total recovery of the reservoir significantly. During the last 5 years or so the number of commercial 4D seismic surveys has increased from fewer than 5 to around 25 per year. The cost of a reservoir monitoring project is in many places comparable to that of drilling a new well, and the benefits have in many cases proven so large that most companies now consider it a natural part of reservoir management.
There are, however, a number of factors that influence the success of such surveys, be they related to the reservoir itself in terms of depth, stresses, temperature and structural and compositional complexity, or to intrinsic reservoir properties like the rock and fluid properties at the given reservoir conditions. The success is also affected by the quality of the seismic acquisition parameters during the surveys, for instance the degree of repeatability between subsequent surveys3, as well as by the final processing of the seismic data (see for instance Lumley et al4 for a technical risk summary). Owing to this substantial variability, one should always perform a seismic monitoring feasibility study in advance to quantify to what extent the expected production-induced changes may be detectable from a planned seismic monitoring study. Such a study needs integrated input from a number of disciplines: after building a proper reservoir model, reservoir simulations have to be undertaken to produce relevant scenarios to be expected throughout production. These must then be translated into corresponding seismic parameters from rock-physical principles before, finally, seismic modeling can be undertaken for various acquisition geometries and subsequent processing alternatives can be tested.
Traditionally, seismic monitoring parameters have been deduced from post-stack data through changes in the vertical P-wave reflection coefficient, expressed by the corresponding acoustic impedance ZP = ρVP, where VP is the acoustic P-wave velocity and ρ the bulk density. This has essentially allowed inversion for only one effective reservoir parameter. Knowing that there may be concurrent changes in several parameters, this has made the interpretation of the seismics difficult. More recently, however, practical use of AVO data has been introduced5, enabling the corresponding shear-wave impedance ZS also to be determined. This simultaneous determination of P- and S-wave impedances allows changes in multiple reservoir properties, for instance both saturation and pore pressure, to be distinguished, assuming that other parameters remain constant.
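For illustration, the impedance quantities above and their relative time-lapse changes follow directly from velocities and density. The base/monitor values below are assumed, loosely representing a sand before and after water flooding:

```python
# Acoustic (P-wave) impedance Z_P = rho*V_P and shear impedance Z_S = rho*V_S,
# and their relative changes between a base and a monitor survey.
# All velocity/density values are assumed illustrative numbers.

def impedances(vp, vs, rho):
    """Return (Z_P, Z_S) from P- and S-velocities (m/s) and density (kg/m3)."""
    return rho * vp, rho * vs

zp0, zs0 = impedances(vp=2800.0, vs=1500.0, rho=2150.0)   # base survey
zp1, zs1 = impedances(vp=2900.0, vs=1490.0, rho=2200.0)   # monitor survey

dzp = (zp1 - zp0) / zp0   # Z_P responds to both fluid substitution and pressure
dzs = (zs1 - zs0) / zs0   # Z_S is largely insensitive to the pore-fluid modulus
```

The contrast between `dzp` and `dzs` is what makes the joint P- and S-impedance inversion able to separate saturation effects from pore-pressure effects.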
An investigation of alternative EOR processes having potential application in the giant Ekofisk chalk field is presented. Technical feasibility, process readiness, oil recovery potential, and related uncertainties and risks of five selected EOR processes, namely hydrocarbon (HC) WAG, nitrogen (N2) WAG, carbon dioxide (CO2) WAG, air injection and microbial EOR (MEOR), are assessed for possible application at Ekofisk. The objective of the screening study was to evaluate and rank the EOR alternatives and to select the most attractive process(es) on which to pursue further work toward possible field pilot testing. The focus of the paper is on the technical assessment of the relative oil recovery potential of each process, and on the importance of identifying critical operational and logistical considerations for implementation of an EOR process in the offshore North Sea operating environment.
Estimates of potential EOR incremental oil recovery for the Ekofisk field can be quite significant. However, key project development and implementation issues and additional cost elements must be weighed equally with oil recovery forecasts in any EOR process ranking. Some of these issues (e.g. injection gas supply, facilities requirements, and the impact of EOR on chalk compaction, subsidence and wellbore integrity) may be significant enough to eliminate a process from further consideration.
In addition, there are significant differences in the quantity and quality of key laboratory and field data supporting the viability of the various EOR processes being considered. Only a limited amount of field-specific data are available to calibrate the performance predictions for some of the processes. There is also a wide variation in the technical readiness of each process to begin field pilot design studies. Table 1 summarizes the state of technical readiness for field implementation of each process and identifies some of the major risk elements and remaining work required to progress these EOR processes at Ekofisk.
The Ekofisk Field is located in the Norwegian Sector of the North Sea, Figure 1. The reservoir is an elongated anticline with the major axis running North-South, covering roughly 12,000 acres, Figure 2. It produces from two fractured chalk horizons, the Ekofisk and Tor Formations, separated by a tight zone. The overlying Ekofisk Formation lies at a depth of about 9,600 feet, and its thickness varies from 350 to 500 feet with porosities from less than 30% to 48%. The underlying Tor Formation thickness varies from 250 to 500 feet with porosities from less than 30% to 40%. About two thirds of the 6.4 billion STB OOIP is in the Ekofisk Formation. The initial reservoir pressure was 7135 psia at a depth of 10,400 feet. The field initially contained an undersaturated volatile oil with a bubble point pressure of 5560 psia at a temperature of 268°F.
Ekofisk1 was discovered in 1969 and test production started in 1971 from the discovery well and three appraisal wells. Commercial test rates prompted development of the field from three platforms. Permanent facilities with 54 well slots and 300,000 STB/D (design capacity) process facilities were operational in May 1974. Development drilling started in June 1974. Oil production peaked at 350,000 STB/D in October 1976. Produced gas was reinjected2 until a gas pipeline to Emden, Germany, was installed in September 1977.
van de Hoek, P.J. (Shell International Exploration and Production B.V.) | Kooijman, A.P. (Shell International Exploration and Production B.V.) | De Bree, Ph. (Shell International Exploration and Production B.V.) | Kenter, C.J. (Shell International Exploration and Production B.V.) | Sellmeyer, H.J. (Delft Geotechnics) | Willson, S.M. (Terra Tek, Inc.)
This paper presents a theoretical and experimental investigation into the possibility of cavity re-stabilisation after initial sand failure by formation of stable wormholes (‘pipes') behind the casing / liner. Piping has been observed in dikes in The Netherlands.
A model has been developed to compute the degree of cavity re-stabilisation and associated sand production rates. The theoretical results show that formation of wormholes can result in re-stabilisation and associated reduction in sand production.
The theoretical work has been verified by laboratory sand production experiments. Experiments on friable sandstones show that formation of wormholes can take place as part of a re-stabilisation process. Experiments in homogeneous unconsolidated sandstones, on the other hand, show that under elevated stresses stable wormholes cannot form in such sands. Experience with dikes, however, shows that wormholes can occur in these sands when they are interbedded with shales/clays.
The results above in homogeneous sandstone are further supported by tests with horizontal pre-drilled, uncemented liners in weakly consolidated artificial sandstones. Formation of stable sand arches over the liner holes takes place at low values of far-field stress. However, elevated values of far-field stress lead to a continuous, high-rate (‘massive') sand production.
Field observations of sand production show that after initial sand failure, some form of downhole cavity re-stabilisation takes place1. For example, beaning up a well will lead to a high initial sand production rate, which subsequently declines and in many cases eventually disappears. Another example of re-stabilisation is the observation of continuous low-rate sand production during many years in (sometimes strongly) depleting reservoirs without apparent cavity collapse and subsequent massive sand production.
Previous sand failure prediction research has shown that the primary role of fluid flow in sand production is the transport of loose sand (debris) resulting from compressive failure, rather than failure of the intact sandstone itself2. The subject of this paper is the potential importance of wormhole formation within this sandstone debris surrounding the well as a cavity-stabilising mechanism. It is well known that the presence of a zone of failed rock material supporting the intact rock can result in a significantly increased stability of the intact rock. If damaged rock is not produced out, rock failure cannot progress and (further) sand production will not occur. A proper understanding of the behaviour of the damaged rock and of the circumstances under which it is produced out (for example, by the presence of a watercut3) is essential for interpreting and predicting sand production behaviour observed in the field.
A number of different mechanisms that can stabilise failed sandstone against being transported into the wellbore have been identified. Arch formation in loose, dry sand has been experimentally demonstrated4 for dilatant sands and for not too high loads (to prevent grain crushing). In addition, arch stability is greatly enhanced by capillary cohesion5. Another proposed mechanism is a significant permeability increase of the failed sand pack as a result of fluid-flow induced dilation6. The sand pack, which is held together by capillary forces, can undergo large deformations (up to 20%) without losing significant strength. The corresponding permeability increase can significantly stabilise the sand pack by reducing the near-wellbore pore pressure gradient.
In fractured reservoirs, data directly related to fractures are scarce and uni-dimensional (i.e. cores and image logs). Other types of data are better distributed and have proved to be related to fracturing but only indirectly (e.g. lithology or large scale structure).
In such reservoirs, however, one has to understand fracture distribution and behavior at the field scale. A methodology has been developed within TotalFinaElf to define the relationships of all sources of data to fracturing and to integrate them; it has also been compared to another independently published method. To that end, a systematic workflow which goes from 1D to 2D and from static to dynamic data has been defined, and various technologies have been tested.
A field case in North Africa is taken to illustrate this methodology. In this field, fracture data from image logs have been related to: 1) production data; 2) 3D seismic attributes (coherency, amplitude, structural curvature) and fault interpretation and strain; 3) log data such as porosity, thickness and lithology index. The first type of data is used to understand the contribution of each fracture set to flow. The latter two types of data are used to better map fracture distribution at the field scale. Ultimately, this mapping is calibrated with the production data of the other wells, where fracturing data are not available, and is then used to validate the specific role of fracturing in this field.
Better reservoir simulation and infill well planning can subsequently be achieved.
Fractured reservoirs are by nature highly heterogeneous. In such reservoirs, fracture systems control permeability and can also control porosity. Fracture modeling is therefore a key development issue and requires an integrated approach from geology to reservoir simulation and well planning. The geometry (i.e. static model) of the fracture network is generally defined from well data (i.e. cores or image logs) using conventional structural geology techniques. Then, fracture permeability can be assessed by relating the fracture aperture to the fracture excess conductivity measured on electrical image logs1 and/or to critically stressed fractures within the present day stress field2. However, it is the authors' opinion that such approaches can only give, in the best case, a relative estimate of the fracture permeability. A quantitative modeling of fracture flow behavior is therefore required (i.e. dynamic model). At the well scale, this can be done by constructing Discrete Fracture Networks (DFN)3-4 through which flow is modeled and which are matched to well test data5. Ultimately, these models can help in determining the fracture parameters required in dual porosity / dual permeability reservoir flow simulation6,7. However, if these DFN models are appropriate for reservoir sector models, their application to full field simulation is somewhat difficult since their extrapolation outside the well scale can be limited by the heterogeneous vertical and lateral distribution of the fracture networks. The modeling of the spatial distribution of fracturing at the scale of the entire field and its calibration to well data is the purpose of this paper.