This paper describes the design of FLEX, an object-oriented, flexible-grid, black-oil reservoir simulator; the object-oriented approach helps in dealing with the complexity of the problem. This approach is particularly useful because of the difficulties associated with the generation and use of flexible grid geometries (such as Voronoi, median, and boundary-adapting grids).
The entire problem is divided into subsystems such as geometry, gridnodes, gridnode connectivity, grid, reservoir fluid flow, and matrix. Each of these subsystems has objects which are closely related. The dependencies among these subsystems are established. A detailed analysis of each subsystem leads to identifying the classes, which are sets of objects having similar behavior. Attributes and behavior of the classes are assigned. After establishing relationships between the classes, they are arranged into hierarchies. About one hundred major classes have been identified and designed to achieve the desired behavior from FLEX. The programming language used is C++.
Reservoir simulators are inherently complex. A simulator has to deal with issues such as reservoir and grid geometry, fluids, flow calculation, matrix computations, several well and production constraints, visualization, etc. The most important feature of FLEX, a black-oil simulator, is its ability to handle complexities arising from flexible grids. Verma and Aziz (1996) give a description of flexible grids in reservoir simulation. The flexibility in grids increases geometrical complexities as well as complexities in flow calculation. These complexities need sophisticated data structures (and associated procedures) to simplify the problem. It is expected that FLEX will change with time to incorporate new features. One of the important considerations in designing the simulator is the ease with which it can be extended to handle new problems. All these factors combined to make the development process of FLEX quite complex. This paper describes the advantages of using an object-oriented approach for the development of reservoir simulators. The philosophy followed in designing FLEX is that advocated by Booch (1994) and Cheriton (1995).
Basic Features of FLEX
FLEX solves flow equations based on the control volume formulation (see Verma and Aziz, 1996). It uses the Newton-Raphson method to iteratively solve for the variables. A connection-based approach is employed to form the Jacobian matrix and the residual vector (see Lim, Shiozer and Aziz, 1995 and Verma and Aziz, 1996). Presently the simulator is developed to handle only two immiscible phases.
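As a sketch of the Newton-Raphson iteration mentioned above, the following Python fragment solves a small nonlinear residual system; the residual and Jacobian are illustrative stand-ins, not FLEX's actual flow equations or connection-based assembly.

```python
import numpy as np

# Hypothetical two-variable residual system standing in for the
# flow residuals a simulator assembles; R and its Jacobian J are
# illustrative only.
def residual(x):
    p, s = x  # pressure-like and saturation-like unknowns
    return np.array([p**2 + s - 4.0, p + s**2 - 3.0])

def jacobian(x):
    p, s = x
    return np.array([[2.0 * p, 1.0],
                     [1.0, 2.0 * s]])

def newton_raphson(x, tol=1e-10, max_iter=50):
    # Repeatedly solve J * dx = R and update x until R vanishes.
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), r)
    return x

x = newton_raphson(np.array([1.0, 1.0]))
```

In a simulator the same loop runs per timestep, with the Jacobian assembled from the connection list rather than written analytically.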
The gridnodes can be located so that they represent reservoir geometry, wells, faults, etc. Figure 1 is an example of the flexible grid generation capabilities of FLEX.
An object-oriented approach was followed in the design of FLEX to handle the complexities associated with a flexible grid simulator, and to provide for future enhancements.
The existence of multiple processors on a network or a parallel machine enables sets of reservoir simulations to be performed. It is then possible to build up a model of the reservoir as a response surface, a function of input variables related to symbols in the user's data definition. By controlling these variables from a master program and using experimental design methods in a series of runs, we can model the behavior of the simulation and predict its response to further engineering requests. Sequential designs are particularly relevant when a bank of available processors exists. Once the basic number of runs required to parameterize the reservoir response functions has been performed, additional simulations may be used to obtain error bars on quantities such as recoverable oil.
Once the response surface exists, both automatic optimization and large scale risk analysis predictions may be performed at high speed.
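A minimal sketch of the response-surface idea, assuming a single control variable (an injection-rate-like quantity) and a stand-in function in place of a real simulator run; all names and numbers are hypothetical:

```python
import numpy as np

# Stand-in for an expensive reservoir simulation: returns a
# recovery-like response for a given control variable value.
def simulate(rate):
    return -(rate - 2.5) ** 2 + 10.0  # hypothetical response

# Experimental design: six runs spanning the control range.
rates = np.linspace(0.0, 5.0, 6)
recoveries = np.array([simulate(r) for r in rates])

# Fit a quadratic response surface to the runs by least squares.
coeffs = np.polyfit(rates, recoveries, 2)
surface = np.poly1d(coeffs)

# Optimize on the cheap surface instead of re-running the simulator.
fine = np.linspace(0.0, 5.0, 501)
best_rate = fine[np.argmax(surface(fine))]
```

Once fitted, the surface can be evaluated thousands of times per second, which is what makes the automatic optimization and risk analysis described above feasible.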
This approach combines well with multiple realization geological modeling. Running a number of geologies enables the error involved in the simulation of a given engineering scenario to be quantified - this can be used to predict the uncertainty on all the predictions of the study.
By monitoring the results of the simulations interactively, and measuring the quality of the history match obtained with each, it is possible to condition subsequent runs - for example by reducing the use of realizations which consistently yield poor matches.
We describe software to perform such multiple realization studies, acting as an interactive supervisor through a simple open PVM (parallel virtual machine) interface to a reservoir simulator.
Current reservoir simulation is addressing the problem of uncertainty. Errors in the basic description of the reservoir may be estimated by comparing more than one geostatistically generated realization of the rock property distribution. Performing a number of simulations gives us the possibility of assessing the errors involved in a study. We need a system for converting a number of runs using different geologies into an assessment of the risks involved in management and economic terms. Further, as we wish to engineer the field into the future, we wish to understand the response of the reservoir to factors which are under the engineer's control.
A way of doing this is to set up a parameterisation of the reservoir response. To investigate this we clearly need to perform a number of simulations - these may be regarded as numerical experiments. The choice of the set of variables which define these runs constitutes the experimental design. It is possible to use any sufficient set and extract some information. However, the following are clearly desirable:
Geostatistical techniques generate fine-scale reservoir descriptions that can integrate a variety of data such as cores, logs, and seismic traces. However, predicting the dynamic behavior of fluid flow through multiple fine-scale realizations has remained an elusive goal. Typically an upscaling algorithm is applied to obtain a coarse-scale heterogeneity model. Most upscaling algorithms are based on a single-phase pressure solution and are thus questionable at best for multiphase flow applications. Pseudo-relative permeabilities have often been used as a tool for multiphase flow upscaling, but such approaches are highly process dependent and thus have limited applicability. We describe a powerful, versatile, multiphase, three-dimensional streamline simulator for integrating fine-scale reservoir descriptions with dynamic performance predictions. Unlike conventional streamtube models, the proposed approach relies on the observation that, in a velocity field derived by finite differences, streamlines can be approximated by piece-wise hyperbolas within grid blocks. Thus, the method can be easily applied in 3-D and incorporated into conventional finite-difference simulators. Once streamlines are generated in three dimensions, a variety of one-dimensional problems can be solved analytically along the streamlines. The power and utility of the streamline simulator are demonstrated through application to a detailed characterization and waterflood performance study of the La Cira field, Colombia, South America. We illustrate the advantage of the streamline simulator through comparisons with a commercial simulator for a waterflood pattern. The streamline simulator is shown to be orders of magnitude faster than traditional numerical simulators and does not suffer from numerical dispersion or instability.
We illustrate the use of this simulator for evaluation of multiple, fine-scale realizations of heterogeneity models and quantification of uncertainty in predicting dynamic behavior of fluid flow.
A geostatistical approach is commonly used to reproduce reservoir heterogeneities.1 The objective is to generate a few "typical" descriptions incorporating heterogeneity elements that are difficult to include by conventional methods. Conditional simulation is used to create property (permeability, porosity, etc.) distributions with a prescribed spatial correlation structure that honor measured data at well locations. Stochastic reservoir modeling provides multiple equiprobable, data-intensive reservoir models rather than a single, smooth, usually data-poor deterministic model. Experience has shown that these data-intensive stochastic reservoir models yield a better history match of production data and also provide a measure of uncertainty in prediction of future performance.
Fine-scale realizations are the most detailed representation of the heterogeneities that exist in a petroleum reservoir. The ideal flow simulation process would be to input this fine-scale data in its entirety. However, conventional numerical simulators do not allow this readily. Reservoir models built for conventional simulators using the fine-scale data are huge and unmanageable. The flow simulation process thus becomes very tedious, slow, and expensive, in addition to any hardware limitations that may exist. Typically an upscaling algorithm is applied to obtain a coarse-scale heterogeneity model, which is then input into the conventional simulators. However, most upscaling algorithms are based on a single-phase pressure solution and are thus questionable at best for multiphase flow applications. Pseudo-relative permeabilities have often been used as a tool for multiphase flow upscaling, but such approaches are highly process dependent and have limited applicability. There is a definite need for a fast and powerful simulator that allows the easy use of fine-scale realizations as such, without the need for any upscaling.
In this paper we describe a new, fully three-dimensional, multiphase, streamline simulator for modeling waterflood performance.
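As a rough illustration of streamline tracing, the sketch below accumulates a time of flight block by block, assuming velocity varies linearly across each block (Pollock-type tracing). This is a simplified stand-in for the piece-wise hyperbolic approximation used by the simulator, and all velocities and block sizes are hypothetical.

```python
import math

# Transit time of a streamline across one grid block, assuming the
# velocity varies linearly from the inflow face (vx_in) to the
# outflow face (vx_out) over block length dx.
def block_transit_time(vx_in, vx_out, dx):
    g = (vx_out - vx_in) / dx           # velocity gradient in the block
    if abs(g) < 1e-12:                  # uniform velocity: t = dx / v
        return dx / vx_in
    # Integrating dt = dx / v(x) with v(x) = vx_in + g*x gives a log.
    return math.log(vx_out / vx_in) / g

# Time of flight accumulates block by block along the streamline;
# one-dimensional displacement solutions are then mapped onto it.
blocks = [(1.0, 1.2, 10.0), (1.2, 0.8, 10.0), (0.8, 0.8, 10.0)]
tof = sum(block_transit_time(vi, vo, dx) for vi, vo, dx in blocks)
```

The key design point carried over from the paper is that each block contributes an analytic transit time, so no small timesteps (and hence no numerical dispersion) are introduced along the streamline.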
Monitoring the performance of the Kuparuk River Unit waterflood at a multi-well, pattern level is a critical part of field operations. The reservoir performance analysis optimizes allocation of injected fluid, helps identify well work and infill drilling opportunities, supports reservoir management strategies, and provides a basis for development planning. Faulting and stratigraphy of the reservoir make it difficult to determine areal and vertical allocation factors for fluids in the surveillance patterns; therefore material balance calculations are required to judge their validity. The problem is exacerbated by the number of patterns analyzed and the need to share intermediate results among all of the engineers involved in the process.
To help solve the problem, a suite of programs for interactive pattern material balance was developed. The program suite includes a principal material balance calculation application along with several ancillary programs for interactive database updates and post-processing.
The program suite allows engineers to interactively change input parameters and review material balance results. Internal checks ensure consistency throughout the field. The programs are fully integrated with a large central relational database which includes tables for areal and vertical allocation factors, production, injection, and static pattern information.
The Kuparuk River Field is located west of the Prudhoe Bay Unit on the North Slope of Alaska. The field is a highly faulted reservoir with an areal extent of over 200 square miles. The field is under active waterflood with some areas also under immiscible or miscible WAG (water, alternating with gas) flood.
A formal review of reservoir performance at the pattern level is done on an annual basis. The review includes defining patterns and allocating fluids produced from or injected into the patterns. This is followed by a comprehensive review of the performance on a pattern-by-pattern basis and rate forecasts. The review is necessary to help identify workover and infill drilling opportunities to optimize the waterflood. It is also required to help with operational considerations such as the allocation of injected fluids throughout the field.
The validity and usefulness of the surveillance review is highly dependent on accurate allocation of production and injection to each pattern in the Kuparuk River reservoir. This is a difficult problem because over 600 patterns in 2 zones are analyzed. The process is led by a team of five to six engineers. An additional 15 to 20 engineers and geoscientists are involved in defining patterns and evaluating allocation factors. Because of the large number of individuals involved and the large amount of data, a group oriented solution was required in order to achieve consistency across the field. Internal data checks were also required so that wells could not be over or under allocated and to accurately account for all fluids.
History of Software Development Efforts. Several computing solutions have been explored. The first software solution was a mainframe material balance program tied to SAS databases. Iterations required editing ASCII text files and overnight batch runs to update databases followed by a batch processing for pattern analysis. Visualization consisted of batch programs that submitted plots to central printers, often with half-day turnaround times. A major advantage of the system was its use of a centralized database and enforced consistency. The disadvantages of long turnaround times and other difficulties within the mainframe environment made the process inefficient and manpower intensive.
As personal computers became more popular, much of the process was ported to spreadsheets. The analysis process consisted of downloading data from mainframe databases into a text format that was read into the spreadsheets. Engineers manipulated and modified the data within the spreadsheets to perform individual pattern analyses. The process was somewhat more efficient in terms of manpower, but consistency was nearly non-existent.
The primary mission of the Society of Petroleum Engineers (SPE) is to disseminate petroleum engineering technology. One of the most important sources of petroleum engineering technology is the extensive inventory of SPE technical papers. The SPE technical paper library currently contains over 28,000 papers and is expanding at a rate of 2,000 papers per year. Challenged with increasing publishing costs, an increase in papers that have been approved for publication and the goal to streamline the publication process, the SPE Electronic Publishing Committee (EPC) working together with the SPE publishing staff has turned to technology to help address these issues. One important product of these efforts is the SPE Masterdisc and SPE Image Library.
The SPE Masterdisc is an MS-Windows, PC-DOS and Macintosh computer based software system that links a commercial search engine to an indexed database. The database represents the first page of each technical paper submitted to the SPE from 1951-1995 in text format. All published and unpublished papers have been captured to the database. Every word is searched, along with paper number, title, author, organization, year, and meeting, to match search criteria set by the user.
SPE Image Library
In January 1994 the SPE Board authorized advancement of SPE funds to match SPE Foundation contributions to begin work on the SPE Image Library. The SPE Image Library is a multi-disc cd-rom based product that contains images of all the pages of all the SPE technical papers published from 1951-1995 and comes with the SPE Masterdisc. About 300,000 images have been captured in tagged image file (.tif) graphic format. The Masterdisc searchable database has been linked to the Image Library database, making it possible to read or print the full paper instantly on your computer.
How does the SPE Image Library work? The user enters search criteria that can be any combination of author, paper number, keywords in title, organization, journal, meeting, location, year, word, or phrase. Once this form is completed (Fig. 1), the user presses the enter key and the total number of papers found in the database is displayed. For our example here, the words Ozkan and Erdal were found in the author field for twenty SPE papers.
A simple click on the search results icon displays a list of the papers, (Fig. 2). The user can scroll through this list, print it, return to the search screen and modify search criteria or select a paper for viewing. The highlighted paper number 20964 was selected for viewing. The user can now decide to view the first page of this paper (Masterdisc text of page one only) or the entire paper (Image Library graphic files). By clicking on the highlighted paper, the text of page one is displayed.
Now the user can click on the camera icon to display the entire paper. The software prompts for the appropriate cd-rom disc number. In this example disc number 18 is requested. After disc 18 is substituted for the Masterdisc in the cd-rom drive, the first page of the paper is displayed.
We describe the development of a knowledge-based system to predict relative permeabilities to describe the flow of fluids in oil, gas or condensate reservoirs. The software applies heuristic knowledge and artificial intelligence techniques to identify the appropriate experimental methods for measuring the relative permeabilities, and to decide on the relevant mathematical models and computational steps to simulate the experiments. The selected models and computational steps are used together with the built-in database to generate the relative permeability data. Rules that relate the combination of field development scenario, fluid PVT properties, rock lithology and petrophysical properties are included in the knowledge base.
The basis of the software is that, in some instances, precisely defined rules based on quality published data and our expertise can do better than deterministic and purely statistical methods. This is especially true in areas with limited and/or poor-quality data, as currently exist in gas/condensate and gas/water relative permeability predictions. The paper describes the software design approach, philosophy and architecture. The mathematical and heuristic models used to generate the relative permeability data are briefly described. The target applications of the software are as follows: 1) Tool to generate relative permeability and capillary pressure data for input to numerical simulators and material balance calculations; 2) Tool to perform a series of "what if" calculations to determine the effects of lithology, fluid saturations and PVT properties, interfacial tension and velocity on endpoint saturations and relative permeability functions; 3) Tool to analyse/interpret laboratory coreflood data; 4) Tool to generate relative permeability data when coreflood data is not available or is incomplete (e.g. when only endpoint data are available); and 5) Tool for use by the reservoir engineer to design a special core analysis program for a new field or study.
Relative permeability is used to describe multiphase flow in a porous medium. Such data are important input to many reservoir engineering calculations, providing a basic description of the way in which the phases will move in the reservoir. Definition of the flow process can have a significant effect on the predicted hydrocarbon production rate and duration and is important in calculating the volume of recoverable hydrocarbon reserves. The predicted production rates, the plateau level and duration, plus the expected water cut will all influence development plans. The number of wells, the balance between injectors and producers, the sizing of separation equipment, and design of facilities in general can all be impacted upon by the multiphase flow properties of the reservoir. Ultimately, together with many other inputs, relative permeability assists in determining reservoir economics, and hence guiding investment decisions.
Although ways to determine relative permeability from measurements made in the field have been proposed, they are fraught with problems and have never been regularly used. The most common method for determining relative permeability has been laboratory special core analysis. Laboratory measurement of representative relative permeability data on a reservoir core-fluid system is a complex task. The experiments are costly, typically more than $100,000 each, and time consuming, often taking up to six months to complete.
A new approach to quantifying parameter uncertainty has been implemented in the numerical welltest simulator GTFM. The method incorporates inverse techniques, advanced statistical analysis, and probabilistic techniques. The approach is holistic in that it accounts for uncertainty introduced at all stages of the testing and analysis procedure. The methodology expands upon accepted methods by giving greater emphasis to model assumption diagnostics and parameter uncertainty.
Model assumption diagnostics are used to distinguish between competing flow models. Candidate flow models are evaluated based on the distribution of residuals between the measured data and the model response. A flow model is accepted if its residuals are normally distributed and discarded if the residuals deviate from the normal distribution.
Typically, parameter uncertainty from pressure-transient tests is analyzed using a discrete sensitivity analysis in which a single parameter is varied while keeping all other parameters constant. More recently, inverse techniques have focused on the estimation of confidence intervals which put bounds on the values of fitting parameters. This approach is limited by the assumption that there are no correlations among fitting parameters, and that non-fitting parameters are known perfectly.
For our new probabilistic-based approach, joint-confidence regions are calculated to quantify the fitting-parameter uncertainty resulting from parameter correlation. For non-fitting parameters, distributions are assigned, sampled using a Latin Hypercube sampling routine, and an inverse procedure is performed for each set of sampled parameters. This process results in a distribution of joint-confidence regions for the fitting parameters (Fig. 1), which in turn is an expression of uncertainty in the non-fitting parameters. The new approach also makes it possible to graphically display the correlations between fitting and non-fitting parameters for sampled populations.
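A minimal sketch of the Latin Hypercube sampling step: each non-fitting parameter's range is split into n equiprobable strata, one value is drawn per stratum, and strata are paired randomly across parameters. The parameter names and ranges are hypothetical, and this is not the GTFM implementation.

```python
import random

def latin_hypercube(ranges, n, seed=0):
    # ranges: {name: (lo, hi)}; returns n sampled parameter sets.
    rng = random.Random(seed)
    cols = {}
    for name, (lo, hi) in ranges.items():
        strata = list(range(n))
        rng.shuffle(strata)              # random pairing across parameters
        # One draw per stratum guarantees full coverage of the range.
        cols[name] = [lo + (hi - lo) * (k + rng.random()) / n
                      for k in strata]
    return [{name: cols[name][i] for name in ranges} for i in range(n)]

samples = latin_hypercube({"porosity": (0.1, 0.3),
                           "skin": (-2.0, 5.0)}, n=10)
```

In the approach described above, each sampled set would then seed one inverse run, and the resulting family of joint-confidence regions expresses the non-fitting-parameter uncertainty.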
Forward simulations using parameter combinations from Figure 1 result in virtually no change in the goodness-of-fit of the simulations to the data (Fig. 2). This approach to quantifying parameter uncertainty differs from other approaches in that the simulated fit shows no degradation within the parameters' uncertainty range, because correlations between parameters are not neglected.
Uncertainty in fitting-parameter values can result from uncertainty in the conceptual flow model, data noise, correlations among fitting parameters, and correlations among fitting parameters and imperfectly known nonfitting parameters.
This paper presents a new parallel ILU preconditioner. It is based on the technique of Sequential Staging of Tasks (SST), which overcomes the difficulty of recursiveness arising in ILU factorization. This new parallel ILU preconditioner is easy to reconstruct from the sequential version: the only requirement is to insert synchronization code for the stages of tasks into the sequential version; no other modifications to the original serial code are needed. The ability to match various ordering schemes is maintained, and the new merit of handling different numbers of processors is obtained. Numerical results were obtained using a thermal model with different grid systems. The parallel speedup is satisfactory.
Simulation of thermal recovery processes using a fully implicit treatment of component concentrations, phase saturations, pressure and temperature requires the solution of large systems of linear equations. Currently, the most robust techniques for solving such large linear systems are preconditioned conjugate-gradient-like methods such as ORTHOMIN, which is widely used in traditional solvers for sequential computers.
The major computation in ORTHOMIN is the vector inner product, which is easy to parallelize. However, the most robust preconditioners, such as ILU factorization and nested factorization, are not suitable for parallel computers because of their intrinsic recursiveness; it is difficult to parallelize an ILU preconditioner directly. At present, the general approach is to use new preconditioning methods, such as parallel nested factorization preconditioning. These methods achieve high parallel efficiency on parallel computers, but they also have limitations:
1. limited types of ordering schemes;
2. comprehensive modifications of the sequential codes;
3. slow convergence.
This paper presents a new parallel ILU preconditioner based on the technique of Sequential Staging of Tasks (SST). Using the SST technique, the new preconditioner exploits the small-scale parallelism of ILU factorizations and achieves a temporary, larger-scale parallelism within a given computing domain, consequently yielding a practical parallel preconditioner. The new preconditioner maintains all the characteristics of the sequential version and is easy to reconstruct from it. It is applicable to computers with different numbers of processors. The numerical experiments show that the parallel speedup is satisfactory: on an NP 1/52 Mini-Supercomputer System (produced by GOULD Co. in 1989; shared main memory; symmetrical operating system UTX/32) with two processors, the parallel speedup is 1.85.
Consider the linear system Ax = b. As a preconditioner for ORTHOMIN, ILU factorization provides a matrix M that is a "good" approximation to the coefficient matrix A and is easy to factor; convergence may be accelerated by solving the equivalent system M^-1 A x = M^-1 b.
Such preconditioning should offset the added cost of factoring M, and of performing a forward and back solution with each matrix-vector multiplication, by substantially reducing the number of iterations.
The main stages of ILU factorization are as follows:
1. A symbolic factorization, defining the non-zero structure of the incomplete factorization.
2. A numeric factorization, performed block by block (For k = 1, NB Do), in which the main (pivot) elements are inverted.
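A compact sketch of the zero-fill variant, ILU(0), on a small dense-stored matrix: Gaussian elimination that keeps only positions where A itself is non-zero, so the incomplete factors share A's sparsity pattern. This is illustrative only; a production preconditioner uses sparse block storage and, as in this paper, stages the recursive loop across processors.

```python
import numpy as np

def ilu0(A):
    # ILU(0): LU factorization restricted to A's own non-zero pattern.
    n = A.shape[0]
    LU = A.astype(float).copy()
    pattern = A != 0.0
    for k in range(n - 1):
        for i in range(k + 1, n):
            if not pattern[i, k]:
                continue
            LU[i, k] /= LU[k, k]          # multiplier stored in L part
            for j in range(k + 1, n):
                if pattern[i, j]:         # discard fill outside pattern
                    LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
LU = ilu0(A)  # tridiagonal A generates no fill, so ILU(0) is exact here
```

The inner updates are recursive in k, which is exactly the dependency that makes direct parallelization hard and motivates staging techniques such as SST.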
We describe an original three-dimensional (3D) fracture modelling environment designed to allow users to interactively construct near-well fracture models suitable for use as input to fluid flow simulators. Data input consists of a surface model of the seismic-scale structure, well fracture data, and user-defined parameters. Sedimentary discontinuities and faults can be extrapolated from the wells in 3D. Within this near-well model, tension fractures can be generated using a genetic process which aims to reproduce the initiation, growth and interaction of fractures in order to create realistic fracture systems. Graphs are available for calibrating the fractures to assumed statistical distributions.
Modelling fractures within a reservoir is a complex problem both from a geological and a reservoir engineering point of view. The problems posed are at least as complex as those associated with sedimentary modelling and the production consequences of fractures can be dramatic both in a positive and negative sense. It is increasingly acknowledged that in order to understand and forecast the behaviour of fracture influenced reservoirs we must attempt to realistically reproduce the fracture system geometry and use this as the basis for fluid flow calculation.
The geometrically simplistic models of fractures in most existing simulators, e.g. the cubic model of Warren and Root, are unrepresentative of most natural fracture systems. It is for this reason that several companies have begun work on fracture modellers which attempt to produce more realistic fracture systems.
A design criterion is developed to select the optimum bottomhole rotary assembly configuration (drill collar size and stabilizer positions), allowing maximum correction of hole inclination by changing the weight on the bit while drilling a section of hole. A computer algorithm, written in FORTRAN, optimizes the bottomhole assembly configuration. This algorithm can be used with any BHA model that calculates forces at the bit under static or dynamic conditions. A case study is presented to explain the design algorithm.
Directional wells are usually kicked off from vertical by some type of bent sub/bent housing and downhole motor. When the hole inclination becomes sufficiently high, drilling may resume with a steerable system or a rotating system.
In conventional rotary systems a packed-hole assembly uses a sufficient number of stabilizers so that no significant change in hole angle occurs. Likewise, proper stabilizer positioning can increase the hole angle via the fulcrum principle, or decrease it via the pendulum principle (1985). Conventional rotary systems are normally designed to drill holes with a constant rate of change of inclination. However, discrepancies between predictions and field results frequently occur. Therefore, experience and knowledge are necessary tools in the selection of such a drilling system.
Steerable systems are widely used in drilling horizontal and extended-reach wells. This is mainly because drilling direction can be changed in a more controllable way to follow a predetermined trajectory without the need to change the assembly. The bottomhole configuration of steerable systems, however, creates major torque and drag problems while drilling highly deviated wells in the sliding mode. It becomes difficult to maintain an even torque distribution at the bit, which is essential for tool-face control. High drag can limit the ability to maintain constant weight on the bit; this reduces the penetration rate and makes the tool-face hard to control as the reaction torque at the bit varies. Furthermore, drag can limit the total horizontal displacement.