This paper describes the modeling, development, and implementation of a database for multiphase flow data from a large-scale research laboratory.
The outdoor laboratory facilities include a total of 1000 m of pipe connected to fluid processing utilities, liquid pumps, and a 700 kW gas compressor operating at system pressures up to 90 bar. Normally, approximately 100 physical and virtual instruments are used, and at the moment more than 400,000 measurements have been logged, processed, and stored in the database. The database communicates with systems for data acquisition and analysis, and researchers can access data from their workstations.
The database is implemented in a relational database management system (RDBMS) with SQL interface. Before the implementation of a database can start, it is important to develop a data model which represents the selected part of the real world in a way that satisfies the system requirements. The different aspects of the data model are treated thoroughly and include (1) experiment, (2) geometry, (3) instruments, (4) measurements, (5) fluids and fluid composition, and (6) project related data.
Entity-Relationship diagrams of the model are presented and explained. Time series are stored in the database using a specially developed C function. Other functionality is handled by database triggers and procedures stored in the database itself (written in a procedural extension to SQL), and by UNIX shell scripts. Problems and advantages of data redundancy and reuse are discussed, as well as the need to avoid destroying the "history" in the database. The communication with the systems for data acquisition, analysis, and presentation is described.
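As an illustration only (not the laboratory's actual schema, which resides in a commercial RDBMS and uses a dedicated C loader for time series), the sketch below shows in Python/sqlite3 how a simplified experiment-instrument-time-series structure of this kind could be laid out; all table and column names are hypothetical:

    import sqlite3

    conn = sqlite3.connect("multiphase_lab.db")
    cur = conn.cursor()

    # Simplified, hypothetical subset of the data model: an experiment, its
    # instruments, and one table holding the logged time series per instrument.
    cur.executescript("""
    CREATE TABLE IF NOT EXISTS experiment (
        exp_id          INTEGER PRIMARY KEY,
        description     TEXT,
        pressure_bar    REAL,
        pipe_diameter_m REAL
    );
    CREATE TABLE IF NOT EXISTS instrument (
        instr_id   INTEGER PRIMARY KEY,
        tag        TEXT,   -- e.g. a pressure or holdup gauge tag
        position_m REAL    -- location along the test section
    );
    CREATE TABLE IF NOT EXISTS timeseries (
        exp_id   INTEGER REFERENCES experiment(exp_id),
        instr_id INTEGER REFERENCES instrument(instr_id),
        t_s      REAL,     -- sample time in seconds
        value    REAL
    );
    """)

    cur.execute("INSERT INTO experiment VALUES (1, 'two-phase test', 61.0, 0.19)")
    cur.execute("INSERT INTO instrument VALUES (42, 'PT-101', 350.0)")

    def store_timeseries(exp_id, instr_id, samples):
        """Bulk-insert one logged signal given as (time, value) pairs."""
        cur.executemany(
            "INSERT INTO timeseries VALUES (?, ?, ?, ?)",
            [(exp_id, instr_id, t, v) for t, v in samples],
        )
        conn.commit()

    store_timeseries(1, 42, [(0.0, 61.2), (0.1, 61.4), (0.2, 61.3)])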
During the past 10 years, thousands of large-scale two-phase experiments have been performed at the SINTEF Multiphase Flow Laboratory at Tiller near Trondheim in Norway.
Experiments have been performed in different pipe configurations: inclinations varying from -1° to +90°, diameters ranging from 0.1 m to 0.29 m, and system pressures up to 90 bar. The laboratory has been operated with various hydrocarbon liquids such as diesel, lube oil, and naphtha, with nitrogen used as the gas phase.
The outdoor laboratory facilities include a total of 1000 m of pipe connected to fluid processing utilities, liquid pumps, and a 700 kW gas compressor operating at system pressures up to 90 bar. The laboratory performs advanced multiphase flow experiments for the oil and gas industry at near-realistic field conditions, and the results are used in the development of pipeline transportation simulation software.
Experimental work with extensive instrumentation creates large amounts of data, often with a complex internal structure. When such experiments are carried out for many years, the need for a reliable database system with good performance is obvious. At the SINTEF Multiphase Flow Laboratory, the results were stored in an older database which lacked the required functionality and was expensive and time-consuming to maintain. To utilize the advantages of a modern database system, a new database model was developed and implemented, together with the necessary utility programs. A concern has also been the ability to store field data, and the possibility to retrieve the relevant input information necessary to run pipeline simulation software.
Normally about 100 physical and virtual instruments are used in an experiment, and at the moment more than 400,000 measurements have been logged, processed, and stored in the database.
In this paper, we present a new approach to software development for solving engineering problems. The paper presents possible solutions to the problems that we encounter during the development of engineering software, where we incorporate different techniques and tools such as a database, case base, knowledge base, expert help, on-line help, multiple applications, and a guidance board/map. The paper also discusses the reusability of code that is derived for a different application. Finally, the paper illustrates how we use this approach to develop a comprehensive software system.
The paper concludes that engineering software must be versatile in nature. The software should be a working tool for industry experts and experienced engineers. The software should also be a training and learning tool for young and inexperienced engineers.
Petroleum engineers use software every day to solve engineering problems. The petroleum industry invests substantial manpower and financial support to develop these engineering tools. During software development, several challenges are faced by the developer. These challenges include (1) offering appropriate technical help to users so that comprehensive data sets can be developed, (2) guiding the user to arrive at correct decisions and choices using proper engineering domain knowledge, expertise, and experience, (3) providing ample guidelines for the proper use of the software, (4) offering automatic data transfer mechanisms among different models, and (5) providing sufficient help and support to satisfy the different needs and requirements of the users. Developing software to address these challenges can increase engineering work efficiency and reduce software training and support costs.
The paper describes our approach to software development based upon our experience in developing a comprehensive system for the design of reservoir stimulation treatments. We have successfully addressed the five (5) challenges listed above. We conclude that although different technologies are used to solve different problems, they can all be integrated in a single package that contains engineering help, data transfer, and expert decision making mechanisms. In our software, we have used Artificial Intelligence technology to build knowledge bases that can help engineers make correct decisions. We have integrated the knowledge bases with sophisticated numerical simulators for the purpose of modeling well stimulation treatments. At the same time, the software provides easy avenues for the users to build comprehensive data sets with the use of databases, case bases, and built-in work sheets.
The existence of multiple processors on a network or a parallel machine enables sets of reservoir simulations to be performed. It is then possible to build up a model of the reservoir as a response surface. This can be a function of input variables related to symbols in the user's data definition. By controlling these variables from a master program and using experimental design methods over a series of runs, we can model the behavior of the simulation. This model can then predict the reservoir's response to further engineering requests. Sequential designs are particularly relevant when a bank of available processors exists. Once the basic number of runs required to parameterize the reservoir response functions has been performed, additional simulations may be used to obtain error bars on quantities such as recoverable oil.
Once the response surface exists, both automatic optimization and large scale risk analysis predictions may be performed at high speed.
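As a hedged illustration of the idea, and not the authors' implementation, the sketch below fits a quadratic response surface to a small designed set of runs and then evaluates it cheaply for optimization or risk analysis; the two control factors and the response values are hypothetical:

    import numpy as np

    # Hypothetical designed runs: two control factors (e.g. injection rate and
    # well spacing) scaled to [-1, 1], plus the simulated response for each run.
    X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                  [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
    y = np.array([52.1, 55.3, 58.0, 63.2, 57.9, 54.0, 60.8, 55.5, 59.7])

    def quad_terms(x):
        """Design matrix for a full quadratic surface in two factors."""
        x1, x2 = x[:, 0], x[:, 1]
        return np.column_stack([np.ones(len(x)), x1, x2, x1 * x2, x1**2, x2**2])

    coef, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)   # fit the surface

    # The surface can now be evaluated thousands of times per second, e.g. for
    # Monte Carlo risk analysis or a crude search for the best operating point.
    candidates = np.random.uniform(-1.0, 1.0, size=(10000, 2))
    predicted = quad_terms(candidates) @ coef
    print("best candidate:", candidates[np.argmax(predicted)])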
This approach combines well with multiple realization geological modeling. Running a number of geologies enables the error involved in the simulation of a given engineering scenario to be quantified - this can be used to predict the uncertainty on all the predictions of the study.
By monitoring the results of the simulations interactively, and measuring the quality of the history match obtained with each, it is possible to condition subsequent runs - for example by reducing the use of realizations which consistently yield poor matches.
We describe software to perform such multiple realization studies, acting as an interactive supervisor through a simple open PVM (parallel virtual machine) interface to a reservoir simulator.
Current reservoir simulation is addressing the problem of uncertainty. Errors in the basic description of the reservoir may be estimated by comparing more than one geostatistically generated realization of the rock property distribution. Performing a number of simulations gives us the possibility of assessing the errors involved in a study. We need a system for converting a number of runs using different geologies into an assessment of the risks involved in management and economic terms. Further, as we wish to engineer the field into the future, we wish to understand the response of the reservoir to factors which are under the engineer's control.
A way of doing this is to set up a parameterisation of the reservoir response. To investigate this we clearly need to perform a number of simulations - these may be regarded as numerical experiments. The choice of the set of variables which define these runs constitutes the experimental design. It is possible to use any sufficient set and extract some information. However, the following are clearly desirable:
A design criterion is developed to select the optimum bottomhole rotary assembly configuration (drill collar size and stabilizer positions). This allows maximum correction in the hole inclination by changing the weight on the bit while drilling a section of a hole. A computer algorithm is written in FORTRAN code to optimize the bottomhole assembly configuration. This algorithm can be used with any BHA model that calculates forces at the bit under static or dynamic conditions. A case study is presented to explain the design algorithm.
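The selection logic can be sketched, in a hedged way, as a search over candidate configurations; the snippet below is written in Python rather than the FORTRAN of the original code, and the bit side-force model is a toy stand-in for whichever static or dynamic BHA model is actually used:

    import itertools

    def bit_side_force(collar_od_in, stab_positions_ft, wob_klb):
        """Toy stand-in for a static or dynamic BHA model; returns the lateral
        force at the bit (positive = building angle, negative = dropping)."""
        fulcrum = 0.15 * wob_klb * (30.0 - stab_positions_ft[0])
        pendulum = -0.05 * collar_od_in * stab_positions_ft[0]
        return fulcrum + pendulum

    def best_configuration(collar_sizes, stabilizer_layouts, wob_range):
        """Pick the configuration giving the widest side-force span over the
        allowable weight-on-bit range, i.e. the most inclination correction."""
        best, best_span = None, -1.0
        for collar, stabs in itertools.product(collar_sizes, stabilizer_layouts):
            forces = [bit_side_force(collar, stabs, w) for w in wob_range]
            span = max(forces) - min(forces)
            if span > best_span:
                best, best_span = (collar, stabs), span
        return best, best_span

    config, span = best_configuration(
        collar_sizes=[6.25, 8.0],                       # collar OD, inches
        stabilizer_layouts=[(3.0, 30.0), (5.0, 45.0)],  # distances from bit, ft
        wob_range=range(10, 51, 10))                    # weight on bit, klb
    print(config, span)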
Directional wells are usually kicked off from vertical by some type of bent sub/bent housing and downhole motor. When the hole inclination becomes sufficiently high, drilling may resume with a steerable system or a rotating system.
In conventional rotary systems a packed-hole assembly uses a sufficient number of stabilizers so that no significant change in hole angle occurs. Likewise, proper stabilizer positioning can increase the hole angle via the fulcrum principle, or decrease it via the pendulum principle (1985). Conventional rotary systems are normally designed to drill holes with a constant rate of change of inclination. However, discrepancies between prediction and field results frequently occur. Therefore, experience and knowledge are necessary tools in the selection of such a drilling system.
Steerable systems are widely used in drilling horizontal and extended-reach wells. This is mainly because the drilling direction can be changed in a more controllable way to follow a predetermined trajectory without the need to change the assembly. The bottomhole configuration of steerable systems, however, creates major torque and drag problems while drilling highly deviated wells in the sliding mode. It becomes difficult to have an even torque distribution on the bit, which is essential to control the tool face. High drag can limit the ability to maintain a constant weight on the bit. This reduces the penetration rate and makes it hard to control the tool face as the reaction torque at the bit varies. Furthermore, drag can limit the total horizontal displacement.
A new approach to quantifying parameter uncertainty has been implemented in the numerical welltest simulator GTFM. The method incorporates inverse techniques, advanced statistical analysis, and probabilistic techniques. The approach is holistic in that it accounts for uncertainty introduced at all stages of the testing and analysis procedure. The methodology expands upon accepted methods by giving greater emphasis to model assumption diagnostics and parameter uncertainty.
Model assumption diagnostics are used to distinguish between competing flow models. Candidate flow models are evaluated based on the distribution of residuals between the measured data and the model response. A flow model is accepted if its residuals are normally distributed and discarded if the residuals deviate from the normal distribution.
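As a minimal sketch of such a diagnostic, assuming a Shapiro-Wilk test and a conventional 5% significance level (neither of which is specified in the paper), the residual check could look like this:

    import numpy as np
    from scipy import stats

    def acceptable_flow_model(measured_pressure, model_pressure, alpha=0.05):
        """Accept a candidate flow model only if its residuals are consistent
        with a normal distribution (Shapiro-Wilk test at significance alpha)."""
        residuals = np.asarray(measured_pressure) - np.asarray(model_pressure)
        _, p_value = stats.shapiro(residuals)
        return p_value > alpha   # True: keep the candidate, False: discard it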
Typically, parameter uncertainty from pressure-transient tests is analyzed using a discrete sensitivity analysis in which a single parameter is varied while keeping all other parameters constant. More recently, inverse techniques have focused on the estimation of confidence intervals which put bounds on the values of fitting parameters. This approach is limited by the assumption that there are no correlations among fitting parameters, and that non-fitting parameters are known perfectly.
For our new probabilistic-based approach, joint-confidence regions are calculated to quantify the fitting-parameter uncertainty resulting from parameter correlation. For non-fitting parameters, distributions are assigned, sampled using a Latin Hypercube sampling routine, and an inverse procedure is performed for each set of sampled parameters. This process results in a distribution of joint-confidence regions for the fitting parameters (Fig. 1), which in turn is an expression of uncertainty in the non-fitting parameters. The new approach also makes it possible to graphically display the correlations between fitting and non-fitting parameters for sampled populations.
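A simplified sketch of that sampling loop is shown below, using SciPy's Latin Hypercube sampler and a least-squares inverse step; the toy pressure model, parameter names, and ranges are placeholders and not GTFM's:

    import numpy as np
    from scipy.stats import qmc
    from scipy.optimize import least_squares

    # Hypothetical non-fitting parameters with assigned uncertainty ranges,
    # e.g. wellbore radius (m) and formation thickness (m).
    lower, upper = np.array([0.05, 8.0]), np.array([0.12, 12.0])
    samples = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(n=50), lower, upper)

    def model_pressure(fitting, nonfitting, t):
        k, s = fitting          # e.g. permeability and skin (placeholders)
        rw, h = nonfitting
        return 100.0 + k * np.log(t / rw**2) + 0.01 * s * h   # toy response, not GTFM

    t = np.linspace(1.0, 100.0, 40)
    data = model_pressure([5.0, 2.0], [0.08, 10.0], t) + np.random.normal(0, 0.2, t.size)

    # One inverse run per sampled non-fitting parameter set; the scatter of the
    # fitted (k, s) pairs expresses the uncertainty carried by those parameters.
    fits = [least_squares(lambda p: model_pressure(p, nf, t) - data, x0=[1.0, 0.0]).x
            for nf in samples]
    print(np.mean(fits, axis=0), np.std(fits, axis=0))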
Forward simulations using parameter combinations from Figure 1 result in virtually no change in the goodness-of-fit of the simulations to the data (Fig. 2). This approach to quantifying parameter uncertainty differs from other approaches in that the simulated fit shows no degradation within the parameters' uncertainty range, because correlations between parameters are not neglected.
Uncertainty in fitting-parameter values can result from uncertainty in the conceptual flow model, data noise, correlations among fitting parameters, and correlations among fitting parameters and imperfectly known nonfitting parameters.
The primary mission of the Society of Petroleum Engineers (SPE) is to disseminate petroleum engineering technology. One of the most important sources of petroleum engineering technology is the extensive inventory of SPE technical papers. The SPE technical paper library currently contains over 28,000 papers and is expanding at a rate of 2,000 papers per year. Challenged with increasing publishing costs, an increase in papers that have been approved for publication and the goal to streamline the publication process, the SPE Electronic Publishing Committee (EPC) working together with the SPE publishing staff has turned to technology to help address these issues. One important product of these efforts is the SPE Masterdisc and SPE Image Library.
The SPE Masterdisc is an MS-Windows, PC-DOS, and Macintosh computer-based software system that links a commercial search engine to an indexed database. The database represents the first page of each technical paper submitted to the SPE from 1951-1995 in text format. All published and unpublished papers have been captured to the database. Each word is searched, along with paper number, title, author, organization, year, and meeting, to match search criteria set by the user.
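Conceptually the lookup behaves like a fielded text search over the indexed first pages; the toy sketch below (hypothetical records and field names, not the commercial search engine shipped on the disc) illustrates the matching idea:

    # Each record stands for the indexed first page of one paper (hypothetical data).
    papers = [
        {"number": "00001", "title": "hypothetical title",
         "authors": "Ozkan, Erdal", "text": "first-page text ..."},
        {"number": "00002", "title": "another hypothetical title",
         "authors": "Smith, J.", "text": "first-page text ..."},
    ]

    def search(records, **criteria):
        """Return the records whose fields contain every supplied search term."""
        return [rec for rec in records
                if all(str(term).lower() in str(rec.get(field, "")).lower()
                       for field, term in criteria.items())]

    print(len(search(papers, authors="Ozkan")))   # -> 1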
SPE Image Library
In January 1994 the SPE Board authorized advancement of SPE funds to match SPE Foundation contributions to begin work on the SPE Image Library. The SPE Image Library is a multi-disc CD-ROM-based product that contains images of all the pages of all the SPE technical papers published from 1951-1995 and comes with the SPE Masterdisc. About 300,000 images have been captured in tagged image file (.tif) graphic format. The Masterdisc searchable database has been linked to the Image Library database, resulting in the ability to read or print the full paper instantaneously on your computer.
How does the SPE Image Library work? The user enters search criteria that can be any combination of author, paper number, keywords in the title, organization, journal, meeting, location, year, and word or phrase. Once this form is completed (Fig. 1), the user presses the enter key and the total number of papers found in the database is displayed. For our example here, the words Ozkan and Erdal were found in the author field for twenty SPE papers.
A simple click on the search results icon displays a list of the papers, (Fig. 2). The user can scroll through this list, print it, return to the search screen and modify search criteria or select a paper for viewing. The highlighted paper number 20964 was selected for viewing. The user can now decide to view the first page of this paper (Masterdisc text of page one only) or the entire paper (Image Library graphic files). By clicking on the highlighted paper, the text of page one is displayed.
Now the user can click on the camera icon to display the entire paper. The software prompts for the appropriate CD-ROM disc number. In this example disc number 18 is requested. After disc 18 is substituted for the Masterdisc in the CD-ROM drive, the first page of the paper is displayed.
This paper describes the design of FLEX, an object-oriented, flexible-grid, black-oil reservoir simulator. An object-oriented approach helps in dealing with the complexity of this problem, and is particularly useful because of the difficulties associated with the generation and use of flexible grid geometries (such as Voronoi, median, and boundary-adapting grids).
The entire problem is divided into subsystems such as geometry, gridnodes, gridnode connectivity, grid, reservoir fluid flow, and matrix. Each of these subsystems has objects which are closely related. The dependency between these subsystems is established. A detailed analysis of each subsystem leads to identifying the classes, which are sets of objects having similar behavior. Attributes and behavior of the classes are assigned. After establishing relationships between the classes, they are arranged into hierarchies. About one hundred major classes have been identified and designed to achieve the desired behavior from FLEX. The programming language used is C++.
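A greatly simplified sketch of such a decomposition is given below, written in Python for brevity rather than the C++ actually used, and with hypothetical class names:

    class GridNode:
        """A control-volume centre: carries its geometry and primary unknowns."""
        def __init__(self, node_id, coords, pore_volume):
            self.node_id, self.coords, self.pore_volume = node_id, coords, pore_volume
            self.pressure = 0.0

    class Connection:
        """Connectivity between two grid nodes, with its transmissibility."""
        def __init__(self, node_a, node_b, transmissibility):
            self.node_a, self.node_b, self.trans = node_a, node_b, transmissibility

    class Grid:
        """Owns the nodes and their connectivity; knows nothing about fluids."""
        def __init__(self, nodes, connections):
            self.nodes, self.connections = nodes, connections

    class FluidFlowModel:
        """Couples a Grid with fluid properties to build residuals and a Jacobian."""
        def __init__(self, grid, fluid_properties):
            self.grid, self.fluid = grid, fluid_properties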
Reservoir simulators are inherently complex. A simulator has to deal with issues such as reservoir and grid geometry, fluids, flow calculation, matrix computations, several well and production constraints, visualization, etc. The most important feature of FLEX, a black oil simulator, is its ability to handle complexities arising from flexible grids. Verma and Aziz (1996) give a description of flexible grids in reservoir simulation. The flexibility in grids increases geometrical complexities as well as complexities in flow calculation. These complexities need sophisticated data structures (and associated procedures) to simplify the problem. It is expected that FLEX will change with time to incorporate new features. One of the important considerations in designing the simulator is the ease with which the simulator can be expected to handle new problems. All these factors combined to make the development process of FLEX quite complex. This paper describes the advantages of using an object-oriented approach for the development of reservoir simulators. The philosophy followed in designing FLEX is that advocated by Booch (1994) and Cheriton (1995).
Basic Features of FLEX
FLEX solves flow equations based on the control volume formulation (see Verma and Aziz, 1996). It uses the Newton-Raphson method to iteratively solve for the variables. A connection-based approach is employed to form the Jacobian matrix and the residual vector (see Lim, Shiozer and Aziz, 1995 and Verma and Aziz, 1996). Presently the simulator is developed to handle only two immiscible phases.
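A hedged, single-phase sketch of the connection-based Newton-Raphson idea follows; the real simulator is two-phase and written in C++, and the accumulation and transmissibility terms below are toy placeholders:

    import numpy as np

    def assemble(p, connections, accumulation, source):
        """Connection-based assembly: loop over connections, not over cells."""
        r = accumulation * p - source            # toy accumulation/source terms
        J = np.diag(accumulation.astype(float))
        for a, b, trans in connections:          # each connection adds flux terms
            flux = trans * (p[a] - p[b])
            r[a] += flux; r[b] -= flux
            J[a, a] += trans; J[a, b] -= trans
            J[b, b] += trans; J[b, a] -= trans
        return r, J

    def newton_solve(p0, connections, accumulation, source, tol=1e-8, max_iter=20):
        """Newton-Raphson iteration on the assembled residual/Jacobian."""
        p = p0.astype(float)
        for _ in range(max_iter):
            r, J = assemble(p, connections, accumulation, source)
            if np.linalg.norm(r) < tol:
                break
            p -= np.linalg.solve(J, r)
        return p

    # Three cells in a row, a source in cell 0 and a sink in cell 2 (toy numbers).
    print(newton_solve(np.zeros(3), [(0, 1, 2.0), (1, 2, 2.0)],
                       np.ones(3), np.array([10.0, 0.0, -5.0])))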
The gridnodes can be located so that they represent reservoir geometry, wells, faults, etc. Figure 1 is an example of the flexible grid generation capabilities of FLEX.
An object-oriented approach was followed in the design of FLEX to handle the complexities associated with a flexible grid simulator, and to provide for future enhancements.
We describe the development of a knowledge-based system to predict relative permeabilities to describe the flow of fluids in oil, gas or condensate reservoirs. The software applies heuristic knowledge and artificial intelligence techniques to identify the appropriate experimental methods for measuring the relative permeabilities, and to decide on the relevant mathematical models and computational steps to simulate the experiments. The selected models and computational steps are used together with the built-in database to generate the relative permeability data. Rules that relate the combination of field development scenario, fluid PVT properties, rock lithology and petrophysical properties are included in the knowledge base.
The basis of the software is that, in some instances, precisely defined rules based on quality published data and our expertise can do better than deterministic and purely statistical methods. This view is especially true in areas with limited and/or poor-quality data, as currently exists in gas/condensate and gas/water relative permeability predictions. The paper describes the software design approach, philosophy, and architecture. The mathematical and heuristic models used to generate the relative permeability data are briefly described. The target applications of the software are as follows: 1) Tool to generate relative permeability and capillary pressure data for input to numerical simulators and material balance calculations; 2) Tool to perform a series of "what if" calculations to determine the effects of lithology, fluid saturations and PVT properties, interfacial tension, and velocity on endpoint saturations and relative permeability functions; 3) Tool to analyse/interpret laboratory coreflood data; 4) Tool to generate relative permeability data when coreflood data is not available or is incomplete (e.g. when only endpoint data are available); and 5) Tool for use by the reservoir engineer to design a special core analysis program for a new field or study.
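A toy sketch of how such heuristic rules might be encoded is shown below; the rule contents and recommended methods are illustrative assumptions, not the software's actual knowledge base:

    def recommend_experiment(fluid_system, wettability, data_quality):
        """Tiny illustrative rule base mapping reservoir conditions to a method."""
        if fluid_system == "gas-condensate":
            return "steady-state coreflood at reservoir conditions"
        if fluid_system == "gas-water" and data_quality == "endpoints only":
            return "correlation-based generation anchored on endpoint data"
        if wettability == "mixed-wet":
            return "steady-state coreflood with in-situ saturation monitoring"
        return "unsteady-state (displacement) coreflood with history matching"

    print(recommend_experiment("gas-condensate", "water-wet", "full curves"))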
Relative permeability is used to describe multiphase flow in a porous medium. Such data are important input to many reservoir engineering calculations, providing a basic description of the way in which the phases will move in the reservoir. Definition of the flow process can have a significant effect on the predicted hydrocarbon production rate and duration and is important in calculating the volume of recoverable hydrocarbon reserves. The predicted production rates, the plateau level and duration, plus the expected water cut will all influence development plans. The number of wells, the balance between injectors and producers, the sizing of separation equipment, and design of facilities in general can all be impacted upon by the multiphase flow properties of the reservoir. Ultimately, together with many other inputs, relative permeability assists in determining reservoir economics, and hence guiding investment decisions.
Although ways to determine relative permeability from measurements made in the field have been proposed, they are fraught with problems and have never been regularly used. The most common method for determining relative permeability has been laboratory special core analysis. Laboratory measurement of representative relative permeability data on a reservoir core-fluid system is a complex task. The experiments are costly, typically more than $100,000 each, and time consuming, often taking up to six months to complete.
Past publications have indicated that matrix treatment failures are on the order of 30%. To improve the success rate for matrix treatments, current work has focused on real-time field monitoring. These systems calculate the evolution of skin during matrix stimulations. However, they can only indicate how a treatment is performing. A system that optimizes fluids prior to pumping is needed so that an engineer can take full advantage of monitoring acid treatments.
This paper describes the development of an integrated matrix stimulation model for sandstone and carbonate formations that assists in determining formation damage and in selecting and optimizing fluid volumes, provides a pressure/skin response of the acid treatment, and forecasts the benefit of the treatment. The model includes three expert advisors for the novice engineer, a kinetics-based multilayer reservoir model, and a geochemical model to determine rock-fluid compatibility problems. Additional modules that provide support for the user are a scale predictor, critical drawdown, ball sealer forecaster, and a fluid database for the selection of fluids and additives. A production forecast module is included to forecast the benefit of the stimulation.
Formation damage can occur from natural or induced mechanisms that reduce the capability of flow between the formation and the near-wellbore region, thus giving rise to a positive skin. To mitigate this damage, matrix treatments using reactive and non-reactive fluids are pumped into the formation. StimCADE (Stimulation Treatment Integrated Model Computer Aided Design and Evaluation) was developed as an integrated software application used to identify, prevent, and mitigate formation damage. The goal of StimCADE is to optimize stimulation treatments, recognize failures, and maximize job success.
Within ARCO, matrix stimulation treatments fail to improve productivity in one out of three treatments. A summary of these failures is shown in Table 1. The current practices for selecting wells for matrix stimulation are evaluating well production/injection histories, offset well performance, and pressure transient analysis. Design techniques to improve a well's performance are based on 'rules of thumb'.
To improve ARCO's matrix treatments, a real-time monitoring system1 was developed based on the work of Paccaloni and Provost. This technique calculates a transient or "apparent" skin vs. time, as shown in Fig. 1. The adoption of this technique has improved performance in the area of incorrect field procedures. Since then, several authors have expanded on these ideas by calculating a derivative of skin vs. time and using an inverse injectivity plot as diagnostic tools.
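For illustration, an apparent-skin calculation of this general form (steady-state radial flow, oilfield units) can be sketched as follows; the gauge readings and reservoir properties are placeholders, and the exact formulation used in the cited system may differ:

    import math

    def apparent_skin(q_bpm, p_wf_psi, p_res_psi, k_md, h_ft, mu_cp, b_rb_stb,
                      re_ft, rw_ft):
        """Steady-state radial-flow estimate of skin during injection:
        p_wf - p_res = 141.2 q B mu / (k h) * (ln(re/rw) + s), q in STB/day."""
        q_stb_day = q_bpm * 1440.0                       # bbl/min -> bbl/day
        dp = p_wf_psi - p_res_psi
        return (dp * k_md * h_ft / (141.2 * q_stb_day * b_rb_stb * mu_cp)
                - math.log(re_ft / rw_ft))

    # Apparent skin as the job proceeds (hypothetical rate and pressure readings).
    for minute, (rate, pwf) in enumerate([(1.0, 4200.0), (1.5, 4150.0), (2.0, 4050.0)]):
        s = apparent_skin(rate, pwf, 3000.0, 50.0, 30.0, 0.8, 1.1, 660.0, 0.35)
        print(f"t = {minute} min   apparent skin = {s:.1f}")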
To prevent the use of the wrong fluid, expert systems were developed by ARCO and others. However, these tools were based on rules of thumb and provided no analytical solutions. Past experience indicates that knowledge systems are often discarded by the engineer after a few uses and have found utility only as teaching tools. To overcome this limitation, and to circumvent the loss of expertise within the industry, the expert systems provided within the new software are integrated with an analytical model.
This paper examines how to optimize matrix treatments using an integrated design strategy. This software utilizes expert systems linked to analytical acidizing simulators along with several peripheral tools to achieve the optimized treatment.
StimCADE is an integrated program designed to allow the user to enter data, calculate and obtain results.
The popularity of laptop computers has dramatically increased because of their portability, low cost, and performance that is increasingly competitive with workstations and mini-computers. As a result, many organizations are beginning to migrate their engineering applications to laptop computers. In 1994 Schlumberger Dowell started to migrate the full suite of mini-computer based CADE (Computer Aided Design and Evaluation) applications to laptop computers. This paper chronicles the issues faced and resolved during the successful migration of the software.
Over a period of 10 years, Dowell had developed 6 CADE (Computer Aided Design and Evaluation) software products, consisting of approximately 650 thousand lines of code (KLOC), which resided on VAX/VMS systems. A need was identified to quickly and economically move this software to a fast and portable computer platform. The company has made 4 major transitions since 1975 (General Electric Time Sharing to Honeywell 6000 to IBM 4341 to VAX/VMS). The last phase has lasted 10 years.
The objective of this project was to migrate the CADE software to a laptop computer as quickly and economically as possible. The resulting system was required by the customer to work on a typical 1994 laptop computer configuration, i.e., a PC with a 486 - 33 MHz processor with 8 MB of RAM and less than 50 MB of free disk space.
This paper describes the experiences encountered and the solutions used during the migration effort. The pre-migration status, the feasibility study prior to migration and a description of the actual development phase are covered. The section on the feasibility phase gives the details on how third party software was selected. The paper concludes with a postmortem of the project and a summary of the lessons learned.
The CADE software is a suite of computer applications for designing and evaluating the various services provided by an oilfield service pumping company. These services include hydraulic fracturing, matrix acidizing, coiled tubing, cementing and sand control. The software was developed using Ada, Fortran and C languages. About 70% (450 KLOC) of the product was developed in Ada and approximately 30% (200 KLOC) in Fortran with only a trace of C. Fortran was used exclusively for numerical calculations, while Ada was used for the Human Interface and the remainder of the numerical calculations. Ada was the predominant language because of its strong data typing, information hiding and generic template features which made the source code easy to maintain. Ada also helped to minimize the most common programming errors made using the more prevalent mainstream languages.
The original CADE software was developed for use on the DEC MicroVAX II mini-computer. More than 200 of these mini-computer machines are networked together around the world using the company's worldwide network. At the introduction of the CADE software in 1985, the mini-computer offered a significant performance improvement over the previously used time sharing computing facility. The mini-computers also allowed the software to be used in remote locations where low telephone communication quality had precluded connection to the time sharing computer.
User Survey. A survey of the CADE user community was conducted in late 1992 to determine user satisfaction and the direction of future development. The user response was considered excellent at 59%, with 71 of a total 120 questionnaires being returned. The survey contained questions about the user profile and specific issues regarding the various products.