In the Shell Group, we have developed a state-of-the-art data management system for the storage of resource (field, reservoir, etc.) related data. It is called RISRES (Reservoir Information System - REServoir module). First released in 1993, it is currently in operational use in eight Shell Group operating companies around the globe. This article describes the specification and construction process, as well as various aspects of the system which make it a powerful resource data management system.
The ability to store, retrieve and share data relating to oil and gas reservoirs is critical to their efficient management. Inordinate amounts of time are often spent collecting, verifying, storing, sharing, re-collecting and re-verifying data for the purposes of studies, analyses and reports.
RISRES is a state-of-the-art resource data management system which was developed to meet this challenge by addressing a set of business requirements described below. Features which make it state-of-the-art include:
- extensibility to store any data
- unambiguous versioning and time-stamping
- flexible definition of subsurface and reporting structures
- ability to store data in any form from numbers to document files
- auditing and security features
- transparent interfacing to applications.
These features, and others which position RISRES as a general resource data management system both inside and outside the Shell Group, are discussed in some detail below.
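As a concrete illustration of the versioning and extensibility features in that list, consider the following minimal Python sketch. All names and structures here are hypothetical (the actual RISRES schema is not described in this article); the point is only that any (resource, attribute) pair can be stored without schema changes, and that every write creates a new time-stamped version.

```python
# Minimal sketch, with hypothetical names, of the versioning and extensibility
# features listed above: any (resource, attribute) pair can be stored, every
# write creates a new time-stamped version, and no version is ever overwritten.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass
class VersionedValue:
    value: Any              # payload may be a number, text, or a file reference
    version: int
    stamped_at: datetime

class ResourceStore:
    def __init__(self):
        self._data = {}     # (resource id, attribute name) -> version history

    def put(self, resource, attribute, value):
        history = self._data.setdefault((resource, attribute), [])
        vv = VersionedValue(value, len(history) + 1, datetime.now(timezone.utc))
        history.append(vv)  # append-only: the audit trail stays intact
        return vv

    def get(self, resource, attribute, version=None):
        history = self._data[(resource, attribute)]
        return history[-1] if version is None else history[version - 1]

store = ResourceStore()
store.put("Field-A/Reservoir-1", "STOIIP", 120.5)
store.put("Field-A/Reservoir-1", "STOIIP", 118.2)           # re-estimate
print(store.get("Field-A/Reservoir-1", "STOIIP").version)   # -> 2
```

The append-only history is what provides the "fragile audit trail" remedy discussed below: earlier estimates remain retrievable by version number.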
Although it clearly satisfied the stated business requirements, the introduction of RISRES to the majority of the Shell Group met with considerable initial resistance in the user community. These experiences are discussed, and the resulting set of data-organization and interfacing concepts, in particular close integration with Shell Group petroleum engineering applications, is also described.
The data explosion has resulted in the need to handle ever-increasing quantities of data for life-cycle management of petroleum resources. In the late 1980s it was observed that just the gathering and validation of data for reservoir-related studies occupied a significant fraction of each study's resources, and that data gathering or analysis would often have to be repeated because previous study data and results would get "lost".
One area of particular concern at this time was resource volumes. Companies in the Shell Group were working with a combination of legacy systems and spreadsheets. The "old technology" legacy systems were generally inflexible to the evolving requirements of resource classification and were often seen as a "black hole" into which data disappeared, never to be seen again. These problems tended to be addressed by the use of mainframe- and PC-based spreadsheets, or other "quick fixes", to fill in the gaps not covered by the legacy systems, in some cases replacing them altogether. However, these spreadsheets and files were usually in disparate and undocumented locations on hard disks around the company offices, generally included little or no validation, and often contained hidden errors. In short, they represented an inappropriately fragile audit trail for the main assets of the companies: their hydrocarbon resource volumes.
The Shell Group lacked a database to hold gathered, validated and processed data, including hydrocarbon volumes, applicable at a resource level (i.e. fields, reservoirs, blocks, etc.). This led to the establishment of the Reservoir Information Systems (RIS) project in 1990.
This paper describes the modeling, development, and implementation of a database for multiphase flow data from a large-scale research laboratory.
The outdoor laboratory facilities include a total of 1000 m of pipe connected to fluid processing utilities, liquid pumps, and a 700 kW gas compressor operating at system pressures up to 90 bar. Normally, approximately 100 physical and virtual instruments are used, and at the moment more than 400,000 measurements have been logged, processed, and stored in the database. The database communicates with systems for data acquisition and analysis, and researchers can access data from their workstations.
The database is implemented in a relational database management system (RDBMS) with SQL interface. Before the implementation of a database can start, it is important to develop a data model which represents the selected part of the real world in a way that satisfies the system requirements. The different aspects of the data model are treated thoroughly and include (1) experiment, (2) geometry, (3) instruments, (4) measurements, (5) fluids and fluid composition, and (6) project related data.
Entity-Relationship diagrams of the model are presented and explained. Time series are stored in the database using a specially developed C function. Other functionality is handled by database triggers and procedures stored in the database itself (written in a procedural extension to SQL), and by UNIX shell scripts. Problems and advantages of data redundancy and reuse are discussed, as well as the need to avoid destroying the "history" in the database. The communication with the systems for data acquisition, analysis, and presentation is described.
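To make the shape of such a model concrete, here is a much-simplified sketch (not the SINTEF schema itself; table and column names are invented) of an experiment/instrument/measurement core, using Python's built-in sqlite3 module in place of the commercial RDBMS:

```python
# A much-simplified illustration (not the SINTEF schema itself) of the kind of
# relational model the paper describes: experiments, instruments, and logged
# measurements, created with Python's built-in sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE experiment (
    exp_id       INTEGER PRIMARY KEY,
    description  TEXT,
    pressure_bar REAL           -- nominal system pressure
);
CREATE TABLE instrument (
    instr_id INTEGER PRIMARY KEY,
    exp_id   INTEGER REFERENCES experiment(exp_id),
    tag      TEXT,              -- e.g. a pressure-transducer tag
    physical INTEGER            -- 1 = physical, 0 = virtual (derived)
);
CREATE TABLE measurement (
    instr_id INTEGER REFERENCES instrument(instr_id),
    t_s      REAL,              -- sample time in seconds
    value    REAL
);
""")
con.execute("INSERT INTO experiment VALUES (1, 'two-phase test', 90.0)")
con.execute("INSERT INTO instrument VALUES (1, 1, 'PT-101', 1)")
con.executemany("INSERT INTO measurement VALUES (1, ?, ?)",
                [(0.0, 89.7), (1.0, 89.9), (2.0, 90.1)])
mean, = con.execute("SELECT AVG(value) FROM measurement WHERE instr_id=1").fetchone()
print(f"mean PT-101 reading: {mean:.2f} bar")
```

In the real system the bulk time series bypass row-by-row SQL inserts (hence the specially developed C function), but the relational skeleton is of this general form.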
During the past 10 years, thousands of large-scale two-phase experiments have been performed at the SINTEF Multiphase Flow Laboratory at Tiller near Trondheim in Norway.
Experiments have been performed in different pipe configurations: inclinations varying from -1° to +90°, diameters ranging from 0.1 m to 0.29 m, and system pressures up to 90 bar. The laboratory has been operated with various hydrocarbon liquids such as diesel, lube oil, and naphtha, whereas nitrogen is used as the gas phase.
The outdoor laboratory facilities include a total of 1000 m of pipe connected to fluid processing utilities, liquid pumps, and a 700 kW gas compressor operating at system pressures up to 90 bar. The laboratory performs advanced multiphase flow experiments for the oil and gas industry at near-realistic field conditions, and the results are used in the development of pipeline transportation simulation software.
Experimental work with extensive instrumentation creates large amounts of data, often with a complex internal structure. When such experiments are carried out over many years, the need for a reliable database system with good performance is obvious. At the SINTEF Multiphase Flow Laboratory, the results were stored in an older database which lacked the required functionality and was expensive and time consuming to maintain. To utilize the advantages of a modern database system, a new database model was developed and implemented, together with the necessary utility programs. A further concern has been the ability to store field data and to retrieve the relevant input information necessary to run pipeline simulation software.
Normally about 100 physical and virtual instruments are used in an experiment, and at the moment more than 400,000 measurements have been logged, processed, and stored in the database.
The primary mission of the Society of Petroleum Engineers (SPE) is to disseminate petroleum engineering technology. One of the most important sources of petroleum engineering technology is the extensive inventory of SPE technical papers. The SPE technical paper library currently contains over 28,000 papers and is expanding at a rate of 2,000 papers per year. Challenged with increasing publishing costs, an increase in papers approved for publication, and the goal of streamlining the publication process, the SPE Electronic Publishing Committee (EPC), working together with the SPE publishing staff, has turned to technology to help address these issues. One important product of these efforts is the SPE Masterdisc and SPE Image Library.
The SPE Masterdisc is an MS-Windows, PC-DOS and Macintosh computer-based software system that links a commercial search engine to an indexed database. The database represents the first page of each technical paper submitted to the SPE from 1951-1995 in text format. All published and unpublished papers have been captured to the database. Each word is searchable, along with paper number, title, author, organization, year, and meeting, to match search criteria set by the user.
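The search behavior just described can be illustrated with a toy inverted index. The records below are invented stand-ins, and the real Masterdisc uses a commercial search engine rather than anything this simple:

```python
# Hypothetical sketch of the first-page word search the Masterdisc performs:
# a tiny inverted index from words to paper numbers. Neither record below is
# a real index entry; they merely echo the worked example in this paper.
from collections import defaultdict

pages = {
    20964: "solutions for horizontal wells ... Ozkan ... Erdal ...",
    11111: "waterflood surveillance in carbonate reservoirs",
}

index = defaultdict(set)
for paper, text in pages.items():
    for word in text.lower().split():
        index[word].add(paper)

def search(*words):
    """Return paper numbers whose first page contains every query word."""
    sets = [index[w.lower()] for w in words]
    return set.intersection(*sets) if sets else set()

print(search("Ozkan", "Erdal"))   # -> {20964}
```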
SPE Image Library
In January 1994 the SPE Board authorized advancement of SPE funds to match SPE Foundation contributions to begin work on the SPE Image Library. The SPE Image Library is a multi-disc CD-ROM-based product that contains images of all the pages of all the SPE technical papers published from 1951-1995 and comes with the SPE Masterdisc. About 300,000 images have been captured in tagged image file (.tif) graphic format. The Masterdisc searchable database has been linked to the Image Library database, resulting in the ability to read or print the full paper instantaneously on your computer.
How does the SPE Image Library work? The user enters search criteria that can be any combination of author, paper number, keywords in title, organization, journal, meeting, location, year, or word or phrase. Once this form is completed (Fig. 1), the user presses the enter key and the total number of papers found in the database is displayed. For our example here, the words Ozkan and Erdal were found in the author field for twenty SPE papers.
A simple click on the search results icon displays a list of the papers (Fig. 2). The user can scroll through this list, print it, return to the search screen and modify the search criteria, or select a paper for viewing. The highlighted paper, number 20964, was selected for viewing. The user can now decide to view the first page of this paper (Masterdisc text of page one only) or the entire paper (Image Library graphic files). By clicking on the highlighted paper, the text of page one is displayed.
Now the user can click on the camera icon to display the entire paper. The software prompts for the appropriate CD-ROM disc number. In this example disc number 18 is requested. After disc 18 is substituted for the Masterdisc in the CD-ROM drive, the first page of the paper is displayed.
Past publications have indicated that matrix treatment failures are on the order of 30%. To improve the success rate of matrix treatments, recent work has focused on real-time field monitoring. These systems calculate the evolution of skin during matrix stimulations. However, they can only report how a treatment is performing. A system that optimizes fluids prior to pumping is needed so that an engineer can take full advantage of monitoring acid treatments.
This paper describes the development of an integrated matrix stimulation model for sandstone and carbonate formations that assists in determining formation damage, aids in the selection and optimization of fluid volumes, provides a pressure-skin response of the acid treatment, and forecasts the benefit of the treatment. The model includes three expert advisors for the novice engineer, a kinetics-based multilayer reservoir model and a geochemical model to determine rock-fluid compatibility problems. Additional modules that provide support for the user are a scale predictor, critical drawdown, ball sealer forecaster and a fluid database for the selection of fluids and additives. A production forecast module is included to forecast the benefit of the stimulation.
Formation damage can occur from natural or induced mechanisms that reduce the capability for flow between the formation and the near-wellbore region, thus giving rise to a positive skin. To mitigate this damage, matrix treatments using reactive and non-reactive fluids are pumped into the formation. StimCADE (Stimulation Treatment Integrated Model Computer Aided Design and Evaluation) was developed as an integrated software application used to identify, prevent and mitigate formation damage. The goal of StimCADE is to optimize stimulation treatments, recognize failures and maximize job success.
Within ARCO, matrix stimulation treatments fail to improve productivity in one out of three cases. A summary of these failures is shown in Table 1. The current practices for selecting wells for matrix stimulation are evaluating well production/injection histories, offset well performance and pressure transient analysis. Design techniques to improve the well's performance are based on 'rules of thumb'.
To improve ARCO's matrix treatments, a real-time monitoring system1 was developed based on the work of Paccaloni and Provost. This technique calculates a transient or "apparent" skin vs. time as shown in Fig. 1. The adoption of this technique has reduced failures caused by incorrect field procedures. Since then, several authors have expanded on these ideas by calculating a skin derivative vs. time and using an inverse injectivity plot as diagnostic tools.
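The flavor of such an apparent-skin calculation can be sketched as follows, assuming simple steady-state radial Darcy flow in oilfield units (k in md, h in ft, q in STB/d, mu in cp, p in psi). This is the textbook rearrangement of the radial inflow equation, not the monitoring system's actual algorithm:

```python
# Hedged sketch of an "apparent skin vs. time" calculation of the kind the
# monitoring technique performs, assuming steady-state radial Darcy flow in
# oilfield units. All well parameters below are illustrative.
import math

def apparent_skin(k_md, h_ft, dp_psi, q_stbd, B_rb_stb, mu_cp, re_ft, rw_ft):
    """Solve the radial inflow equation
       q = k*h*dp / (141.2*B*mu*(ln(re/rw) + s))
    for the skin s, given an injection rate and pressure differential."""
    return (k_md * h_ft * dp_psi) / (141.2 * q_stbd * B_rb_stb * mu_cp) \
           - math.log(re_ft / rw_ft)

# During a treatment, a declining apparent skin signals damage removal:
for dp in (1500.0, 1200.0, 900.0):
    print(round(apparent_skin(50.0, 30.0, dp, 2000.0, 1.0, 1.0, 660.0, 0.3), 2))
```

Evaluated at successive times against the measured rate and pressure, this yields the skin-vs.-time trace of Fig. 1.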
To prevent the use of the wrong fluid, expert systems were developed by ARCO and others. However, these tools were based on rules of thumb and provided no analytical solutions. Past experience indicates that knowledge systems are often discarded by the engineer after a few uses and have found utility only as teaching tools. To overcome this limitation, and to circumvent the loss of expertise within the industry, the expert systems provided within the new software are integrated with an analytical model.
This paper examines how to optimize matrix treatments using an integrated design strategy. The software utilizes expert systems linked to analytical acidizing simulators, along with several peripheral tools, to achieve an optimized treatment.
StimCADE is an integrated program designed to take the user from data entry through calculation to results.
This paper describes the design of FLEX, an object-oriented, flexible-grid, black-oil reservoir simulator; the object-oriented approach helps in dealing with the complexity of this problem. It is particularly useful because of the difficulties associated with the generation and use of flexible grid geometries (such as Voronoi, median, and boundary-adapting grids).
The entire problem is divided into subsystems: geometry, gridnodes, gridnode connectivity, grid, reservoir fluid flow, and matrix. Each of these subsystems has objects which are closely related. The dependencies between these subsystems are established. A detailed analysis of each subsystem leads to identifying the classes, which are sets of objects having similar behavior. Attributes and behavior of the classes are assigned. After establishing relationships between the classes, they are arranged into hierarchies. About one hundred major classes have been identified and designed to achieve the desired behavior from FLEX. The programming language used is C++.
Reservoir simulators are inherently complex. A simulator has to deal with issues such as reservoir and grid geometry, fluids, flow calculation, matrix computations, several well and production constraints, visualization, etc. The most important feature of FLEX, a black oil simulator, is its ability to handle complexities arising from flexible grids. Verma and Aziz (1996) give a description of flexible grids in reservoir simulation. The flexibility in grids increases geometrical complexities as well as complexities in flow calculation. These complexities need sophisticated data structures (and associated procedures) to simplify the problem. It is expected that FLEX will change with time to incorporate new features. One of the important considerations in designing the simulator is the ease with which the simulator can be expected to handle new problems. All these factors combined to make the development process of FLEX quite complex. This paper describes the advantages of using an object-oriented approach for the development of reservoir simulators. The philosophy followed in designing FLEX is that advocated by Booch (1994) and Cheriton (1995).
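As a rough illustration of the subsystem decomposition described above (FLEX itself is written in C++; the skeleton below uses Python purely for brevity, and its class names and interfaces are invented, mirroring only the subsystem list in the abstract):

```python
# Illustrative skeleton of the subsystem decomposition described in the text.
# These are not FLEX's actual classes; they mirror the named subsystems only.
class GridNode:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z   # geometry subsystem data

class Connection:
    """Gridnode connectivity: one transmissibility link between two nodes."""
    def __init__(self, node_a, node_b, transmissibility):
        self.node_a, self.node_b = node_a, node_b
        self.transmissibility = transmissibility

class Grid:
    """Owns nodes and connections. The grid type (Voronoi, median, boundary
    adapting, ...) changes only how nodes and connections are constructed,
    not how the flow code consumes them."""
    def __init__(self, nodes, connections):
        self.nodes, self.connections = nodes, connections

class FluidFlow:
    """Reservoir fluid-flow subsystem: assembles residuals per connection."""
    def residual(self, grid, state):
        raise NotImplementedError  # supplied by concrete flow models
```

The design payoff claimed in the paper is visible even in this toy: new grid geometries or flow models plug in behind stable interfaces without touching the other subsystems.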
Basic Features of FLEX
FLEX solves flow equations based on the control volume formulation (see Verma and Aziz, 1996). It uses the Newton-Raphson method to solve iteratively for the unknown variables. A connection-based approach is employed to form the Jacobian matrix and the residual vector (see Lim, Shiozer and Aziz, 1995 and Verma and Aziz, 1996). At present the simulator handles only two immiscible phases.
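For readers unfamiliar with the method, a minimal Newton-Raphson iteration on a generic residual system R(x) = 0 looks like the sketch below; a simulator such as FLEX assembles its sparse Jacobian connection by connection rather than using the dense toy version shown here:

```python
# Minimal sketch of the Newton-Raphson iteration named above, applied to a
# generic residual vector R(x) = 0 with a dense Jacobian. The toy system is
# purely illustrative.
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-8, max_iter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), r)   # Newton update step
    return x

# Toy system: x0^2 + x1 = 3, x0 + x1^2 = 5, with a root at (1, 2)
R = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
print(newton_raphson(R, J, [1.0, 1.0]))   # converges to [1. 2.]
```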
The gridnodes can be located so that they represent reservoir geometry, wells, faults, etc. Figure 1 is an example of the flexible grid generation capabilities of FLEX.
An object-oriented approach was followed in the design of FLEX to handle the complexities associated with a flexible grid simulator, and to provide for future enhancements.
Geostatistical techniques generate fine-scale reservoir descriptions that can integrate a variety of data such as cores, logs, and seismic traces. However, predicting the dynamic behavior of fluid flow through multiple fine-scale realizations has remained an elusive goal. Typically an upscaling algorithm is applied to obtain a coarse-scale heterogeneity model. Most upscaling algorithms are based on a single-phase pressure solution and are thus questionable at best for multiphase flow applications. Pseudo-relative permeabilities have often been used as a tool for multiphase flow upscaling, but such approaches are highly process dependent and thus have limited applicability. We describe a powerful, versatile, multiphase, three-dimensional streamline simulator for integrating fine-scale reservoir descriptions with dynamic performance predictions. Unlike conventional streamtube models, the proposed approach relies on the observation that in a velocity field derived by finite differences, streamlines can be approximated by piecewise hyperbolas within grid blocks. Thus, the method can be easily applied in 3-D and incorporated into conventional finite-difference simulators. Once streamlines are generated in three dimensions, a variety of one-dimensional problems can be solved analytically along them. The power and utility of the streamline simulator are demonstrated through application to a detailed characterization and waterflood performance study of the La Cira field, Colombia, South America. We illustrate the advantage of the streamline simulator through comparisons with a commercial simulator for a waterflood pattern. The streamline simulator is shown to be orders of magnitude faster than traditional numerical simulators and does not suffer from numerical dispersion or instability. We illustrate the use of this simulator for the evaluation of multiple fine-scale realizations of heterogeneity models and the quantification of uncertainty in predicting the dynamic behavior of fluid flow.
A geostatistical approach is commonly used to reproduce reservoir heterogeneities1. The objective is to generate a few "typical" descriptions incorporating heterogeneity elements that are difficult to include by conventional methods. Conditional simulation is used to create property (permeability, porosity, etc.) distributions with a prescribed spatial correlation structure that honor measured data at well locations. Stochastic reservoir modeling provides multiple equiprobable reservoir models, all data-intensive, rather than a single, smooth, usually data-poor deterministic model. Experience has shown that these data-intensive, stochastic reservoir models yield a better history match of production data, while also providing a measure of uncertainty in predictions of future performance.
Fine-scale realizations are the most detailed representation of the heterogeneities that exist in a petroleum reservoir. The ideal flow simulation process would input this fine-scale data in its entirety. However, conventional numerical simulators do not readily allow this. Reservoir models built for conventional simulators using the fine-scale data are huge and unmanageable, and the flow simulation process becomes very tedious, slow and expensive, in addition to any hardware limitations that may exist. Typically an upscaling algorithm is applied to obtain a coarse-scale heterogeneity model, which is then input into the conventional simulators. However, most upscaling algorithms are based on a single-phase pressure solution and are thus questionable at best for multiphase flow applications. Pseudo-relative permeabilities have often been used as a tool for multiphase flow upscaling, but such approaches are highly process dependent and have limited applicability. There is a definite need for a fast and powerful simulator that allows the easy use of fine-scale realizations as such, without the need for any upscaling.
In this paper we describe a new, fully three-dimensional, multiphase, streamline simulator for modeling waterflood performance.
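Although the paper's piecewise-hyperbola construction differs in detail, the flavor of analytic streamline tracing through a finite-difference velocity field can be seen in this Pollock-style sketch for a single grid block (all face velocities and positions below are illustrative, and only the positive-velocity case is handled):

```python
# Hedged sketch of Pollock-style particle tracing through one grid block, the
# building element of streamline simulators: face velocities are interpolated
# linearly inside the block, giving an analytic time of flight. This shows the
# standard finite-difference variant, not the paper's hyperbola construction.
import math

def exit_time(v_in, v_out, pos, length):
    """Analytic time for a particle at `pos` (0..length) to reach the outflow
    face, with velocity varying linearly between the two faces."""
    a = (v_out - v_in) / length           # velocity gradient across the block
    v_here = v_in + a * pos
    if a == 0.0:
        return (length - pos) / v_here if v_here > 0.0 else math.inf
    if v_out <= 0.0 or v_here <= 0.0:
        return math.inf                   # particle never exits via this face
    return math.log(v_out / v_here) / a

# 2-D block: the particle exits through whichever face it reaches first.
tx = exit_time(v_in=1.0, v_out=2.0, pos=0.2, length=1.0)
ty = exit_time(v_in=0.5, v_out=0.5, pos=0.5, length=1.0)
dt = min(tx, ty)
print(f"time of flight across block: {dt:.3f} (exit via {'x' if tx < ty else 'y'} face)")
```

Summing these block times of flight along a traced streamline yields the coordinate along which the one-dimensional transport problems are then solved analytically.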
The popularity of laptop computers has dramatically increased because of their portability, low cost and increasingly competitive performance relative to workstations and mini-computers. As a result, many organizations are beginning to migrate their engineering applications to laptop computers. In 1994 Schlumberger Dowell started to migrate the full suite of mini-computer based CADE (Computer Aided Design and Evaluation) applications to laptop computers. This paper chronicles the issues faced and resolved during the successful migration of the software.
Over a period of 10 years, Dowell had developed 6 CADE (Computer Aided Design and Evaluation) software products, consisting of approximately 650 thousand lines of code (650 KLOC), which resided on VAX/VMS systems. A need was identified to quickly and economically move this software to a fast and portable computer platform. The company has made 4 major transitions since 1975 (General Electric Time Sharing to Honeywell 6000 to IBM 4341 to VAX/VMS). The last phase has lasted 10 years.
The objective of this project was to migrate the CADE software to a laptop computer as quickly and economically as possible. The resulting system was required by the customer to work on a typical 1994 laptop computer configuration, i.e., a PC with a 486/33 MHz processor, 8 MB of RAM and less than 50 MB of free disk space.
This paper describes the experiences encountered and the solutions used during the migration effort. The pre-migration status, the feasibility study prior to migration and a description of the actual development phase are covered. The section on the feasibility phase gives the details on how third party software was selected. The paper concludes with a postmortem of the project and a summary of the lessons learned.
The CADE software is a suite of computer applications for designing and evaluating the various services provided by an oilfield service pumping company. These services include hydraulic fracturing, matrix acidizing, coiled tubing, cementing and sand control. The software was developed using Ada, Fortran and C languages. About 70% (450 KLOC) of the product was developed in Ada and approximately 30% (200 KLOC) in Fortran with only a trace of C. Fortran was used exclusively for numerical calculations, while Ada was used for the Human Interface and the remainder of the numerical calculations. Ada was the predominant language because of its strong data typing, information hiding and generic template features which made the source code easy to maintain. Ada also helped to minimize the most common programming errors made using the more prevalent mainstream languages.
The original CADE software was developed for use on the DEC MicroVAX II mini-computer. More than 200 of these mini-computer machines are networked together around the world using the company's worldwide network. At the introduction of the CADE software in 1985, the mini-computer offered a significant performance improvement over the previously used time sharing computing facility. The mini-computers also allowed the software to be used in remote locations where low telephone communication quality had precluded connection to the time sharing computer.
User Survey. A survey of the CADE user community was conducted in late 1992 to determine user satisfaction and the direction of future development. The user response was considered excellent at 59%, with 71 of a total 120 questionnaires being returned. The survey contained questions about the user profile and specific issues regarding the various products.
This paper presents a PC-based alternative procedure for determining the water saturations within the hot water zone of a thermal project for use in analytical oil recovery calculations. Conventional analytical calculation of oil recovery under steam and hot water injection requires tracking the movements of the saturations advancing within the variable-temperature hot-water zone. This involves an adaptation of the Buckley-Leverett theory to this variable-temperature zone after dividing it into a number of constant-temperature or isothermal zones. If the number exceeds two, the calculation can become very tedious unless done with a computer program. FORTRAN programs have generally been used, but they are not as intuitive or easy to use as modern PC-based programming tools such as Mathematica™. In this paper, Mathematica™ was used because it is relatively easy to program, easy to use, and is fast and robust. Unfortunately, depending on the number of isothermal zones, the time step size and the time at which the recovery calculation is desired, the tracking of saturations can tax the capabilities of even the most modern PCs. Therefore, an alternative method is introduced that eliminates the need for saturation tracking. This method calculates the instantaneous saturations within each isothermal zone at the time of interest. Oil recovery results by this method were found to be comparable to those by the saturation tracking method, with considerable savings in computation time. Two examples are presented to demonstrate the utility of the method.
For one-dimensional models, the analytical calculation of steamflood oil recovery with time requires calculating the following: a) the position of the steam front and its rate of advance or velocity; b) the temperature profile in the hot water zone and its rate of advance; and c) the fluid saturation profile and its rate of advance. The location of the steam front at any time can be calculated by the equations of Mandl and Volek, who showed that before a critical time tc, the steam zone can be described by the equations of Marx and Langenheim. Beyond tc, Mandl and Volek present an approximate equation to calculate the steam front location with time. This equation was later improved by Prats and Vogiatzis and communicated to Myhill and Stegemeier, who presented the solution in graphical form.
The temperature profile in the hot water zone can be calculated using the equation by Lauwerier for both hot waterflood and steamflood. However, for a steamflood, a simpler approximation assumes a linear temperature drop from the steam temperature to the cold reservoir temperature. The saturation profile in the steam and hot water zones can be calculated using the Buckley-Leverett theory, provided that the non-isothermal hot water zone is first divided into an appropriate number of isothermal zones. Several procedures are available to calculate the saturation profile with time. This paper is concerned with the method whereby characteristic saturations are picked and their motions tracked. The tracking process is very time consuming and can easily exceed the capabilities of PCs when small time steps are used together with many isothermal zones at long times. An alternative to tracking is used whereby instantaneous saturations in each temperature zone are calculated at any given time.
The saturations to be tracked can be picked in different ways. Farouq Ali determines the flood front saturation at the cold reservoir temperature (just as in a waterflood) and arbitrarily picks saturation values greater than this value, ending at the maximum water saturation. This has the advantage of giving greater saturation definition where needed. However, it has the potential of using more saturations than are needed, leading to higher computation time on personal computers. Willman et al. suggest a somewhat narrower range of saturations by finding, as the starting point, that saturation in the cold zone that has the same velocity as the cold temperature at tc. Prats extends this further by finding a particular saturation for each isothermal zone, referred to as characteristic saturations. These are calculated as the saturation in each zone that has the same velocity as the cold temperature at tc. This way, there are only as many characteristic saturations to be tracked as there are isothermal zones, as opposed to tracking an arbitrarily large number of saturations.
Even with these improvements in the choice of saturations, the tracking process of calculating their locations at each time step is time consuming even for personal computers.
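The appeal of the instantaneous approach is that the Buckley-Leverett frontal-advance equation gives the position of any saturation directly at the time of interest, with no step-by-step tracking. The following is a minimal isothermal sketch (Corey-type relative permeability curves are assumed purely for illustration; the paper's method applies a different fractional-flow curve in each isothermal zone):

```python
# Minimal Buckley-Leverett sketch of the "instantaneous" idea: at any time t
# the position of a water saturation Sw follows directly from the frontal-
# advance equation x(Sw) = (q*t / (A*phi)) * dfw/dSw, so no time-step-by-
# time-step tracking is required. Quadratic (Corey-type) curves are assumed
# purely for illustration.
def fw(sw, mu_w=0.5, mu_o=2.0):
    """Water fractional flow with simple quadratic relative permeabilities."""
    krw, kro = sw**2, (1.0 - sw)**2
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

def dfw_dsw(sw, h=1e-6):
    return (fw(sw + h) - fw(sw - h)) / (2.0 * h)   # central difference

def position(sw, q=100.0, t=10.0, area=50.0, phi=0.25):
    """Distance traveled by saturation sw after time t (consistent units)."""
    return (q * t) / (area * phi) * dfw_dsw(sw)

for sw in (0.4, 0.5, 0.6, 0.7):
    print(f"Sw = {sw:.1f} sits at x = {position(sw):7.2f}")
```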
Advances of the "information age" have made vast quantities of technical information available through electronic media. Technical problem solving often involves transfer and handling of large blocks of data. For many, an impediment to successful technical data application is not data availability, but data accessibility. An electronically searchable database system was designed to allow access to proprietary environmental data by a large number of users located world-wide. Users are linked and data is transferred system-wide via Internet connections. This scheme also provides users with links to an immense and steadily growing number of external information servers.
This paper presents a step-wise approach to establishing a working database system. Included are system design criteria such as types of data desired and number and location of users, data collection techniques, and software applications for data organization. A description of data search engines and search scripts follows, with a discussion of appropriate hardware platforms for data storage and access. Remote data access from many locations via the Internet is discussed. In some cases it is necessary to restrict access to proprietary information. This database system has the capability to keep separate public and proprietary information. The paper addresses data security issues such as access restriction using software and hardware, technology exportation, and the necessity for technology licensing among co-owners.
Databases are "collections of data arranged for ease and speed of retrieval, as by computers." Computers generally handle the "ease and speed of retrieval" aspects of databases. Data arrangement, however, can have much to do with how easily and quickly data retrieval proceeds. Data distribution after retrieval is another important consideration of database design.
Any collection of data can be arranged in a database. Once data are collected, a variety of commercial software is available to aid in organizing data in a (usually) tabular format. Commercial software suitable for this purpose may be either database or spreadsheet applications. Making data retrievable, or "searchable," follows. Commercially available search software can be applied to many database and spreadsheet formats. A means of distributing data among users is then necessary.
A currently very popular means of distributing data, and information in general, is the Internet. More than 40 million worldwide users exist, and the number is steadily increasing. These users access information through more than 5 million hosts, on 45,000 networks, in 159 countries. This vast network of computers, the Internet, grows at about ten percent to fifteen percent per month. Currently something in excess of twenty terabytes (twenty million megabytes) of information is transferred per month via the Internet.
Such a tremendous amount of information would be very unwieldy were it not for a group of software applications and sets of information transfer protocols and conventions. One such group, known as a client/server environment, is the Worldwide Web (WWW). The WWW is based on hypertext and hypermedia technology and allows "point and click" access to most of the Internet's resources. WWW server software is commercially available for most hardware and operating systems, including UNIX, Macintosh, and DOS/Windows. Use of readily available client/server products thus allows virtually unlimited information transfer without regard to numbers of users or their locations.
Planning and Data Collection
It is usually advantageous to carefully consider how a database will be used before data collection begins. In general, any data that can be arranged in tabular format can be searched and transferred by the methods described herein.
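As a trivial illustration of that claim, plain Python can already filter tabular records by field contents; the column names and values below are hypothetical stand-ins for the commercial search software discussed above:

```python
# Minimal sketch of the claim above: any tabular data can be made searchable.
# The environmental records below are invented for illustration only.
import csv, io

table = io.StringIO(
    "site,analyte,value_mg_l\n"
    "well-01,benzene,0.004\n"
    "well-02,toluene,0.012\n"
)

def search_rows(fileobj, **criteria):
    """Yield rows whose named columns contain the given substrings."""
    for row in csv.DictReader(fileobj):
        if all(v.lower() in row[k].lower() for k, v in criteria.items()):
            yield row

for hit in search_rows(table, analyte="benz"):
    print(hit)
```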
This paper presents a new parallel ILU preconditioner. It is based on the technique of Sequential Staging of Tasks (SST), which overcomes the difficulty of recursiveness arising from ILU factorization. This new parallel ILU preconditioner is easy to reconstruct from the sequential version: the only requirement is to insert synchronization code for the stages of tasks into the sequential version, and no other modifications to the original serial code are needed. The ability to match various ordering schemes is still maintained, and the new merit of handling different numbers of processors is obtained. Numerical results were obtained using a thermal model with different grid systems. The parallel speedup is satisfactory.
Simulation of thermal recovery processes using a fully implicit treatment of component concentrations, phase saturations, pressure and temperature requires the solution of large systems of linear equations. Currently, the most robust techniques for solving these large systems are preconditioned conjugate-gradient-like methods such as ORTHOMIN, which is widely used in traditional solvers for sequential computers.
The major computation in ORTHOMIN is the vector inner product, which is easy to parallelize. However, the most robust preconditioners, such as ILU factorization and nested factorization, are not suitable for parallel computers because of their intrinsic recursiveness, and it is difficult to parallelize an ILU preconditioner directly. At present, the general approach is to use new preconditioning methods such as parallel nested factorization preconditioning. These new preconditioning methods achieve high parallel efficiency on parallel computers, but they also have limitations:
1. limited types of ordering schemes;
2. comprehensive modifications of the sequential codes;
3. slow convergence.
This paper presents a new parallel ILU preconditioner based on the technique of Sequential Staging of Tasks (SST). Using the SST technique, the new preconditioner exploits the small-scale parallelism of ILU factorizations and achieves a temporary, larger-scale parallelism within a given computing domain, thereby yielding a practical parallel preconditioner. The new parallel preconditioner maintains all the characteristics of the sequential version and is easy to reconstruct from it, and it is applicable to computers with different numbers of processors. The numerical experiments show that the parallel speedup is satisfactory: on an NP 1/52 Mini-Supercomputer System (produced by GOULD Co. in 1989; shared main memory, symmetrical operating system UTX/32) with two processors, the parallel speedup is 1.85.
Consider the linear system

$$Ax = b.$$

As a preconditioner for ORTHOMIN, ILU factorization provides a matrix $M$ that is a "good" approximation to the coefficient matrix $A$ and is easy to factor; convergence may be accelerated by solving the equivalent system

$$M^{-1}Ax = M^{-1}b.$$
Such preconditionings should offset the added cost of factoring M and performing a forward and back solution with each matrix-vector multiplication by reducing the number of iterations substantially.
Main stages of ILU factorization are as follows:
1. A symbolic factorization, defining the non-zero structure of the incomplete factorization.
For k = 1, NB Do
2. Inverting the main (pivot) elements
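For reference, the sketch below shows a serial ILU(0) with both stages visible: the sparsity pattern of A fixes the symbolic structure, and the loop performs the numeric factorization. The row-by-row dependence in the loop is exactly the recursiveness that the SST technique stages into parallel tasks. Dense storage with a pattern mask is used purely for clarity; a real implementation works on sparse block structures.

```python
# Hedged sketch of serial ILU(0): an incomplete LU factorization that keeps
# only the non-zero pattern of A. The pattern mask is the "symbolic" stage;
# the k-loop is the numeric stage, and its row-by-row dependence is the
# recursiveness the paper's SST technique must stage into parallel tasks.
import numpy as np

def ilu0(A):
    """Return L and U factors packed into one matrix, restricted to A's pattern."""
    n = A.shape[0]
    pattern = A != 0.0                        # symbolic stage: fix the structure
    LU = A.astype(float).copy()
    for k in range(n - 1):                    # numeric stage: eliminate column k
        for i in range(k + 1, n):
            if not pattern[i, k]:
                continue
            LU[i, k] /= LU[k, k]              # multiplier L[i, k]
            for j in range(k + 1, n):
                if pattern[i, j]:             # update only existing entries
                    LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
print(ilu0(A))   # tridiagonal A incurs no fill-in, so ILU(0) equals exact LU
```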