Abstract

Geostatistical techniques generate fine-scale reservoir descriptions that can integrate a variety of data such as cores, logs, and seismic traces. However, predicting the dynamic behavior of fluid flow through multiple fine-scale realizations has remained an elusive goal. Typically an upscaling algorithm is applied to obtain a coarse-scale heterogeneity model. Most upscaling algorithms are based on a single-phase pressure solution and are thus questionable at best for multiphase flow applications. Pseudo-relative permeabilities have often been used as a tool for multiphase flow upscaling, but such approaches are highly process dependent and thus have limited applicability. We describe a powerful, versatile, multiphase, three-dimensional streamline simulator for integrating fine-scale reservoir descriptions with dynamic performance predictions. Unlike conventional streamtube models, the proposed approach relies on the observation that, in a velocity field derived by finite difference, streamlines can be approximated by piece-wise hyperbolas within grid blocks. Thus, the method can be easily applied in 3-D and incorporated into conventional finite-difference simulators. Once streamlines are generated in three dimensions, a variety of one-dimensional problems can be solved analytically along the streamlines. The power and utility of the streamline simulator is demonstrated through application to a detailed characterization and waterflood performance of the La Cira field, Colombia, South America. We illustrate the advantage of the streamline simulator through comparisons with a commercial simulator for a waterflood pattern. The streamline simulator is shown to be orders of magnitude faster than traditional numerical simulators and does not suffer from numerical dispersion or instability. We illustrate the use of this simulator for evaluation of multiple fine-scale realizations of heterogeneity models and quantification of uncertainty in predicting the dynamic behavior of fluid flow.

Introduction

A geostatistical approach is commonly used to reproduce reservoir heterogeneities.1 The objective is to generate a few "typical" descriptions incorporating heterogeneity elements that are difficult to include by conventional methods. Conditional simulation is used to create property (permeability, porosity, etc.) distributions with a prescribed spatial correlation structure that honor measured data at well locations. Stochastic reservoir modeling provides multiple equiprobable, data-intensive reservoir models rather than a single, smooth, usually data-poor deterministic model. Experience has shown that these data-intensive, stochastic reservoir models yield a better history match of production data and also provide a measure of uncertainty in the prediction of future performance. Fine-scale realizations are the most detailed representation of the heterogeneities that exist in the petroleum reservoir. The ideal flow simulation process would be to input this fine-scale data in its entirety; however, conventional numerical simulators do not readily allow this. Reservoir models built for conventional simulators using the fine-scale data are huge and unmanageable. The flow simulation process thus becomes very tedious, slow and expensive, in addition to any hardware limitations that may exist. Typically an upscaling algorithm is applied to obtain a coarse-scale heterogeneity model. This coarse-scale model is then input into the conventional simulators.
However, most upscaling algorithms are based on a single-phase pressure solution and are thus questionable at best for multiphase flow applications. Pseudo-relative permeabilities have often been used as a tool for multiphase flow upscaling, but such approaches are highly process dependent and have limited applicability. There is a definite need for a fast and powerful simulator that allows the easy use of fine-scale realizations as such, without the need for any upscaling. In this paper we describe a new, fully three-dimensional, multiphase streamline simulator for modeling waterflood performance. P. 195
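For illustration, the sketch below traces a particle across a single grid cell of a finite-difference velocity field using Pollock-type linear velocity interpolation. This is a related but simpler approximation than the piece-wise hyperbolic paths described in the paper, and the 2-D restriction and all variable names are assumptions made for this sketch; repeating the step cell by cell, and solving a 1-D displacement problem along the accumulated time of flight, is the general pattern a streamline simulator follows.

```python
import math

def trace_cell(xp, yp, vx0, vx1, vy0, vy1, dx, dy):
    """Trace a particle across one rectangular grid cell (Pollock-type step).

    (xp, yp)   entry point in local cell coordinates, 0 <= xp <= dx, 0 <= yp <= dy
    vx0, vx1   x-velocity on the left and right cell faces (finite-difference fluxes)
    vy0, vy1   y-velocity on the bottom and top cell faces
    Returns ((x_exit, y_exit), time_of_flight) for the cell.
    Velocities are assumed to vary linearly between opposite faces, so the path
    inside the cell is exponential in time -- an illustrative stand-in for the
    piece-wise hyperbolic trajectories used in the paper.
    """
    def face_exit_time(p, v0, v1, d):
        g = (v1 - v0) / d                 # velocity gradient across the cell
        v = v0 + g * p                    # velocity at the particle position
        if v == 0.0:
            return math.inf, g, v         # stagnant in this direction
        v_face = v1 if v > 0 else v0      # velocity at the face being approached
        if abs(g) < 1e-30:
            return ((d - p) if v > 0 else -p) / v, g, v
        if v_face * v <= 0:
            return math.inf, g, v         # flow reverses before the face is reached
        return math.log(v_face / v) / g, g, v

    tx, gx, vxp = face_exit_time(xp, vx0, vx1, dx)
    ty, gy, vyp = face_exit_time(yp, vy0, vy1, dy)
    dt = min(tx, ty)                      # leave through whichever face is reached first

    def position(p, g, v, t):
        return p + v * t if abs(g) < 1e-30 else p + v * (math.exp(g * t) - 1.0) / g

    return (position(xp, gx, vxp, dt), position(yp, gy, vyp, dt)), dt

# Example with made-up numbers: uniform flow to the right with a slight upward drift
print(trace_cell(0.0, 5.0, 2.0, 2.0, 0.5, 0.5, 10.0, 10.0))   # ((10.0, 7.5), 5.0)
```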
- South America (0.55)
- North America > United States > Texas (0.28)
The Problem: Automatic Downloading of Data

PanCanadian is in the process of migrating from mainframe-based systems and databases to personal computers and network servers. The process has been underway for about four years now, and it will probably be at least as long again until all the mainframe systems have been replaced. Though data is increasingly available within the client-server environment, much of the key data still resides on the old mainframe. PanCanadian's in-house Information Systems group is primarily focused on the new client-server workplace, and responsibility for the mainframe systems has been out-sourced. Though the now out-dated mainframe reports continue to be available, these hard-copy paper stacks are the only way that some of this data can be directly accessed.

The Solution: Spreadsheet Parsing of Mainframe Print Buffers

When a mainframe report is run, a buffer file is created before the report is actually printed. This buffer file can be manually captured and file-transferred from the mainframe environment to the PC workplace. In those instances where the report parameters are set sufficiently broadly, the printed report represents an echo of the mainframe database, albeit in text form. The captured file usually contains the line printer feed codes as well as job and page header information. To access this information within a spreadsheet or database application, the text must be cleaned up and parsed into columns. Though Excel contains parsing or 'text to columns' commands, these are not sufficiently robust to handle most of the mainframe report formats encountered. To facilitate conversion of the text reports into spreadsheet format, a general-purpose parsing spreadsheet was developed.

The Opportunity: Direct Business Access to Data on a PC

Once mainframe data is in spreadsheet or database format it can be readily accessed and manipulated. Data can be cross-correlated to yield additional information and identify patterns and dependencies not evident in the raw data alone. The business can use this information to measure performance and establish targets for improvement.

Example

To evaluate the results of 1994 deep gas drilling within a district, data had to be brought together from a number of mainframe systems (Financial, Production and Reserves) as well as a server-based completions database. A financial report was first used as the basis for identifying wells drilled as part of each project within the district. The report was converted from a purely text file into a spreadsheet file which honored the report layout, through use of the parsing spreadsheet. The spreadsheet file was then converted from a report layout into a database layout by means of some simple Excel look-up and index functions. A list of wells drilled was then extracted by project, and Authorization For Expenditure (AFE) descriptions were converted into well locations. Look-up and index functions were then used to link production data back to this well list. Subsequent data correlation (not illustrated) was then used to link completions costs and reserves data. Charts are shown for drilling outcome by program type as well as by area. Completion success has been charted for those horizons identified at casing point as well as for subsequent completion attempts.
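As a rough illustration of what the parsing spreadsheet automates, the Python sketch below strips printer control codes and page headers from a captured print buffer and slices the remaining lines at fixed character positions. The column layout and header markers are hypothetical; a real report would need its own positions and skip rules.

```python
import re

# Hypothetical fixed-width column layout (name, start, end) for one report format;
# the real positions depend on the mainframe report being captured.
COLUMNS = [("well_id", 0, 12), ("afe", 12, 22), ("gross_cost", 22, 36), ("net_cost", 36, 50)]

def parse_report(lines):
    """Turn captured print-buffer lines into rows of column values.

    Skips form feeds, carriage-control characters and page/job headers,
    then slices each remaining line at fixed character positions --
    roughly what the general-purpose parsing spreadsheet does.
    """
    rows = []
    for raw in lines:
        line = raw.replace("\f", "").rstrip("\n")
        if not line.strip():
            continue                      # blank separator lines
        if re.match(r"\s*(PAGE\s+\d+|REPORT\s+ID|RUN DATE)", line, re.I):
            continue                      # page and job headers (hypothetical markers)
        if line[:1] in ("1", "0", "+"):   # ASA carriage-control characters
            line = " " + line[1:]
        rows.append({name: line[a:b].strip() for name, a, b in COLUMNS})
    return rows
```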
Abstract

The computer information industry is seeing revolutionary growth with the introduction of the Internet and especially the World-Wide Web (WWW). There is no doubt that this trend will affect the petroleum industry tremendously in the near future. This paper analyzes the tools currently available and tries to forecast the state-of-the-art use of these tools in this industry. Web browsers are maturing every day, with security features and interactive capabilities. Java (and JavaScript) provides yet another powerful, object-oriented way of writing tools for the Internet. Already, more and more oil and gas companies are using these for inter-company and intra-company data transfer, advertising, and many other purposes. This paper was submitted to SPE using a Web browser. With so many advantages, it is easy to foresee that these same companies will want to use these tools more and more often in their daily work. This entails using them for log acquisition and analysis, reservoir simulation, well testing, production data analysis, reserves, filing regulatory reports, etc. Users will also be satisfied with the openness, ease of use, common interface (low-cost training) and widely available tools (such as the numerous Java applets) that they can use to design and customize to their own taste. This paper gives some examples of using Web browsers to do engineering calculations such as oil and gas correlations and field calculations. It is the hope of the author to inspire other colleagues in this industry to start in this direction and bring this bright future nearer.

Introduction

In the beginning, computers were scattered all around, separated from each other. Then the Internet was created. Academics used it at first to exchange ideas among themselves through e-mail, telnet and ftp. It was seen as good, but difficult to use for the normal population, the critical mass. Then came the Internet browsers. It was those browsers that made the Internet popular and connected those networks of internets and intranets into a World-Wide Web. The ease of use of these browsers made it a breeze to navigate (surf) the Web. Nowadays, almost every company, university, and organization has a presence on the Web. However, it is important to note that, up until today, most of these Web pages contain relatively static information, published and updated regularly or irregularly. With the introduction of JavaScript and Java, the browsers are taking on more functions and becoming more useful to scientific and business users as they become more interactive. As a matter of fact, Java is threatening to take over many of the traditional programming languages such as Visual Basic and Visual C++ because of its ease of use, better portability and ever-increasing popularity. The petroleum industry, in the author's opinion, is not too late in jumping on the bandwagon with this new and developing technology. Already, many upstream E&P companies and service companies are appearing on the net. But again, most of these Web pages are also static information. Scientific computing on the Web is still in its infancy. This paper illustrates the ease of use and the possibilities of petroleum computing on the Web through some simple engineering applications.
It is the author's hope that, with this paper, more applications will be introduced in this industry dealing with a wider variety of daily routines such as log analysis, seismic interpretation, financial analysis, reserves estimates and management, well testing, geostatistics, reservoir simulation, etc. P. 245
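As an example of the kind of oil and gas correlation the paper suggests running behind a Web page, the sketch below evaluates one common form of Standing's bubble-point correlation in Python; the same few lines could sit behind a browser form or a Java applet. The input values in the usage line are made up for illustration.

```python
def standing_bubble_point(rs_scf_stb, gas_gravity, temp_f, api):
    """Bubble-point pressure (psia) from one common form of Standing's correlation.

    rs_scf_stb : solution gas-oil ratio, scf/STB
    gas_gravity: gas specific gravity (air = 1)
    temp_f     : reservoir temperature, deg F
    api        : stock-tank oil gravity, deg API
    """
    a = 0.00091 * temp_f - 0.0125 * api
    return 18.2 * ((rs_scf_stb / gas_gravity) ** 0.83 * 10 ** a - 1.4)

# Made-up inputs; prints roughly 2700 psia
print(round(standing_bubble_point(675, 0.75, 180, 35)))
```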
- Geophysics > Seismic Surveying (0.54)
- Geophysics > Borehole Geophysics (0.34)
- Information Technology > Software > Programming Languages (1.00)
- Information Technology > Communications > Web (1.00)
Abstract

Past publications have indicated that matrix treatment failures are on the order of 30%. To improve the success rate for matrix treatments, current work has focused on real-time field monitoring. These systems calculate the evolution of skin during matrix stimulations. However, such systems can only tell you how your treatment is performing. A system that optimizes fluids prior to pumping is needed so that an engineer can take true advantage of monitoring acid treatments. This paper describes the development of an integrated matrix stimulation model for sandstone and carbonate formations that assists in determining formation damage, aids in the selection and optimization of fluid volumes, provides a pressure and skin response of the acid treatment, and forecasts the benefit of the treatment. The model includes three expert advisors for the novice engineer, a kinetics-based multilayer reservoir model, and a geochemical model to determine rock-fluid compatibility problems. Additional modules that provide support for the user are a scale predictor, critical drawdown, a ball sealer forecaster, and a fluid database for the selection of fluids and additives. A production forecast module is included to forecast the benefit of the stimulation.

Introduction

Formation damage can occur from natural or induced mechanisms that reduce the capability of flow between the formation and the near-wellbore region, thus giving rise to a positive skin. To mitigate this damage, matrix treatments using reactive and non-reactive fluids are pumped into the formation. StimCADE (Stimulation Treatment Integrated Model Computer Aided Design and Evaluation) was developed as an integrated software application used to identify, prevent and mitigate formation damage. The goal of StimCADE is to optimize stimulation treatments, recognize failures and maximize job success. Within ARCO, matrix stimulation treatments fail to improve productivity in one out of three treatments. A summary of these failures is shown in Table 1. The current practices for selecting wells for matrix stimulation are evaluating well production/injection histories, offset well performance and pressure transient analysis. Design techniques to improve the well's performance are based on 'rules of thumb'. To improve ARCO's matrix treatments, a real-time monitoring system1 was developed based on the work of Paccaloni and Provost. This technique calculates a transient or "apparent" skin vs. time, as shown in Fig. 1. The adaptation of this technique has improved the area of incorrect field procedures. Since then, several authors have expanded on these ideas by calculating a skin derivative vs. time and using an inverse injectivity plot as diagnostic tools. To prevent the use of the wrong fluid, expert systems were developed by ARCO and others. However, these tools were based on rules of thumb, providing no analytical solutions. Past experience indicates that knowledge systems are often discarded by the engineer after a few uses and have only found utility as teaching tools. To overcome this limitation, and to circumvent the loss of expertise within the industry, the expert systems provided within the new software are integrated with an analytical model. This paper examines how to optimize matrix treatments using an integrated design strategy. The software utilizes expert systems linked to analytical acidizing simulators, along with several peripheral tools, to achieve the optimized treatment.
Approach

StimCADE is an integrated program designed to allow the user to enter data, calculate, and obtain results. P. 75
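To make the skin-monitoring idea concrete, the hedged sketch below computes an apparent skin from injection rate and pressure using a steady-state radial-flow rearrangement, in the spirit of the Paccaloni-type monitoring the paper builds on. The exact equation form, drainage-radius choice and all input values here are illustrative assumptions, not StimCADE's implementation.

```python
import math

def apparent_skin(dp_psi, q_bpm, k_md, h_ft, mu_cp, b_rb_stb, re_ft, rw_ft):
    """Apparent (transient) skin from a steady-state radial-flow rearrangement.

    dp_psi : bottomhole injection pressure minus reservoir pressure, psi
    q_bpm  : injection rate, bbl/min (converted to bbl/day below)
    Remaining arguments are permeability (md), net height (ft), viscosity (cp),
    formation volume factor (rb/STB) and assumed outer/wellbore radii (ft).
    """
    q_bpd = q_bpm * 1440.0                       # bbl/min -> bbl/day
    return (k_md * h_ft * dp_psi) / (141.2 * q_bpd * b_rb_stb * mu_cp) \
           - math.log(re_ft / rw_ft)

# Evolving apparent skin during a treatment (made-up data):
# falling values would suggest the damage is being removed.
for t_min, dp, q in [(0, 1800, 0.5), (10, 1500, 0.8), (20, 1200, 1.0)]:
    print(t_min, round(apparent_skin(dp, q, 50, 30, 0.8, 1.1, 30, 0.35), 1))
```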
- Geology > Geological Subdiscipline > Geochemistry (0.70)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock > Sandstone (0.50)
- Geology > Mineral > Silicate > Phyllosilicate (0.47)
- North America > United States > Texas > Permian Basin > Yeso Formation (0.99)
- North America > United States > Texas > Permian Basin > Yates Formation (0.99)
- North America > United States > Texas > Permian Basin > Wolfcamp Formation (0.99)
- (21 more...)
- Well Completion > Acidizing (1.00)
- Production and Well Operations > Well Intervention (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Artificial intelligence (1.00)
- Reservoir Description and Dynamics > Formation Evaluation & Management > Pressure transient analysis (0.68)
Abstract

This paper presents a new parallel ILU preconditioner. It is based on the technique of Sequential Staging of Tasks (SST), which overcomes the difficulty of recursiveness arising from ILU factorization. This new parallel ILU preconditioner is easy to reconstruct from the sequential version. The only requirement is to insert synchronization code for the stages of tasks into the sequential version; no other modifications to the original serial code are needed. The characteristic of matching various ordering schemes is maintained, and the new merit of handling different numbers of processors is obtained. Numerical results were obtained using a thermal model with different grid systems. The parallel speedup is satisfactory.

Introduction

Simulation of thermal recovery processes using a fully implicit treatment of component concentrations, phase saturations, pressure and temperature requires the solution of large systems of linear equations. Currently, the most robust techniques for solving these large systems of linear equations are preconditioned conjugate-gradient-like methods such as ORTHOMIN, which is widely used in traditional solvers for sequential computers. The major computation in ORTHOMIN is the vector inner product, which is easy to parallelize. However, the most robust preconditioners, such as ILU factorization and nested factorization, are not suitable for parallel computers because of their intrinsic recursiveness. It is difficult to parallelize the ILU preconditioner directly. At present, the general approach to parallelizing preconditioners is to use new preconditioning methods such as parallel nested factorization preconditioning. These new preconditioning methods have high parallel efficiency on parallel computers, but they also have limitations: limited types of ordering schemes, comprehensive modifications of the sequential codes, and slow convergence. This paper presents a new parallel ILU preconditioner based on the technique of Sequential Staging of Tasks (SST). Using the SST technique, the new preconditioner exploits the small-scale parallelism of ILU factorizations and achieves a temporal, larger-scale parallelism within a certain computing domain, consequently obtaining an applicable parallel preconditioner. The new parallel preconditioner maintains all the characteristics of the sequential version and is easy to reconstruct from the sequential version. The new preconditioner is applicable to computers with different numbers of processors. The numerical experiments show that the parallel speedup is satisfactory. On an NP 1/52 Mini-Supercomputer System (produced by GOULD Co. in 1989; shared main memory, symmetrical operating system UTX/32) with two processors, the parallel speedup is 1.85.

ILU factorization

Consider the linear system Ax = b. As a preconditioner for ORTHOMIN, ILU factorization provides a matrix M that is a "good" approximation to the coefficient matrix A and is easy to factor; convergence may be accelerated by solving the equivalent system [M⁻¹A]x = M⁻¹b. Such preconditioning should offset the added cost of factoring M and performing a forward and back solution with each matrix-vector multiplication by reducing the number of iterations substantially. The main stages of ILU factorization are as follows: a symbolic factorization, defining the non-zero structure of the incomplete factorization, followed by the numerical factorization, which loops For k = 1, NB Do, inverting the main elements (1). P. 135
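For readers unfamiliar with the sequential algorithm being parallelized, the sketch below shows an ILU(0) factorization and the corresponding forward/backward preconditioner solve on a small dense matrix. It is a minimal illustration of the serial building block only; sparse storage, ordering schemes and the SST staging that makes the preconditioner parallel are deliberately omitted.

```python
import numpy as np

def ilu0(A):
    """Incomplete LU factorization with zero fill-in (ILU(0)).

    Follows the usual Gaussian-elimination recurrence but only updates entries
    where A is already non-zero, so L and U keep A's sparsity pattern.
    Returns unit-lower L and upper U packed into one matrix.
    """
    LU = A.astype(float)
    n = LU.shape[0]
    nz = A != 0                      # original non-zero pattern
    for k in range(n - 1):
        for i in range(k + 1, n):
            if not nz[i, k]:
                continue
            LU[i, k] /= LU[k, k]     # multiplier, stored in place of L
            for j in range(k + 1, n):
                if nz[i, j]:         # no fill outside the pattern
                    LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

def apply_preconditioner(LU, r):
    """Solve M z = r with M = L U by forward then backward substitution."""
    n = len(r)
    z = r.astype(float)
    for i in range(n):               # forward solve, unit diagonal on L
        z[i] -= LU[i, :i] @ z[:i]
    for i in reversed(range(n)):     # backward solve
        z[i] = (z[i] - LU[i, i + 1:] @ z[i + 1:]) / LU[i, i]
    return z

A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
print(apply_preconditioner(ilu0(A), np.array([1., 2., 3.])))
# On this tridiagonal example ILU(0) is exact, so the result is the solution of Ax = b.
```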
Abstract

Monitoring the performance of the Kuparuk River Unit waterflood at a multi-well, pattern level is a critical part of field operations. The reservoir performance analysis optimizes the allocation of injected fluid, helps identify well work and infill drilling opportunities, supports reservoir management strategies, and provides a basis for development planning. The faulting and stratigraphy of the reservoir make it difficult to determine areal and vertical allocation factors for fluids in the surveillance patterns; therefore, material balance calculations are required to judge their validity. The problem is exacerbated by the number of patterns analyzed and the need to share intermediate results among all of the engineers involved in the process. To help solve the problem, a suite of programs for interactive pattern material balance was developed. The program suite includes a principal material balance calculation application along with several ancillary programs for interactive database updates and post-processing. The program suite allows engineers to interactively change input parameters and review material balance results. Internal checks ensure consistency throughout the field. The programs are fully integrated with a large central relational database, which includes tables for areal and vertical allocation factors, production, injection, and static pattern information.

Introduction

The Kuparuk River Field is located west of the Prudhoe Bay Unit on the North Slope of Alaska. The field is a highly faulted reservoir with an areal extent of over 200 square miles. The field is under active waterflood, with some areas also under immiscible or miscible WAG (water-alternating-gas) flood. A formal review of reservoir performance at the pattern level is done on an annual basis. The review includes defining patterns and allocating fluids produced from or injected into the patterns. This is followed by a comprehensive review of performance on a pattern-by-pattern basis and rate forecasts. The review is necessary to help identify workover and infill drilling opportunities to optimize the waterflood. It is also required to help with operational considerations such as the allocation of injected fluids throughout the field. The validity and usefulness of the surveillance review is highly dependent on accurate allocation of production and injection to each pattern in the Kuparuk River reservoir. This is a difficult problem because over 600 patterns in 2 zones are analyzed. The process is led by a team of five to six engineers. An additional 15 to 20 engineers and geoscientists are involved in defining patterns and evaluating allocation factors. Because of the large number of individuals involved and the large amount of data, a group-oriented solution was required in order to achieve consistency across the field. Internal data checks were also required so that wells could not be over- or under-allocated and to accurately account for all fluids.

History of Software Development Efforts

Several computing solutions have been explored. The first software solution was a mainframe material balance program tied to SAS databases. Iterations required editing ASCII text files and overnight batch runs to update databases, followed by batch processing for pattern analysis. Visualization consisted of batch programs that submitted plots to central printers, often with half-day turnaround times. A major advantage of the system was its use of a centralized database and enforced consistency.
The disadvantages of long turnaround times and other difficulties within the mainframe environment made the process inefficient and manpower-intensive. As personal computers became more popular, much of the process was ported to spreadsheets. The analysis process consisted of downloading data from mainframe databases into a text format that was read into the spreadsheets. Engineers manipulated and modified the data within the spreadsheets to perform individual pattern analyses. The process was somewhat more efficient in terms of manpower, but consistency was nearly non-existent. P. 41
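A minimal sketch of the kind of internal consistency rule described above: each well's areal allocation factors must sum to one before its volumes are rolled up to patterns. The pattern names, factors and volumes below are hypothetical.

```python
# Hypothetical well-to-pattern allocation table and pattern volume roll-up,
# illustrating the rule that no well may be over- or under-allocated.
ALLOC = {                      # (well, pattern) -> areal allocation factor
    ("1A-01", "P-101"): 0.60, ("1A-01", "P-102"): 0.40,
    ("1A-02", "P-102"): 1.00,
}
WELL_VOLUMES = {"1A-01": 12000.0, "1A-02": 8000.0}   # e.g. monthly injected water, bbl

def check_allocation(alloc, tol=1e-6):
    """Raise if any well's allocation factors do not sum to one."""
    totals = {}
    for (well, _), f in alloc.items():
        totals[well] = totals.get(well, 0.0) + f
    bad = {w: t for w, t in totals.items() if abs(t - 1.0) > tol}
    if bad:
        raise ValueError(f"over/under-allocated wells: {bad}")

def pattern_volumes(alloc, well_volumes):
    """Allocate each well's volume to patterns using the factors."""
    out = {}
    for (well, pattern), f in alloc.items():
        out[pattern] = out.get(pattern, 0.0) + f * well_volumes[well]
    return out

check_allocation(ALLOC)
print(pattern_volumes(ALLOC, WELL_VOLUMES))   # {'P-101': 7200.0, 'P-102': 12800.0}
```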
- North America > United States > Alaska > North Slope Borough (1.00)
- Europe > United Kingdom > Irish Sea > East Irish Sea > Liverpool Bay (0.61)
Summary

Software development today is steered by developers rather than by engineers, as a result of sweeping changes in our industry. Though this has produced higher computational efficiency and more flexible GUIs, the applications often lack the robustness, transparency and accurate description of the physics of the old programmes. The professional profile of the user has also changed: the expert has been replaced by occasional users who often lack a deep knowledge of the application and of its technical background. This is not entirely matched by increased "engineering" expertise from the vendors, nor by an effort to make the programmes more transparent and the users more informed. Choosing the right software therefore requires going through a three-step process, the critical step being an extensive test and validation. Selected examples from the Company's portfolio of programmes are presented to demonstrate the risks of introducing software without testing. Examples of the costs incurred in testing petroleum engineering software, from easy to complex applications, show that it is time consuming and expensive, mainly because it requires training expert teams with the right mix of skills. The activity is difficult to justify when everything we do is scrutinized with a short-term perspective to find areas for cost cutting. Centralizing the service in the Company headquarters is one option for containing the cost without compromising on quality. Greater savings could possibly be achieved by insourcing back part of the software development and by setting up alliances between client companies sharing the same interest.

Introduction

Petroleum engineers today have the option of choosing among several programmes developed by specialist vendors for various platforms. The days when they had to write and compile their own programmes in FORTRAN or Basic have long since gone. Significant progress was achieved in terms of user friendliness, portability, integration and data management, all aspects that an engineer writing his own code did not, and could not, address. But what happened to the technical robustness and transparency of the old programmes? Did the vendors succeed in providing the same level of improvement in this area as in all other IT areas? Is the user provided with sufficient information to make the right choice, and what are the screening criteria? The answers to these questions are not straightforward, nor are they probably unique for all users. But whereas answering the first two mainly involves an assessment of the programmes, answering the third requires an understanding of the technical profile of the current users, of the work process and of the data flow on the one hand, and of the expertise, aims and resources of the software developers on the other. All this has dramatically changed in the last two decades, and even greater changes are likely to occur in the next few years.

Review of the industry trends

The history. Engineers in the 70's were often writing their own software to help them in their duties. More complex programmes with thousands of lines of code, such as reservoir simulators, were bought and maintained by a few specialized vendors. Neither kind was user friendly; both were run in batch mode, required time and patience to process and display input/output, and were installed on the Company's single platform (mainframe). Porting to other platforms was not an issue, and integration was not sought.
These programmes, though technically robust, had limited IT content: they were basically the coding of complex algorithms. In a four-quadrant plot (Fig. 1), with the engineering aspects (reliability and robustness of algorithms and code, transparency) on the y-axis and the IT aspects on the x-axis, these programmes would be described by a single point. The availability of more powerful and faster computers, along with the diffusion of software applications, led to increasing IT content: user-friendly I/O utilities, interactive applications and the need for integration made it difficult for engineers to develop programmes accepted by their peers. P. 141
Abstract

The popularity of laptop computers has dramatically increased because of their portability, low cost and increasingly competitive performance relative to workstations and mini-computers. As a result, many organizations are beginning to migrate their engineering applications to laptop computers. In 1994 Schlumberger Dowell started to migrate the full suite of mini-computer based CADE (Computer Aided Design and Evaluation) applications to laptop computers. This paper chronicles the issues faced and resolved during the successful migration of the software.

Introduction

Over a period of 10 years, Dowell had developed 6 CADE (Computer Aided Design and Evaluation) software products, consisting of approximately 650 thousand lines of code (KLOC), which resided on VAX/VMS systems. A need was identified to quickly and economically move this software to a fast and portable computer platform. The company has made 4 major transitions since 1975 (General Electric Time Sharing to Honeywell 6000 to IBM 4341 to VAX/VMS). The last phase has lasted 10 years. The objective of this project was to migrate the CADE software to a laptop computer as quickly and economically as possible. The resulting system was required by the customer to work on a typical 1994 laptop computer configuration, i.e., a PC with a 486/33 MHz processor, 8 MB of RAM and less than 50 MB of free disk space. This paper describes the experiences encountered and the solutions used during the migration effort. The pre-migration status, the feasibility study prior to migration and a description of the actual development phase are covered. The section on the feasibility phase gives the details of how third-party software was selected. The paper concludes with a postmortem of the project and a summary of the lessons learned.

Historical Background

The CADE software is a suite of computer applications for designing and evaluating the various services provided by an oilfield service pumping company. These services include hydraulic fracturing, matrix acidizing, coiled tubing, cementing and sand control. The software was developed using the Ada, Fortran and C languages. About 70% (450 KLOC) of the product was developed in Ada and approximately 30% (200 KLOC) in Fortran, with only a trace of C. Fortran was used exclusively for numerical calculations, while Ada was used for the human interface and the remainder of the numerical calculations. Ada was the predominant language because of its strong data typing, information hiding and generic template features, which made the source code easy to maintain. Ada also helped to minimize the most common programming errors made using the more prevalent mainstream languages. The original CADE software was developed for use on the DEC MicroVAX II mini-computer. More than 200 of these mini-computers are networked together around the world using the company's worldwide network. At the introduction of the CADE software in 1985, the mini-computer offered a significant performance improvement over the previously used time-sharing computing facility. The mini-computers also allowed the software to be used in remote locations where low telephone communication quality had precluded connection to the time-sharing computer.

User Survey. A survey of the CADE user community was conducted in late 1992 to determine user satisfaction and the direction of future development. The user response was considered excellent at 59%, with 71 of a total of 120 questionnaires being returned.
The survey contained questions about the user profile and specific issues regarding the various products. P. 157
Abstract

In developing mission-critical, real-time applications, the authors have found historical testing methods to be inadequate for personal computers (PC's). A major problem is the myth of "PC compatibility," i.e., the notion that all PC's are interchangeable and function independently of the software. The authors propose that the interactions between PC hardware and software are extensive and complex, and require that system specification and testing be modified to include integrated hardware/software testing.

Introduction

One of the primary reasons for the widespread use of the personal computer in commercial applications has been the openness of the system, allowing many competing vendors to develop hardware and software components. This competition has produced surprisingly rapid advances in computer technology and in system capabilities. However, the main goal of application developers continues to be producing applications that are usable, reliable and cost-effective, both to develop and to support. The historical approach to application development has included independently specifying and testing the hardware and software components of the system. This has proven to be inadequate for mission-critical PC applications, due in large part to the fact that modern PC's are high-performance machines containing an array of specialized components. The open architecture of the PC, compared to the previous closed (proprietary) designs, creates a situation where each of these hardware and software components can come from multiple, independent sources, resulting in a never-ending stream of possible combinations. The differences between PC's and the complex interactions between the software and the various hardware components require that specification and test procedures be modified to include rigorous testing of all hardware/software combinations. In addition, due to the continuing evolution and changes in PC hardware and in software development tools, these test procedures must continue in some form throughout the product's life cycle. This paper describes the authors' experience in developing and distributing PC-based applications, emphasizing practical solutions and anticipated future improvements.

Impact of the Compatibility Problem

The industry relies on personal computers to provide data acquisition, increases in personal productivity, and assistance with data analysis. We are increasingly dependent on these devices and their associated software. We have all been faced with mixed emotions when upgrading our personal computers to new operating systems, new hardware, or both. We welcome the prospect of an analysis taking 20 to 50% less time and, in some cases, doing things not possible at all a few years ago without massive investments in workstations, mainframes and specialized training. However, will our old applications still function in a predictable manner? Will we spend weeks of nonproductive time on the phone in an attempt to contact a living and knowledgeable technical support staff member, or be relegated to a combination of voice and fax-back hell? If the personal computer industry can build a device that operates 2500 percent faster than the original IBM PC, why can't all my applications still work? Sadly, the true answer is that computers provide the desired benefits only after a start-up learning curve.
The game, therefore, becomes an effort to reduce this learning curve to a minimum, searching for the quickest answer to the question "The hardware has changed again; does everything still work?". P. 129
- Energy > Oil & Gas > Upstream (1.00)
- Information Technology (0.87)
- Information Technology > Software (1.00)
- Information Technology > Hardware (1.00)
- Information Technology > Architecture > Real Time Systems (0.75)
Abstract

In the Shell Group, we have developed a state-of-the-art data management system for the storage of resource (field, reservoir, etc.) related data. It is called RISRES (Reservoir Information System - REServoir module). First released in 1993, it is currently in operational use in eight Shell Group operating companies around the globe. This article describes the specification and construction process, as well as various aspects of the system which make it a powerful resource data management system.

Introduction

The ability to store, retrieve and share data relating to oil and gas reservoirs is critical to their efficient management. Inordinate amounts of time are often spent collecting, verifying, storing, sharing, re-collecting and re-verifying data for the purposes of studies, analyses and reports. RISRES is a state-of-the-art resource data management system which was developed to meet this challenge by addressing a set of business requirements described below. Features which make it state-of-the-art include: extensibility to store any data; unambiguous versioning and time-stamping; flexible definition of subsurface and reporting structures; the ability to store data in any form, from numbers to document files; auditing and security features; and transparent interfacing to applications. These features, and others which position RISRES as a general resource data management system both inside and outside the Shell Group, are discussed in some detail below. Although it clearly satisfied the stated business requirements, introduction of RISRES to the majority of the Shell Group led to considerable initial resistance in the user community. These experiences are discussed, and the resulting set of data-organization and interfacing concepts, in particular close integration with Shell Group petroleum engineering applications, are also described.

Business requirements

The data explosion has resulted in the need to handle ever-increasing quantities of data for life-cycle management of petroleum resources. In the late 1980s it was observed that just the gathering and validation of data for reservoir-related studies occupied a significant fraction of each study's resources, and that data gathering or analysis would often have to be repeated because previous study data and results would get "lost". One area of particular concern at this time was resource volumes. Companies in the Shell Group were working with a combination of legacy systems and spreadsheets. The "old technology" legacy systems were generally inflexible to the evolving requirements of resource classification and were often seen as a "black hole" into which data disappeared, never to be seen again. These problems tended to be addressed by the use of mainframe- and PC-based spreadsheets, or other "quick fixes", to fill in the gaps not covered by the legacy systems, in some cases replacing them altogether. However, these spreadsheets and files were usually in disparate and undocumented locations on hard disks around the company offices, generally included little or no validation, and often contained hidden errors. In short, they represented an inappropriately fragile audit trail for the main assets of the companies: their hydrocarbon resource volumes. The Shell Group lacked a database to hold gathered, validated and processed data, including hydrocarbon volumes, applicable at a resource level (i.e., fields, reservoirs, blocks, etc.). This led to the establishment of the Reservoir Information Systems (RIS) project in 1990. P. 27
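As a loose illustration of the versioning and time-stamping features listed above (not RISRES's actual data model), the sketch below stores every value of a resource parameter with its source and timestamp and treats the latest version as current. All names and values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ResourceItem:
    """One resource-level parameter with its full, never-overwritten history."""
    resource: str                      # e.g. a field, reservoir or block name
    parameter: str                     # e.g. "STOIIP"
    versions: list = field(default_factory=list)

    def store(self, value, unit, source):
        self.versions.append({
            "value": value,
            "unit": unit,
            "source": source,                          # rudimentary audit trail
            "stored_at": datetime.now(timezone.utc),   # time stamp
            "version": len(self.versions) + 1,
        })

    def current(self):
        return self.versions[-1] if self.versions else None

item = ResourceItem("Field-X/Reservoir-A", "STOIIP")
item.store(120.0, "MMstb", "1995 full-field study")
item.store(134.0, "MMstb", "1996 annual review")
print(item.current()["value"], len(item.versions))   # 134.0 2
```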