In the last decade, changes in the oil and computer industries have significantly complicated the delivery of software applications. While the oil industry has gone through extensive downsizing, computing advances have led users to demand more complete and integrated computing solutions. Several oil companies responded to this problem with a policy of "buy, don't build." Although useful, this mindset does not directly address the real question: how to leverage limited resources to deliver the necessary applications to the user community efficiently. This paper delves beyond the bipolar buy vs. build question to present experiences with various approaches that have been used for delivering software. Among the methods we discuss are software tools, research institutions, vendor consortium projects, alliances, and industry standards.
Both the oil and the computer industries have evolved significantly during the last decade. The challenge of delivering software applications to the oil industry has been intensified by the rapid advances in computing technology and further complicated by the downsizing of the petroleum E&P industry. The old paradigm in which oil company internal R&D laboratories deliver the software applications to their internal customers is difficult to maintain with these contrasting forces. R&D departments have needed to change from pure research organizations to value-added entities. As a result, some oil companies have established a new mindset of "buy, don't build."
While this is often a useful distinction, buy vs. build is far too polar a framing. The complexity of delivering software in today's rapidly changing environment requires maximizing limited resources. As a software vendor, we have experience with a number of techniques for leveraging our software-development resources, among them software tools, research institutions, vendor consortium projects, alliances, and industry standards. These alternatives all fall somewhere in the buy/build spectrum. This paper presents some of the benefits and limitations of these approaches.
Reservoir simulation is a mature technology, and nearly all major reservoir development decisions are based in some way on simulation results. Despite this maturity, the technology is changing rapidly. It is important for both providers and users of reservoir simulation software to understand where this change is leading. This paper takes a long-term view of reservoir simulation, describing where it has been and where it is now. It closes with a prediction of what the reservoir simulation state of the art will be in 2007 and speculation regarding certain aspects of simulation in 2017.
Today, input from reservoir simulation is used in nearly all major reservoir development decisions. This has come about in part through technology improvements that make it easier to simulate reservoirs on one hand and possible to simulate them more realistically on the other; however, although reservoir simulation has come a long way from its beginnings in the 1950's, substantial further improvement is needed, and this is stimulating continual change in how simulation is performed.
Given that this change is occurring, both developers and users of simulation have an interest in understanding where it is leading. Obviously, developers of new simulation capabilities need this understanding in order to keep their products relevant and competitive. However, people who use simulation also need this understanding; how else can they be confident that the organizations that provide their simulators are keeping up with advancing technology and moving in the right direction?
In order to understand where we are going, it is helpful to know where we have been. Thus, this paper begins with a discussion of historical developments in reservoir simulation. Then it briefly describes the current state of the art in terms of how simulation is performed today. Finally, it closes with some general predictions.
This paper provides a business perspective of how harnessing information technology (IT) can facilitate change, especially in the oil and gas industry. The oil and gas industry has benefited significantly by leveraging developments in IT for applications that reduce the cost of oil exploration; reduce the time required to perform complex tasks; and link people and information, thereby enabling virtual teams. Typical examples of virtual teamwork are three-dimensional (3D) seismic surveys and million-gridblock reservoir simulations. To implement information-age solutions with global access and anytime availability, one must be aware of security, remote connectivity, and information management issues. Overcoming these issues can be an opportunity for oil and gas industry IT solutions organizations. Future developments in software, Intranet, powerful clients, massively parallel processing (MPP), and asynchronous transfer mode (ATM) will further enhance IT's capability to facilitate change in the oil and gas industry. IT already has played a significant role in the revival of this industry and will continue to do so into the 21st century.
The benefits and wide-ranging impact of IT are launching this world from the Industrial Age into the Information Age. IT has been a key factor for most businesses in defining their ability to manage change, and this is especially true for the oil and gas industry. This paper highlights the role of IT in the oil and gas industry by stating the achievements it has brought about to date, studying the current trends that significantly impact today's business, analyzing the main issues faced from a business viewpoint, providing examples of some recent projects and achievements, and extrapolating to what the future has in store. The paper identifies the agents of change from an IT perspective and, in the interest of brevity, focuses on one of the most important agents of today and the near future: the Intranet.
In its simplest form, IT can be defined as a combination of hardware, software, applications, and content combined with communications (a factor gaining increasing importance over time) that is used to store, retrieve, and analyze information. The force of IT is pervasive and growing daily. In the industrial sector, IT dismantles barriers and encourages the development of a global market. By encouraging us to look at how business is done and how it could be done better, IT enables new business growth.
At its core, IT helps bring people together. IT cements the links between client and product developer or service provider, between designer and manufacturer, and between diverse parts of a global organization. It aids team-building everywhere, not just within an organization, but also with partners and clients. In today's jargon, IT enables the virtual organization.
We designed and implemented a work-flow automation system to automate the flow of operational data throughout a servicing operation. Work-flow automation enables well-service personnel who are rarely network-connected to access information systems containing updated job requirements and well data. As the service personnel perform their work, data from the job are stored on a personal computer by use of a store-and-forward architecture. Once the computer connects to the wide area network (WAN), job data are forwarded to the service-center operations systems. Copies of these data also can be forwarded to the customer to demonstrate enhanced value or to track cost. In this pilot implementation, we decreased the number of paper forms substantially, increased the quality of data collection, distributed updates to the system to the field easily, and relayed key data to the customer quickly. This paper documents the pilot implementation of this system and the lessons learned from its implementation.
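To make the store-and-forward idea concrete, the sketch below shows one minimal way such a pipeline could work, assuming a hypothetical local queue file and operations-center endpoint; it illustrates the pattern, not the pilot system's actual implementation.

```python
import json
import os
import socket
import urllib.request

QUEUE_FILE = "pending_jobs.jsonl"                  # hypothetical local store on the field PC
SERVER_URL = "http://ops-center.example.com/jobs"  # hypothetical service-center endpoint

def store_job(job_data):
    """Store step: append a completed job record to the local queue."""
    with open(QUEUE_FILE, "a") as f:
        f.write(json.dumps(job_data) + "\n")

def wan_available(host="ops-center.example.com", port=80):
    """Crude connectivity check: can we open a socket to the server?"""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False

def forward_pending_jobs():
    """Forward step: when the WAN is reachable, deliver queued records."""
    if not os.path.exists(QUEUE_FILE) or not wan_available():
        return
    with open(QUEUE_FILE) as f:
        pending = [json.loads(line) for line in f]
    for job in pending:
        req = urllib.request.Request(
            SERVER_URL,
            data=json.dumps(job).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # raises on failure, leaving the queue intact
    os.remove(QUEUE_FILE)            # all records delivered; clear the local store
```

The same queue could carry a copy of each record addressed to the customer's systems, which is how the architecture supports forwarding job data to the operator as well.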
Operators base many future decisions on the value provided by a particular service. If a service is cost-effective, an operator may accept a bid from the provider for additional services. However, the operator may also select a provider on the basis of that provider's extensive knowledge of the well or completion zone from detailed histories of services provided in that area. Such operational knowledge is a valid criterion for selection of a service company.
Though valid operational data are critical in decision-making, they are often difficult to obtain. Typically, operational data are captured on hand-written forms that pass through many channels before the data are entered into a system. Data are often lost or misinterpreted during this process; therefore, basing decisions on those data is risky.
The PetroTechnical Open Software Corp. (POSC) was organized in 1990 to define technical methods that make it easier to design interoperable data solutions for oil and gas companies. When POSC rolls out seed implementations, oilfield service members must validate them, correct any errors or ambiguities, and champion these corrections into the original specifications before full integration into POSC-compliant commercial products. Organizations like POSC are assuming a new role of "promoting formation of projects where E&P companies and vendors jointly test their pieces of the migration puzzle on small subsets of the whole problem." We describe three such joint projects. While confirming the value of such open cross-company cooperation, these cases also help to redefine interoperability in terms of business objects that are common across oilfield companies and their applications, access software, data, and data stores.
What Are Common Business Objects (CBO's)?
Suppose we solicited a summary business process description for each of the regulatory agency, occupational, vendor, and oil company roles in the E&P business. Someone in a land office would provide a paragraph describing the business, a geologist would supply a general occupational definition for geologists, and someone would provide a description for drilling companies. These are descriptions of business processes, and their practitioners write them in their own terms. If we selected the 50 most important of these, we would have a good summary of the entire E&P business. In these paragraphs, certain terms describing "things," such as reservoir, well, formation, and fluid, would appear again and again. In some cases, the terms would be different but the concepts would be shared. For each of these things or objects, it would be useful if we could capture a broad definition that would encompass all 50 uses. For example, a land office and a reservoir engineer have particular semantics in mind when they say "well." But there is also a core of shared meaning. The idea of a CBO captures this notion of a shared core definition of an object from the point of view of a business practitioner.
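One way to picture a CBO in code is sketched below: a shared-core "Well" object that two practitioner views extend with their own semantics. The class and attribute names are invented for this illustration and are not drawn from any POSC specification.

```python
from dataclasses import dataclass, field

@dataclass
class Well:
    """Shared core: what every E&P practitioner means by 'well'."""
    name: str
    operator: str
    surface_location: tuple  # (latitude, longitude)
    spud_date: str

@dataclass
class LandOfficeWell:
    """A land office extends the shared core with lease and permit data."""
    core: Well
    lease_number: str = ""
    permit_status: str = ""

@dataclass
class ReservoirEngineerWell:
    """A reservoir engineer extends the same core with completion data."""
    core: Well
    completed_intervals: list = field(default_factory=list)
    current_rate_bopd: float = 0.0

# Both views share one definition of the underlying object:
w = Well("A-1", "ExampleCo", (29.7, -95.4), "1996-03-15")
land_view = LandOfficeWell(core=w, lease_number="TX-0042")
eng_view = ReservoirEngineerWell(core=w, current_rate_bopd=850.0)
```

The point of the sketch is that applications built on different views can still interoperate, because each view references the same shared core rather than redefining "well" from scratch.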
We describe the evolution of a customer-driven, fit-for-purpose database tool that intelligently combines reserves, production, entitlement/sales, CAPEX, OPEX, well-scheduling, and separator-test data. We also describe how the tool, DPARS, evolved from a single-user application on a portable PC into a networked, multi-user corporate tool used world-wide in a mid-size international oil and gas exploration and production company.
DEMINEX is an international oil and gas exploration and production company based in Germany, with interests and activities exclusively outside Germany. Since its foundation in 1969, DEMINEX has grown to be the largest independent oil and gas exploration and production company in Europe, with over 1 billion barrels of oil equivalent ("BOE") in reserves and 230,000 BOEPD of production.
This was not always the case. In the early 1980s, the company had only "a handful" of producing fields; typically, a few production engineers worked with daily production figures to provide weekly average figures to a secretary, who typed the hand-written figures into a word processing package for issue to management and shareholders. The arrival of a PC in Petroleum Engineering in 1985 allowed for a direct connection between raw input data and final output data via a spreadsheet. This greatly eased the process of data input, verification, and eventual consolidation. Using macros, data could be "automatically" entered on a routine basis.
This system was operable but suffered from major problems:
- adding new producing fields proved very complicated and meant opening up additional areas on a 2D spreadsheet,
- it was difficult to simultaneously retain the different "types" of data being reported (e.g., actual, preliminary, forecast, budget, project, or long-range plan), which prevented comparisons, and
- there was no means of recording who entered what, and when.
In total, the production reporting process occupied several engineers almost full-time, and even then, the quality was low. The problems worsened in 1987, when the spreadsheet creator and main user was asked to concentrate on other matters. At this time, it was decided to phase out spreadsheets for this purpose and implement a database option.
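To suggest how a relational design addresses the three problems listed above, the sketch below uses a hypothetical schema (not the actual DPARS design): a new field becomes a new row rather than a new spreadsheet area, each figure carries a data type so that comparisons are possible, and audit columns record who entered what and when.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # hypothetical schema for illustration only
con.executescript("""
CREATE TABLE field (
    field_id INTEGER PRIMARY KEY,
    name     TEXT NOT NULL           -- a new producing field is simply a new row
);
CREATE TABLE production (
    field_id    INTEGER REFERENCES field(field_id),
    report_date TEXT NOT NULL,
    data_type   TEXT NOT NULL,       -- 'actual', 'preliminary', 'forecast', 'budget', ...
    oil_bopd    REAL,
    entered_by  TEXT NOT NULL,       -- audit trail: who entered the figure
    entered_at  TEXT NOT NULL        -- audit trail: when it was entered
);
""")

# Comparing data types becomes a query instead of spreadsheet surgery:
rows = con.execute("""
    SELECT a.report_date, a.oil_bopd AS actual, f.oil_bopd AS forecast
    FROM production a
    JOIN production f ON a.field_id = f.field_id AND a.report_date = f.report_date
    WHERE a.data_type = 'actual' AND f.data_type = 'forecast'
""").fetchall()
```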
DEMINEX is essentially a private company, owned by four shareholders and controlled by three. We are not listed, but our shareholders' parent companies are listed in Germany. Our shareholders receive reports directly and can influence the contents and frequency of these reports.
In 1987, there were no off-the-shelf petroleum-industry database applications available that came near to matching our requirements. (A recent survey of the market concluded that this is still the case.) There were a few products that provided good matches in certain specific areas, such as simple production history and forecasting.
As a result, we considered developing our own database application. The in-house computing department supported a few applications on the VAX and provided no support for personal computers ("PCs"). The available database software for the VAX was cumbersome and of limited capability. At this stage, we lacked the confidence to justify the cost of an external database programmer: we did not know what a database program could provide, and we could define our requirements only in broad terms. We therefore looked for a simple, high-level database package that production engineers could use. The intention was to test the package and, if it proved manageable and was accepted by management, use it to develop a fit-for-purpose application.
Convenient access to fluid property data in electronic form is currently not available in many petroleum organizations. Ready access to this information could result in significant cost savings by reducing the need for repeated studies and reducing the scope of new studies through analogous fluid evaluation. This work presents an overview of the development of a new scaleable database application to address these issues. In addition, design and development issues related to database scaleability are discussed. The application, PVT ReCORD™, is currently undergoing beta-testing by a major commercial laboratory, which is creating an online fluid property repository and automating PVT report generation with this tool.
Reservoir, production, and process simulators require adequate thermodynamic descriptions and models of petroleum fluids in order to perform accurate simulations. Phase-behavior prediction applications model the thermodynamic behavior of these fluids on the basis of fluid property (PVT) data obtained from laboratory studies. The PVT reports that contain these data are often difficult to locate because they are typically stored in an engineer's office rather than in a central location that is easily accessible to other personnel within the same company. As a result, it is not always evident whether a PVT analysis has been performed for a particular field, and another study may be ordered. In addition, PVT reports are typically provided in hard-copy format, and the data must be manually entered and regressed before they can be used by the various simulators.
A relational database application (PVT ReCORD) has been developed by DB Robinson & Associates Ltd. (DBR) to ensure that PVT data are stored, organized, and tracked in a logical and readily accessible digital format. The application provides data storage in a central location that can be accessed through a corporate-wide network. Users can query for fluid data in the database that match specified search criteria, which can eliminate the need for additional PVT studies if a similar fluid is located. In addition, PVT ReCORD is assisting fluid property laboratories in automating the preparation of reports for routine PVT studies.
The application uses an intuitive Microsoft Windows™ graphical user interface (GUI) and can be used in a stand-alone (desktop or local) environment or scaled upward to a client-server environment. Flexible search, report generation, and data visualization promote effective data analysis. Because the application adheres to Windows computing standards and provides functionality such as Dynamic Data Exchange (DDE) and import/export capabilities, data sharing with other applications is a fundamental capability.
Development Approach Used
A two-tiered development approach was used to develop PVT ReCORD. Although the graphical user interface (GUI) tier and the database tier appear fully integrated to the end user, in actuality they are segregated. This segregated approach allows for robust, flexible, rapid application development and a scaleable database architecture that can grow with an organization's information storage needs.
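The sketch below illustrates the general idea of this segregation under assumed names (a FluidStore interface and a hypothetical fluid table); it is not the actual PVT ReCORD code. The GUI tier talks only to an abstract storage interface, so the backend can be scaled from a local file database to a client-server database without touching the interface code.

```python
import sqlite3

class FluidStore:
    """Database tier: storage logic hides behind this interface, so the
    GUI tier never needs to know where the data actually reside."""
    def find_fluids(self, **criteria):
        raise NotImplementedError

class LocalFluidStore(FluidStore):
    """Stand-alone (desktop/local) backend using a file database."""
    def __init__(self, path=":memory:"):
        self.con = sqlite3.connect(path)
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS fluid (field TEXT, api_gravity REAL)")

    def find_fluids(self, **criteria):
        # Build a parameterized WHERE clause from the search criteria.
        where = " AND ".join(f"{col} = ?" for col in criteria)
        sql = "SELECT * FROM fluid" + (f" WHERE {where}" if criteria else "")
        return self.con.execute(sql, tuple(criteria.values())).fetchall()

# The GUI tier codes against the interface; scaling up to client-server
# means adding, say, a ServerFluidStore with the same find_fluids() signature.
store = LocalFluidStore()
matches = store.find_fluids(field="Example Field")
```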
In our industry, data is only as valuable as our ability to use that data to solve problems. In addition, information management and instant access to information is increasingly important in the day-to-day operation of oil and gas fields. This paper presents an example of a project-driven approach to engineering information management of large engineering projects such as integrated reservoir studies. By project-driven, we mean that the design and content of the database are project specific and custom built to fit the needs of the project. This approach has the advantage of addressing the specific requirements of a particular project and providing the end-users of that database with a customized interface to access the data. We used a PC-based relational database management system (RDBMS) to custom build a database from existing engineering data as part of a larger integrated study. The data were originally in a variety of formats. The RDBMS we used allowed us to rapidly construct a database specific to the needs of the project. The database includes forms and reports for data entry and review, and graphics for displaying data. In addition, the database is very flexible, allowing changes to be made quickly and easily as the project progresses or if the objectives of the project change along the way. Finally, the database can import a wide variety of raw data formats and export data to other engineering analysis applications.
Historically, data integration has been a problem in our industry. Until the arrival of powerful computers and database software, most information resided in paper records and file cabinets. Today, in spite of inexpensive desktop computers and powerful workstations, data are still usually stored or even duplicated in different sources and in a wide variety of different formats. The large volume and variety of engineering and geophysical data can be overwhelming. Specific disciplines may need access to only part of the data from a field or reservoir at one time, but separate discipline databases lead to significant inefficiencies when those disciplines are integrated. This problem is so prevalent that a number of consortiums have formed in recent years to address the problem of information management and develop an industry standard for storing and sharing information.
Another problem with integration of existing data is the time required to organize the information and populate a database. Most large integrated reservoir studies have long timelines. Often, the project team requires preliminary results early in the project, even as the database is being constructed. In addition, as the project matures, the objectives may change based on an analysis of the available data. This means that the database must be flexible enough to quickly adapt to the changing objectives of the project over time.
In this paper, we discuss how we used a PC-based RDBMS to provide a practical solution to these basic problems for an integrated reservoir study. We will provide a basic overview of
- what a PC-based relational database looks like,
- why we chose a PC-based RDBMS,
- how we use the RDBMS to put data and analysis in the hands of end-users, and
- how the database becomes a springboard to rapid application of other engineering tasks, such as production data analysis, reservoir simulation, or any task requiring integrated data.

What Is a Relational Database?
Databases usually come in two forms: 1) flat file databases, and 2) relational databases. Figure 1 presents a schematic of a flat file database. As seen in this figure, a flat file database stores all the information for a particular item in one record, and all records are stored in a single table, much like rows in a spreadsheet.
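The contrast can be sketched in a few lines of code. The example below, with invented well-test tables, shows the same information stored flat and stored relationally; note how an update touches a single row in the relational form.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Flat file style: every record carries all the information for an item in
# one row of a single table, much like rows in a spreadsheet.
con.execute("""CREATE TABLE flat_tests (
    well_name TEXT, field TEXT, operator TEXT,
    test_date TEXT, oil_rate REAL)""")

# Relational style: each fact is stored once and rows are linked by keys.
con.executescript("""
CREATE TABLE well (
    well_id  INTEGER PRIMARY KEY,
    name     TEXT, field TEXT, operator TEXT);
CREATE TABLE test (
    well_id   INTEGER REFERENCES well(well_id),
    test_date TEXT, oil_rate REAL);
""")

# Renaming an operator touches one row in 'well'; in the flat table the same
# change would have to be repeated in every test record of every affected well.
con.execute("UPDATE well SET operator = 'NewCo' WHERE operator = 'OldCo'")
```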
Changes in technology can drastically alter status-quo thinking about risk, reward, and proved reserves. These changes can be subtle or far reaching and can result in a whole new language to describe their impact; the emergence of "paradigm shift" is an example. Faced with these changes, the decision maker must determine how to evaluate their impact quantitatively.
Intuitively, we think of reserves as relatively static quantities, although reserves do fluctuate with changes in price and operating cost (affecting the economic limit) and also with the magnitude of capital costs. Proved undeveloped reserves exist when the required capital investment generates an acceptable return. These reserves become resources when that return requirement is not met. The ability to reduce capital costs and/or risk opens up lower-quality resources to the prospect of economic recovery and the potential to become reserves. Study of the size distribution of such resources shows that small pools can contain successively greater quantities of reserves than pools in larger-size classes. As a result of new technology, development of such pools in a controlled cost and risk environment can represent as great a prize as a large-pool discovery. The ability to "change the rules" because of technology advances opens a vast opportunity for energy self-sufficiency that has ramifications on a national scale.
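As a small worked illustration of how price moves the economic limit mentioned above (and hence reserves), the sketch below applies the standard economic-limit relationship with purely hypothetical numbers.

```python
# Purely hypothetical numbers, chosen only to show the mechanics.
operating_cost = 6000.0  # $/month to keep the well producing
nri = 0.80               # net revenue interest, fraction

def economic_limit_bopd(price_per_bbl):
    """Rate below which monthly revenue no longer covers operating cost."""
    return operating_cost / (price_per_bbl * nri * 30.4)  # 30.4 days/month

print(economic_limit_bopd(20.0))  # ~12.3 BOPD at $20/bbl
print(economic_limit_bopd(15.0))  # ~16.4 BOPD at $15/bbl: the limit rises,
                                  # the well dies sooner, and reserves shrink
```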
Resources and Their Conversion to Reserves
Masters'1 resource triangle illustrates that resources and reserves are not all of the same quality. This depiction shows a relatively small volume of high-quality resources at the apex of the triangle, grading down to much larger volumes of low-quality resources at the base. While Masters shows basin-centered tight-gas deposits as the low-quality resources at the base of the triangle, this illustration also can be used to think about or classify reserves-growth candidates. The easier or higher-quality reserves are discovered and produced first, leaving behind the harder-to-recover or lower-quality reserves and resources that contribute significantly to the reserves-growth process. This paper focuses on low-quality resources and the use of technology to convert a large resource base contained in small pools to economically recoverable reserves.
Reserves Growth by Exploiting Lower-Quality Resources
Primary recovery represents the highest-quality resource for oil. As production matures, secondary programs are installed to provide another wave or peak of production, which, because of the added requirement for injection facilities and wells, has its own added cost. Because an infill-drilling program usually is also involved, the cost associated with the secondary recovery may equal or exceed that of the primary recovery for roughly an equivalent amount of recovery. These reserves will cost more per barrel, thus representing lower-quality resources and moving the field down a notch on Masters' resource triangle.
We work with progressively lower-quality resources as time passes. By lower quality, we mean lower average rates and higher cost (both capital/bbl and operating/bbl owing to both the lower average rates and the requirement to inject, treat, and lift water). As the development continues into more exotic recovery schemes, the costs increase yet again, and essentially the evaluated quality of the resource is reduced. Because the infrastructure is already in place, we view the economics on an incremental basis. This view is extremely important because the presence of the infrastructure (the sunk costs) is essential to the continuing incremental recovery of the progressively-lower-quality resources from the field. When we consider exotic recovery schemes, in which the cost of the fluids being injected often exceeds the value of the commodity that is produced and sold, this view is paramount.
Because development is sequential, subsequent projects have flexible financing options. Primary existing production can be used as collateral; financing also can be achieved through reinvestment of cash flow. Contrast this with a large North Sea development that must install pressure maintenance from Day 1 to achieve the rates and recoveries required to justify project economics. In this case, the investment must be made up front and the risks are higher because performance predictions are based on modeling instead of performance history. The net result is that a larger field is necessary to justify taking on these risks to provide the necessary comfort buffer for both the company and its lenders. Here, the physical environment and the requirement to include lower-quality resources in the picture to justify development economically forces a larger minimum acceptable target size.
Role of Technology in Promoting Reserves-Growth Activities
In the gulf coast area, permeable reservoirs combined with natural waterdrives allow high production rates and rapid payout of investment. The remaining prospects in large fields fall into three basic types: (1) undiscovered pools; (2) discovered reserves in marginal quantities, such as unrecovered attics; and (3) discovered reserves in behind-pipe zones awaiting depletion of existing producing zones before they can be exploited. In all three cases, advances in technology have a major effect on bringing such resources and reserves on stream. The popular use of three-dimensional (3D) seismic to detect hydrocarbons directly allows undiscovered pools to be developed and drilled at success levels more typical of development than of exploration drilling. For example, Brigham Oil2 has used 3D surveys effectively to identify untested Canyon Reef pinnacles in west Texas. Previously, this play was uneconomical, with a historical success rate of 38%, an average recovery of 65,000 bbl of oil per well, and a finding cost of U.S. $8.31/bbl. The ability to site wells at the highest structural location has improved average recovery to 243,000 bbl of oil per well, raised success ratios to 68%, and lowered finding costs to U.S. $1.66/bbl.2 The 3D survey represents 10 to 15% of total project costs and lowers finding costs by a factor of two to four. The rate of return for the play to date is more than 80%, a remarkable demonstration of technology resurrecting an effectively dead play involving small pools.
Risk analysis has become an integral part of the decision-making process within the Petroleum Industry. Today, petroleum engineers, geoscientists and project managers are using risking tools to evaluate the economic viability of both exploration and development projects.
Conoco drilling engineers have combined a drilling cost spreadsheet with a forecasting and risk analysis program to predict the range of both the cost and the days necessary to drill a well. The model incorporates Monte Carlo simulation along with regional cost data to generate drilling cost and time requirements for a well. Using this model, Conoco drilling engineers effectively evaluate multiple drilling alternatives. Subsequently, more-informed, risk-related recommendations from the drilling engineers aid management in deciding whether to drill a prospect or develop a project.
This paper describes the spreadsheet and the risk analysis program used to generate the range of costs and days for a given well. In addition, the paper offers an example of the output data generated from the programs with an interpretation for a sample well.
Historically, drilling engineers have relied on a deterministic approach to developing drilling cost estimates. The strength of this method is its simplicity and clearly set assumptions. To account for uncertainties and risk, the drilling engineer would build a "base" case cost estimate. The "high" and "low" cost estimates were then developed using percentage additions to or subtractions from the "base" case. Unfortunately, this approach does not describe the full range of possible outcomes or quantify the likelihood of any particular outcome.
In an effort to demonstrate to management the risks of drilling a prospect or developing a project, Conoco drilling engineers have begun performing risk analysis on drilling expenditures and time through the use of a new drilling cost spreadsheet, a forecasting and risk analysis program that uses Monte Carlo simulation, and a standardized methodology for data usage.
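A minimal sketch of the Monte Carlo mechanics follows, with invented triangular (low/most likely/high) ranges for a few line items; the actual Conoco spreadsheet carries 152 line items in 29 categories and draws on regional cost data.

```python
import random

# Invented triangular (low, most likely, high) cost estimates, U.S. dollars,
# for a few categories; the real spreadsheet carries 152 line items.
line_items = {
    "rig and day-rate costs": (400_000, 550_000, 900_000),
    "casing":                 (250_000, 300_000, 420_000),
    "cementing":              ( 60_000,  80_000, 140_000),
    "mud and chemicals":      ( 90_000, 120_000, 200_000),
}

def simulate_well_cost(trials=10_000):
    """Monte Carlo: sample every line item each trial and sum to a total."""
    totals = sorted(
        sum(random.triangular(lo, hi, ml) for lo, ml, hi in line_items.values())
        for _ in range(trials)
    )
    # Report the percentiles engineers quote as low/most likely/high cases.
    return {f"P{p}": totals[int(p / 100 * trials)] for p in (10, 50, 90)}

print(simulate_well_cost())
```

The same machinery applied to per-phase day estimates yields the range of days on the well; the cost and time distributions together support the range-based recommendations described above.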
Our main spreadsheet model for probabilistic drilling cost estimating has a total of 152 line items subdivided into 29 major feature categories (Fig. 1). There are also two "breakout" spreadsheets that provide additional detail, one for the estimate of total days on the well and one for the casing program (Figs. 2 and 3), for use by the engineer preparing the cost estimate if he or she wishes to itemize such detail.
Included in our spreadsheet are a summary page and a query sort for the "deterministic" or "most likely" values (cost estimates) of the 29 major feature categories. This query sort, which we call the "Big Rock Sort," lists the 29 major feature categories in descending order of cost, calculates each category's percentage of the total, and lists a cumulative percentage for the sorted features (Fig. 4).
The "Big Rock Sort" enables us to conveniently identify the "key" cost drivers. We consider those features which account for 80 percent of the total cost estimate as "big rocks", and generally find that relatively few (generally less than 50 percent) of the features fall into our "big rock" category. We have also found that the particular features which fall into the big rock category vary on a case by case basis.
To improve the quality of our estimate, we invest additional effort in describing the uncertainty for those features that show up as "big rocks". Our rule of thumb is to treat probabilistically those elements that make up the top 80 percent of the total cost estimate.
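The "Big Rock Sort" logic itself is straightforward; the sketch below reproduces it with invented category costs: sort descending, accumulate, and flag every feature needed to reach the top 80 percent.

```python
# Invented deterministic ("most likely") cost estimates for a handful of the
# 29 major feature categories; values are illustrative only.
features = {
    "rig and day-rate costs": 550_000, "casing": 300_000,
    "mud and chemicals": 120_000, "cementing": 80_000,
    "logging": 60_000, "site preparation": 40_000,
    "rentals": 30_000, "miscellaneous": 20_000,
}

def big_rock_sort(features, cutoff=0.80):
    """Sort by descending cost, track cumulative percentage, and flag the
    features that make up the top 80 percent as 'big rocks'."""
    total = sum(features.values())
    running, rows = 0.0, []
    for name, cost in sorted(features.items(), key=lambda kv: -kv[1]):
        is_big = running < cutoff * total  # still inside the top 80 percent
        running += cost
        rows.append((name, cost, cost / total, running / total, is_big))
    return rows

for name, cost, pct, cum, big in big_rock_sort(features):
    print(f"{name:24s} {cost:9,.0f} {pct:6.1%} {cum:6.1%}"
          + ("  BIG ROCK" if big else ""))
```

With these invented numbers, three of the eight categories carry the top 80 percent of the estimate, consistent with our observation that relatively few features fall into the big-rock category.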