Nandi Formentin, Helena (Durham University and University of Campinas) | Vernon, Ian (Durham University) | Avansi, Guilherme Daniel (University of Campinas) | Caiado, Camila (Durham University) | Maschio, Célio (University of Campinas) | Goldstein, Michael (Durham University) | Schiozer, Denis José (University of Campinas)
Reservoir simulation models incorporate physical laws and reservoir characteristics; they represent our understanding of subsurface structures based on the available information. Emulators are statistical representations of simulation models, offering evaluations fast enough, over a sufficiently large number of reservoir scenarios, to enable a full uncertainty analysis. Bayesian History Matching (BHM) aims to find the range of reservoir scenarios that are consistent with the historical data, in order to provide a comprehensive evaluation of reservoir performance and consistent, unbiased predictions with realistic levels of uncertainty, as required for full asset management. We describe a systematic approach that combines reservoir simulation and emulation techniques within a coherent Bayesian framework for uncertainty quantification.
Our systematic procedure is an alternative, more rigorous tool for reservoir studies dealing with probabilistic uncertainty reduction. It comprises the design of sets of simulation scenarios that facilitate the construction of emulators capable of accurately mimicking the simulator with known levels of uncertainty. Emulators can then accelerate the steps that require the large numbers of input-space evaluations needed for statistical validity. Via implausibility measures, we compare emulated outputs with historical data while incorporating the major process uncertainties. We then iteratively identify regions of input-parameter space unlikely to provide acceptable matches, performing additional runs and constructing more accurate emulators at each wave, an approach that benefits from several efficiency improvements. We provide a workflow covering each stage of this procedure.
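The emulation-and-implausibility cycle described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the one-dimensional "simulator", the training design, the squared-exponential kernel, and the variance values are all assumed for the example, with a simple Gaussian-process emulator standing in for a full reservoir emulator.

```python
import numpy as np

# Toy stand-in for an expensive reservoir simulator (illustrative assumption).
def simulator(x):
    return np.sin(3 * x) + 0.3 * x

# Small training design: the only "expensive" simulator runs we allow ourselves.
X_train = np.linspace(0.0, 2.0, 8)
y_train = simulator(X_train)

def sq_exp_kernel(a, b, length_scale=0.4):
    d = a[:, None] - b[None, :]
    return np.exp(-(d / length_scale) ** 2)

# Gaussian-process emulator: fast posterior mean and variance anywhere in input space.
K = sq_exp_kernel(X_train, X_train) + 1e-8 * np.eye(len(X_train))
K_inv = np.linalg.inv(K)

def emulate(x_new):
    k_star = sq_exp_kernel(x_new, X_train)
    mean = k_star @ K_inv @ y_train
    var = 1.0 - np.einsum('ij,jk,ik->i', k_star, K_inv, k_star)
    return mean, np.clip(var, 1e-12, None)

# One historical observation, with assumed observation-error and
# model-discrepancy variances.
z = simulator(0.8)
var_obs, var_disc = 0.01, 0.01

x_grid = np.linspace(0.0, 2.0, 200)
mean, var = emulate(x_grid)
implausibility = np.abs(z - mean) / np.sqrt(var + var_obs + var_disc)

# Common cutoff: discard inputs with implausibility >= 3.
non_implausible = x_grid[implausibility < 3.0]
print(f"{len(non_implausible)} of {len(x_grid)} inputs remain non-implausible")
```

In a full wave-based scheme, further simulator runs would then be placed inside the retained region and a more accurate emulator rebuilt there, repeating until the non-implausible space stabilizes.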
The procedure was applied to reduce uncertainty in a complex reservoir case study with 25 injection and production wells. The case study contains 26 uncertain attributes representing petrophysical, rock-fluid, and fluid properties. We selected phases of evaluation around specific events in the reservoir management, improving the efficiency of simulation-resource use. We also identified and addressed data patterns untracked in previous studies, such as simulator targets.
We advance the applicability of Bayesian History Matching for reservoir studies with four deliverables: (a) a general workflow for systematic BHM; (b) the use of phases to progressively evaluate the historical data; (c) the integration of two-class emulators in the BHM formulation; and (d) a demonstration of internal discrepancy as a source of error in the reservoir model.
This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 169405, “Use of Emulator Methodology for Uncertainty-Reduction Quantification,” by C. Ferreira, Universidade Estadual de Campinas; I. Vernon, Durham University; D.J. Schiozer, SPE, Universidade Estadual de Campinas; and M. Goldstein, Durham University, prepared for the 2014 SPE Latin American and Caribbean Petroleum Engineering Conference, Maracaibo, Venezuela, 21–23 May. The paper has not been peer reviewed.
Most simulation models go through a series of iterations before being judged as giving an adequate representation of the physical system. This can be difficult because the input space to be searched may be high dimensional, the collection of outputs to be matched may be very large, and each single evaluation may take a long time. Because the uncertainty analysis is complex and time consuming, in this paper, a stochastic representation of the computer model, called an emulator, was constructed to quantify the reduction in the parameter input space.
Reservoir simulators are important and widely used in reservoir management. They are used in reservoir-performance prediction and for decision making. These simulators are computer implementations of high-dimensional mathematical models for reservoirs, where the model inputs are physical parameters and the outputs are observable characteristics such as well-pressure measurements and fluid production. Uncertainties are always present in the reservoir-characterization process; thus, input parameters are usually uncertain and so is the simulator output.
The procedure to calibrate the reservoir-simulation model is called history matching. On the basis of observed data, a set of possible input choices for the reservoir model is identified. Two different procedures can be used to perform the history matching: deterministic and probabilistic approaches.
The deterministic approach involves running the initial simulation model with different input values to obtain one simulation model among the many probable matches to the field data.
In a probabilistic approach, in which several reservoir-model scenarios are considered, the uncertainty analysis procedure is used. Identifying the input parameters for which the simulation outputs match the observed data can be a difficult task because the input space to be searched may be high dimensional, the collection of outputs to be matched may be very large, and each single evaluation may take a long time.
In petroleum engineering, simulation models are used in reservoir-performance prediction and in the decision-making process. These models are complex systems, typically characterized by a vast number of input parameters. Typically, the physical state of the reservoir is highly uncertain and, thus, so are the appropriate input-parameter choices. 4D seismic data can significantly reduce reservoir uncertainty because of its high areal resolution, in contrast to observed well rates and pressures. However, calibrating the simulation model with 4D seismic data faces two main challenges. First, the process can be time consuming because most models go through a series of iterations before being considered accurate enough to represent the physical system adequately. Second, treating 4D seismic data as observations in the form of maps would lead to an unfeasibly large number of variables to be matched.

To overcome these issues, an emulator representing the simulation model can be constructed and the canonical-correlation technique used to incorporate the 4D seismic data. The present study constructed a stochastic representation of the computer model, called an emulator, to quantify the reduction in the parameter input space, and incorporated 4D seismic data through the canonical-correlation technique. The water-saturation map derived from seismic data was converted into seven canonical functions; these functions represent the observable characteristics to be matched in the uncertainty-reduction process. A large number of evaluations was necessary to identify the range of input parameters whose outputs matched the historical (4D seismic) data. That large number of evaluations justifies the use of an emulator, and the reduction of uncertainties with areal characteristics shows that the 4D seismic data was successfully incorporated.
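The idea of compressing a high-dimensional saturation map into a handful of canonical functions can be illustrated with a small linear-algebra sketch. Everything here is synthetic and assumed for the example (the run counts, the eight inputs, and the 400-cell "map"); it shows only the mechanics of canonical correlation via SVD whitening, not the paper's actual data processing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (all assumed): n simulator runs, p uncertain inputs, and a
# flattened water-saturation map of m grid cells per run.
n, p, m = 60, 8, 400
X = rng.normal(size=(n, p))                   # input-parameter design
W = rng.normal(size=(p, m))
Y = X @ W + 0.1 * rng.normal(size=(n, m))     # saturation maps driven by the inputs

def whitened_scores(A, eps=1e-10):
    """Orthonormal column scores spanning the centered data (SVD whitening)."""
    A = A - A.mean(axis=0)
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, s > eps]

Ux = whitened_scores(X)
Uy = whitened_scores(Y)

# Canonical correlations and variates: SVD of the whitened cross-product.
U, s, Vt = np.linalg.svd(Ux.T @ Uy, full_matrices=False)
canonical_corrs = s[:7]           # leading canonical correlations
Y_canonical = Uy @ Vt.T[:, :7]    # seven canonical functions summarizing each map

print("leading canonical correlations:", np.round(canonical_corrs, 3))
```

The 400 map cells per run are thereby replaced by seven canonical scores, mirroring how the study reduced the seismic-derived saturation map to seven matchable quantities.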
The emulator methodology is a powerful tool in the analysis of complex physical problems such as history matching. Incorporating 4D seismic data as an observable output to be matched poses a difficult problem; however, canonical correlation permitted such data to be incorporated successfully.
In petroleum engineering, simulation models are used in reservoir-performance prediction and in the decision-making process. These models are complex systems, typically characterized by a vast number of input parameters. Usually the physical state of the reservoir is highly uncertain, and thus so are the appropriate input-parameter choices. The uncertainty analysis often proceeds by first calibrating the simulator against observed production history and then using the calibrated model to forecast future well production. Most models go through a series of iterations before being judged to give an adequate representation of the physical system. This can be a difficult task because the input space to be searched may be high dimensional, the collection of outputs to be matched may be very large, and each single evaluation may take a long time.

Because the uncertainty analysis is complex and time consuming, in this paper a stochastic representation of the computer model, called an emulator, was constructed to quantify the reduction in the parameter input space due to production data over different production periods. The emulator methodology represents a powerful and general tool in the analysis of complex physical models such as reservoir simulators, and such emulation techniques have been applied successfully across a large number of scientific disciplines. The methodology was applied to evaluate the capacity of production data to identify uncertain physical reservoir features over the production period for a synthetic reservoir-simulation model built to represent a region of an injector and related producers. In the case studied, thousands of realizations were required to identify certain physical reservoir features, which justifies the use of emulation and shows the importance of this technique for identifying regions of feasible input parameters.
Moreover, the impact of different production periods on the input-space reduction was determined. The emulator methodology assists in tasks that require computationally expensive objective-function evaluations, such as identifying regions of feasible input parameters, predicting the future behavior of the physical system, and investigating reservoir behavior.
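How successive production periods progressively shrink the feasible input space can be sketched as follows. This is an assumed toy setup, not the paper's model: a cheap analytic function stands in for the emulator, a single scalar input replaces the real parameter vector, and the "historical" data are generated from one chosen input value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cheap stand-in for the simulator/emulator pair (illustrative assumption):
# output at production "time" t for scalar input x.
def model(x, t):
    return np.sin(x * t) + 0.2 * x

x_true = 1.3                                   # input that generated the "history"
candidates = rng.uniform(0.0, 3.0, size=5000)  # dense sample of the input space
periods = [1.0, 2.0, 3.0, 4.0]                 # successive production periods
var_total = 0.02                               # assumed obs. error + model discrepancy

remaining = np.ones(candidates.size, dtype=bool)
for t in periods:
    z = model(x_true, t)                       # historical datum for this period
    I = np.abs(z - model(candidates, t)) / np.sqrt(var_total)
    remaining &= I < 3.0                       # cut inputs once they become implausible
    print(f"after period t={t}: {remaining.mean():.1%} of sampled inputs remain")
```

Each additional period can only maintain or reduce the surviving fraction, which is the pattern the study quantifies when comparing input-space reduction across production periods.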
Microcomputers have assumed a pivotal support role in carrying out reservoir engineering research at Shell Development Company. Four specific application areas that have benefited from microcomputers are:
1) Laboratory experimental control, data collection, first pass data manipulation and analysis, and graphical presentation of results using microcomputers have increased laboratory throughput while improving the quality of data obtained and the ease of understanding the implications of experimental results.
2) Microcomputers are used extensively as smart terminals for coordinating access to the VAX, IBM, and CRAY computers used in carrying out simulation work.
3) Over the past ten years, a comprehensive package of reservoir simulation workstation software has been developed on VAX minicomputers connected to dedicated graphics computers. With the advent of the 68000-based Amiga and the 80286- and 80386-based PC "compatibles," it has become possible to emulate the necessary functionality of these dedicated workstations with an acceptable response time. This puts the power of a VAX 8800-based graphics workstation at each engineer's desk.
4) Preparation of visual aids required for presentation of complex research results is facilitated by the powerful graphics programs available for DOS-based microcomputers.
With ever more powerful microcomputers and peripherals becoming available at prices that make it practical to put a high-performance microcomputer system at each engineer's desk, we must continually review how best to integrate use of the micro, mini, and mainframe computers available in our work environment.
Microcomputers have assumed a pivotal role in carrying out one of the primary functions of reservoir engineering research at Shell Development Company: development and dissemination of new oil recovery processes, implementation procedures, and methods for quantifying the impact of these new processes and procedures on project profitability to the operating companies. This effort requires the integration of insight gained from laboratory scale experiments with mathematical reservoir simulation, the tool of choice for scaling from the laboratory level to field scale. If progress in our research is to impact field project planning and operations in a timely manner, it is also necessary that implications of these research results be presented in a concise fashion that is clear to engineering staff who are not experts in the particular research area. For these reasons, staff usage of computers is high and is anticipated to continue increasing.
A number of different computer systems are used every day by the research engineers. The major computer types now in use are: two CRAY X-MP/14 computers for scientific calculations; VAX 11/785 and 8800 computers in a cluster for graphics-intensive interactive applications and reservoir-simulator development; IBM mainframes running VM/CMS for access to corporate databases; dedicated Genisco graphics computers; and a variety of microcomputers for office and laboratory use. In years past, the list would also have included Harris minicomputers, Burcon microcomputers, and Univac (Unisys) mainframes.
Given the options available today, the current mix of computers available to the department almost certainly does not represent an optimum configuration for carrying out our work. Rather, the installed computer base represents the cumulative result of decisions made over a period of time to acquire the most suitable computer(s) available at the time for the task at hand. Here we describe how this current mix of computers is used in our everyday work and the future migration directions we might consider for upgrading our computer base.
The four specific application areas to be discussed are: laboratory usage, smart terminals, engineering workstations, and preparation of presentation graphics.
MICROCOMPUTERS IN THE LABORATORY
One of the first uses of microcomputers in our research environment was for laboratory data acquisition. This dates back to the late 1970s and the use of a Burcon system based on an 8085 microprocessor.