Ouenes, Ahmed (Petroleum Recovery Research Center) | Weiss, William (Petroleum Recovery Research Center) | Sultan, A.J. (Abu Dhabi Natl. Reservoir Research Foundation) | Anwar, Jabed (Petroleum Recovery Research Center)
This paper describes a new computing environment for automatic reservoir history matching. A parallel simulated annealing algorithm is used to estimate geologic and reservoir engineering parameters by automatically matching the production history of an actual oil reservoir. To test the concept of distributed optimization, two networks of workstations are used simultaneously, one located at the New Mexico Petroleum Recovery Research Center (PRRC) and the other at Los Alamos National Laboratory (LANL). A heterogeneous cluster of two workstations (HP and Sun) is used at the PRRC, and a homogeneous cluster of six IBM RISC System/6000 workstations is used at LANL. At each site, a parallel virtual machine is created with the message-passing software PVM. Communication between the two parallel virtual machines at the PRRC and LANL is achieved with a simple e-mail protocol. In this new environment, the total time required to complete a 22-well oil reservoir study led to the following observation: two-thirds of the time was devoted to geologic, core, and well log analyses, and one-third to history matching.
After a few years of applying stochastic modeling to petroleum reservoirs, researchers and engineers have begun to realize that geostatistics alone is not sufficient to describe oil and gas reservoirs. Consequently, a new trend that combines deterministic and stochastic reservoir modeling techniques has recently emerged. Besides this mixed approach, there are two extreme positions in this area. The first is held by those who are convinced that stochastic modeling is the only viable approach to reservoir description. The second is held by those who believe that a probabilistic description is not appropriate at all in reservoir modeling. In this old debate, the only important point is that the uncertainties in reservoir description can be reduced when reservoir models honor all the existing field data.
Some of the most important information from petroleum reservoirs is the dynamic data, such as pressure and production history. For many years, geoscientists have produced gray-scale maps and 3-D models representing various stochastic models. However, the reservoir engineering value of such realizations was very often poor, and in many instances they were unable to match the cumulative field production. Because of this deficiency, an effort has been initiated to add more constraints to the stochastic models. The application of global optimization methods to reservoir description has provided a framework in which this objective is achievable. The goal of this approach is to constrain reservoir models with the available dynamic field data. Today, it appears that pressure history obtained during well tests is becoming the major focus of this research area, while little attention is given to production history.
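Constraining a model with dynamic field data amounts to minimizing a mismatch between observed and simulated production. The following is a hedged illustration of one common formulation (a sum of squared relative errors over wells and report times), not the specific objective used in the paper; the data dictionaries are invented for the example.

```python
# Illustrative history-matching objective: mismatch between observed and
# simulated production rates over all wells and report times.
def production_mismatch(observed, simulated):
    """Sum of squared relative errors; `observed` and `simulated` map
    well names to lists of rates at the same report times."""
    total = 0.0
    for well, obs_rates in observed.items():
        for obs, sim in zip(obs_rates, simulated[well]):
            total += ((sim - obs) / obs) ** 2
    return total

# Made-up two-well example: rates at two report times per well.
obs = {"W1": [100.0, 90.0], "W2": [50.0, 55.0]}
sim = {"W1": [110.0, 90.0], "W2": [50.0, 44.0]}
mismatch = production_mismatch(obs, sim)
```

A global optimizer then searches the space of geologic and engineering parameters for the model whose simulated rates drive this mismatch toward zero.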
Constraining reservoir models to production history is the ultimate objective of reservoir engineers. Unfortunately, the practice of adjusting relative permeability curves and the permeability in a few gridblocks around the wells rarely provides reliable reservoir models. On the other hand, the petroleum industry's experience with automatic history matching algorithms has led many engineers and researchers to believe that such techniques are impractical. As a result, research in this area has been drastically reduced, almost to extinction, with only a few exceptions.
There are two main components in an automatic history matching algorithm: the first is the optimization method, and the second is the available computing power. Both components have recently seen dramatic changes and will continue to evolve rapidly.
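The optimization method used in this work is simulated annealing, which in its serial form can be sketched as below. This is a generic textbook-style sketch, assuming a Metropolis acceptance rule and a geometric cooling schedule; the paper's parallel variant distributes the expensive cost evaluations across workstations, and the quadratic test function here is only a stand-in for a reservoir simulation.

```python
# Minimal serial simulated-annealing loop (generic sketch, not the
# authors' parallel implementation).
import math
import random

def anneal(cost, x0, step, t0=1.0, cooling=0.95, iters=200, seed=0):
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(iters):
        # Perturb one randomly chosen parameter.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        fc = cost(cand)
        # Metropolis criterion: always accept improvements, sometimes
        # accept uphill moves to escape local minima.
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling  # geometric cooling schedule
    return x, fx

# Stand-in cost function: distance from the origin in parameter space.
best_x, best_f = anneal(lambda v: sum(u * u for u in v), [3.0, -2.0], step=0.5)
```

Because each candidate evaluation is independent of the others at a given temperature, the costly `cost` calls are a natural target for parallel evaluation, which is what motivates running the algorithm over networked workstations.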