The motivation for high-performance computing in reservoir simulation has always existed. From the earliest simulation models, computing resources were severely taxed because the level of complexity desired by the engineer almost always exceeded the speed and memory of the available hardware. The high-speed vector processors of the late 1970s and early 1980s, such as the Cray, delivered orders-of-magnitude gains in computational speed and enabled production models of several hundred thousand cells. The relief brought by these models, unfortunately, was short-lived. The demand for the richer physics of compositional modeling and the introduction of geostatistically and structurally based geological models pushed computational complexity beyond even the large-scale models of the vector processors. Geological models with tens of millions of cells, each carrying a complete set of reservoir parameters, became available to the engineer.
The Merriam-Webster Dictionary defines simulate as "to assume the appearance of without the reality." Simulation of petroleum-reservoir performance refers to the construction and operation of a model whose behavior assumes the appearance of actual reservoir behavior. The model itself is either physical (for example, a laboratory sandpack) or mathematical. A mathematical model is a set of equations that, subject to certain assumptions, describes the physical processes active in the reservoir. Although the model obviously lacks the reality of the reservoir, the behavior of a valid model simulates, i.e., assumes the appearance of, the actual reservoir. The purpose of simulation is to estimate field performance (e.g., oil recovery) under one or more producing schemes. Whereas the field can be produced only once, and at considerable expense, a model can be produced or run many times at low expense over a short period of time. Observing model results for different producing conditions aids selection of an optimal set of producing conditions for the reservoir.
Abstract
Giant reservoirs of the Middle East are crucial to the supply of oil and gas to the world market. Proper simulation of these giant reservoirs, with their long production histories and large amounts of static and dynamic data, requires efficient parallel simulation technologies and powerful visualization and data-processing capabilities. This paper describes GigaPOWERS, a new parallel reservoir simulator capable of simulating models of hundreds of millions to a billion cells with long production histories in practical times. The new simulator uses unstructured grids; a distributed unstructured-grid infrastructure has been developed for models using unstructured or complex structured grids. Unconventional wells, such as maximum-reservoir-contact and fishbone wells, as well as faults and fractures, are handled by the new gridding system. A new parallel linear solver has been developed to solve the resulting linear system of equations, and load-balancing issues are also discussed. A unified compositional formulation has been implemented, and the simulator is designed to handle n-porosity systems. An optimization-based well-management system has been developed using mixed-integer nonlinear programming. In addition to the core computational algorithms, the paper presents the pre- and post-processing software system used to handle large amounts of data, along with visualization techniques for billions of cells.
Introduction
For many oil and gas reservoirs, especially large reservoirs in the Middle East, the availability of vast amounts of seismic, geological, and dynamic reservoir data results in high-resolution geological models. But despite the many benefits of parallel simulation technology, average cell size still remains on the order of hundreds of meters for large reservoirs. To fully utilize the seismic data, smaller grid blocks, on the order of 25 to 50 meters in length, are required.
Grid blocks of this size result in billion-cell (giga-cell) models for giant reservoirs. To simulate such models with reasonable turnaround time, new innovations in the main components of the simulator, such as the linear equation solver and equation-of-state computations, are essential. Next-generation pre- and post-processing tools are also needed to build and analyze giga-cell models in practical times.
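The jump from conventional to seismic-scale gridding can be made concrete with back-of-the-envelope arithmetic. The reservoir dimensions below are illustrative assumptions, not data from any particular field; this is only a minimal sketch of the cell-count scaling:

```python
# Rough cell-count arithmetic for a hypothetical giant reservoir.
# All dimensions are illustrative assumptions.

def cell_count(length_m, width_m, thickness_m, dx, dz):
    """Number of grid blocks for areal block size dx and layer thickness dz."""
    nx = length_m // dx
    ny = width_m // dx
    nz = thickness_m // dz
    return int(nx * ny * nz)

# A giant Middle East reservoir can span tens of kilometres areally.
L, W, H = 200_000, 50_000, 150  # 200 km x 50 km, 150 m column (assumed)

coarse = cell_count(L, W, H, dx=250, dz=15)  # conventional 250 m grid
fine   = cell_count(L, W, H, dx=25,  dz=15)  # 25 m seismic-scale grid

print(f"250 m grid: {coarse:,} cells")  # 250 m grid: 1,600,000 cells
print(f" 25 m grid: {fine:,} cells")    #  25 m grid: 160,000,000 cells
```

Refining the areal spacing by a factor of 10 multiplies the cell count by 100; refining the layering as well is what pushes such models toward a billion cells.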
Dogru, A.H. (Saudi Arabian Oil Co.) | Sunaidi, H.A. (Saudi Arabian Oil Co.) | Fung, L.S. (Saudi Arabian Oil Co.) | Habiballah, W.A. (Saudi Arabian Oil Co.) | Al-Zamel, N. (Saudi Arabian Oil Co.) | Li, K.G. (Saudi Arabian Oil Co.)
Summary
A new parallel black-oil reservoir simulator (Powers) has been developed and fully integrated into a pre- and post-processing graphical environment. Its primary use is to simulate the giant oil and gas reservoirs of the Middle East with millions of cells. The new simulator was designed for parallelism and scalability, with the aim of making megacell simulation a day-to-day reservoir-management tool. Upon its completion, the parallel simulator was validated against published benchmark problems and other industrial simulators. Several giant-oil-reservoir studies have been conducted with million-cell descriptions. This paper presents the model formulation, parallel linear solver, parallel locally refined grids, and parallel well management. The benefits of megacell simulation models are illustrated by a real field example used to confirm bypassed oil zones and obtain a history match in a short time period. With the new technology, preprocessing, construction, running, and post-processing of megacell models are finally practical. A typical history-match run for a field with 30 to 50 years of production takes only a few hours.
Introduction
With the development of early parallel computers, the attractive speed of these machines drew the attention of oil-industry researchers. Initial questions concentrated along these lines: Can one develop a truly parallel reservoir-simulator code? What type of hardware and programming languages should be chosen? Unlike seismic processing, reservoir-simulator algorithms are well known not to be naturally parallel; they are more recursive, and the variables display a strong dependency on each other (strong coupling and nonlinearity). This poses a big challenge for parallelization. On the other hand, if one could develop a parallel code, the speed of computation would increase by at least an order of magnitude; as a result, many large problems could be handled.
This capability would also aid our understanding of fluid flow in complex reservoirs, and the proper handling of reservoir heterogeneities should result in more realistic predictions. Another benefit of megacell description is the minimization of upscaling effects and numerical dispersion. Megacell simulation has a natural application in the world's giant oil and gas reservoirs. For example, a grid size of 50 m or less is widely used for small and medium-size reservoirs, whereas many giant reservoirs in the Middle East use a gridblock size of 250 m or larger, which even so easily yields models of more than 1 million cells. It is therefore of specific interest to retain a megacell description and still be able to run fast; such capability is important for the day-to-day management of these fields. This paper is organized as follows: first, the relevant work in the petroleum-reservoir-simulation literature is reviewed. This is followed by a description of the new parallel simulator and a presentation of the numerical solution and parallelism strategies. (Details of the data structures, well handling, and parallel input/output operations are given in the appendices.) The main text also contains brief descriptions of the parallel linear solver, locally refined grids, well management, and megacell pre- and post-processing. Next, we address performance and parallel scalability; this key section demonstrates the degree of parallelization of the simulator. The last section presents four real field simulation examples covering all stages of the simulator, with actual central-processing-unit (CPU) execution times for each case. As a byproduct, the benefits of megacell simulation are demonstrated by two examples: locating bypassed oil zones and obtaining a quicker history match. Details of each section can be found in the appendices.
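The recursive, strongly coupled nature of simulator kernels mentioned in the introduction can be illustrated with a classic contrast: a Jacobi sweep updates every cell from old values and so parallelizes trivially, whereas a Gauss-Seidel sweep reuses values computed earlier in the same sweep, creating a sequential dependency. The 1D smoothing problem below is an illustrative stand-in, not code from the simulator described here:

```python
# Sketch of why recursive solver sweeps resist parallelisation.
# Illustrative 1D averaging problem; endpoints are fixed boundaries.

def jacobi_sweep(u):
    """One Jacobi sweep: new[i] depends only on the OLD array,
    so all interior cells could be updated in parallel."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])   # old neighbours only
    return new

def gauss_seidel_sweep(u):
    """One Gauss-Seidel sweep: u[i-1] was already updated THIS sweep,
    so cell i cannot start before cell i-1 finishes."""
    u = u[:]
    for i in range(1, len(u) - 1):
        u[i] = 0.5 * (u[i - 1] + u[i + 1])     # fresh left neighbour
    return u

u0 = [0.0, 1.0, 1.0, 0.0]
print(jacobi_sweep(u0))        # [0.0, 0.5, 0.5, 0.0]
print(gauss_seidel_sweep(u0))  # [0.0, 0.5, 0.25, 0.0] -- order matters
```

The differing results show the data dependency: Gauss-Seidel's interior values depend on the sweep order, which is exactly the kind of recursion that makes domain decomposition of simulator kernels nontrivial.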
Previous Work
In the 1980s, research on parallel reservoir simulation intensified with the further development of shared-memory and distributed-memory machines. In 1987, Scott et al. presented a Multiple Instruction Multiple Data (MIMD) approach to reservoir simulation. Chien investigated parallel processing on shared-memory computers. In early 1990, Li presented a parallelized version of a commercial simulator on a shared-memory Cray computer. For distributed-memory machines, Wheeler developed a black-oil simulator on a hypercube in 1989. In the early 1990s, Killough and Bhogeswara presented a compositional simulator on an Intel iPSC/860, and Rutledge et al. developed an Implicit Pressure Explicit Saturation (IMPES) black-oil reservoir simulator for the CM-2 machine. They showed that reservoir models of more than 2 million cells could be run on this type of machine with 65,536 processors, and stated that computational speeds on the order of 1 gigaflop in matrix construction and solution were achievable. In the mid-1990s, more investigators published reservoir-simulation papers focused on distributed-memory machines. Kaarstad presented a 2D oil/water research simulator running on a 16,384-processor MasPar MP-2 machine, showing that a model problem with 1 million gridpoints could be solved in a few minutes of computer time. Rame and Delshad parallelized a chemical-flooding code (UTCHEM) and tested it on a variety of systems for scalability; this work included test results on the Intel iPSC/860, CM-5, Kendall Square, and Cray T3D.
Geiger-Boschung, Sebastian (Heriot Watt University) | Huangfu, Qi (University of Edinburgh) | Reid, Fiona (University of Edinburgh) | Matthai, Stephan Konrad (Imperial College) | Coumou, Dim (Potsdam Institute for Climate Impact Research) | Belayneh, Mandefro (Imperial College) | Fricke, Claudia (Heriot Watt University) | Schmid, Karen Sophie (Heriot Watt University)
Abstract
We have been able to solve a reservoir simulation problem previously thought intractable: we simulated multiphase displacement, including viscous, capillary, and gravitational forces, for highly resolved and geologically realistic models of naturally fractured reservoirs (NFR) at the sector (i.e., kilometre) scale with very reasonable runtimes. This was possible because we used massive parallelisation and hierarchical solvers in conjunction with a new discrete fracture and matrix (DFM) modelling technique based on mixed-dimensional unstructured hybrid-element discretisations. High-resolution DFM simulations are important for adequately resolving the non-linear coupling of small-scale capillary-viscous and large-scale gravitational-viscous processes in sector-scale NFR. Cross-scale process coupling in NFR controls oil recovery; NFR often exhibit power-law fracture-length distributions, i.e., they do not possess a representative elementary volume (REV), and highly permeable fractures can extend over the full hydrocarbon column height. As a consequence, emergent displacement patterns have been observed that are difficult to quantify using traditional means of upscaling. Such patterns could now serve as benchmarks for reaching a better consensus on the correctness of promising new upscaling techniques. The parallel DFM technologies presented here allow us to obtain these results much more efficiently and hence explore the parameter space in greater detail. We observed linear scaling for up to 64 processes and a significant decrease in runtime when applying our parallel DFM approach to three highly refined NFR simulations. These contain thousands of fractures and up to 5 million elements, with local grid refinement below 1 m for model dimensions between 1 and 10 kilometres.
We achieved this excellent speedup because we reduced inter-processor communication by minimising the overlap between individual domains, and decreased the idle time of individual processors by distributing the unknowns equally among them.
Introduction
Production from naturally fractured reservoirs (NFR), which contain a major part of the world's remaining oil reserves, is challenging. NFR often suffer from low final recovery, leaving 80 to 95% of the oil underground, retained in the low-permeability rock matrix (Kazemi and Gilman, 1993). Traditionally, production from NFR is simulated with dual-porosity models (Warren and Root, 1963). These represent the reservoir as a flowing domain of high permeability, the network of connected fractures, coupled to a stagnant domain, the low-permeability rock matrix. Exchange of oil, gas, and water between the two domains is modelled by transfer functions. The rate of fluid transfer between fracture and matrix depends on the pressure difference between the two domains and on a shape factor that represents the geometry of the rock matrix. The advantage of dual-porosity models is that they can be readily used in standard industry finite-difference or streamline reservoir simulators (Huang et al., 2004).
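The transfer-function idea can be sketched in its simplest single-phase form, with a Kazemi-type shape factor for a brick-shaped matrix block. Both expressions are standard textbook forms rather than the specific functions used in any simulator discussed here, and all numerical values are illustrative assumptions:

```python
# Schematic single-phase dual-porosity transfer term:
#   tau = sigma * (k_m / mu) * (p_f - p_m)
# Variable values below are illustrative assumptions only.

def kazemi_shape_factor(lx, ly, lz):
    """Kazemi-type shape factor (1/m^2) for a brick-shaped matrix
    block with side lengths lx, ly, lz (metres)."""
    return 4.0 * (1.0 / lx**2 + 1.0 / ly**2 + 1.0 / lz**2)

def transfer_rate(sigma, k_matrix, viscosity, p_fracture, p_matrix):
    """Volumetric fracture-matrix exchange rate per unit bulk volume (1/s).
    Positive when the fracture pressure exceeds the matrix pressure,
    i.e., fluid flows from fracture into matrix."""
    return sigma * (k_matrix / viscosity) * (p_fracture - p_matrix)

# Illustrative numbers: 1 m cubic matrix blocks, ~1 mD matrix, light oil.
sigma = kazemi_shape_factor(1.0, 1.0, 1.0)   # 12.0 per m^2
tau = transfer_rate(sigma,
                    k_matrix=1e-15,          # ~1 mD, in m^2
                    viscosity=1e-3,          # Pa.s
                    p_fracture=2.0e7,        # Pa
                    p_matrix=1.9e7)          # Pa
print(tau)  # ~1.2e-5 1/s: fracture feeds the matrix
```

The quadratic dependence on matrix-block size in the shape factor is what makes fracture spacing such a strong control on recovery in these models.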