Both the computation of classical statistical measures (e.g., mean, mode, median, variance, standard deviation, and skewness) and graphic data representation (e.g., histograms and scatter plots) are commonly used to understand the nature of data sets in a scientific investigation, including a reservoir study. A distinguishing characteristic of earth-science data sets (e.g., for petroleum reservoirs), though, is that they contain spatial information, which classical descriptive statistics cannot adequately capture. Spatial aspects of the data, such as the degree of continuity (or, conversely, heterogeneity) and directionality, are very important in developing a reservoir model. Analysis of spatially rich data falls within the domain of geostatistics (spatial statistics), but a foundation in classical statistics and probability is a prerequisite to understanding geostatistical concepts. Sampling has also proved invaluable in thousands of studies, but it, too, can lead to statistical insufficiencies and biases.
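As a minimal illustration of how the classical measures differ from a simple spatial measure, the Python sketch below computes the usual summary statistics for a set of porosity samples and then an experimental semivariogram along a well. The data values, sample spacing, and function names are hypothetical and chosen only for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical porosity samples taken every 0.5 m along a well (illustration only)
depth_step = 0.5
porosity = np.array([0.21, 0.19, 0.22, 0.24, 0.20, 0.18, 0.17, 0.19,
                     0.23, 0.25, 0.24, 0.22, 0.20, 0.18, 0.16, 0.17])

# Classical (non-spatial) descriptive statistics
vals, counts = np.unique(porosity, return_counts=True)
print("mean     :", np.mean(porosity))
print("median   :", np.median(porosity))
print("mode     :", vals[np.argmax(counts)])
print("variance :", np.var(porosity, ddof=1))
print("std dev  :", np.std(porosity, ddof=1))
print("skewness :", stats.skew(porosity))

# Experimental semivariogram: a basic spatial measure that the classical
# statistics above do not capture.  gamma(h) = 0.5 * mean[(z(x) - z(x+h))^2]
def semivariogram(values, max_lag):
    gammas = []
    for lag in range(1, max_lag + 1):
        diffs = values[lag:] - values[:-lag]
        gammas.append(0.5 * np.mean(diffs ** 2))
    return np.array(gammas)

lags = np.arange(1, 6) * depth_step          # lag distances in metres
print("lag distances (m):", lags)
print("semivariogram    :", semivariogram(porosity, 5))
```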
The objective of scaleup is to take the behavior predicted from detailed, fine-grid reference models that at best represent only a few wells and a tiny part of the reservoir and transfer it to a model that attempts to represent many wells and the integrated behavior of the entire compositionally enhanced solvent flood (or at least a significant portion of it). The work of Jerauld is a good example of the application of this method. Several reference models describe different areas of the field. Water/oil, solvent/oil, and solvent/water pseudorelative permeability relations are developed, along with pseudo trapped-solvent and solvent-flood residual-oil values, so that the relevant behavior of the reference models is reproduced by corresponding models that have the same coarse grids as the full-field model. The coarse-grid models, of course, represent the same parts of the full-field model that the reference models represent.
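Pseudoization of this kind is usually done by post-processing a fine-grid reference run. The sketch below illustrates one much-simplified variant of that idea (in the spirit of dynamic pseudofunctions, without gravity or capillary corrections): phase fluxes across a coarse-block interface and the coarse-scale pressure drop are taken from fine-grid results, and a pseudo relative permeability is back-calculated from Darcy's law at each report time. The array names, input layout, and numbers are assumptions made for illustration, not the output format of any particular simulator or the specific procedure cited above.

```python
import numpy as np

def pseudo_relperm(q_phase, mu_phase, dp_coarse, k_eff, area, length, sw_coarse):
    """
    Back-calculate a pseudo relative permeability curve for one phase at one
    coarse-block interface (simplified dynamic pseudofunction, no gravity or
    capillary terms).

    q_phase   : phase flux through the interface at each report time [m3/s]
    mu_phase  : phase viscosity [Pa.s]
    dp_coarse : pressure difference between the coarse-block centres, taken as
                pore-volume-weighted averages of fine-cell pressures [Pa]
    k_eff     : effective absolute permeability of the coarse blocks [m2]
    area      : interface area [m2]
    length    : distance between coarse-block centres [m]
    sw_coarse : pore-volume-weighted water saturation of the upstream block
    Returns (saturation, pseudo_kr) sorted by saturation.
    """
    q_phase = np.asarray(q_phase, dtype=float)
    dp_coarse = np.asarray(dp_coarse, dtype=float)
    sw_coarse = np.asarray(sw_coarse, dtype=float)

    # Darcy's law rearranged for relative permeability
    kr = q_phase * mu_phase * length / (k_eff * area * dp_coarse)

    order = np.argsort(sw_coarse)
    return sw_coarse[order], np.clip(kr[order], 0.0, None)

# Hypothetical numbers standing in for a fine-grid reference run
sw, krw = pseudo_relperm(
    q_phase=[1.0e-4, 4.0e-4, 8.0e-4, 1.2e-3],
    mu_phase=1.0e-3,
    dp_coarse=[2.0e5, 1.8e5, 1.6e5, 1.5e5],
    k_eff=1.0e-13, area=2.0e4, length=100.0,
    sw_coarse=[0.25, 0.40, 0.55, 0.65],
)
print(np.column_stack([sw, krw]))
```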
The Merriam-Webster Dictionary defines simulate as "to assume the appearance of without the reality." Simulation of petroleum-reservoir performance refers to the construction and operation of a model whose behavior assumes the appearance of actual reservoir behavior. The model itself is either physical (for example, a laboratory sandpack) or mathematical. A mathematical model is a set of equations that, subject to certain assumptions, describes the physical processes active in the reservoir. Although the model itself obviously lacks the reality of the reservoir, the behavior of a valid model simulates--assumes the appearance of--the actual reservoir. The purpose of simulation is estimation of field performance (e.g., oil recovery) under one or more producing schemes. Whereas the field can be produced only once, and at considerable expense, a model can be produced or run many times at low expense over a short period of time. Observation of model results that represent different producing schemes therefore aids selection of an optimal way to produce the reservoir.
Reservoir simulation is a widely used tool for making decisions on the development of new fields, the location of infill wells, and the implementation of enhanced-recovery projects. It is the focal point of an integrated effort involving geoscience, petrophysics, reservoir, production, and facilities engineering, computer science, and economics. Geoscientists, using seismic data, well logs, outcrop analogs, and mathematical models, are able to develop geological models containing millions of cells. These models characterize complex geological features, including faults, pinchouts, shales, and channels. Simulation of the reservoir at the fine geologic scale, however, is usually not undertaken except in limited cases.
Upscaling, or homogenization, is the substitution of a heterogeneous property region consisting of fine grid cells with an equivalent homogeneous region made up of a single coarse-grid cell carrying an effective property value. Upscaling is performed for each of the cells in the coarse grid and for each of the grid properties needed in the reservoir flow-simulation model. The upscaling process is therefore essentially an averaging procedure in which the static and dynamic characteristics of a fine-scale model are approximated by those of a coarse-scale model. A conceptual illustration of the upscaling process is shown in Figure 1. Typically, 3D geological models contain detailed descriptions of the reservoir that can be hard to capture properly with a significantly coarser model, so it would be preferable if upscaling could be avoided. Currently, however, an average-sized flow-simulation model consists of approximately 100,000 active grid cells, a size chosen to keep the CPU time of a simulation run within practical limits.
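To make the averaging idea concrete, the sketch below coarsens a layered fine-grid permeability field into a single coarse cell using the two classical analytical bounds: the arithmetic average for flow parallel to the layers and the harmonic average for flow across them. The grid dimensions and permeability values are hypothetical; real workflows typically switch to flow-based upscaling when the fine-scale structure is not simply layered.

```python
import numpy as np

# Hypothetical fine-scale permeability (mD) for one coarse block: 4 layers x 6 columns
k_fine = np.array([
    [500.0, 480.0, 510.0, 495.0, 505.0, 490.0],   # good sand
    [ 50.0,  55.0,  45.0,  60.0,  52.0,  48.0],   # silty layer
    [200.0, 210.0, 190.0, 205.0, 195.0, 202.0],   # moderate sand
    [  5.0,   4.0,   6.0,   5.5,   4.5,   5.0],   # shale streak
])

# Effective permeability for flow parallel to the layers: arithmetic average
k_parallel = k_fine.mean()

# Effective permeability for flow across the layers: harmonic average of the
# layer averages (layers act in series)
layer_means = k_fine.mean(axis=1)
k_perpendicular = len(layer_means) / np.sum(1.0 / layer_means)

print(f"kx (parallel to layers)      ~ {k_parallel:.1f} mD")
print(f"kz (perpendicular to layers) ~ {k_perpendicular:.1f} mD")
```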
Abstract Building reliable subsurface models requires detailed knowledge of both the rock and the fluids involved. One critical petrophysical property determining the viability of a development is the hydrocarbon saturation. In 3D geological models, the saturation is populated via saturation height models (SHMs) and free fluid levels. In populating a 3D model with meaningful properties, measurements at various scales are integrated. Core measurements are acquired at a resolution far finer than that of the 3D model and therefore require a change of scale, i.e., an upscaling step. Accurately predicting water saturation in the upscaled model is not trivial. Here we follow this process by employing an SHM at different scales in relationship to various permeability realizations. We examine the multiple choices available as inputs to the SHM over various ranges of sensitivity with respect to the free-water-level position and to rock quality. Various degrees of heterogeneity are studied using synthetic data, and the accuracy of the saturation prediction based on upscaled input rock properties (arithmetic, geometric, and harmonic upscaled permeability) is investigated. For homogeneous rocks, a workflow is detailed for detecting the upscaling limits, highlighting the errors that can appear in the upscaling process. A counterintuitive result is that, in the transition zone (the focus of this work), permeable rocks are more prone to errors than less permeable ones. We also conclude that no alteration of the SHM is necessary in the upscaling process. Given that rock quality enters the SHM and that permeability upscaling follows a route that ultimately attempts to honor well performance, a natural question is what relevance such a permeability model has as input for the SHM. Our results highlight the best choices for the upscaled permeability used as SHM input, which is not necessarily the upscaled permeability used in history matching. The smallest errors result from using geometric or 1/3-power-law upscaled permeability.
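The permeability averages compared in this abstract all belong to one power-law family, so a compact way to explore them is a single averaging function with an exponent p: p = 1 gives the arithmetic mean, p = -1 the harmonic mean, the limit p -> 0 the geometric mean, and p = 1/3 the power-law average mentioned above. The sketch below is a minimal illustration of that family; the sample values are hypothetical.

```python
import numpy as np

def power_average(k, p):
    """Power-law (p-norm) average of permeability values.
    p = 1 -> arithmetic, p = -1 -> harmonic, p -> 0 -> geometric."""
    k = np.asarray(k, dtype=float)
    if abs(p) < 1e-12:                      # geometric mean as the p -> 0 limit
        return np.exp(np.mean(np.log(k)))
    return np.mean(k ** p) ** (1.0 / p)

# Hypothetical fine-cell permeabilities (mD) inside one coarse cell
k_cells = [5.0, 40.0, 120.0, 300.0, 800.0]

for label, p in [("arithmetic", 1.0), ("1/3 power law", 1.0 / 3.0),
                 ("geometric", 0.0), ("harmonic", -1.0)]:
    print(f"{label:14s}: {power_average(k_cells, p):8.1f} mD")
```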
Summary A physics-based data-driven model is proposed for forecasting of subsurface energy production. The model relies entirely on production data and does not require any in-depth knowledge of reservoir geology or governing physics. In the proposed approach, we use the Delft Advanced Research Terra Simulator (DARTS) as a workhorse for data-driven simulation. DARTS uses an operator-based linearization technique that exploits an abstract interpretation of the physics to improve computational performance. The physics-based data-driven model is trained to fit production data, increasing the fidelity of the model forecast and reflecting significant changes in reservoir dynamics or physics over its history. The model is examined and validated for both synthetic and real field production data. We demonstrate that the developed approach is capable of providing accurate and reliable production forecasts on a daily basis, even if exact geological information is not available. Introduction Computer technologies are progressing rapidly. The computational capacities currently available allow many subsurface applications to perform complex numerical simulations of high-resolution 3D geocellular models. Predictions obtained from such models are an important factor governing efficient reservoir management and decision making. The models describe complex geological features through a set of gridblocks and associated rock and fluid properties. In many cases, however, the reliability of the geological information is questionable, or such information is not available at all. Even where a high-fidelity model can be developed on a reliable basis of reservoir geology, a high-resolution computer model can exceed a few million blocks and can take hours or even days to simulate. It is still not computationally feasible to perform history matching or reservoir-development optimization at such resolution because it involves a large number of simulation runs. Different methods have been developed to overcome this issue.
Summary In this paper, we propose a methodology that combines finite-element modeling with neural networks in the numerical modeling of systems with behavior that involves a wide span of spatial scales. The method starts by constructing a high-resolution model of the subsurface, including its elastic mechanical properties and pore pressures. A second model is also constructed by scaling up mechanical properties and pressures into a coarse spatial resolution. Inexpensive finite-element solutions for stress are then obtained in the coarse model. These stress solutions aim at capturing regional trends and large-scale stress correlations. Finite-element solutions for stress are also obtained in high resolution, but only in a small subvolume of the 3D model. These stress solutions aim at estimating fine-grained details of the stress field introduced by the heterogeneity of rock properties at the fine scale. A neural network is then trained to infer the transformation rules that map stress solutions between different scales. The inputs to the training are pressure and mechanical properties in high and low resolutions. The output is the fine-scale stress computed in the subvolume of the high-resolution model. Once trained, the neural network can be used to approximate a high-resolution stress field in the entire 3D volume using the coarse-scale solution and only providing high-resolution material properties and pressures. The results obtained indicate that when the coarse finite-element solutions are combined with the neural-network estimates, the results are within a 2 to 4% error of the results that would be computed with high-resolution finite-element models, but at a fraction of the cost in time and computational resources. This paper discusses the benefits and drawbacks of the method and illustrates its applicability by means of a worked example.
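As an illustration of the mapping step described above, the sketch below trains a small neural network to predict fine-scale stress in each cell from the coarse-scale stress interpolated to that cell plus the fine-scale elastic properties and pore pressure. It is a toy stand-in for the workflow: the finite-element solutions are replaced by synthetic arrays, and the feature set, network size, and use of a scikit-learn regressor are assumptions made for the example, not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cells = 5000                      # fine cells in the training subvolume

# Stand-ins for the FE inputs/outputs (in the real workflow these come from
# the coarse-scale and fine-scale finite-element runs)
coarse_stress = rng.normal(30.0, 5.0, n_cells)          # MPa, interpolated to fine cells
youngs_modulus = rng.uniform(5.0, 40.0, n_cells)        # GPa, fine scale
poisson_ratio = rng.uniform(0.15, 0.35, n_cells)
pore_pressure = rng.normal(20.0, 3.0, n_cells)          # MPa, fine scale

# Synthetic "fine-scale" stress: coarse trend plus a property-dependent perturbation
fine_stress = (coarse_stress
               + 0.2 * (youngs_modulus - youngs_modulus.mean())
               - 0.5 * (pore_pressure - pore_pressure.mean())
               + rng.normal(0.0, 0.5, n_cells))

X = np.column_stack([coarse_stress, youngs_modulus, poisson_ratio, pore_pressure])
scaler = StandardScaler().fit(X)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(scaler.transform(X), fine_stress)

# Once trained, the network approximates fine-scale stress anywhere in the model
# from the coarse solution and fine-scale properties/pressures alone.
print(model.predict(scaler.transform(X[:5])))
print(fine_stress[:5])
```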
Lyu, Xiaocong (Delft University of Technology) | Voskov, Denis (Delft University of Technology and Stanford University; corresponding author, email: D.V.Voskov@tudelft.nl) | Tang, Jinyu | Rossen, William R. (Delft University of Technology)
Summary Foam injection is a promising enhanced-oil-recovery (EOR) technology that significantly improves the sweep efficiency of gas injection. Simulation of foam/oil displacement in reservoirs is an expensive process for conventional simulation because of the strongly nonlinear physics, such as multiphase flow and transport with oil/foam interactions. In this work, an operator-based linearization (OBL) approach, combined with the representation of foam by an implicit-texture (IT) model with two flow regimes, is extended for the simulation of the foam EOR process. The OBL approach improves the efficiency of the highly nonlinear foam-simulation problem by transforming the discretized nonlinear conservation equations into a quasilinear form using state-dependent operators. The state-dependent operators are approximated by a discrete representation on a uniform mesh in parameter space. The numerical-simulation results are validated by using three-phase fractional-flow theory for foam/oil flow. Starting with an initial guess based on the fitting of steady-state experimental data with oil, the OBL foam model is regressed to experimental observations using a gradient-optimization technique. A series of numerical validation studies is performed to investigate the accuracy of the proposed approach. The numerical model shows good agreement with analytical solutions at different conditions and with different foam parameters. With finer grids, the resolution of the simulation is better, but at the cost of more expensive computations. The foam-quality scan is accurately fitted to steady-state experimental data, except in the low-quality regime; in this regime, the IT foam model used here cannot capture the upward-tilting pressure-gradient (or apparent-viscosity) contours. 1D and 3D simulation results clearly demonstrate two stages of foam propagation from inlet to outlet, as seen in the computed-tomography (CT) coreflood experiments: weak foam displaces most of the oil, followed by a propagation of stronger foam at lower oil saturation. OBL is a direct method to reduce nonlinearity in complex physical problems, which can significantly improve computational performance. Taking its accuracy and efficiency into account, the data-driven OBL-based approach could serve as a platform for efficient numerical upscaling to field-scale applications.
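The core OBL idea described above is that the state-dependent parts of the discretized conservation equations are collapsed into a few operators that are tabulated once on a uniform mesh in parameter space and thereafter only interpolated during the simulation. The sketch below illustrates that idea for a single made-up operator (a total-mobility-like function of one state variable): it is sampled on a coarse uniform grid and evaluated by linear interpolation, which is where the computational saving and the controllable approximation error come from. The operator, mesh resolution, and fluid properties are hypothetical and not taken from DARTS; real OBL uses multilinear interpolation in a multidimensional state space.

```python
import numpy as np

# A hypothetical state-dependent operator: total mobility as a function of
# water saturation for quadratic relative permeabilities (illustration only).
mu_w, mu_o = 0.5e-3, 2.0e-3                    # Pa.s

def operator_exact(sw):
    return sw**2 / mu_w + (1.0 - sw)**2 / mu_o

# OBL-style pre-parametrization: sample the operator once on a uniform mesh
# in parameter (state) space ...
n_nodes = 17
sw_nodes = np.linspace(0.0, 1.0, n_nodes)
op_nodes = operator_exact(sw_nodes)

# ... and, during simulation, evaluate it only by piecewise-linear interpolation.
def operator_obl(sw):
    return np.interp(sw, sw_nodes, op_nodes)

# Compare the interpolated operator with the exact one at arbitrary states.
sw_query = np.linspace(0.0, 1.0, 1001)
error = np.abs(operator_obl(sw_query) - operator_exact(sw_query))
print(f"max relative error with {n_nodes} nodes: "
      f"{error.max() / operator_exact(sw_query).max():.2e}")
```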
Wang, Yanji (School of Petroleum Engineering, China University of Petroleum, East China) | Li, Hangyu (School of Petroleum Engineering, China University of Petroleum, East China) | Tian, Ji (CNOOC Research Institute) | Fan, Ling (School of Petroleum Engineering, China University of Petroleum, East China) | Xu, Jianchun (School of Petroleum Engineering, China University of Petroleum, East China)
Abstract Traditional two-phase relative permeability upscaling requires fine-scale two-phase flow simulation over the target regions/blocks. This can be very computationally expensive, especially for cases with multiple (hundreds of) geological realizations, as is common in subsurface uncertainty quantification or optimization. In this paper, we develop a machine-learning-assisted relative permeability upscaling procedure, in which full numerical upscaling is performed for only a portion of the coarse blocks, while the upscaled functions for the remaining coarse blocks are calculated by the machine-learning algorithm. The upscaling procedure was tested on generic (left-to-right) flow problems using 2D models for scenarios involving multiple realizations. Numerical results show that the coarse-scale simulation results obtained with the newly developed machine-learning-assisted upscaling procedure are of similar accuracy to the coarse results obtained with full numerical upscaling. Because fine-scale numerical simulation is performed for only a small fraction of the model, significant speedup is achieved.
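The division of labor described in this abstract can be sketched with a small regression example: the expensive flow-based upscaling is run on a subset of coarse blocks, the resulting upscaled curves are summarized by a few parameters (here a Corey-type exponent and an endpoint), and a regressor trained on cheap static features of each block predicts those parameters for the remaining blocks. Everything below (the feature set, the Corey parametrization, the random-forest regressor, and the stand-in for the numerical upscaling) is an assumption made for illustration, not the procedure of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_blocks = 400                                  # coarse blocks in one realization
n_labeled = 60                                  # blocks given full numerical upscaling

# Cheap static features per coarse block (hypothetical): log-mean permeability,
# permeability variance, and mean porosity of the underlying fine cells.
log_k_mean = rng.normal(2.0, 0.8, n_blocks)
k_variance = rng.uniform(0.1, 2.0, n_blocks)
porosity = rng.uniform(0.10, 0.30, n_blocks)
X = np.column_stack([log_k_mean, k_variance, porosity])

def full_numerical_upscaling(features):
    """Stand-in for flow-based two-phase upscaling of one coarse block.
    Returns (Corey exponent, water relperm endpoint) describing the upscaled curve."""
    lkm, kvar, phi = features
    n_w = 2.0 + 0.5 * kvar - 0.3 * (lkm - 2.0) + rng.normal(0.0, 0.05)
    krw_end = np.clip(0.3 + 0.8 * phi + rng.normal(0.0, 0.02), 0.1, 1.0)
    return n_w, krw_end

# Step 1: expensive upscaling on a small subset of blocks
labeled_idx = rng.choice(n_blocks, size=n_labeled, replace=False)
y_labeled = np.array([full_numerical_upscaling(X[i]) for i in labeled_idx])

# Step 2: train a regressor on (features -> upscaled-curve parameters)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[labeled_idx], y_labeled)

# Step 3: predict upscaled relperm parameters for all remaining blocks
unlabeled_idx = np.setdiff1d(np.arange(n_blocks), labeled_idx)
y_predicted = model.predict(X[unlabeled_idx])
print("predicted (Corey exponent, krw endpoint) for first 3 blocks:")
print(y_predicted[:3])
```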