Ensemble-based algorithms have been successfully implemented for history matching of geological models. However, their performance is optimal only if the prior-state vector is linearly related to the predicted data and if the joint distribution of the prior-state vector is multivariate Gaussian. Moreover, the number of degrees of freedom is at most the ensemble size, so the assimilation of large amounts of production or seismic data might lead to ensemble collapse, which results in inaccurate predictions of future performance. In this paper, we introduce a methodology that combines model classification with multidimensional scaling (MDS) and the ensemble smoother algorithm to efficiently history match fluvial and channelized reservoir models. The dynamic responses (production and seismic data) of the different ensemble members are used to compute a dissimilarity matrix, which is then mapped into a lower-dimensional space by MDS. Model classification is then performed based on the distances between the mapped responses and the actual observed response in the lower-dimensional space. In the proposed method, the transformed lower-dimensional data are used instead of the original observations in the update equation to update the cluster of ensemble members closest to the observed response. In this manner, a limited number of ensemble members is enough to assimilate large amounts of observed data without triggering the ensemble-collapse problem. The updated subset of models (cluster) is used to infer a probability map and/or new hard conditioning data to re-sample new conditional members for the next iteration or data-assimilation step. The proposed algorithm is tested by assimilating production and time-lapse seismic data into channelized reservoir models.
The presented computational results show significant improvements in terms of preserving channelized features and in terms of reliability of predictions compared to the standard implementation of ensemble-based algorithms.
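The distance-based classification step described above can be sketched in a few lines. This is a minimal illustration with synthetic response vectors, classical (Torgerson) MDS via double centering, and a nearest-members cluster; the embedding dimension and cluster size are arbitrary choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of dynamic responses (rows = members, columns = time
# steps) plus one observed response; values are synthetic stand-ins.
responses = rng.normal(size=(50, 20))
observed = rng.normal(size=20)

# Stack the observed response with the ensemble so it is mapped into the
# same lower-dimensional space.
X = np.vstack([responses, observed])

# Dissimilarity matrix: pairwise Euclidean distances between responses.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Classical MDS: double-center the squared dissimilarities and take the
# leading eigenvectors as low-dimensional coordinates.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1][:2]                    # keep 2 dimensions
coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Classify: select the cluster of members closest to the mapped observation.
obs_xy = coords[-1]
dist_to_obs = np.linalg.norm(coords[:-1] - obs_xy, axis=1)
cluster = np.argsort(dist_to_obs)[:10]                   # 10 closest members
```

In the full workflow, only the members in `cluster` would enter the ensemble-smoother update, with the mapped coordinates replacing the raw observations.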
In laminated clastic reservoirs, the difference in permeability at the bed scale can be several orders of magnitude across the various facies. In addition, cross-bedding and pinched-out geological surfaces give rise to tortuous flow paths. Incorporating such flow-relevant fine-scale heterogeneities into reservoir-scale models is accomplished through an effective two-stage multiphase upscaling technique. The two stages of scale-up are (1) bed to genetic-unit scale upscaling and (2) genetic-unit to reservoir scale upscaling. In the bed to genetic-unit step, full-tensor permeabilities are computed using a fast flow-based upscaling procedure that takes advantage of a multipoint flux-approximation method. A novel iterative quasi-global pseudoization method is used in the genetic-unit to reservoir scale step. The fundamental idea of iterative pseudoization is to calibrate the effective properties that govern coarse-scale flow against non-local, regional, interwell-scale flows in the underlying fine-scale model. The calibration is performed by minimizing an objective function that measures the mismatch in flow responses between the fine-scale and coarse-scale models. The pseudoization procedure is formulated as a constrained optimization problem, which is solved iteratively by estimating the variables that parameterize the pseudo-relative permeabilities and pseudo-capillary pressures.
The effectiveness of the two-stage upscaling technique is demonstrated on an illustrative example featuring the scale-up of a stratigraphically complex Aeolian outcrop-scale reservoir-sector model under a waterflooding scenario. This proof-of-concept example is designed to validate the accuracy and efficacy of the two-stage upscaling method in a setting where the fully detailed two-dimensional fine-scale model is affordable to simulate and therefore serves as the “true” model for verification. Results indicate that the two-stage upscaling method renders otherwise intractable, high-geologic-resolution simulation problems tractable while delivering good accuracy.
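To give a feel for why directional (full-tensor) upscaling matters in laminated systems, here is a toy single-block example using the classic series/parallel (harmonic/arithmetic) averaging bounds rather than the paper's multipoint flux-approximation method; the permeability values are illustrative, with one tight bed two orders of magnitude below the others:

```python
import numpy as np

# One coarse block of a laminated interval: rows = beds, columns = cells
# along the bedding direction. Permeabilities in mD (illustrative values).
k_fine = np.array([[500.0, 450.0, 520.0],
                   [  2.0,   1.5,   2.5],
                   [300.0, 350.0, 310.0]])

def harmonic(v):
    return len(v) / np.sum(1.0 / v)

# Flow along bedding: cells in series within each bed (harmonic average),
# beds in parallel (arithmetic) -> the tight bed barely affects the result.
k_along = np.mean([harmonic(row) for row in k_fine])

# Flow across bedding: beds in series (harmonic of per-bed arithmetic
# means) -> the tight bed dominates and the effective value collapses.
k_across = harmonic(np.array([np.mean(row) for row in k_fine]))
```

The two-orders-of-magnitude anisotropy this produces (roughly 270 vs 6 mD here) is exactly the kind of behavior a scalar upscaling of the same block would miss.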
Inference of spatially distributed reservoir properties from production data at scattered wells poses an under-constrained inverse problem with nonunique solutions. One major contributor to this ill-posedness is over-parameterization of the spatially distributed reservoir properties. We recently introduced sparse representations of unknown reservoir properties for history matching by exploiting the correlation in their spatial distribution. In this approach, instead of estimating reservoir properties for each model grid cell during history matching, a sparse representation of the reservoir properties is estimated from production data. The resulting history-matching problem can be solved using recent developments in sparse signal processing, widely known as
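A minimal sketch of this kind of sparse estimation, using iterative soft thresholding (ISTA) on an assumed linear forward model `d = G v`; the paper's actual parameterization, transform, and solver may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed linear forward model: d = G @ v, where v is a sparse representation
# of the reservoir-property field and d plays the role of production data.
n_data, n_coef = 40, 100
G = rng.normal(size=(n_data, n_coef)) / np.sqrt(n_data)
v_true = np.zeros(n_coef)
v_true[rng.choice(n_coef, size=5, replace=False)] = 3.0 * rng.normal(size=5)
d_obs = G @ v_true

# ISTA: a gradient step on the data misfit followed by soft thresholding,
# which drives most coefficients exactly to zero (sparsity).
step = 1.0 / np.linalg.norm(G, 2) ** 2
lam = 0.1
v = np.zeros(n_coef)
for _ in range(1000):
    v = v + step * G.T @ (d_obs - G @ v)
    v = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
```

The point of the sketch: far fewer data than unknowns (40 vs 100), yet the sparsity-promoting update recovers a model that fits the data, mirroring how a sparse basis regularizes the under-constrained grid-cell parameterization.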
There has been increasing interest in integrated simulation of reservoirs, wells, and surface facilities, which is particularly important for companies with major assets in deep offshore fields. The basic approaches to integration can be split into decoupled/separated (between a reservoir simulator and a facility simulator), iteratively coupled, and fully coupled network, in order of increasing stability and implementation effort. This paper covers two main topics: the development of a state-of-the-art fully coupled network inside a next-generation commercial reservoir simulator, and the comparison of different coupling approaches on real field cases.
A fully coupled network approach has been developed as an in-house extension to a next-generation, multi-company collaborative reservoir simulator (which already offers a decoupled approach for integrated modeling). As the details of the network formulation are available in the previous literature, we focus here on vital yet rarely covered aspects of the interaction between the network model and a typical Field Management group rate-allocation scheme, e.g., the definition of well potentials.
Furthermore, this paper is also the first to offer a head-to-head comparison of all three approaches (decoupled, iteratively coupled, and fully coupled). We start with a review of the methodology (pros and cons) of the three approaches, followed by a comprehensive comparison of results on real field-level cases. To make the fairest possible comparison, the decoupled and fully coupled approaches use the same next-generation reservoir simulator, and the iteratively coupled approach uses a reference legacy simulator from the same software vendor. We demonstrate that, for studies where the fully coupled network approach can be used, it brings stability, speed, and scalability.
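For intuition, the iteratively coupled mode can be reduced to a fixed-point loop on a single well's operating point: the reservoir side supplies an inflow relation, the network side a lift relation, and the coupler alternates between them with damping until they agree. All coefficients below are illustrative placeholders, not values from the paper:

```python
# Reservoir side: simple inflow performance, q = PI * (p_res - p_wf).
# Network side: required bottom-hole pressure, p_wf = p_out + dp_hydro + c*q**2.
PI, p_res = 2.0, 300.0
p_out, dp_hydro, c = 20.0, 80.0, 0.05

def reservoir_rate(p_wf):
    return PI * (p_res - p_wf)

def network_bhp(q):
    return p_out + dp_hydro + c * q * q

# Iteratively coupled loop: alternate the two "simulators", exchanging the
# shared boundary condition, with heavy damping for stability.
q, alpha = 0.0, 0.05
for _ in range(500):
    q_new = reservoir_rate(network_bhp(q))
    if abs(q_new - q) < 1e-8:
        break
    q = (1.0 - alpha) * q + alpha * q_new
```

Note that without the damping factor this alternation diverges (the composite map has slope of roughly -11.7 at the solution in this toy case), which is one illustration of why loose coupling needs relaxation and why a fully coupled formulation gains stability.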
A multilevel optimization procedure, in which optimization is performed over a sequence of upscaled models, is developed for use in combined well placement and control problems. The multilevel framework, which can be combined with any type of optimization algorithm, is implemented here with a derivative-free Particle Swarm Optimization – Mesh Adaptive Direct Search (PSO–MADS) hybrid technique. An accurate global transmissibility upscaling procedure is applied to generate the coarse-model parameters required at each grid level. Distinct upscaled models are constructed using this approach for each candidate solution evaluated by the optimization algorithm. We demonstrate that the coarse models are able to capture the basic ranking of the candidate well-location and control scenarios, in terms of objective function, relative to the ranking that would be computed using fine-scale simulations. This enables the optimization algorithm to appropriately select and discard candidate solutions. Two- and three-dimensional example cases are presented, one of which involves optimization over multiple geological realizations. The multilevel procedure is shown to provide optimal solutions that are comparable to, and in some cases better than, those from the conventional (single-level) approach, but with computational speedups of about an order of magnitude.
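The multilevel idea can be caricatured with a tiny plain PSO (standing in for the PSO–MADS hybrid) on a toy well-placement objective, where the "upscaled" model is a cheap surrogate that preserves the ranking of candidates; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for an NPV surface over a single well location in [0,1]^2.
def fine_objective(p):
    x, y = p
    return -np.exp(-8.0 * ((x - 0.7) ** 2 + (y - 0.3) ** 2))  # best near (0.7, 0.3)

# "Coarse model": the same physics sampled on a 5x5 grid of locations --
# cheaper to evaluate, and preserving the ranking of candidate solutions.
def coarse_objective(p):
    return fine_objective(np.round(np.asarray(p) * 4.0) / 4.0)

def pso(objective, n_particles=20, iters=80, seed_point=None):
    pos = rng.random((n_particles, 2))
    if seed_point is not None:
        pos[0] = seed_point                    # warm start from the coarser level
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    for _ in range(iters):
        gbest = pbest[np.argmin(pbest_val)]
        vel = (0.7 * vel
               + 1.5 * rng.random((n_particles, 1)) * (pbest - pos)
               + 1.5 * rng.random((n_particles, 1)) * (gbest - pos))
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
    return pbest[np.argmin(pbest_val)]

# Multilevel sequence: optimize on the cheap coarse level first, then run a
# shorter fine-level refinement started from the coarse optimum.
x_coarse = pso(coarse_objective)
x_fine = pso(fine_objective, iters=80, seed_point=x_coarse)
```

The coarse level does the expensive exploration with cheap evaluations; the fine level only polishes, which is the source of the reported speedup.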
A general algebraic multiscale framework is presented for fractured porous media, which enables the treatment of fractures with multiple length scales and wide ranges of conductivity contrasts. To this end, fully integrated local basis functions for both matrix and fractures are constructed. These basis functions are employed to construct the multiscale coarse system for both matrix and fractures, and then to interpolate the coarse solutions back to the fine-scale reference system. Combined with a second-stage fine-scale solver (here, ILU(0)), our development leads to an iterative multiscale strategy for heterogeneous fractured media, allowing for error reduction to any arbitrary level while honoring mass conservation after any multiscale finite-volume stage. In order to maintain generality, it is shown that when each fracture network is modeled using a single coarse grid cell, our formulation automatically reduces to that proposed by
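The two-stage structure described here — a coarse system built from basis functions plus a fine-scale smoother — can be illustrated on a toy 1D problem. This sketch uses linear hat functions as a stand-in for the locally computed basis functions, Galerkin (finite-element) restriction `R = P.T` rather than a finite-volume restriction, and damped Jacobi in place of ILU(0):

```python
import numpy as np

# Fine-scale system: 1D "pressure" equation -u'' = f with unit coefficients,
# Dirichlet ends, discretized over n cells.
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Assumed stand-in for multiscale basis functions: linear hats attached to
# nc coarse nodes (columns of the prolongation operator P).
nc = 16
coarse_idx = np.linspace(0, n - 1, nc)
P = np.zeros((n, nc))
for j in range(nc):
    e = np.zeros(nc)
    e[j] = 1.0
    P[:, j] = np.interp(np.arange(n), coarse_idx, e)
R = P.T
Ac = R @ A @ P                                   # coarse-scale system

# Iterative two-stage multiscale cycle: global coarse correction, then a
# cheap fine-scale smoother, repeated to drive the error down.
x = np.zeros(n)
for _ in range(50):
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, R @ r)       # global (coarse) stage
    for _ in range(2):                           # local smoothing stage
        x = x + (2.0 / 3.0) * (b - A @ x) / 2.0  # damped Jacobi, diag(A) = 2
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Neither stage alone converges quickly: the coarse stage captures the long-range (global) part of the solution, the smoother the local high-frequency part, and iterating the pair reduces the error to any desired level.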
Gas flow through fractured nano-porous shale formations is complicated by a hierarchy of structural features and fluid-transport mechanisms. Structural features include tight porous rock with a variety of fractures. Gas-transport mechanisms include self-diffusion, Knudsen diffusion, advection, gas expansion, adsorption, and slippage effects at the pore walls. In nanopores, as encountered in tight shale-gas reservoirs, the effects of diffusion can dominate advection and should be included in reservoir flow calculations. Because the permeability of shale is very low, conventional reservoir-simulation modeling and production-estimation methods, which are designed for fluid-flow processes dominated by viscous forces, may not reliably predict the reservoir production rate. We present a pore-based mechanistic model (the Bundle of Dual-Tube Model, BoDTM) that includes complex gas dynamics in fractured nano-porous shale formations. The pore-based model provides a quick and reliable means of modeling tight-rock gas flow that includes known and uncertain reservoir properties, such as rock permeability, diffusion, pore volume, pore-throat size, rock tortuosity, and fracture characteristics. Gas flow is modeled in the shale matrix, and the production rate is estimated with a simple bundle of dual tubes. Each dual tube idealizes a gas pathway formed by pore bulbs and throats and is characterized by two diameters (a conductive diameter and a storage diameter) and one length. The two diameters permit modeling the slow recovery of large gas volumes; the tube length accounts for the pathway length from the matrix into the fracture network, which is the length that controls the travel time and the production rate. To construct the BoDTM, a statistical estimate of the fracture-network and shale-matrix parameters is necessary.
Production rate is controlled by flow from the tight shale rock matrix (which has the largest storage capacity but the lowest diffusivity) into the fracture network, and we assume that gas in the fractures is produced instantaneously. Applying statistical distribution functions, we examine the effect of model input properties on gas production. By applying the BoDTM to field data from the Barnett and Fayetteville Plays, we demonstrate that the model provides a means to quickly assess gas production rates from shale formations.
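A drastically simplified reading of this model can be sketched as follows: each dual tube drains its storage volume exponentially on a diffusive time scale τ ∝ L²/D set by its length and an effective diffusivity, with lognormal parameter distributions, and the field rate is the sum over the bundle (fractures produce instantaneously). All distributions and units are illustrative, and the actual BoDTM additionally handles adsorption, slippage, and Knudsen diffusion:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bundle: storage volume (from the storage diameter), pathway
# length, and an effective diffusivity (from the conductive diameter).
n_tubes = 1000
length = rng.lognormal(mean=0.0, sigma=0.5, size=n_tubes)   # pathway length
diff = rng.lognormal(mean=-2.0, sigma=0.3, size=n_tubes)    # eff. diffusivity
volume = rng.lognormal(mean=0.0, sigma=0.7, size=n_tubes)   # storage volume
tau = length ** 2 / diff                                    # drainage time scale

# Simplified drainage: each tube depletes exponentially on its own time
# scale; the field rate is the sum over the bundle.
def rate(t):
    return np.sum((volume / tau) * np.exp(-t / tau))

t = np.linspace(0.0, 50.0, 200)
q = np.array([rate(ti) for ti in t])
```

Even this caricature reproduces the qualitative shale-decline signature: a superposition of many exponentials with a broad spread of time scales yields a steep early decline followed by a long, slowly draining tail, with cumulative production bounded by the total storage volume.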
Anciaux-Sedrakian, A. (IFP Energies nouvelles) | Eaton, J. (NVIDIA) | Gratien, J. (IFP Energies nouvelles) | Guignon, T. (IFP Energies nouvelles) | Havé, P. (IFP Energies nouvelles) | Preux, C. (IFP Energies nouvelles) | Ricois, O. (IFP Energies nouvelles)
Heterogeneous supercomputers combining "general-purpose graphics processors", "many integrated cores", and general-purpose CPUs promise to be the major future architecture, because they deliver excellent performance with limited power consumption and space occupancy. However, developing applications that achieve "real" scalability remains the subject of several research fields. This paper studies the possibility of exploiting the power of heterogeneous architectures for Geoscience dynamic simulations of real large-scale models. Geoscience dynamic simulations need scalable linear solvers to realize the potential of the latest high-performance-computing architectures, since the linear solver represents the most expensive part of the whole simulation. Several studies have already been carried out to accelerate sparse linear-solver preconditioners, and they showed comparable or even improved performance on academic test cases. However, for a real industrial test case with highly heterogeneous data, these preconditioners do not offer a robust solution, as CPR-AMG does. We study several implementations of AMG and CPR-AMG for heterogeneous architectures in this paper. We illustrate the gains obtained using one real field thermal model and two academic test cases, all using a fully implicit scheme. For these models, the total simulation time is reduced by a significant factor when running on n CPUs + n GPUs, compared to n CPUs only, without any loss of accuracy.
To realize the potential of the latest High-Performance-Computing (HPC) architectures for reservoir simulation, scalable linear solvers are necessary. We describe a parallel Algebraic Multiscale Solver (AMS) for the pressure equation of heterogeneous reservoir models. AMS is a two-level algorithm that employs domain decomposition with a localization assumption. In AMS, basis functions, which are local (subdomain) solutions computed during the setup phase, are used to construct the coarse-scale system and grid transfer operators between the fine and coarse levels. The solution phase is composed of two stages: global and local. The global stage involves solving the coarse-scale system and interpolating the solution to the fine grid. The local stage involves application of a smoother on the fine-scale approximation.
The design and implementation of a scalable AMS on multi- and many-core architectures, including the decomposition, memory allocation, data flow, and compute kernels, are described in detail. These adaptations are necessary to obtain good scalability on state-of-the-art HPC systems. The specific methods and parameters, such as the coarsening ratio (
The balance between convergence rate and parallel efficiency as a function of the coarsening ratio (
The use of proxy models for optimisation of expensive functions has demonstrated its value since the 1990s in many industries. Within reservoir engineering, similar techniques have been used for over a decade for history matching, both in commercial tools and in-house software.
In addition to efficient history matching, proxy models have a distinct advantage when performing uncertainty quantification of probabilistic forecasts. Markov Chain Monte Carlo (MCMC) methods cannot realistically be applied directly with reservoir simulations, and even fast proxy models can fail dramatically to adequately represent the range of uncertainty if implemented without due care.
A pitfall of the use of proxy models is that they are often treated as ‘black boxes’ whose quality is difficult to measure. Engineers prefer to deal with deterministic simulation models, which they can evaluate and understand.
The main pitfall of simple random walk MCMC techniques, which have begun to appear within reservoir engineering workflows, is a focus on theoretical properties which are not observed in practical implementations. This gives rise to potential gross errors, which are not generally appreciated by practitioners. Advances in recent years within the field of Bayesian statistics have significantly improved this situation, but have not yet been disseminated within the oil and gas industry.
This paper describes the limitations of random walk MCMC techniques which are currently used for reservoir prediction studies, and shows how Hamiltonian MCMC techniques, together with an efficient implementation of proxy models, can lead to a more reliable and validated probabilistic uncertainty quantification, whilst also generating a suitable ensemble of deterministic reservoir models. Scientific comparison studies are performed for both an analytical case and a realistic reservoir simulation case to demonstrate the validity of the approach.
The benefit of this methodology is to allow asset teams to effectively manage reservoir decisions using a robust and validated understanding of uncertainty. It lays the scientific foundations for the next generation of uncertainty tools and workflows.