Model-Based A Priori Evaluation of Surveillance Programs Effectiveness using Proxies
He, Jincong (Chevron Energy Technology Company) | Xie, Jiang (Chevron Energy Technology Company) | Sarma, Pallav (Chevron Energy Technology Company) | Wen, Xian-Huan (Chevron Energy Technology Company) | Chen, Wen H. (Chevron Energy Technology Company) | Kamath, Jairam (Chevron Energy Technology Company)
Abstract Surveillance programs play an important role in reservoir management and are crucial for minimizing subsurface risks and improving decision quality. Optimal design and selection of the surveillance plan requires predicting the performance (e.g. in terms of the expected amount of uncertainty reduction in an objective function) of a given surveillance plan before it is implemented. Because the data from the surveillance program is uncertain at the time of the analysis, multiple history matching runs are required to evaluate the effectiveness of the surveillance program for different plausible realizations of the observed data. As such, the computational cost may be prohibitive as the number of reservoir simulations needed for the multiple history matching runs would be substantial. This paper proposes a framework based on proxies and rejection sampling (filtering) to perform the multiple history matching runs with a manageable number of reservoir simulations. The workflow proposed enables qualitative and quantitative analysis of a surveillance plan. Qualitatively, heavy hitter alignment analysis for the objective function and the observed data provides actionable measures for screening different surveillance designs. Quantitatively, the evaluation of expected uncertainty reduction from different surveillance plans allows for optimal design and selection of surveillance plans.
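To make the filtering idea concrete, the sketch below estimates the expected uncertainty reduction of a surveillance plan by rejection sampling on a toy prior ensemble. Everything here (the Gaussian toy model, the acceptance rule, all variable names) is an illustrative assumption, not the paper's implementation; in the actual workflow the objective and data values would come from proxy evaluations rather than a synthetic model.

```python
# Minimal sketch (not the authors' code) of evaluating expected uncertainty
# reduction from a surveillance plan via rejection sampling (filtering).
# The Gaussian toy model and acceptance rule below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_models = 2000
# Hypothetical prior ensemble: each model has an objective J (e.g. recovery)
# and a simulated observation d that the surveillance plan would measure.
J = rng.normal(100.0, 15.0, n_models)
d = J + rng.normal(0.0, 10.0, n_models)   # data correlated with the objective
sigma_obs = 5.0                            # assumed measurement noise

def posterior_var(d_obs):
    """Accept prior models whose simulated data are likely given d_obs."""
    w = np.exp(-0.5 * ((d - d_obs) / sigma_obs) ** 2)
    accept = rng.random(n_models) < w / w.max()   # rejection sampling
    return J[accept].var()

# Average over plausible data realizations drawn from the prior ensemble,
# since the observed data are themselves uncertain a priori.
post_vars = [posterior_var(dk + rng.normal(0, sigma_obs)) for dk in d[:200]]
reduction = 1.0 - np.mean(post_vars) / J.var()
print(f"expected uncertainty reduction: {reduction:.1%}")
```

The outer loop over plausible data realizations is exactly what makes a naive approach expensive: each realization implies one history match, which is why the paper replaces full reservoir simulations with proxies.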
Abstract Although the EnKF has many advantages, such as ease of implementation and efficient uncertainty quantification, it suffers from a few key issues that limit its application to large-scale simulation models of real fields. Among these key issues is the well-known problem of ensemble collapse, which is particularly evident for small ensembles. Further, the EnKF is theoretically appropriate only if all ensemble members belong to the same multi-Gaussian random field. This is an important issue because for most real fields, we have more than one geological scenario, and ideally, we would like to obtain one or more history-matched models for each geological scenario. Similar issues also affect the ensemble smoother (EnS). We propose a new variant of the EnKF, called the subspace EnKF, to alleviate both these issues. The basic idea behind the subspace EnKF is to constrain each ensemble member to a different subspace of the full space to which the ensemble members belong. This is done by applying a different parameterization to each ensemble member with appropriate modification of the EnKF formulation, such that the parameterization ensures that each ensemble member remains in the subspace to which it is constrained through any number of updates, thereby preventing collapse. Further, if each parameterization is so chosen as to honor different geostatistical properties, then these statistics will also be honored throughout the updates, thus retaining the geostatistical properties of each ensemble member through each update. The EnS can also be extended similarly with the subspace formulation. We first formulate and demonstrate the validity of the subspace EnKF with the standard PCA parameterization. We further improve the subspace EnKF/EnS to honor multi-point geostatistics by extending the subspace formulation with kernel PCA. We have earlier demonstrated the use of the kernel PCA parameterization with gradient-based history matching for honoring multi-point geostatistics of non-Gaussian random fields. Kernel PCA parameterization can also be used with the EnKF/EnS; however, KPCA further aggravates the ensemble collapse problem due to the much larger dimensionality of the feature space. As such, formulating the subspace EnKF/EnS with kernel PCA parameterization alleviates both of the key problems of the traditional EnKF/EnS, namely ensemble collapse and the inability to honor multi-point geostatistics. The procedure is demonstrated on two examples, one with a multi-Gaussian permeability field and one with a channel sand, and is shown to prevent ensemble collapse while also honoring geostatistical properties much better compared to the standard EnKF/EnS.
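The central mechanism, that a member constrained to its own subspace cannot drift into (or collapse onto) another member's subspace, can be illustrated in a few lines. The sketch below is only a stand-in for the paper's modified EnKF formulation: the per-member bases, the coefficient-space update rule, and all names are assumptions made for illustration.

```python
# Minimal sketch (assumptions throughout) of the subspace idea: each ensemble
# member keeps its own basis Phi_i (e.g. from a per-scenario PCA), and updates
# are applied to the coefficients xi_i, so member i never leaves span(Phi_i).
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_ens, n_pc = 100, 10, 5

# Hypothetical per-member orthonormal bases (in practice, PCA of different
# training sets, one per geological scenario).
Phi = [np.linalg.qr(rng.normal(size=(n_grid, n_pc)))[0] for _ in range(n_ens)]
xi = [rng.normal(size=n_pc) for _ in range(n_ens)]

def members():
    return np.column_stack([Phi[i] @ xi[i] for i in range(n_ens)])

# A made-up update in coefficient space: project a full-space innovation
# (a stand-in for the Kalman correction) onto each member's own basis.
innovation = rng.normal(size=n_grid) * 0.1
for i in range(n_ens):
    xi[i] = xi[i] + Phi[i].T @ innovation        # stays in span(Phi[i])

M = members()
# Each column of M still lies exactly in its member's subspace:
for i in range(n_ens):
    resid = M[:, i] - Phi[i] @ (Phi[i].T @ M[:, i])
    assert np.allclose(resid, 0.0, atol=1e-10)
print("all members remained in their own subspaces")
```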
Abstract In order to make the field development decision making and planning process tractable, decision-makers usually need a few representative models (for example, P10, P50, P90 models) selected from a large ensemble of reservoir models. This ensemble of models may have been obtained from a static and/or dynamic modeling process involving uncertainty quantification (ED), history matching, optimization or other workflows. The usual approach to select a few models is using various variants of clustering. This selection process is not only suboptimal, but it can also be quite difficult if multiple output responses and/or percentiles are required and the number of models is large. The current approach in most oil companies is even more naïve, wherein such models are chosen manually or using Excel spreadsheets. Thus, due to the unavailability of good approaches, representative models are usually chosen based on one or two criteria. Use of such models in the decision making process can lead to sub-optimal decisions. As such, there is a need to automatically select a small set of statistically representative models from a larger set based on multiple decision criteria. We propose a new model selection approach, namely the minimax approach, which can simultaneously and efficiently select a few reservoir models from a large ensemble of models by matching target percentiles of multiple output responses (for example, matching P10, P50 and P90 of OPC, WPC and OOIP), while also obtaining maximally different models in the input uncertainty space. The approach requires the simultaneous solution of two minimax combinatorial optimization problems. Since this requires the solution of a complex multi-objective optimization problem, we instead convert the problem to the solution of a single constrained minimax optimization problem. We propose the solution of this optimization problem using a global exhaustive search (for small problems), and a very efficient greedy method, wherein a simpler optimization problem can be solved directly by enumeration or by Markov chain Monte Carlo methods for larger problems with many models, target percentiles and variables. The new approach is implemented in Chevron's in-house uncertainty quantification software called genOpt and tested with multiple synthetic examples and field cases. The results demonstrate that the proposed approach is much more efficient than clustering, and the solution quality is generally better. For some models, minimax was orders of magnitude faster than clustering. The new approach could help business units select P10, P50 and P90 models efficiently for decision making and planning.
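A minimal sketch of the greedy flavor of the selection problem: fill one slot per target percentile, each time taking the model whose responses best match that slot's targets across all outputs. This deliberately omits the paper's second objective (maximally different models in the input uncertainty space), and all data and names are hypothetical, so it is an illustration of the percentile-matching idea rather than the genOpt implementation.

```python
# Greedy percentile-matching sketch (hypothetical, not Chevron's genOpt):
# pick k models so that each selected model sits close to one target
# percentile of every output response simultaneously.
import numpy as np

rng = np.random.default_rng(2)
n_models, k = 500, 3
# Hypothetical responses per model (e.g. OPC, WPC, OOIP); columns = responses.
R = rng.lognormal(mean=[3.0, 2.0, 4.0], sigma=0.4, size=(n_models, 3))
targets = np.percentile(R, [10, 50, 90], axis=0)   # shape (k, n_responses)

def mismatch(idx, slot):
    """Worst relative error of model idx against the slot's targets (minimax)."""
    return np.max(np.abs(R[idx] - targets[slot]) / targets[slot])

chosen = []
for slot in range(k):                 # greedily fill P10, P50, P90 slots
    costs = [mismatch(i, slot) if i not in chosen else np.inf
             for i in range(n_models)]
    chosen.append(int(np.argmin(costs)))

for label, i in zip(("P10", "P50", "P90"), chosen):
    print(label, "-> model", i, "responses", np.round(R[i], 1))
```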
Abstract The ensemble Kalman filter (EnKF) has recently received significant attention as a robust and efficient tool for data assimilation. Although the EnKF has many advantages such as ease of implementation and efficient uncertainty quantification, it is technically appropriate only for random fields (e.g., permeability) characterized by two-point geostatistics (multi-Gaussian random fields). Realistic systems however are much better described by multi-point geostatistics (non-Gaussian random fields), which is capable of representing key geological structures such as channels. Furthermore, the updating step in the EnKF can lead to non-physical updates of model states and other variables (such as saturations), and this problem is evident in highly nonlinear problems like compositional simulation. In a recent paper (Sarma and Chen, 2009), we formulated a generalized EnKF using kernels (KEnKF), capable of representing non-Gaussian random fields characterized by multi-point geostatistics. In this work, we further extend the KEnKF to efficiently handle constraints on the state and other variables of the simulation model, arising from physical or other reasons. In the standard EnKF, the usual approach to handle constraints is through truncation or variable transformation, which has been shown to be problematic for highly nonlinear problems. In the KEnKF, because the Kalman update equation is solved as a minimization problem, constraints can be easily and rigorously handled via solution of a constrained nonlinear programming problem. We propose a combination of fixed point iteration and the augmented Lagrangian method to solve this problem efficiently. Note that with a kernel of order 1, the KEnKF is equivalent to the standard EnKF, and therefore, the proposed approach to handle constraints is applicable even if high order kernels are not used. The procedure is demonstrated on an example case, and is shown to better handle various state constraints compared to the standard EnKF with truncation.
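Because the Kalman update can be posed as a minimization, bound constraints can be imposed directly rather than by truncation after the fact. The sketch below illustrates that formulation on a toy state, using an off-the-shelf bounded solver in place of the paper's fixed-point iteration with the augmented Lagrangian method; the precision matrices, dimensions, and observation setup are all assumptions.

```python
# Minimal sketch (assumed formulation) of a constrained Kalman-type update:
# the analysis step is posed as a minimization so bounds (e.g. 0 <= Sw <= 1)
# can be imposed directly, instead of truncating an unconstrained update.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 4                                    # tiny state: water saturations
x_f = np.array([0.1, 0.4, 0.8, 0.95])    # forecast state
P_inv = np.eye(n) / 0.05**2              # assumed forecast precision
H = np.eye(n)                            # observe the state directly
d = x_f + rng.normal(0, 0.1, n)          # noisy observations
R_inv = np.eye(n) / 0.1**2               # assumed observation precision

def objective(x):
    # Standard Kalman analysis objective: prior misfit + data misfit.
    dx, dd = x - x_f, d - H @ x
    return 0.5 * dx @ P_inv @ dx + 0.5 * dd @ R_inv @ dd

res = minimize(objective, x_f, bounds=[(0.0, 1.0)] * n)
print("constrained analysis state:", np.round(res.x, 3))
```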
Abstract Multidimensional transport for reservoir simulation is typically solved by applying 1D numerical methods in each spatial coordinate direction. This approach is simple, but the disadvantage is that numerical errors become highly correlated with the underlying computational grid. In many real-field applications this can result in strong sensitivity to grid design, not only for the computed saturation/composition fields but also for critical integrated data such as breakthrough times. To increase robustness of simulators, especially for adverse mobility ratio flows that arise in gas injection and other EOR processes, it is therefore of much interest to design truly multi-D schemes for transport that remove, or at least strongly reduce, the sensitivity to grid design. We present a new upwind-biased truly multi-D family of schemes for multi-phase transport capable of handling counter-current flow arising from gravity. The proposed family of schemes has four attractive properties: applicability within a variety of simulation formulations with varying levels of implicitness; extensibility to general grid topologies; compatibility with any finite volume flow discretization; and provable stability (monotonicity) for multi-phase transport. The family is sufficiently expressive to include several previously developed multi-D schemes, such as the narrow scheme, in a manner appropriate for general-purpose reservoir simulation. A number of water flooding problems in homogeneous and heterogeneous media demonstrate the robustness of the method as well as reduced transverse (cross-wind) diffusion and grid orientation effects.

1. Introduction

The use of single point upstream weighting (SPU) is widely known to cause the grid orientation effect (GOE) for miscible gas injection problems [Todd et al., 1972]. Within the reservoir simulation literature there has been work to develop transport methods which reduce GOE by removing preferential flow directions [Shubin & Bell, 1984, Yanosik & McCracken, 1979, Edwards, 2004, Kozdon et al., 2008b]. These methods have been applied with varying degrees of success and have various numerical stability properties. One of the areas lacking in this work is the inclusion of multiphase physics as well as gravitational terms resulting in countercurrent flow. The basic concept of multidimensional upstream weighting is that characteristic flow information is used to develop a multi-point stencil for transport, as opposed to just considering the sign of the phase velocity as in SPU. Including this flow information reduces numerical biasing due to the grid [Roe & Sidilkover, 1992, Van Ransbeeck & Hirsch, 1997, Koren, 1991]. Biasing and correlation of the numerical errors can be important in porous media problems because many problems are physically unstable at the scale at which they are modeled [Tan & Homsy, 1997, Riaz & Meiburg, 2004] and numerical errors can trigger these instabilities [Kozdon et al., 2008b]. Within the computational fluid dynamics literature there has been similar development of multi-D schemes. In previous work [Kozdon et al., 2008b] we developed a multi-D framework for linear advection in general divergence-free velocity fields that we used to model miscible gas injection. Here we present an extension of that work for two-phase flow problems. We do this within a conservative finite volume framework which incorporates a local coupling in the flux calculation through the use of interaction regions.
The method is phase-based so as to remain as close to the conventional simulation process as possible. It has a compact stencil and is applicable to general relative permeability functions. The developed multi-D schemes are required to be monotone [Crandall & Majda, 1980] and at least as stable as SPU. These schemes provide a suitable starting point for the development of higher-order schemes, which have not yet achieved widespread use in reservoir simulation.
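For reference, the SPU baseline that the multi-D schemes are compared against can be written in a few lines: the upstream cell is chosen purely from the sign of the phase velocity. The sketch below is a 1D two-phase illustration with assumed quadratic relative permeabilities, not the paper's multi-D scheme (which requires the interaction-region coupling described above).

```python
# Minimal sketch of the SPU baseline: single-point upstream weighting for
# 1D two-phase transport. The fractional flow model is an assumption made
# for illustration (quadratic relative permeabilities, mobility ratio M).
import numpy as np

def fw(s, M=2.0):
    """Water fractional flow for quadratic relative permeabilities."""
    return s**2 / (s**2 + (1 - s)**2 / M)

def spu_step(s, u, dt_dx):
    """One explicit SPU update; u > 0 means flow in the +x direction."""
    f = fw(s)
    # Upstream flux at each face: take the cell the flow comes from.
    flux = np.where(u > 0, f[:-1], f[1:])
    s_new = s.copy()
    s_new[1:-1] -= dt_dx * u * (flux[1:] - flux[:-1])
    return s_new

s = np.zeros(100); s[0] = 1.0            # water injected at the left
for _ in range(200):
    s = spu_step(s, u=1.0, dt_dx=0.2)
print("front position ~ cell", int(np.argmax(s < 0.05)))
```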
Abstract Efficient history matching (model updating) of geologically complex reservoirs is important in many applications, but it is central in closed-loop reservoir modeling, in which real-time model updating is required. Within the context of closed-loop modeling, one history matching approach receiving the most attention to date is the ensemble Kalman filter (EnKF). Although the EnKF has many advantages such as ease of implementation and efficient uncertainty quantification, it is technically appropriate only for random fields (e.g., permeability) characterized by two-point geostatistics (multi-Gaussian random fields). Realistic systems, however, are much better described by multi-point geostatistics, which is capable of representing key geological structures such as channels. History matching algorithms that are able to reproduce realistic geology provide enhanced predictive capacity and can therefore lead to better reservoir management and optimization. In this work, we propose and formulate a generalized EnKF using kernels, capable of representing non-Gaussian random fields characterized by multi-point geostatistics. The main drawback of the standard EnKF is that the Kalman update essentially results in a linear combination of the forecasted ensemble, and the EnKF only uses the covariance and cross-covariance between the random fields (to be updated) and observations, thereby only preserving two-point statistics. Kernel methods allow the creation of nonlinear generalizations of linear algorithms that can be exclusively written in terms of dot products. By deriving the EnKF in a high-dimensional feature space implicitly defined using kernels, both the Kalman gain and update equations are nonlinearized, thus providing a completely general nonlinear set of EnKF equations, the nonlinearity being controlled by the kernel. By choosing high order polynomial kernels, multi-point statistics and therefore geological realism of the updated random fields can be preserved. The procedure is applied to two example cases where permeability is updated using production data as observations, and is shown to better reproduce complex geology compared to the standard EnKF, while providing a reasonable match to the production data.

Introduction

Integrating various kinds of static and dynamic data during the reservoir modeling and simulation process has been shown to reduce the uncertainty of the simulation models, thereby improving the predictive capacity of such models, which in turn can lead to better reservoir management decisions. In this regard, the ensemble Kalman filter (EnKF) has recently generated significant attention as a promising method for conditioning reservoir simulation models to dynamic production data. Further, a recent emphasis on uncertainty quantification, closed-loop reservoir optimization and real-time monitoring has made the EnKF even more valuable, as the EnKF is particularly suited for continuous model updating, and provides an ensemble of models that can be used to approximate the posterior distribution of any output of the simulation model. The EnKF has been recently applied and improved upon by many researchers in the petroleum industry. It was introduced to the petroleum industry by Naevdal et al. (2002, 2003), wherein the EnKF was used to update static parameters in near-wellbore simulation models, and later also used to update permeability, pressure and saturation fields of a 2D three-phase simulation model.
Since then, many other researchers have modified and improved the EnKF, including Gu and Oliver (2005), Wen and Chen (2005), Li and Reynolds (2007), and Skjervheim et al. (2007).
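The "linear combination of the forecasted ensemble" structure that the kernel formulation generalizes is visible directly in the standard EnKF analysis step, sketched below with toy dimensions; the observation operator and all sizes are assumptions made for illustration.

```python
# Minimal sketch of the standard EnKF analysis step that the kernelized
# version (KEnKF) generalizes: the update is a linear combination of the
# forecast ensemble, built from ensemble (cross-)covariances.
import numpy as np

rng = np.random.default_rng(4)
n_state, n_obs, n_ens = 50, 5, 20

X = rng.normal(size=(n_state, n_ens))        # forecast ensemble (e.g. log-perm)
H = np.zeros((n_obs, n_state))               # assumed observation operator:
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0   # observe every 10th cell
d_obs = rng.normal(size=n_obs)               # observed data
R = 0.1 * np.eye(n_obs)                      # observation error covariance

Xm = X - X.mean(axis=1, keepdims=True)
Y = H @ X
Ym = Y - Y.mean(axis=1, keepdims=True)

C_xy = Xm @ Ym.T / (n_ens - 1)               # state-data cross-covariance
C_yy = Ym @ Ym.T / (n_ens - 1)               # data covariance
K = C_xy @ np.linalg.inv(C_yy + R)           # Kalman gain

# Perturbed observations, one realization per ensemble member.
D = d_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_a = X + K @ (D - Y)                        # analysis ensemble
print("update is linear in the forecast ensemble; shape:", X_a.shape)
```

Because the gain uses only covariances, only two-point statistics survive the update, which is precisely the limitation the kernel derivation removes.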
Abstract Efficient history matching of geologically complex reservoirs is important in many applications, but it is central in closed-loop reservoir modeling, in which real-time model updating is required. Within the context of closed-loop reservoir modeling, the two approaches receiving the most attention to date are ensemble Kalman filtering and gradient-based methods using Karhunen-Loeve representations (eigen-decomposition) of the permeability field. Both of these procedures are technically appropriate only for random fields (e.g., permeability) characterized by two-point geostatistics (multi-Gaussian random fields). Realistic systems are much better described by multipoint geostatistics, which is capable of representing key geological structures such as channels. History matching algorithms that are able to reproduce realistic geology provide enhanced predictive capacity and are therefore more suitable for use with field optimization. In this work, we apply a new parameterization, referred to as a kernel principal component analysis (kernel PCA or KPCA) representation, to model permeability fields characterized by multipoint geostatistics. Kernel PCA enables preserving arbitrarily high order statistics of random fields, thereby providing the capability to reproduce complex geology. The KPCA representation is then combined with an efficient gradient-based history matching technique. The linkage of KPCA for modeling geology with gradient-based history matching is very natural as the KPCA representation is differentiable and gradients with respect to geological parameters can be readily computed. The overall procedure is then applied to several example cases, including synthetic models and a model of a real reservoir. The approach is shown to better reproduce complex geology, which leads to improved history matches and better predictions, while retaining reasonable computational requirements.

Introduction

History matching is a key component of closed-loop reservoir modeling. In this procedure, real-time surface and downhole production data provide continuous input to the history matching algorithm. Although history matching has been investigated actively within the petroleum engineering community for the last three or more decades, and numerous algorithms have been developed, in practice most history matching is still performed manually or at best with assisted history matching techniques (Milliken et al., 2000). This suggests that new developments are still required in order to provide a robust, efficient and reliable history matching capability that can be used for closed-loop applications. It is well known that, because history matching is an ill-posed problem with non-unique solutions, additional prior information in the form of geostatistical constraints is required to obtain geologically realistic history matched models that have good predictive capability (Caers, 2003a). In essence, the goal is to integrate and preserve all available qualitative and quantitative data during the process of creating the history matched model, thereby maximizing the reduction of uncertainty and thus leading to better predictions. Existing history matching algorithms can be broadly classified into four general categories: stochastic algorithms, gradient-based methods, streamline-based techniques, and Kalman filter approaches.
Within the category of stochastic algorithms, the probability perturbation method (Caers, 2003a) and the gradual deformation method and its extensions (Hu et al., 2001, 2005) are popular methods that have been widely applied. A definite advantage of these algorithms is that they are able to easily honor complex geological constraints by preserving multipoint statistics present in the prior geological model. Furthermore, they are quite easy to implement as they treat the simulator as a "black box." These algorithms are also claimed to be globally convergent due to their stochastic nature. A disadvantage of these approaches is their inefficiency, as they require numerous simulations for convergence (Wu, 2001, Liu et al., 2004). This is particularly of concern in closed-loop reservoir management (Jansen et al., 2005, Sarma et al., 2006a), which requires continuous real-time use of history matching algorithms.
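For concreteness, here is a sketch of the kernel PCA parameterization itself, written from the standard KPCA recipe (polynomial kernel, feature-space centering, eigendecomposition of the kernel matrix) rather than from the authors' code; the training realizations are random stand-in data, and the kernel scaling is an assumption.

```python
# Minimal sketch of kernel PCA with a polynomial kernel, the kind of
# parameterization the paper combines with gradient-based history matching.
import numpy as np

rng = np.random.default_rng(5)
n_real, n_grid, order = 200, 64, 3      # realizations, cells, kernel order

Y = rng.normal(size=(n_real, n_grid))   # stand-in for training realizations
K = (Y @ Y.T / n_grid + 1.0) ** order   # polynomial kernel matrix

# Center the kernel matrix in feature space.
one = np.ones((n_real, n_real)) / n_real
Kc = K - one @ K - K @ one + one @ K @ one

# Eigendecomposition gives the feature-space principal components.
w, V = np.linalg.eigh(Kc)
w, V = w[::-1], V[:, ::-1]              # sort descending
keep = w > 1e-10                         # drop numerically null directions
alphas = V[:, keep] / np.sqrt(w[keep])   # normalized dual coefficients

# Coordinates of each realization in the kernel PCA subspace:
Z = Kc @ alphas
print("retained components:", keep.sum(), " embedding shape:", Z.shape)
```

With order 1 this reduces to ordinary PCA; higher-order kernels are what allow the representation to capture multipoint statistics.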
Abstract This paper describes the algorithms and implementation of a parallel reservoir simulator designed for, but not limited to, distributed-memory computational platforms that can solve previously prohibitive problems efficiently. The parallel simulator inherits the multipurpose features of the in-house sequential simulator, which is at the core of the new capability. As a result, black-oil, miscible, compositional, and thermal problems can be solved efficiently using this new simulator. A multilevel domain decomposition approach is used. First, the original reservoir is decomposed into several domains, each of which is given to a separate processing node. All nodes then execute computations in parallel, each node on its associated subdomain. The parallel computations include initialization, coefficient generation, linear solution on the subdomain, and input/output. To enhance the convergence rate, we solve a coarse global problem which is generated via a multigrid-like coarsening procedure. This solution serves as a preconditioner of an outer parallel GMRES loop. The exchange of information across the subdomains, or processors, is achieved using the message passing interface standard, MPI. The use of MPI ensures portability across different computing platforms ranging from massively parallel machines to clusters of workstations. Results indicate that the simulator exhibits excellent scalability for up to 32 processors on the IBM SP2 system. Scalability results are also presented for a cluster of IBM workstations connected via an ATM (Asynchronous Transfer Mode) communication network. The use of ATM for interprocessor communication was found to have a small, but measurable, impact on scaling performance.

Introduction

The predictive capacity of a reservoir simulator depends first on the quality of the information used, and then on the ability of the computational grid and solution method to describe the flow behavior accurately. The injection of more detail into reservoir description is producing very large models. Scale-up technology can be applied to reduce the overall size of the models while preserving the important details of the flow. For large-scale reservoir displacements, the scaled-up model itself could consist of millions of gridblocks. Flow simulation using models of that size is beyond the current capability of uniprocessor, or even shared-memory multiprocessor, compute platforms. In this work, we describe the development of a parallel multi-purpose reservoir simulator that can solve previously prohibitive problems efficiently. In addition, the parallel simulator provides the means to validate and help improve the scaled-up model by comparing its flow predictions with detailed simulations using the original finer scale description from which it is derived. In the following sections, we give a brief overview of the parallel computing landscape and we adopt a definition for scalability. That is followed by a description of our parallel simulation development strategy and implementation details. Performance results for the parallel simulator are then presented and analyzed. We close with key conclusions.

Background

The development of application codes for distributed-memory parallel platforms has been, until recently, a high-risk investment, both in terms of capital and manpower.
This high-risk environment was due, in large part, to (1) an unstable landscape of parallel computing vendors/machines and (2) a lack of software portability across the various platforms. The focus had been on massively-parallel machines with proprietary architectures that link hundreds, or even thousands, of specially designed processors. Because the processing nodes in these machines tended to have limited computing power and small local memory, massive parallelism, both in terms of total memory and compute power, was achieved by employing thousands of such processors.
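The subdomain data exchange the paper implements over MPI follows the standard ghost-cell (halo) pattern. The sketch below shows that pattern in mpi4py on a 1D decomposition; the original simulator predates mpi4py and layers multigrid-like coarsening and GMRES on top of this, so this is only the communication skeleton, with all names chosen for illustration.

```python
# Minimal mpi4py sketch of the halo-exchange pattern behind a domain-
# decomposed simulator: each rank owns a strip of cells and trades one
# layer of ghost cells with its neighbors each iteration.
# Run with e.g.: mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 10                             # cells owned by this rank
# Local array with one ghost cell on each side, filled with the rank id
# so the exchange is easy to verify.
u = np.full(n_local + 2, float(rank))

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost layers: send my boundary cells, receive the neighbors'.
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

print(f"rank {rank}: ghosts = ({u[0]}, {u[-1]})")
```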