Streamline Tracing and Applications in Dual-Porosity Dual-Permeability Models
Onishi, Tsubasa (Chevron Technical Center (Corresponding author)) | Tanaka, Shusei (Chevron Technical Center) | Chen, Hongquan (Texas A&M University) | Kam, Dongjae (Chevron Technical Center) | Xie, Jiang (Chevron Technical Center) | Wang, Zhiming (Chevron Technical Center) | Wen, Xian-huan (Chevron Technical Center) | Datta-Gupta, Akhil (Texas A&M University)
Summary Streamline-based methods have proven to be effective for various subsurface flow and transport modeling problems. However, their application to dual-porosity and dual-permeability (DPDK) systems has been limited due to the difficulty of describing interactions between matrix and fracture during streamline tracing. In this work, we present a robust streamline tracing algorithm for DPDK models and apply the new algorithm to rate allocation optimization in a waterflood reservoir. In the proposed method, streamlines are traced in both fracture and matrix domains. The interfluxes between fracture and matrix are described by switching streamlines from one domain to another using a probability computed from the ratio of interfluxes to total fluxes within the domain. The approach is fundamentally similar to the existing streamline tracing technique and can be utilized in streamline-assisted applications such as flow diagnostics, history matching, and production optimization. The proposed method is benchmarked against a finite volume (FV)-based approach in which grid-based time-of-flight (TOF) distributions were obtained by solving the stationary transport equation. We first validated our method using simple examples. Visual TOF comparisons as well as tracer concentrations and allocation factors at wells show good agreement. Next, we applied the proposed method to field-scale models to demonstrate its robustness. The results show that our method offers reduced numerical artifacts and better represents reservoir heterogeneity and well connectivity with subgrid resolution. The proposed method is then used for rate allocation optimization in DPDK models. A streamline-based gradient-free algorithm is used to optimize net present value (NPV) by adjusting both injection and production well rates under operational constraints.
The results show that the optimized schedule offers significant improvements in recovery factor, NPV, and sweep efficiency compared to the base scenario using equal-rate injection and production. The optimization algorithm is computationally efficient, as it requires only a few forward reservoir simulations. A novelty of this work is the streamline tracing algorithm for DPDK reservoir models. The proposed method is simple and easy to implement in existing streamline-based frameworks, and is applicable to field-scale reservoir models for flow diagnostics and rate allocation optimization.
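The probability-based domain switching described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the function and argument names are hypothetical, and the switching probability is simply the ratio of matrix-fracture interflux to total flux leaving the current domain's cell, as the abstract describes.

```python
import random

def next_domain(current_domain, inter_flux, total_flux, rng=random.random):
    """Decide whether a streamline switches between fracture and matrix.

    The switch probability is the ratio of the matrix-fracture interflux
    to the total flux within the current domain's cell (hypothetical
    simplification of the method described in the abstract).
    """
    p_switch = inter_flux / total_flux
    if rng() < p_switch:
        return "matrix" if current_domain == "fracture" else "fracture"
    return current_domain
```

Passing `rng` as a parameter keeps the draw reproducible for testing; in a tracer the same draw would be made each time a streamline crosses a cell.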
Abstract The objective of this study is to develop a hybrid model combining physics-based and data-driven approaches for unconventional field development planning. We used physics-based reservoir simulations to generate training datasets. These uncalibrated priors were incorporated into data-driven machine learning (ML) algorithms so that the algorithms can learn the underlying physics from reservoir simulation inputs and outputs. The ML model is trained such that it provides fast and scalable applications with good accuracy to find the optimum unconventional field development, accounting for geological properties, completions design, well spacing, and child-well timing. We trained ML models with reservoir simulation inputs and cumulative oil production for parent and child wells. A single half-cluster reservoir model was built in which fracture propagation is simulated with pressure-dependent fracture properties and a child well is introduced with different timing and well spacing. After performing a sensitivity analysis to reduce the number of training inputs, more than 20,000 simulation results were generated as the training data. The best accuracy, R=0.94, was achieved with the neural network model after tuning hyperparameters. Then, we incorporated the trained model with a genetic algorithm to perform efficient history matching to calibrate model parameters. The hybrid model, a physics-embedded machine learning model, is extremely efficient: it takes only several minutes to complete a single-well history match. The prediction from the history-matched hybrid model is physically meaningful, showing that it properly captures the impact of fracture geometry, child-well spacing, and timing on production. With the multiple history matching results, we populated the spatial distribution of estimated ultimate recovery (EUR) and calibrated model parameters. To validate the workflow, a blind test was conducted on selected areas from a US onshore field.
The model prediction with the populated parameters was found to be in good agreement with the actual production history, indicating the predictive capability of the hybrid approach. The proposed model can provide quick and scalable solutions that honor the underlying physics to help decision making on unconventional field development. The model can capture interactions between wells, including production degradation due to the child-well effect. By calibrating model input parameters over the entire basin, we can predict EUR, yearly cumulative oil, and economic metrics such as NPV10 at any location in the basin. The impact of different completion designs (e.g., fluid intensity, cluster spacing) on production profile and economic metrics is also quickly assessed.
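The surrogate-training step can be sketched roughly as follows, assuming scikit-learn. The three features here (stand-ins for well spacing, fluid intensity, and child-well timing) and the synthetic response are placeholders for the paper's simulation-generated training data; the actual feature set and network architecture are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical stand-ins for simulation inputs (normalized to [0, 1]).
X = rng.uniform(0.0, 1.0, size=(500, 3))
# Synthetic "cumulative oil" response with an interaction term, standing
# in for reservoir-simulation output used as training labels.
y = 2.0 * X[:, 0] + X[:, 1] ** 2 - 0.5 * X[:, 0] * X[:, 2]

# Small fully connected network as the physics-embedded surrogate.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)
score = model.score(X, y)  # R^2 on the training set
```

Once trained, such a surrogate evaluates in microseconds, which is what makes wrapping it in a genetic algorithm for history matching computationally cheap.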
Abstract Decline curves are fast methods to predict production behavior in oil and gas wells. Some of the notable decline curve methods are Arps', SEDM (Stretched Exponential Decline Model), Duong's model, and Weibull decline curves. Available production history data can be used to fit any of these equations, and future production decline can thus be extrapolated. However, when limited production data are available during early periods of well history, these equations could be fitted with inaccurate parameters, leading to erroneous predictions. Also, the traditional decline curve analysis approach does not account for the complexities related to reservoir description and well completions. This study utilizes publicly available databases of the Eagle Ford formation to develop a novel predictive modeling methodology linking decline curve model parameters to well completion related variables that allows for the rapid generation of synthetic decline curves at potential new well locations. Modern machine learning algorithms such as Random Forests (RF), Support Vector Machines (SVM), and Multivariate Adaptive Regression Splines (MARS) can then be used to model the well decline behavior. Cross-validation techniques such as k-fold cross-validation can be used to quantify the predictive accuracy of these models when applied to new wells. First, production data are fitted to decline curve models to estimate the corresponding model parameters. Next, machine learning models are built for these parameters as a function of initial flow rate, various well completion parameters (i.e., number of hydraulic fracture stages, completed lengths, proppant and fracturing fluid amounts) and well location/depth parameters (i.e., well latitudes, longitudes, total vertical depth of heel, and difference between total vertical depths of heel and toe of horizontal wells).
These models are used to rapidly predict the decline curves for new or existing wells without the need for costly reservoir simulators. It has been found that the rate decline of new wells can be accurately predicted using this methodology. This method can also predict the ultimate recovery of a new well based on data collected from previous wells. To our knowledge, this is the first time machine learning algorithms have been used to predict decline curve parameters and examine the relative performance of various decline curve models. The power and utility of our approach are demonstrated by successful prediction of the decline behavior of blind wells that were not incorporated in the analysis. We also examine the relative influences of various well design and location variables to determine the hidden correlations or interactions among them, which are hard to decipher with other methods.
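The first step, fitting production history to a decline model, can be illustrated with the Arps hyperbolic equation and SciPy. The data and parameter values below are synthetic and for illustration only; the study fits several decline models, not just this one.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps_hyperbolic(t, qi, di, b):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)^(1/b)."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

# Synthetic monthly rates from known parameters, with 2% noise added.
t = np.arange(0, 36, dtype=float)
q_true = arps_hyperbolic(t, qi=1000.0, di=0.15, b=0.8)
noise = 0.02 * np.random.default_rng(1).standard_normal(t.size)
q_obs = q_true * (1.0 + noise)

# Fit qi, di, b; bounds keep b in the physically common range.
popt, _ = curve_fit(arps_hyperbolic, t, q_obs, p0=[800.0, 0.1, 0.5],
                    bounds=([0.0, 0.0, 0.01], [np.inf, np.inf, 2.0]))
qi_fit, di_fit, b_fit = popt
```

The fitted triples (qi, di, b) from many wells then become the regression targets for the RF/SVM/MARS models described above.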
Abstract We present a probabilistic approach for integrating multiple data into subsurface flow models. Our approach is based on a Bayesian framework whereby we exhaustively sample the multi-dimensional posterior distribution to define a Pareto front which represents the trade-off between multiple objectives during history matching. These objectives can be matching of water-cut, GOR, BHP and time-lapse seismic data. For field applications, these objectives do not necessarily move in tandem because of measurement errors and also interpretative nature of the seismic data. Our proposed method is built on a Differential Evolution Markov Chain Monte Carlo (DEMC) algorithm in which multiple Markov Chains are run in parallel. First, a dominance relationship is established amongst multiple models. This is followed by construction of the posterior distribution based on a hypervolume measure. A unique aspect of our method is the proposal selection which is based on a random walk on two arbitrarily selected chains. This promotes effective mixing of the chains resulting in improved convergence. We illustrate the algorithm using a nine-spot waterflood model whereby we use water-cut and bottomhole flowing pressure data to calibrate the permeability field. The permeability field is reparameterized using a previously proposed Grid Connectivity Transform (GCT) which is defined based on a decomposition of the grid Laplacian. The compression power of GCT allows us to reconstruct the permeability field with few parameters, thus significantly improving the computational efficiency of the MCMC approach. Next, we applied the method to the Brugge benchmark case involving 10 water injectors and 20 producers. For both cases, the algorithm provides an ensemble of models all constrained to the history data and defines a probabilistic Pareto front in the objective space. 
Several experimental runs were conducted to compare the effectiveness of the algorithm with the Non-Dominated Sorting Genetic Algorithm (NSGA). Consistently higher hypervolume was measured with our algorithm, which indicates that more optimal solutions were sampled. Our method provides a novel approach for subsurface model calibration and uncertainty quantification using MCMC, in which the communication between parallel Markov chains enhances mixing. This significantly improves convergence without loss in sampling quality. We demonstrate that the proposed approach can efficiently integrate seemingly conflicting data into reservoir models while quantifying uncertainties. Besides reservoir model calibration, we expect that the algorithm will find application in probabilistic production optimization problems as well.
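The dominance relationship underpinning the Pareto-front construction can be shown with a small sketch. This is generic multi-objective bookkeeping under a minimization convention, not the paper's DEMC code; the function names are ours.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

In the history-matching setting, each point would be a model's vector of data misfits (e.g., water-cut misfit vs. BHP misfit), and the front is the trade-off surface the sampler characterizes.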
Streamline-based integration of time-lapse seismic and production data into petroleum reservoir models
Rey, Alvaro (Texas A&M University, Chevron Energy Technology Company) | Bhark, Eric (Texas A&M University) | Gao, Kai (Texas A&M University) | Datta-Gupta, Akhil (Texas A&M University) | Gibson, Richard (Texas A&M University)
ABSTRACT We have developed an efficient approach of petroleum reservoir model calibration that integrates 4D seismic surveys together with well-production data. The approach is particularly well-suited for the calibration of high-resolution reservoir properties (permeability) because the field-scale seismic data are areally dense, whereas the production data are effectively averaged over interwell spacing. The joint calibration procedure is performed using streamline-based sensitivities derived from finite-difference flow simulation. The inverted seismic data (i.e., changes in elastic impedance or fluid saturations) are distributed as a 3D high-resolution grid cell property. The sensitivities of the seismic and production surveillance data to perturbations in absolute permeability at individual grid cells are efficiently computed via semianalytical streamline techniques. We generalize previous formulations of streamline-based seismic inversion to incorporate realistic field situations such as changing boundary conditions due to infill drilling, pattern conversion, etc. A commercial finite-difference flow simulator is used for reservoir simulation and to generate the time-dependent velocity fields through which streamlines are traced and the sensitivity coefficients are computed. The commercial simulator allows us to incorporate detailed physical processes including compressibility and nonconvective forces, e.g., capillary pressure effects, while the streamline trajectories provide a rapid evaluation of the sensitivities. The efficacy of our proposed approach was tested with synthetic and field applications. The synthetic example was the Society of Petroleum Engineers benchmark Brugge field case. The field example involves waterflooding of a North Sea reservoir with multiple seismic surveys. 
In both cases, the advantages of incorporating the time-lapse variations were clearly demonstrated through improved estimation of the permeability heterogeneity, fluid saturation evolution, and swept and drained volumes. The value of the seismic data integration was in particular proven through the identification of the continuity in reservoir sands and barriers, and by the preservation of geologic realism in the calibrated model.
Abstract A novel method is presented for history matching and uncertainty quantification for channelized reservoirs using Reversible Jump Markov Chain Monte Carlo (RJMCMC) methods. Our objective is to efficiently sample realizations of channelized permeability fields conditioned to production data and permeability values at the wells. In our approach, the channelized permeability field is reparameterized using the discrete cosine transform (DCT). The parameters representing the channel structure are the coefficients in the truncated frequency domain. The parameter space is searched using an RJMCMC method, where the dimension of the parameter space is assumed to be unknown. For each step of the RJMCMC, the dimension of the uncertainty space can be increased or decreased according to a prescribed prior distribution. This flexibility in the parameter dimension allows an efficient search of the uncertainty space. To speed up the computation and improve the acceptance rate of the RJMCMC algorithm, we employ two-stage methods whereby coarse-scale simulations are used to screen out undesired proposals. After the simulations, multidimensional scaling and cluster analysis are used to select realizations from the accepted models to adequately represent the diversity of the models. We demonstrate the effectiveness of the RJMCMC algorithm using both 2D and 3D examples involving waterflood history matching. The 2D example shows that the RJMCMC algorithm can successfully match the data and identify the orientation of the channels in the reference model. The 3D results show that the proposed algorithm can determine the large-scale features of the reference channelized permeability field based on the production data. The MCMC algorithms naturally provide multiple realizations of the permeability field conditioned to well and production data and thus allow for uncertainty quantification in forecasting.
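The DCT reparameterization step can be sketched with SciPy: keeping only the low-frequency coefficients compresses a permeability field while retaining its large-scale channel structure. The square low-frequency truncation block used here is a simplification of the paper's truncation scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_reparameterize(field, n_keep):
    """Compress a 2D field by keeping only the n_keep lowest-frequency
    DCT coefficients along each axis, then inverse-transforming."""
    coeffs = dctn(field, norm="ortho")
    truncated = np.zeros_like(coeffs)
    truncated[:n_keep, :n_keep] = coeffs[:n_keep, :n_keep]
    return idctn(truncated, norm="ortho")
```

In the RJMCMC search, the chain proposes moves in the retained coefficients (and, for the reversible-jump steps, changes `n_keep` itself), so each proposal maps back to a full-resolution field through the inverse transform.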
Summary Full-field flow simulators use a variety of cell geometries, ranging from simple rectangles to complex corner-point systems. One of the benefits of corner-point cells is the ease with which we may represent faulted reservoirs. Each face of a cell may be juxtaposed to two or more cells, depending on the fault throw and the lateral displacements of adjacent cells. Conventional finite-difference approaches routinely include the flux between these cells as "nonneighbor" connections. Other examples of nonneighbor or nonstandard connections occur at the boundary of local grid refinement (LGR) or local grid coarsening (LGC) regions where two computational grids come into juxtaposition. In each of these instances, the velocity across the nonstandard faces of a cell will be unevenly distributed according to the nonneighbor fluxes. In contrast, the standard streamline velocity interpolation model (Pollock's scheme) used within a cell assumes that the flux is evenly distributed on each cell face, inconsistent with the nonneighbor connection fluxes. Streamlines traced with such an approach do not have sufficient degrees of freedom to be consistent with the finite-difference fluxes and, consequently, will not follow a physical flow path. We propose a strategy that provides a consistent representation for streamlines and velocities near faults and nonneighbor connections. Our approach is based on a simple local (boundary layer) refinement construction that can be used to honor the fluxes at each face, without affecting the representation of flow within the cell or on any other cell face. The local refinement construction is the simplest extension to 3D for faulted reservoir cells that provides consistency with the finite-difference flux calculation. Several examples will be presented for a single pair of cells juxtaposed across a fault and at LGR boundaries to illustrate the difficulties in conventional tracing algorithms and the benefits of our approach.
The practical utility of our algorithm is demonstrated in a structurally complex and heavily faulted full-field model. The reservoir geometry includes multiple cells with complex fault juxtaposition and several nonneighbor configurations in different faces. This treatment is contrasted with the usual approach, and the implications for reservoir-scale fluid flow tracing by streamlines are examined.
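For reference, the Pollock scheme that this work extends computes cell exit times analytically from linearly interpolated face velocities. The sketch below shows the standard per-axis calculation for a Cartesian cell (variable names are ours); the paper's contribution is the local refinement that handles uneven, nonneighbor face fluxes, which this baseline cannot represent.

```python
import math

def pollock_exit_time(x_p, v0, v1, x0, dx):
    """Analytic exit time along one axis under Pollock's linear
    interpolation v(x) = v0 + g*(x - x0), with g = (v1 - v0)/dx."""
    g = (v1 - v0) / dx
    v_p = v0 + g * (x_p - x0)
    if v_p == 0.0:
        return math.inf                 # stagnant along this axis
    v_exit = v1 if v_p > 0 else v0      # face the particle moves toward
    if abs(g) < 1e-14:                  # uniform velocity: linear travel
        x_exit = (x0 + dx) if v_p > 0 else x0
        return (x_exit - x_p) / v_p
    if v_exit * v_p <= 0.0:             # velocity reverses before the face
        return math.inf
    return math.log(v_exit / v_p) / g

def pollock_cell_tof(pos, vx_faces, vy_faces, origin, size):
    """Time of flight across a 2D cell: the particle leaves through the
    face with the smallest axis exit time."""
    tx = pollock_exit_time(pos[0], vx_faces[0], vx_faces[1], origin[0], size[0])
    ty = pollock_exit_time(pos[1], vy_faces[0], vy_faces[1], origin[1], size[1])
    return min(tx, ty)
```

Note that each axis carries a single face-pair of velocities; when a face is split by a fault into multiple nonneighbor connections, there is no room in this representation for the uneven flux, which is exactly the inconsistency the paper's boundary-layer refinement resolves.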
Abstract Uncertainty quantification is generally carried out in a Bayesian framework whereby multiple reservoir models can be evaluated by sampling from a posterior distribution that incorporates the observed production data and the prior parameter distribution. Rigorous sampling methods such as Markov Chain Monte Carlo (MCMC) provide accurate sampling but at a high cost because of their high rejection rates and the need to run a flow simulation for every proposed candidate. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods without loss of rigor in sampling. Our proposed method employs off-line computations for establishing a mathematical relation between coarse- and fine-scale error responses. This relation is modeled using non-parametric approaches that correlate the error responses via optimal transformations. The resulting statistical models are then used for efficient sampling within an MCMC framework. We propose a multistage MCMC in which inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale simulations. The latter is determined based on the non-parametric error model developed off-line. Our proposed method substantially improves the efficiency of existing multistage MCMC methods that rely only on coarse-scale or other approximate models. Most importantly, our proposed method does not depend on the proximity of the coarse- and fine-scale models and thus can employ much coarser and less expensive models to guide the fine-scale simulations. We will present a synthetic example and a field example involving three-phase flow to demonstrate that the proposed approach is both robust and computationally efficient. In these examples, water cut and GOR are integrated to generate multiple reservoir models and assess uncertainty in the production forecasts.
Introduction Uncertainty exists inherently in dynamic reservoir modeling because of several factors, the primary ones being modeling error, data noise, and the non-uniqueness of the inverse problem, which allows several models to fit the dynamic data. By understanding and quantifying the uncertainty in production forecasts, financial investment risks can be reduced and decision quality can be improved. A common practice for quantifying uncertainty is to generate multiple reservoir model realizations that are conditioned to static and dynamic data, and then perform flow simulations for future performance predictions. The validity of the uncertainty quantification will strongly depend upon the distribution of these realizations, that is, whether they adequately represent the underlying uncertainties. In the context of Bayesian inversion, the solution to the inverse problem is the posterior probability distribution itself. Therefore, the problem of uncertainty quantification is closely tied to the correct sampling of multiple reservoir models from the posterior distribution. Such sampling is nontrivial because the posterior distribution has very high dimension (equal to the number of parameters) and is known only within a proportionality constant (un-normalized). Furthermore, the posterior distribution can be both non-Gaussian and multimodal. This makes rigorous sampling from the posterior distribution extremely computationally demanding. The common practice is to resort to approximate sampling methods.
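The coarse-screening idea can be sketched as a delayed-acceptance Metropolis-Hastings step. This is a simplified relative of the paper's multistage MCMC (it omits the non-parametric error model linking the two scales), and here both posteriors are cheap toy functions standing in for the coarse- and fine-scale simulations.

```python
import math
import random

def two_stage_metropolis(log_post_coarse, log_post_fine, x0, proposal,
                         n_steps, rng=random.random):
    """Delayed-acceptance MH: a cheap coarse-scale posterior screens
    proposals before the expensive fine-scale evaluation."""
    x = x0
    lc, lf = log_post_coarse(x), log_post_fine(x)
    chain = [x]
    for _ in range(n_steps):
        y = proposal(x)
        lc_y = log_post_coarse(y)
        # Stage 1: accept/reject on the coarse model only (cheap).
        if math.log(rng() + 1e-300) >= lc_y - lc:
            chain.append(x)
            continue
        # Stage 2: fine-scale correction; the coarse ratio cancels so
        # detailed balance w.r.t. the fine posterior is preserved.
        lf_y = log_post_fine(y)
        if math.log(rng() + 1e-300) < (lf_y - lf) + (lc - lc_y):
            x, lc, lf = y, lc_y, lf_y
        chain.append(x)
    return chain
```

The efficiency gain comes from stage 1: proposals the coarse model already rejects never trigger a fine-scale (i.e., full flow-simulation) evaluation, yet the stage-2 correction keeps the sampler rigorous with respect to the fine posterior.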
Introduction Accurate streamline tracing and time-of-flight calculations are among the cornerstones of streamline simulation. The linear velocity interpolation model proposed by Pollock (1988) is by far the most commonly used tracing algorithm in current streamline simulators. Pollock's method is well suited for Cartesian geometries. However, conventional full-field flow simulators routinely utilize a much richer set of cell geometries than simple rectangles, and the velocity fields they model are often more complex than linear. There are many possible generalizations of Pollock's solution for rectangular cells (Datta-Gupta and King, 2007). Three requirements guide us when examining these choices.
- Flux Continuity: The streamline velocity models used within each cell must provide continuous flux between cells. Not all numerical schemes require flux continuity; only those that do will generate physically correct streamline trajectories.
- Sufficiency: There must be sufficient degrees of freedom in the representation of velocity to satisfy face-flux continuity, represent compressible flow, and represent flow near wells.
- Simplicity: The solutions need to reduce to the known solution for 2D incompressible flow. This is the only specific problem where an explicit streamfunction is available to guide us and to provide a degree of rigor.