ABSTRACT We addressed the problem of the well-to-seismic tie as a Bayesian inversion for the wavelet and well path in the impedance domain. The result of the joint inversion is a set of wavelets for multiple angle stacks and a corresponding well path. The wavelet optimally links the impedance data along the well to the seismic data along the optimized well path in the seismic time domain. Starting with prior distributions for the wavelet and well path, the method calculates the posterior distribution by conditioning the priors on the seismic and well-log data. This is done by iteratively inverting the seismic data with the current wavelet to obtain an impedance cube around the well. In a second step, the seismic impedances are projected onto the well path. The wavelet and well path are then optimized by minimizing the misfit between the inverted seismic impedances and the impedances derived from the well log. Comparing the well and seismic data in the impedance domain enables the method to work on short and noisy well logs. Another advantage of this method is its ability to derive wavelets for multiple angle stacks and multiple well locations simultaneously. We tested the method on synthetic and real data examples. The algorithm performed well in the synthetic examples, in which we had control over the modeling wavelet, and the wavelets derived for a real data example showed consistently good seismic-to-well ties for six angle stacks and seven wells. The main algorithm we developed linearizes the problem. We compared the posterior distribution of the linearized result with a sampling-based result in a real data example and found good agreement.
- Europe > United Kingdom (0.28)
- Europe > Denmark (0.28)
- North America > United States > Texas > Borden County (0.24)
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Interpretation > Well Tie (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (0.69)
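As a rough illustration of the alternating update the abstract above describes (invert the seismic with the current wavelet, project impedances onto the well path, minimize the impedance-domain misfit), here is a minimal sketch in which the well path is held fixed, so the wavelet step reduces to linear least squares. The convolutional forward model and all names are assumptions for illustration, not the authors' implementation.

```python
# Toy wavelet estimation with the well path held fixed. In the full method this
# least-squares step alternates with an update of the well path itself.
import numpy as np
from scipy.linalg import convolution_matrix

rng = np.random.default_rng(0)
n_t, n_w = 200, 31

# Manufacture a "true" well-log impedance, a Ricker wavelet, and a seismic trace.
imp_well = 5000.0 + np.cumsum(rng.normal(0.0, 30.0, n_t))
t = (np.arange(n_w) - n_w // 2) * 0.004                    # 4 ms sampling
wav_true = (1 - 2 * (np.pi * 25 * t) ** 2) * np.exp(-(np.pi * 25 * t) ** 2)

refl = np.diff(imp_well) / (imp_well[1:] + imp_well[:-1])  # reflectivity
seis = np.convolve(refl, wav_true, mode="same") + rng.normal(0, 1e-3, n_t - 1)

# With an identity projection onto the well path, the wavelet estimate is a
# linear least-squares solve: find w such that C @ w ~= seis.
C = convolution_matrix(refl, n_w, mode="same")
wav_est, *_ = np.linalg.lstsq(C, seis, rcond=None)
print("wavelet correlation:", np.corrcoef(wav_est, wav_true)[0, 1])
```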
Summary Conventional AVO inversion utilizes seismic data alone. By including prior information on the subsurface model, however, the AVO inversion presented herein arrives at an optimal posterior model of the subsurface - optimal in the sense that it combines the seismic data and the prior information, weighted with their respective uncertainties, to produce the posterior model with the smallest uncertainty. Furthermore, the AVO uncertainty itself is independent of the actual data values and, hence, is ideally suited for survey evaluation and design (SED). In particular, a survey can be designed so that the AVO uncertainty in a future seismic acquisition meets a preset threshold, for example on a time-lapse AVO change.
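The claim that the AVO uncertainty is independent of the actual data, and hence usable for survey design, follows directly from the linear-Gaussian posterior covariance, which depends only on the acquisition geometry and the noise and prior covariances. A minimal sketch, assuming a two-term Shuey-type AVO linearization and illustrative covariance values:

```python
# Posterior covariance of (intercept, gradient) for a candidate acquisition.
# No data vector appears, so this can be evaluated before the survey is shot.
import numpy as np

angles = np.deg2rad([5, 15, 25, 35])                 # candidate angle stacks
G = np.column_stack([np.ones_like(angles), np.sin(angles) ** 2])  # d = R0 + G*sin^2

C_prior = np.diag([0.02, 0.05]) ** 2                 # assumed prior covariance
C_noise = (0.01 ** 2) * np.eye(len(angles))          # assumed data noise

# Linear-Gaussian posterior covariance: (G^T C_n^-1 G + C_prior^-1)^-1
C_post = np.linalg.inv(G.T @ np.linalg.solve(C_noise, G) + np.linalg.inv(C_prior))
print("posterior std of (R0, G):", np.sqrt(np.diag(C_post)))
```

A survey-design loop would repeat this evaluation over candidate angle ranges until the posterior standard deviation falls below the preset threshold.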
Summary Accurate reservoir simulation requires data-rich geomodels. In this paper, geomodels integrate stochastic seismic inversion results (for means and variances of packages of meter-scale beds), geologic modeling (for a framework and priors), rock physics (to relate seismic to flow properties), and geostatistics (for spatially correlated variability). These elements are combined in a Bayesian framework. The proposed workflow produces models with plausible bedding geometries, where each geomodel agrees with seismic data to the level consistent with the signal-to-noise ratio of the inversion. An ensemble of subseismic models estimates the means and variances of properties throughout the flow simulation grid. Grid geometries with possible pinchouts can be simulated using auxiliary variables in a Markov chain Monte Carlo (MCMC) method. Efficient implementations of this method require a posterior covariance matrix for layer thicknesses. Under assumptions that are not too restrictive, the inverse of the posterior covariance matrix can be approximated as a Toeplitz matrix, which makes the MCMC calculations efficient. The proposed method is examined using two-layer examples. Then, convergence is demonstrated for a synthetic 3D, 10,000-trace, 10-layer cornerpoint model. Performance is acceptable. The Bayesian framework introduces plausible subseismic features into flow models, whilst avoiding overconstraining to seismic data, well data, or the conceptual geologic model. The methods outlined in this paper for honoring probabilistic constraints on total thickness are general and need not be confined to thickness data obtained from seismic inversion: any spatially dense estimates of total thickness and its variance can be used, or the truncated geostatistical model could be used without any dense constraints.

Introduction Reservoir simulation models are constructed from sparse well data and dense seismic data, using geologic concepts to constrain stratigraphy and property variations. Reservoir models should integrate sparse, precise well data and dense, imprecise seismic data. Because of the sparseness of well data, stochastically inverted seismic data can improve estimates of reservoir geometry and average properties. Although seismic data are densely distributed compared to well data, they are uninformative about meter-scale features. Beds thinner than about 1/8 to 1/4 of the dominant seismic wavelength cannot be resolved in seismic surveys (Dobrin and Savit 1988; Widess 1973). For depths of ~3,000 m, the maximum frequency in the signal is typically about 40 Hz, and for average velocities of ~2,000 m/s, this translates to a best resolution of about 10 m. Besides the limited resolution, seismic-derived depths and thicknesses are uncertain because of noise in the seismic data and uncertainty in the rock physics models (Gunning and Glinsky 2004, 2006). This resolution limit and the uncertainties associated with seismic depth and thickness estimates have commonly limited the use of seismic data to either inferring the external geometry or guiding modeling of plausible stratigraphic architectures of reservoirs (Deutsch et al. 1996). In contrast, well data reveal fine-scale features but cannot specify interwell geometry. To build a consistent model, conceptual stacking and facies models must be constrained by well and seismic data. The resulting geomodels must be gridded for flow simulation using methods that describe stratal architecture flexibly and efficiently.
- Europe (0.67)
- North America > United States > Texas (0.47)
- North America > United States > Louisiana (0.28)
- Geology > Geological Subdiscipline > Geomechanics (0.68)
- Geology > Geological Subdiscipline > Stratigraphy (0.68)
- Oceania > Australia > Northern Territory > Timor Sea > Bonaparte Basin > Petrel Basin > Sahul Platform > Block WA-6-R > Petrel Field > Torrens Formation (0.98)
- Oceania > Australia > Northern Territory > Timor Sea > Bonaparte Basin > Petrel Basin > Sahul Platform > Block WA-6-R > Petrel Field > Pearce Formation (0.98)
- Oceania > Australia > Northern Territory > Timor Sea > Bonaparte Basin > Petrel Basin > Sahul Platform > Block WA-6-R > Petrel Field > Cape Hay Formation (0.98)
- (9 more...)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
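The efficiency argument in the summary above, that an (approximately) Toeplitz inverse posterior covariance makes the MCMC calculations cheap, can be sketched as follows: the quadratic form in the Metropolis acceptance ratio needs only Toeplitz matrix-vector products, which cost O(n log n) via FFT rather than O(n^2) with a dense matrix. The precision values, proposal scale, and omission of the pinchout (truncation) auxiliary variables are all simplifying assumptions:

```python
# Random-walk Metropolis on layer thicknesses with a Toeplitz precision matrix Q.
import numpy as np
from scipy.linalg import matmul_toeplitz

rng = np.random.default_rng(1)
n = 2000                       # layer-thickness unknowns along a line of traces
col = np.zeros(n)
col[:3] = [2.5, -1.0, 0.1]     # assumed banded, symmetric Toeplitz precision Q

def log_dens(r):
    # -0.5 * r^T Q r with Q Toeplitz: FFT-based matvec, O(n log n) per evaluation
    return -0.5 * r @ matmul_toeplitz((col, col), r)

mu = np.full(n, 10.0)          # assumed posterior-mean thicknesses, meters
h = mu.copy()
accepted = 0
for _ in range(200):           # Metropolis steps (truncation at zero omitted)
    prop = h + rng.normal(0.0, 0.01, n)
    if np.log(rng.uniform()) < log_dens(prop - mu) - log_dens(h - mu):
        h, accepted = prop, accepted + 1
print("acceptance rate:", accepted / 200)
```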
Reduced-Degrees-of-Freedom Gaussian-Mixture-Model Fitting for Large-Scale History-Matching Problems
Gao, Guohua (Shell Global Solutions US) | Jiang, Hao (Shell Global Solutions US) | Chen, Chaohui (Shell Exploration and Production Company) | Vink, Jeroen C. (Shell Global Solutions International) | El Khamra, Yaakoub (Shell Global Solutions US) | Ita, Joel (Shell Global Solutions US) | Saaf, Fredrik (Shell Global Solutions US)
Summary It has been demonstrated that the Gaussian-mixture-model (GMM) fitting method can construct a GMM that more accurately approximates the posterior probability density function (PDF) of reservoir models conditioned to production data. However, the number of degrees of freedom (DOFs) for all unknown GMM parameters can become very large for large-scale history-matching problems. A new formulation of GMM fitting with a reduced number of DOFs is proposed in this paper to save memory and reduce computational cost. The performance of the new method is benchmarked against other methods using test problems with different numbers of uncertain parameters. The new method performs more efficiently than the full-rank GMM fitting formulation, reducing memory use and computational cost by a factor of 5 to 10. Although it is less efficient than the simple GMM approximation based on local linearization (L-GMM), it achieves much higher accuracy, reducing the error by a factor of 20 to 600. Finally, the new method, together with the parallelized acceptance/rejection (A/R) algorithm, is applied to a synthetic history-matching problem for demonstration.
- Europe (1.00)
- North America > United States > Texas (0.28)
- Information Technology > Modeling & Simulation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models (0.69)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.68)
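To make the degrees-of-freedom point in the summary above concrete: a full covariance matrix per mixture component scales quadratically in the number of uncertain parameters, while a low-rank-plus-scalar parameterization scales linearly. The factorization below is an illustrative assumption, not the paper's exact reduced formulation:

```python
# Compare covariance DOFs per GMM component, full vs. low-rank-plus-scalar,
# and sample from a reduced component without forming the m x m covariance.
import numpy as np

m, r, K = 10_000, 20, 5        # model size, retained rank, mixture components

full_dof = m * (m + 1) // 2            # symmetric m x m covariance
reduced_dof = m * r + 1                # basis U (m x r) plus one scalar variance
print(f"full covariance DOFs   : {K * full_dof:,}")
print(f"reduced covariance DOFs: {K * reduced_dof:,}")

# Draw x ~ N(mu, sigma^2 I + U U^T) using only matrix-vector products.
rng = np.random.default_rng(0)
U = rng.normal(size=(m, r)) / np.sqrt(m)   # assumed low-rank factor
mu, sigma = np.zeros(m), 0.1
x = mu + sigma * rng.normal(size=m) + U @ rng.normal(size=r)
```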
Abstract The ensemble Kalman filter (EnKF) has recently become a popular history-matching tool, largely because of its computational efficiency and ease of implementation. While EnKF has improved previous, manually obtained history matches in several field cases, and often appears to give reasonable results for realistic synthetic history-matching problems, one cannot theoretically establish that EnKF correctly samples the posterior probability distribution for the reservoir model parameters, or correctly assesses the uncertainty in the production forecast, unless strong assumptions of Gaussianity and linearity apply. In multiphase flow problems, the relationship between the data and the reservoir model parameters and primary simulation variables is highly nonlinear, so the theoretical justification for obtaining a correct assessment of the uncertainty in model parameters and future performance predictions does not hold. On the other hand, it is well known that the Markov chain Monte Carlo (MCMC) method samples correctly in the limit; however, its direct application to reservoir problems can be computationally prohibitive. Here we use a variant of EnKF, the ensemble square root filter (EnSRF), to provide an initial sample of the posterior PDF and to propose transitions for the MCMC method. We present a synthetic reservoir case and show that the combined application of EnSRF and MCMC provides improved history-matching results and a better characterization of uncertainty in the predicted reservoir performance.
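A toy sketch of the combination this abstract describes: an ensemble (standing in for the EnSRF analysis ensemble) supplies the chain's starting point and the proposal scale, and Metropolis-Hastings then corrects for nonlinearity and non-Gaussianity. The target density and ensemble here are assumptions replacing the reservoir simulator:

```python
# Ensemble-informed random-walk Metropolis on a non-Gaussian toy posterior.
import numpy as np

rng = np.random.default_rng(42)

def log_post(x):
    # Stand-in for log p(m | d): a banana-shaped, non-Gaussian target
    return -0.5 * (x[0] ** 2 + 5.0 * (x[1] - x[0] ** 2) ** 2)

# Pretend EnSRF output: an ensemble roughly centered on the posterior
ens = rng.normal([0.0, 0.5], [1.0, 1.5], size=(100, 2))
L = np.linalg.cholesky(np.cov(ens.T))   # proposal scale from ensemble spread

x, lp = ens[0], log_post(ens[0])        # start the chain at an ensemble member
samples = []
for _ in range(5000):
    prop = x + 0.5 * L @ rng.normal(size=2)   # symmetric, ensemble-shaped step
    lpp = log_post(prop)
    if np.log(rng.uniform()) < lpp - lp:      # Metropolis accept/reject
        x, lp = prop, lpp
    samples.append(x)
print("posterior mean estimate:", np.mean(samples, axis=0))
```

Because the proposal is symmetric, the accept/reject step needs only the log-posterior difference; the ensemble enters solely through the starting point and the proposal covariance.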