The classical tornado chart is obtained by fixing all but one input at some base value and letting the one input vary from its minimum to its maximum. A similar graph is used in Monte Carlo simulation software, in which the bar widths represent either rank correlation coefficients or stepwise linear regression coefficients.
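As an illustration, here is a minimal sketch of the classical tornado construction; the volumetric model and all input names, base values, and ranges are invented for the example.

```python
import numpy as np

def tornado_ranges(model, base, bounds):
    """Classical tornado chart: hold all inputs at their base values,
    swing one input at a time from its minimum to its maximum,
    and record the resulting output range."""
    base_out = model(base)
    ranges = {}
    for name, (lo, hi) in bounds.items():
        scenario = dict(base)
        scenario[name] = lo
        out_lo = model(scenario)
        scenario[name] = hi
        out_hi = model(scenario)
        ranges[name] = (out_lo, out_hi)
    # Sort by bar width (largest swing first), as drawn on a tornado chart
    ordered = dict(sorted(ranges.items(),
                          key=lambda kv: abs(kv[1][1] - kv[1][0]),
                          reverse=True))
    return ordered, base_out

# Illustrative volumetric model: volume = area * thickness * porosity
model = lambda x: x["area"] * x["thickness"] * x["porosity"]
base = {"area": 1000.0, "thickness": 20.0, "porosity": 0.20}
bounds = {"area": (600.0, 1500.0),
          "thickness": (10.0, 30.0),
          "porosity": (0.12, 0.28)}
print(tornado_ranges(model, base, bounds))
```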
Monte Carlo simulation is a process of running a model many times, each time drawing a random sample from the input distribution of each variable. The results of these many trials can give you a "most likely" case, along with a statistical distribution that characterizes the risk or uncertainty involved. Computer programs make it easy to run thousands of random samplings quickly. Monte Carlo simulation begins with a model, often built in a spreadsheet, having input distributions and output functions of the inputs. The following description is drawn largely from Murtha.
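To make the process concrete, here is a minimal Monte Carlo sketch in Python; the distributions, their parameters, and the volumetric output function are illustrative assumptions, not taken from Murtha.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Illustrative input distributions (all names and parameters are assumed)
area = rng.triangular(600.0, 1000.0, 1500.0, n_trials)        # acres
thickness = rng.triangular(10.0, 20.0, 30.0, n_trials)        # ft
porosity = rng.normal(0.20, 0.03, n_trials).clip(0.05, 0.35)  # fraction

# Output function of the inputs, evaluated once per trial
volume = area * thickness * porosity

# Summary statistics of the resulting output distribution
p10, p50, p90 = np.percentile(volume, [10, 50, 90])
print(f"P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f}  mean={volume.mean():,.0f}")
```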
A Monte Carlo model is, in principle, just a worksheet in which some cells contain probability distributions rather than values. Thus, one can build a Monte Carlo model by converting a deterministic worksheet with the help of commercial add-in software. Practitioners, however, soon find that some of their deterministic models were constructed in a way that makes this transition difficult. Redundancy, hidden formulas, and contorted logic are common features of deterministic models that encumber the resulting Monte Carlo model. Likewise, presentation of results from probabilistic analysis might seem no different from any other engineering presentation (problem statement, summary and conclusions, key results, method, and details).
This glossary was created through discussions among the steering committee for the SPE Global Integrated Workshop Series (GIWS) on Production Forecasting. Some definitions were not contested at all; others generated fierce discussion. Entries include the contract quantity, the contractually agreed volumes and limits (a predefined, typically annual, volume of natural gas at the contract level); a factor applied to forecasts to take into account the fact that a production system will not always operate at 100% of its capacity; and Available But Not Required (ABNR), the part of the IPSC that is available for production but not produced because of low off-take demand.
The oil and gas industry invests significant money and other resources in projects with highly uncertain outcomes. We drill complex wells and build gas plants, refineries, platforms, and pipelines where costly problems can occur and where associated revenues might be disappointing. We may lose our investment; we may make a handsome profit. We are in a risky business. Assessing the outcomes, assigning probabilities of occurrence and associated values, is how we analyze and prepare to manage risk.
Just as there are shortcomings of deterministic models that can be avoided with probabilistic models, the latter have associated pitfalls of their own. Adding uncertainty, by replacing single-estimate inputs with probability distributions, requires the user to exercise caution on several fronts. Without going into exhaustive detail, we offer a couple of illustrations. First, the probabilistic model is more complicated: it demands more documentation and more attention to logical structure.
Dynamic data is information that changes asynchronously as the information is updated. Unlike static data, which is infrequently accessed and unlikely to be modified, or streaming data, which has a constant flow of information, dynamic data involves updates that may come at any time, with sporadic periods of inactivity in between. In the context of reservoir engineering, dynamic data is used during the creation of a reservoir model in conjunction with historical static data. When modeled accurately, any sampling from the conditional distribution would produce accurate static and dynamic characteristics. When a permanence-of-ratios hypothesis is employed, the conditional probability P(A|B,C) can be expressed in terms of P(A), P(A|B), and P(A|C).
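The source does not spell the formula out; one standard form of the permanence-of-ratios hypothesis (after Journel) works in terms of the odds against A, as sketched below.

```latex
% Odds against A: prior, conditioned on B, and conditioned on C
a = \frac{1 - P(A)}{P(A)}, \qquad
b = \frac{1 - P(A \mid B)}{P(A \mid B)}, \qquad
c = \frac{1 - P(A \mid C)}{P(A \mid C)}

% Permanence-of-ratios hypothesis: the incremental contribution of C
% is the same whether or not B is known
\frac{x}{b} = \frac{c}{a},
\quad\text{where}\quad
x = \frac{1 - P(A \mid B, C)}{P(A \mid B, C)},

% which yields the combined conditional probability
P(A \mid B, C) = \frac{1}{1 + x} = \frac{a}{a + bc}.
```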
Integrating time-lapse seismic data into a dynamic reservoir model is an efficient way to calibrate reservoir-parameter updates. The choice of the metric that measures the misfit between observed data and the simulated model has a considerable effect on the history matching process, and hence on the optimal ensemble of models obtained. History matching using 4D seismic and production data simultaneously remains a challenge because of the different natures of the two data types (time series versus maps or volumes).
Conventionally, the misfit is formulated as a least-squares function, which is widely used for production data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proven reliable. This study explores the history matching process by introducing a merged objective function combining the production and the 4D seismic data. The approach proposed in this paper makes these two types of data (well and seismic) comparable within a single objective function to be optimised, thereby avoiding the question of weights. An adaptive evolutionary optimisation algorithm has been used for the history matching loop. Local and global reservoir parameters are perturbed in this process, including porosity, permeability, net-to-gross, and fault transmissibility.
This production and seismic history matching has been applied to a UKCS field, showing that an acceptable production data match is achieved while honouring saturation information obtained from 4D seismic surveys.
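The paper's exact formulation is not reproduced here; the sketch below shows one common way to merge the two misfits without explicit weights, normalising each term by its data uncertainty so that both are dimensionless and directly comparable. All names are illustrative.

```python
import numpy as np

def normalised_misfit(observed, simulated, sigma):
    """Least-squares misfit normalised by data uncertainty, so that terms
    built from different data types are dimensionless (each term is ~1 per
    datum when residuals are on the order of sigma)."""
    r = (observed - simulated) / sigma
    return float(r @ r) / r.size

def merged_objective(prod_obs, prod_sim, sigma_prod,
                     seis_obs, seis_sim, sigma_seis):
    # Production (time-series) term plus 4D-seismic (map/volume) term;
    # the normalisation replaces explicit weighting between the two.
    return (normalised_misfit(prod_obs, prod_sim, sigma_prod)
            + normalised_misfit(seis_obs.ravel(), seis_sim.ravel(), sigma_seis))
```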
Nandi Formentin, Helena (Durham University and University of Campinas) | Vernon, Ian (Durham University) | Avansi, Guilherme Daniel (University of Campinas) | Caiado, Camila (Durham University) | Maschio, Célio (University of Campinas) | Goldstein, Michael (Durham University) | Schiozer, Denis José (University of Campinas)
Reservoir simulation models incorporate physical laws and reservoir characteristics; they represent our understanding of subsurface structures based on the available information. Emulators are statistical representations of simulation models, offering fast evaluations of a sufficiently large number of reservoir scenarios to enable a full uncertainty analysis. Bayesian History Matching (BHM) aims to find the range of reservoir scenarios that are consistent with the historical data, in order to provide a comprehensive evaluation of reservoir performance and consistent, unbiased predictions incorporating realistic levels of uncertainty, as required for full asset management. We describe a systematic approach that combines reservoir simulation and emulation techniques within a coherent Bayesian framework for uncertainty quantification.
Our systematic procedure is an alternative and more rigorous tool for reservoir studies dealing with probabilistic uncertainty reduction. It comprises the design of sets of simulation scenarios that facilitate the construction of emulators capable of accurately mimicking the simulator with known levels of uncertainty. Emulators are used to accelerate the steps that require large numbers of evaluations across the input space, as needed for statistical validity. Via implausibility measures, we compare emulated outputs with historical data while incorporating the major process uncertainties. We then iteratively identify regions of input parameter space unlikely to provide acceptable matches, performing more runs and constructing more accurate emulators at each wave, an approach that benefits from several efficiency improvements. We provide a workflow covering each stage of this procedure.
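As a concrete illustration of the implausibility measure used to cut down input space, consider the sketch below; the emulator interface, the variance terms, and the cutoff of 3 (a common choice following the three-sigma rule) are assumptions for the example, not necessarily the authors' exact choices.

```python
import numpy as np

def implausibility(z, emu_mean, emu_var, obs_var, discrepancy_var):
    """Implausibility I(x): standardised distance between the observed
    datum z and the emulator prediction at input x, accounting for
    emulator uncertainty, observation error, and model discrepancy."""
    total_var = emu_var + obs_var + discrepancy_var
    return np.abs(z - emu_mean) / np.sqrt(total_var)

def non_implausible(xs, predict, z, obs_var, discrepancy_var, cutoff=3.0):
    """Keep only inputs with I(x) <= cutoff; the next wave runs the
    simulator and refits emulators on the space that remains."""
    mean, var = predict(xs)  # emulator mean and variance, one per input x
    return xs[implausibility(z, mean, var, obs_var, discrepancy_var) <= cutoff]
```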
The procedure was applied to reduce uncertainty in a complex reservoir case study with 25 injection and production wells. The case study contains 26 uncertain attributes representing petrophysical, rock-fluid, and fluid properties. We selected phases of evaluation considering specific events during reservoir management, improving the efficiency of simulation resource use. We identified and addressed data patterns untracked in previous studies: simulator targets,
We advance the applicability of Bayesian History Matching for reservoir studies with four deliverables: (a) a general workflow for systematic BHM; (b) the use of phases to progressively evaluate the historical data; (c) the integration of two-class emulators in the BHM formulation; and (d) the demonstration of internal discrepancy as a source of error in the reservoir model.
The work presented in this paper focuses on the history matching of reservoirs by integrating 4D seismic data into the inversion process using machine learning techniques. A new integrated scheme for the reconstruction of petrophysical properties with a modified Ensemble Smoother with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed. The permeability field inside the reservoir is parametrised with an unsupervised learning approach, namely K-SVD, a K-means-like dictionary-learning method based on the Singular Value Decomposition. This is combined with the Orthogonal Matching Pursuit (OMP) technique, which is typical of sparsity-promoting regularisation schemes. Moreover, seismic attributes, in particular acoustic impedance, are parametrised with the Discrete Cosine Transform (DCT). This novel combination of techniques from machine learning, sparsity regularisation, seismic imaging, and history matching aims to address the ill-posedness of the inversion of historical production data efficiently using ES-MDA. In the numerical experiments provided, I demonstrate that these sparse representations of the petrophysical properties and the seismic attributes enable better matches to the true production data and better quantification of the propagating waterfront than more traditional methods that do not use comparable parametrisation techniques.
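For orientation, a sketch of a standard (unmodified) ES-MDA analysis step is given below; the author's modified scheme and the K-SVD/OMP/DCT parametrisation details are not reproduced, and all names are illustrative.

```python
import numpy as np

def esmda_update(M, D, d_obs, C_d, alpha, rng):
    """One standard ES-MDA analysis step on an ensemble.
    M: (n_params, n_ens) parameter ensemble (e.g., DCT or K-SVD coefficients)
    D: (n_data, n_ens) simulated data for each ensemble member
    d_obs: (n_data,) observed data; C_d: (n_data, n_data) data covariance
    alpha: inflation factor for this step (the 1/alpha_i must sum to 1)
    """
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)                 # parameter-data cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)                 # data auto-covariance
    # Perturb observations with inflated noise, one realisation per member
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_d, n_ens).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_d)   # Kalman-like gain
    return M + K @ (d_obs[:, None] + noise - D)
```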