The classical tornado chart is obtained by fixing all but one input at some base value and letting that one input vary from its minimum to maximum. A similar graph is used in Monte Carlo simulation software, in which the bar widths represent either rank correlation coefficients or stepwise linear regression coefficients.
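As an illustration of that one-at-a-time construction, the sketch below computes the bars of a classical tornado chart for a hypothetical three-input model; the model, base values, and min/max ranges are placeholders rather than results from any particular study.

```python
# Minimal sketch of a classical (one-at-a-time) tornado chart calculation.
# The model, base values, and ranges below are hypothetical illustrations.
import numpy as np

def model(area, thickness, porosity):
    # Hypothetical response: a simple product-type output.
    return area * thickness * porosity

base = {"area": 1000.0, "thickness": 50.0, "porosity": 0.20}
ranges = {"area": (600.0, 1400.0), "thickness": (30.0, 70.0), "porosity": (0.12, 0.28)}

base_out = model(**base)
bars = {}
for name, (lo_val, hi_val) in ranges.items():
    # Hold every other input at its base value; vary one input from min to max.
    low = model(**{**base, name: lo_val})
    high = model(**{**base, name: hi_val})
    bars[name] = (low, high)

# The widest bars sit at the top of the tornado plot.
for name, (low, high) in sorted(bars.items(), key=lambda kv: -abs(kv[1][1] - kv[1][0])):
    print(f"{name:10s}  low={low:12.1f}  base={base_out:12.1f}  high={high:12.1f}")
```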
A geostatistical linear-regression technique that uses a secondary regionalized variable (e.g., a seismic attribute) to control the shape of the final map created by kriging or simulation. External drift uses a spatial model of covariance.
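The sketch below illustrates the idea for a single estimation location: the kriging system is augmented with the secondary (drift) variable so that the estimate honors its local value. The exponential covariance model, well locations, porosity values, and seismic-attribute values are all hypothetical placeholders.

```python
# Minimal sketch of a kriging-with-external-drift estimate at one location,
# assuming an exponential covariance model; all input values are hypothetical.
import numpy as np

def exp_cov(h, sill=1.0, rng=500.0):
    # Exponential spatial covariance model.
    return sill * np.exp(-h / rng)

xy = np.array([[100.0, 200.0], [400.0, 150.0], [250.0, 450.0], [500.0, 500.0]])
z = np.array([0.18, 0.22, 0.15, 0.25])   # primary variable at wells (e.g., porosity)
f = np.array([0.30, 0.45, 0.20, 0.55])   # secondary (drift) variable at wells
x0 = np.array([300.0, 300.0])            # estimation location
f0 = 0.40                                # drift value at the estimation location

n = len(z)
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)   # data-data distances
d0 = np.linalg.norm(xy - x0, axis=1)                          # data-target distances

# Kriging system augmented by a constant term and the drift variable.
A = np.zeros((n + 2, n + 2))
A[:n, :n] = exp_cov(d)
A[:n, n] = A[n, :n] = 1.0
A[:n, n + 1] = A[n + 1, :n] = f
b = np.concatenate([exp_cov(d0), [1.0, f0]])

w = np.linalg.solve(A, b)
z0 = w[:n] @ z
print(f"estimate at {x0}: {z0:.3f}")
```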
Monte Carlo simulation is a process of running a model numerous times with a random selection from the input distributions for each variable. The results of these numerous scenarios can give you a "most likely" case, along with a statistical distribution to understand the risk or uncertainty involved. Computer programs make it easy to run thousands of random samplings quickly. Monte Carlo simulation begins with a model, often built in a spreadsheet, having input distributions and output functions of the inputs. The following description is drawn largely from Murtha.
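The sketch below shows the basic loop, assuming a simple spreadsheet-style cost model; the line items, distributions, and parameter values are hypothetical placeholders rather than values from Murtha.

```python
# Minimal sketch of a Monte Carlo simulation: draw each input from its
# distribution, evaluate the model, and repeat many times.
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 10_000

# Input distributions (one random draw per variable per trial).
drilling_days = rng.triangular(20.0, 30.0, 55.0, n_trials)
rig_rate = rng.normal(80_000.0, 10_000.0, n_trials)      # $/day
completion = rng.uniform(1.5e6, 3.0e6, n_trials)         # $

# Output function of the inputs (the "model"), evaluated once per trial.
total_cost = drilling_days * rig_rate + completion

# The many trial outcomes form a distribution that quantifies uncertainty.
p10, p50, p90 = np.percentile(total_cost, [10, 50, 90])
print(f"P10 = ${p10:,.0f}   P50 = ${p50:,.0f}   P90 = ${p90:,.0f}")
print(f"mean = ${total_cost.mean():,.0f}   std = ${total_cost.std():,.0f}")
```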
Estimates for which there is a correlation between the standardized errors and the estimated values (see Cross-validation) or for which a histogram of the standardized errors is skewed. Either condition suggests a bias in the estimates, so that one area of the map may always show estimates that are higher (or lower) than expected.
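Both checks are straightforward to compute once cross-validation results are available. The sketch below uses synthetic values for the withheld data, the estimates, and the estimation standard deviations purely for illustration.

```python
# Minimal sketch of the two diagnostic checks described above, assuming
# cross-validation has already produced estimates, true values, and estimation
# standard deviations at the withheld locations (synthetic values here).
import numpy as np
from scipy.stats import pearsonr, skew

rng = np.random.default_rng(seed=2)
true = rng.normal(0.20, 0.05, 200)              # withheld "true" values
estimate = true + rng.normal(0.0, 0.02, 200)    # cross-validation estimates
est_std = np.full(200, 0.02)                    # estimation standard deviations

standardized_error = (estimate - true) / est_std

# Check 1: correlation between standardized errors and the estimated values.
r, p_value = pearsonr(standardized_error, estimate)
# Check 2: skewness of the standardized-error histogram.
g1 = skew(standardized_error)

print(f"correlation(error, estimate) = {r:+.3f} (p = {p_value:.3f})")
print(f"skewness of standardized errors = {g1:+.3f}")
# A large correlation or strong skewness would flag conditionally biased estimates.
```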
An understanding of statistical concepts is important to many aspects of petroleum engineering, but especially reservoir modeling and simulation. The discussion below focuses on a range of statistical concepts that engineers may find valuable to understand. The focus here is classical statistics, but differences in its application to geostatistics are noted. A quantitative approach requires more than a headlong rush into the data, armed with a computer. Because conclusions from a quantitative study are based at least in part on inferences drawn from measurements, the geoscientist and reservoir engineer must be aware of the nature of the measurement systems with which the data are collected. Measurements are commonly classified into nominal, ordinal, interval, and ratio scales, and each of these scales is more rigorously defined than the one before it. The nominal and ordinal scales classify observations into exclusive categories.
The accurate calculation of porosity at the wellbore is essential for an accurate calculation of original oil in place (OOIP) or original gas in place (OGIP) throughout the reservoir. The porosity and its distribution also need to be calculated as accurately as possible because they are almost always directly used in the water saturation (Sw) and permeability calculations and, possibly, in the net pay calculations. In most OOIP and OGIP studies, only the gross-rock-volume uncertainties have a greater influence on the result than porosity does. Occasionally, where porosity estimates are difficult, porosity is the leading uncertainty. Fractured and clay-mineral-rich reservoirs remain a challenge. For this discussion, it is assumed that the core data have been properly adjusted to reservoir conditions, that the data from various logs have been reviewed and validated as needed, and that all of the required depth-alignment work has been completed.
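To make the direct dependence explicit, the sketch below evaluates the standard volumetric OOIP equation (7,758 · A · h · φ · (1 - Sw) / Bo in oilfield units) for hypothetical input values. It captures only the direct, linear effect of porosity on OOIP, not its indirect influence through the Sw, permeability, and net-pay calculations mentioned above.

```python
# Minimal sketch of how porosity feeds directly into a volumetric OOIP
# estimate (oilfield units); the input values are hypothetical.
def ooip_stb(area_acres, net_pay_ft, porosity, sw, bo):
    """Volumetric original oil in place, stock-tank barrels.

    7,758 bbl/acre-ft converts acre-feet of pore volume to reservoir barrels.
    """
    return 7758.0 * area_acres * net_pay_ft * porosity * (1.0 - sw) / bo

# OOIP is linear in porosity, so a 0.01 porosity error shifts the result
# by the same relative amount (5% here), all else held constant.
print(f"{ooip_stb(1000.0, 50.0, 0.20, 0.30, 1.2):,.0f} STB")  # base porosity
print(f"{ooip_stb(1000.0, 50.0, 0.21, 0.30, 1.2):,.0f} STB")  # porosity + 0.01
```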
Uncertainty range in production forecasting gives an introduction to uncertainty analysis in production forecasting, including a PRMS-based definition of low, best, and high production forecasts. This page builds on that introduction with more detail on how to approach uncertainty analysis as part of creating production forecasts. Probabilistic subsurface assessments are the norm on the exploration side of the oil and gas industry, in majors and independents alike. In many companies, however, the production side is still in transition from single-valued deterministic assessments, sometimes carried out with ad hoc sensitivity studies, to more rigorous probabilistic assessments with an auditable trail of assumptions and a statistical underpinning. Reflecting these changes in practice and technology, the SEC rules for reserves reporting were revised (effective 1 January 2010), in line with PRMS, to allow the use of both probabilistic and deterministic methods and to allow reporting of reserves categories other than "proved." This section presents some of the challenges facing probabilistic assessments and some practical considerations for carrying them out effectively. Note that, for simplicity, the examples referred to in this section address calculating OOIP rather than generating probabilistic production forecasts directly; OOIP/GOIP is the starting point of any production forecast and gives a firm basis from which to build one.
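As a concrete illustration of the kind of probabilistic OOIP assessment referred to above, the sketch below propagates hypothetical input distributions through the volumetric equation and reports low/best/high estimates. The distributions, their parameters, and the exceedance-percentile convention (P90 reported as the low case) are assumptions made for illustration; PRMS and local reporting practice should govern the actual conventions used.

```python
# Minimal sketch of a probabilistic OOIP assessment reported as low/best/high.
# All distributions and parameters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(seed=3)
n = 50_000

area = rng.lognormal(mean=np.log(1000.0), sigma=0.25, size=n)   # acres
net_pay = rng.triangular(30.0, 50.0, 70.0, n)                   # ft
porosity = rng.normal(0.20, 0.03, n)                            # fraction
sw = rng.uniform(0.25, 0.40, n)                                 # fraction
bo = rng.normal(1.2, 0.05, n)                                   # RB/STB

ooip = 7758.0 * area * net_pay * porosity * (1.0 - sw) / bo     # STB

# Cumulative 10th/50th/90th percentiles; with the "at least" (exceedance)
# convention these are reported as P90 (low), P50 (best), and P10 (high).
low, best, high = np.percentile(ooip, [10, 50, 90])
print(f"low  (P90) = {low:,.0f} STB")
print(f"best (P50) = {best:,.0f} STB")
print(f"high (P10) = {high:,.0f} STB")
```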
Empirical methods are quantitative, statistically based relationships that allow one to compare performance against a collection of analogous reservoirs using specific reservoir properties. As Production forecasting analog methods notes, analog methods are generally more qualitative in nature, but it is often possible to derive equations relating reservoir parameters to performance indicators. Doing so narrows the range of outcomes rather than using the entire range of analog values. Empirical forecasts can be highly reliable indicators of performance, depending on the relevance of the analog data set used to derive the relationships, the quality of the correlations, the quality and reliability of the reservoir data, and the similarity of development conditions between the fields of the analog data set and the reservoir under consideration. Several empirical relationships in the literature are often used for quick performance predictions.
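A minimal sketch of deriving such a relationship is shown below: a least-squares fit of recovery factor against porosity and the logarithm of permeability over a small, hypothetical analog data set, then applied to the reservoir under consideration. The variables, values, and functional form are illustrative assumptions, not an established published correlation.

```python
# Minimal sketch of deriving an empirical relationship from an analog data set:
# linear least squares of recovery factor on two reservoir properties.
# The analog values below are hypothetical.
import numpy as np

# Analog fields: porosity (fraction), log10 permeability (md), recovery factor.
porosity = np.array([0.15, 0.22, 0.18, 0.25, 0.12, 0.20])
log_perm = np.log10(np.array([50.0, 400.0, 120.0, 800.0, 20.0, 250.0]))
recovery = np.array([0.22, 0.38, 0.28, 0.45, 0.17, 0.33])

# Design matrix with an intercept term; solve for the coefficients.
X = np.column_stack([np.ones_like(porosity), porosity, log_perm])
coef, *_ = np.linalg.lstsq(X, recovery, rcond=None)

# Apply the fitted relationship to the reservoir under consideration.
phi_new, k_new = 0.19, 300.0
rf_pred = coef @ np.array([1.0, phi_new, np.log10(k_new)])
print(f"coefficients = {np.round(coef, 3)}")
print(f"predicted recovery factor = {rf_pred:.2f}")
```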
Reserves may be estimated with either deterministic or probabilistic procedures, and each is discussed briefly in the next two sections. Thereafter, except for another section on probabilistic procedures near the end, the chapter focuses on deterministic procedures because they are still more widely used. Both procedures need the same basic data and equations. Reserves calculated with deterministic procedures are classified subjectively, on the basis of professional judgment of the uncertainty in each reserve estimate and/or of pertinent regulatory and/or corporate guidelines. Probabilistic procedures, in contrast, explicitly recognize that the uncertainties in the input data and in the equations used to calculate reserves may be significant.