Content of PetroWiki is intended for personal use only and to supplement, not replace, engineering judgment. SPE disclaims any and all liability for your use of such content.

The classical tornado chart is obtained by fixing all but one input at some base value and letting the one input vary from its minimum to maximum. A similar graph is used in Monte Carlo simulation software, in which the bar widths represent either rank correlation coefficients or stepwise linear regression coefficients.
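The one-at-a-time procedure behind the classical tornado chart can be sketched in a few lines. The output function and the input ranges below are illustrative assumptions, not data from any particular field:

```python
# Sketch of the classical tornado-chart procedure: hold all inputs at a
# base value and swing one input at a time from its minimum to maximum.
# The model and ranges are hypothetical, for illustration only.

def reserves(area, thickness, porosity):
    """Hypothetical output function (a simple volumetric product)."""
    return area * thickness * porosity

# (min, base, max) for each input; purely illustrative values
ranges = {
    "area":      (800.0, 1000.0, 1300.0),
    "thickness": (40.0, 50.0, 55.0),
    "porosity":  (0.15, 0.20, 0.28),
}

base_args = {name: vals[1] for name, vals in ranges.items()}

bars = {}
for name, (lo, _base, hi) in ranges.items():
    low_out = reserves(**{**base_args, name: lo})
    high_out = reserves(**{**base_args, name: hi})
    bars[name] = (low_out, high_out)

# Sorting by bar width (widest first) produces the tornado shape
for name, (low_out, high_out) in sorted(
        bars.items(), key=lambda kv: abs(kv[1][1] - kv[1][0]), reverse=True):
    print(f"{name:>9}: {low_out:10.1f} .. {high_out:10.1f}")
```

Each bar's endpoints are the output values at that input's minimum and maximum while all other inputs stay at their base values.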
External drift (kriging with an external drift) is a geostatistical linear-regression technique that uses a secondary regionalized variable (e.g., a seismic attribute) to control the shape of the final map created by kriging or simulation. External drift uses a spatial model of covariance.
Monte Carlo simulation is a process of running a model numerous times, each time drawing a random sample from the input distribution for each variable. The results of these numerous scenarios yield a "most likely" case, along with a statistical distribution that conveys the risk or uncertainty involved. Computer programs make it easy to run thousands of random samplings quickly. Monte Carlo simulation begins with a model, often built in a spreadsheet, in which inputs are described by probability distributions and outputs are functions of those inputs. The following description is drawn largely from Murtha.
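A minimal Monte Carlo run can be sketched as follows; the model and the input distributions (triangular, uniform, and normal) are illustrative assumptions standing in for a real spreadsheet model:

```python
import random
import statistics

# Minimal Monte Carlo sketch: sample each input from its distribution,
# evaluate the model, and summarize the output distribution.
# Model and distributions are hypothetical, for illustration only.

random.seed(42)  # reproducible for illustration

def model(area, thickness, porosity):
    """Hypothetical output function of the inputs."""
    return area * thickness * porosity

N = 10_000
outputs = []
for _ in range(N):
    area = random.triangular(800.0, 1300.0, 1000.0)  # (low, high, mode)
    thickness = random.uniform(40.0, 55.0)
    porosity = random.gauss(0.20, 0.03)
    outputs.append(model(area, thickness, porosity))

outputs.sort()
p10, p50, p90 = (outputs[int(N * p)] for p in (0.10, 0.50, 0.90))
print(f"mean={statistics.mean(outputs):.0f}  P10={p10:.0f}  "
      f"P50={p50:.0f}  P90={p90:.0f}")
```

The sorted outputs give the percentiles (P10, P50, P90) commonly quoted from such runs; commercial add-in software automates exactly this loop over the cells of a worksheet.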
An understanding of statistical concepts is important to many aspects of petroleum engineering, especially reservoir modeling and simulation. The discussion below focuses on a range of statistical concepts that engineers may find valuable to understand. The focus here is classical statistics, but differences in the application for geostatistics are included. A quantitative approach requires more than a headlong rush into the data, armed with a computer. Because conclusions from a quantitative study are based at least in part on inferences drawn from measurements, the geoscientist and reservoir engineer must be aware of the nature of the measurement systems with which the data are collected. Measurements are made on one of four scales (nominal, ordinal, interval, and ratio), each more rigorously defined than the one before it. The nominal and ordinal scales classify observations into exclusive categories.
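The practical consequence of the measurement scales is which summary statistics are meaningful. A small sketch, using assumed sample data, makes the distinction concrete:

```python
import statistics

# Illustrative samples on different measurement scales (assumed data).
lithology = ["sand", "shale", "sand", "carbonate", "sand"]  # nominal
sorting = [1, 2, 2, 3, 1]  # ordinal codes: 1=poor, 2=moderate, 3=well
porosity = [0.12, 0.18, 0.21, 0.15, 0.19]                   # ratio

# Nominal scale: only classification and counting are meaningful,
# so the mode is the appropriate "central" value.
print("modal lithology:", statistics.mode(lithology))

# Ordinal scale: order is meaningful, so the median is defined,
# but arithmetic on the codes is not.
print("median sorting class:", statistics.median(sorting))

# Ratio scale: arithmetic is meaningful, so means (and ratios) are valid.
print("mean porosity:", statistics.mean(porosity))
```

Computing a mean of the nominal or ordinal codes would run without error but would not be a meaningful statistic, which is why the scale of each measurement must be identified before analysis.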
The accurate calculation of porosity at the wellbore is essential for an accurate calculation of original oil in place (OOIP) or original gas in place (OGIP) throughout the reservoir. The porosity and its distribution also need to be calculated as accurately as possible because they are almost always directly used in the water saturation (Sw) and permeability calculations and, possibly, in the net pay calculations. In most OOIP and OGIP studies, only the gross-rock-volume uncertainties have a greater influence on the result than porosity does. Occasionally, where porosity estimates are difficult, porosity is the leading uncertainty. Fractured and clay-mineral-rich reservoirs remain a challenge. For this discussion, it is assumed that the core data have been properly adjusted to reservoir conditions, that the data from various logs have been reviewed and validated as needed, and that all of the required depth-alignment work has been completed.
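The sensitivity of OOIP to porosity follows from the standard volumetric equation, OOIP [STB] = 7758 A h φ (1 − Sw) / Bo (A in acres, h in feet). A short sketch with assumed reservoir parameters shows that a relative error in porosity maps one-for-one into OOIP:

```python
# Standard volumetric OOIP equation (field units):
#   OOIP [STB] = 7758 * A[acres] * h[ft] * phi * (1 - Sw) / Bo
# The reservoir parameters below are illustrative assumptions.

def ooip_stb(area_acres, h_ft, phi, sw, bo):
    return 7758.0 * area_acres * h_ft * phi * (1.0 - sw) / bo

base = dict(area_acres=2000.0, h_ft=50.0, phi=0.20, sw=0.30, bo=1.2)

# OOIP is linear in porosity, so a 10% error in phi gives a 10% error
# in OOIP (before counting phi's indirect effect through Sw and net pay).
for phi in (0.18, 0.20, 0.22):
    result = ooip_stb(**{**base, "phi": phi})
    print(f"phi={phi:.2f}  OOIP={result:.3e} STB")
```

Note that this direct linear effect understates the full sensitivity, since, as the text notes, porosity also enters the Sw and permeability correlations and possibly the net pay cutoffs.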
Many approaches to estimating permeability exist. Recognizing the importance of rock type, various petrophysical (grain size, surface area, and pore size) models have been developed. This page explores techniques for applying well logs and other data to the problem of predicting permeability [k or log(k)] in uncored wells. If the rock formation of interest has a fairly uniform grain composition and a common diagenetic history, then log(k)-Φ patterns are simple, straightforward statistical prediction techniques can be used, and reservoir zonation is not required. However, if a field encompasses several lithologies, perhaps with varying diagenetic imprints resulting from varying mineral composition and fluid flow histories, then the log(k)-Φ patterns are scattered, and reservoir zonation is required before predictive techniques can be applied.
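For the simple single-rock-type case described above, prediction can be as basic as an ordinary least-squares fit of log10(k) on porosity from cored wells, then applied to log porosity in uncored wells. The synthetic "core data" below assume one lithology with a clean linear trend plus noise:

```python
import random

# Sketch of the simplest log(k)-porosity prediction: ordinary least
# squares of log10(k) on porosity, fit to synthetic core data that
# assume a single rock type (uniform grain composition, one trend).

random.seed(1)
true_slope, true_intercept = 20.0, -3.0  # assumed underlying trend
phi = [0.05 + 0.25 * random.random() for _ in range(50)]
logk = [true_slope * p + true_intercept + random.gauss(0, 0.2) for p in phi]

n = len(phi)
mx = sum(phi) / n
my = sum(logk) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(phi, logk))
         / sum((x - mx) ** 2 for x in phi))
intercept = my - slope * mx

def predict_k(porosity):
    """Predict permeability (md) in an uncored well from log porosity."""
    return 10 ** (slope * porosity + intercept)

print(f"fit: log10(k) = {slope:.2f}*phi + {intercept:.2f}")
print(f"predicted k at phi=0.20: {predict_k(0.20):.1f} md")
```

When several lithologies or diagenetic imprints are mixed, a single fit like this scatters badly, which is exactly why the text calls for reservoir zonation (one trend per rock type) before applying predictive techniques.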
A Monte Carlo model is, in principle, just a worksheet in which some cells contain probability distributions rather than values. Thus, one can build a Monte Carlo model by converting a deterministic worksheet with the help of commercial add-in software. Practitioners, however, soon find that some of their deterministic models were constructed in a way that makes this transition difficult. Redundancy, hidden formulas, and contorted logic are common features of deterministic models that encumber the resulting Monte Carlo model. Similarly, the presentation of results from a probabilistic analysis might seem no different from any other engineering presentation (problem statement, summary and conclusions, key results, method, and details).
Estimating capital, one of the main ingredients for any cash flow calculation, is largely in the domain of the engineering community. Petroleum engineers are responsible for drilling costs and are often involved with other engineers in estimating costs for pipelines, facilities, and other elements of the infrastructure for the development of an oil/gas field. All practicing engineers have heard horror stories of cost and schedule overruns, and some have even been involved directly with projects that had large overruns. Why did these overruns occur, and what could have been done to encompass the actual cost in the project estimate?
Both the computation of classical statistical measures (e.g., mean, mode, median, variance, standard deviation, and skewness) and graphic data representation (e.g., histograms and scatter plots) commonly are used to understand the nature of data sets in a scientific investigation, including a reservoir study. A distinguishing characteristic of earth-science data sets (e.g., for petroleum reservoirs), though, is that they contain spatial information, which classical statistical descriptive methods cannot adequately describe. Spatial aspects of the data sets, such as the degree of continuity (or, conversely, heterogeneity) and directionality, are very important in developing a reservoir model. Analysis of spatially rich data is within the domain of geostatistics (spatial statistics), but a foundation in classical statistics and probability is prerequisite to understanding geostatistical concepts. Sampling also has proved invaluable in thousands of studies, but it, too, can lead to statistical insufficiencies and biases.
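The point that classical summaries miss spatial information can be illustrated with an experimental semivariogram, the basic geostatistical tool. The two synthetic 1D "transects" below (assumed data) have identical histograms, so their classical statistics agree, yet their spatial continuity differs completely:

```python
import math
import random
import statistics

# Two synthetic 1D transects with identical classical statistics but
# very different spatial continuity (assumed data, regular spacing).
random.seed(0)
values = [0.15 + 0.05 * math.sin(i / 5.0) for i in range(100)]  # smooth
shuffled = values[:]            # same histogram, continuity destroyed
random.shuffle(shuffled)

def semivariogram(z, lag):
    """Experimental semivariance at a given lag for a regular 1D grid."""
    pairs = [(z[i], z[i + lag]) for i in range(len(z) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))

# Mean and variance are identical; only the variogram separates them.
print("means:", statistics.mean(values), statistics.mean(shuffled))
for lag in (1, 5, 10):
    print(f"lag {lag:2d}: smooth={semivariogram(values, lag):.5f}  "
          f"shuffled={semivariogram(shuffled, lag):.5f}")
```

The smooth transect has near-zero semivariance at short lags (nearby samples are alike), while the shuffled one jumps straight to the overall variance; a histogram alone cannot tell the two apart.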