
Removal of multiples from seismic reflection data is an important preprocessing step before conventional seismic imaging and inversion in most onshore and offshore environments. While many methods have been developed and successfully used to remove free-surface multiples, internal multiple attenuation remains a challenge in land and complex marine environments. The inverse scattering series (ISS) internal multiple algorithm is a data-driven tool that predicts all orders of internal multiples for all horizons at once, without requiring subsurface information. However, use of the multidimensional version of this algorithm has been limited by its high computational cost, which increases with the maximum output frequency of the prediction. Even with recent advances in computer hardware, the cost of the multidimensional algorithm remains high. To overcome this problem, we use a quasi-Monte Carlo integration technique that can significantly improve the computational efficiency of the multidimensional ISS internal multiple algorithm. The efficiency is improved by reducing the number of samples being evaluated and by combining multiple integrals into a single summation.
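The efficiency gain from quasi-Monte Carlo sampling can be illustrated with a small, self-contained sketch (a generic toy, not the authors' algorithm): low-discrepancy Halton points typically approximate a smooth integral with far fewer samples than pseudorandom points, here on the test integrand f(x, y) = xy over the unit square, whose exact integral is 1/4.

```python
import random

def radical_inverse(i, base):
    """Van der Corput radical inverse: maps integer i to a low-discrepancy point in [0, 1)."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def f(x, y):
    # Smooth test integrand; its exact integral over the unit square is 1/4.
    return x * y

n = 2048

# Quasi-Monte Carlo: 2-D Halton points (bases 2 and 3).
qmc_estimate = sum(f(radical_inverse(i, 2), radical_inverse(i, 3))
                   for i in range(1, n + 1)) / n

# Plain Monte Carlo with pseudorandom points, for comparison.
rng = random.Random(0)
mc_estimate = sum(f(rng.random(), rng.random()) for _ in range(n)) / n

print(qmc_estimate, mc_estimate)  # both near 0.25; the Halton estimate is typically much closer
```

For the same sample budget, the quasi-Monte Carlo error decays roughly like log²(n)/n rather than the 1/√n of plain Monte Carlo, which is the sample-count saving the abstract refers to.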

INTRODUCTION

Conventional seismic imaging and parameter estimation (inversion) assume that the data contain only primaries, i.e., singly reflected events that have propagated through the subsurface. Multiply reflected events in the recorded wavefield are considered noise because they do not satisfy the assumptions of conventional imaging and inversion: they can interfere destructively with what is considered signal (primaries), create false images, and/or be misinterpreted as primaries. Hence the need for demultiple algorithms. Multiples are multiply reflected events with at least one downward reflection; they are distinguished from ghost events because multiples propagate down from the source and are recorded as upgoing waves at the receivers. Depending on the location of the downward reflections, multiples are divided into free-surface and internal multiples. Free-surface multiples have at least one downward reflection at the free surface. Internal multiples have all downward reflections below the measurement surface (in surface seismic acquisition). In offshore exploration, the dominant multiples are associated with free-surface reflections, so free-surface demultiple is often sufficient. In onshore exploration, by contrast, internal multiples dominate and are often an impediment to obtaining a reliable image and interpretation of the subsurface. Free-surface demultiple technology has reached a mature state in seismic processing; a toolbox of effective, practical algorithms is available (see, e.g., Verschuur et al., 1992; Carvalho et al., 1992; Dragoset and Jeričević, 1998; Liu and Sacchi, 2002; Weglein and Dragoset, 2005). In contrast, internal multiple removal algorithms still need development to be robust and efficient in practice. The ISS internal multiple attenuation algorithm was first proposed by Araújo et al. (1994) and Weglein et al. (1997). It is a data-driven algorithm, independent of subsurface information, that predicts internal multiples for all horizons at once. The algorithm predicts the correct traveltimes and an approximate amplitude of the true internal multiples in the data. Ramírez and Weglein (2005) extended the theory from attenuation toward elimination by improving the amplitude prediction.

**Summary**

Pre-stack or angle-stack gathers are inverted to estimate pseudo-logs at every surface location for building reservoir models. Recently, several methods have been proposed to increase the resolution of the inverted models. All of these methods, however, require that the total number of model parameters be fixed a priori. Here, we investigate an alternative approach in which we allow the data to choose the model parameterization. In other words, in addition to the layer properties, the number of layers is also treated as a variable in our formulation. Such trans-dimensional inverse problems are generally solved using reversible jump Markov chain Monte Carlo (RJMCMC), which is an effective tool for model exploration and uncertainty quantification. Here we introduce a new gradient-based RJMCMC that incorporates Hamiltonian Monte Carlo (RJHMC), in which model perturbations are generated according to the birth-death approach. Model updates are computed using gradient information, and the Metropolis-Hastings criterion is used for model acceptance. We have applied this technique to pre-stack (angle-stack) AVA inversion for estimating acoustic- and shear-impedance profiles. Our results demonstrate that RJHMC converges rapidly and can be a practical tool for inverting seismic data.

**Introduction**

The process of modeling the earth's interior from observed data is known as 'inversion'. We use seismic recordings for inversion to infer, indirectly, the subsurface structure and various processes inside the earth. Our observations lack complete information, which leads to non-uniqueness in the solution. Most inverse algorithms assume a fixed model dimension, which is perhaps the least well known of all parameters.

In an inverse problem it is very important to determine the model parameterization, i.e., the number of model parameters, that is consistent with the resolving power of the data. Using too few parameters can under-fit the data, yielding biased parameter estimates and under-estimated parameter uncertainties. On the other hand, using too many model parameters can over-fit the data, which leads to under-determined parameters with enormous uncertainty (Dosso et al., 2014). Thus it makes sense to treat the number of model parameters itself as a parameter to be solved for. Green (1995) introduced a framework for constructing a reversible Markov chain sampler that can jump between parameter subspaces of different dimensionality, making it very flexible and entirely constructive. The problem with the conventional reversible jump Markov chain sampler is that it is very slow and therefore not practical for routine analysis. Here we propose a new method that combines the conventional reversible jump sampler with a gradient-based Monte Carlo method, Hamiltonian Monte Carlo.
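The Metropolis-Hastings criterion at the core of these samplers can be sketched in a few lines (an illustrative toy, not the RJHMC of this paper): a random-walk chain targeting a standard normal density accepts a proposed move with probability min(1, pi(x')/pi(x)).

```python
import math
import random

def log_target(x):
    # Log-density of the target, here a standard normal (up to an additive constant).
    return -0.5 * x * x

rng = random.Random(42)
x = 0.0
samples = []
for _ in range(50_000):
    proposal = x + rng.gauss(0.0, 1.0)  # symmetric random-walk proposal
    # Metropolis-Hastings criterion: accept with probability min(1, pi(x')/pi(x)).
    if math.log(rng.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # should approach 0 and 1, the moments of the target
```

In the trans-dimensional setting, birth (add a layer) and death (remove a layer) proposals enter the same acceptance ratio together with a proposal-density and Jacobian term, and RJHMC replaces the blind random-walk step with a gradient-informed Hamiltonian trajectory.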

It is difficult to correct porosity measurements from dual-spaced neutron logs for lithology, fluid properties, temperature, pressure, and other borehole environmental conditions using only laboratory test-pit data. Mathematical models, particularly those based on Monte Carlo simulation, are being used to study these effects and augment the corrections. However, typical general-purpose Monte Carlo codes require several hours on large high-speed computers (a time of 7.2 hours for the MCNP code on a CRAY-XMP computer has been reported). This paper presents a new Monte Carlo code (McDNL) developed specifically for dual-spaced neutron porosity tool simulation. We feel that the present version of this code is at least as fast as (and perhaps much faster than) general-purpose codes, with the added advantages of smaller size and user friendliness. The code features two improved variance reduction techniques that increase the computing efficiency by orders of magnitude over analog simulation: direction biasing and statistical estimation. In the direction-biasing scheme, the neutron direction is biased toward the preferred direction (the detector) and its weight is modified accordingly. As a result, the neutron can reach the detector in fewer collisions, and the associated variance in the detected fraction is reduced. Biasing toward the detector is applied gradually over a number of collisions to avoid introducing a bias in the detected fraction. Instead of the analog estimation method, which scores only the neutrons that physically reach the detector, the statistical estimation method determines an estimate of the score after each scattering event in the Monte Carlo history. The final score for each history is the sum of the individual scores calculated during the history.
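The idea behind direction biasing, sampling from a distribution that favors the detector and compensating with a weight, is the classic importance-sampling trick. A minimal sketch under stated assumptions (a generic one-dimensional illustration, not the McDNL implementation): estimating the small probability P(X > 3) for an Exp(1) path length, where the biased sampler reaches the rare region far more often and the weight f(x)/g(x) keeps the estimator unbiased.

```python
import math
import random

TRUE_P = math.exp(-3.0)  # P(X > 3) for X ~ Exp(1), about 0.0498
N = 20_000

# Analog estimator: score 1 only when the sample actually lands past the threshold.
rng = random.Random(1)
analog = sum(1.0 for _ in range(N) if rng.expovariate(1.0) > 3.0) / N

# Biased estimator: sample from a slower-decaying Exp(1/3), which crosses the
# threshold with probability exp(-1) ~ 0.37 instead of 0.05, and multiply by
# the weight f(x)/g(x) = exp(-x) / ((1/3) exp(-x/3)) to remove the bias.
rng = random.Random(1)
total = 0.0
for _ in range(N):
    x = rng.expovariate(1.0 / 3.0)
    if x > 3.0:
        total += math.exp(-x) / ((1.0 / 3.0) * math.exp(-x / 3.0))
biased = total / N

print(analog, biased, TRUE_P)  # both approach exp(-3); the biased estimator has lower variance
```

The per-sample variance of the weighted estimator here is several times smaller than that of the analog indicator, the same mechanism by which direction biasing reduces the variance of the detected fraction.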

This paper was prepared for the Northern Plains Section Regional Meeting of the Society of Petroleum Engineers of AIME, to be held in Omaha, Neb., May 18-19, 1972. Permission to copy is restricted to an abstract of not more than 300 words. Illustrations may not be copied. The abstract should contain conspicuous acknowledgment of where and by whom the paper is presented. Publication elsewhere after publication in the JOURNAL OF PETROLEUM TECHNOLOGY or the SOCIETY OF PETROLEUM ENGINEERS JOURNAL is usually granted upon request to the Editor of the appropriate journal provided agreement to give proper credit is made.

Discussion of this paper is invited. Three copies of any discussion should be sent to the Society of Petroleum Engineers Office. Such discussions may be presented at the above meeting and, with the paper, may be considered for publication in one of the two SPE magazines.

Abstract

This paper describes two illustrative examples demonstrating the application of Monte Carlo techniques to gas process design. In addition, a detailed discussion is presented of the characteristics of the Monte Carlo procedure. In the first example, Monte Carlo simulation is used to determine the distribution of required MEA circulation rates to sweeten gas in a proposed installation. A simple problem formulation is used; therefore, it is possible to test the sensitivity of the calculated circulation-rate distribution to the number of Monte Carlo events and to the selected probability distribution. In the second example, the optimum size of a lean oil absorption plant is determined. Feed rate and composition together with product prices are selected as random variables. Distributions of present-value profit are generated for several plant sizes. Optimum plant size is selected to maximize the mean present-value profit.

Introduction

In designing gas processing facilities, plant size or capacity must be established prior to determining process configuration and operating conditions. Establishing the optimum capacity is complicated in that plant inputs and economic constraints, such as feed volume and composition, product price, and equipment operation, may not be known with certainty. This paper demonstrates the use of Monte Carlo techniques in gas plant design problems when system inputs are uncertain.

To use Monte Carlo techniques in analyzing gas processing projects, the plant inputs and economic constraints that characterize the project must first be defined. These project parameters fall into two classes: deterministic and stochastic variables. Deterministic variables have a known value, but can vary with time (e.g., the gas price in a long-term gas sales contract). Stochastic variables are not known with certainty and are described by a probability distribution spanning a range of values. For most variables, a triangular distribution adequately relates the value of a stochastic variable to its minimum, maximum, and expected value.
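A minimal sketch of this setup (illustrative numbers only, not from the paper): Python's standard library draws triangular distributions directly via `random.triangular(low, high, mode)`, and a few lines turn uncertain feed rate, product price, and operating cost into a distribution of a hypothetical daily profit.

```python
import random
import statistics

rng = random.Random(0)
N = 10_000

profits = []
for _ in range(N):
    # Stochastic inputs as triangular(min, max, mode) — all values hypothetical.
    feed_rate = rng.triangular(40.0, 60.0, 50.0)       # MMscf/d
    product_price = rng.triangular(2.0, 4.0, 3.0)      # $/Mscf
    operating_cost = rng.triangular(30.0, 70.0, 50.0)  # $k/d
    # A deliberately simple daily-profit model ($k/d).
    profits.append(feed_rate * product_price - operating_cost)

print(statistics.mean(profits), statistics.stdev(profits))
```

The collected `profits` list is the kind of output distribution from which a mean present-value profit (or any other yardstick) would be read off for each candidate plant size.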

Next, appropriate yardsticks for comparing investment decisions must be selected. Present-value profit and investor's interest rate are commonly used economic yardsticks. It may be preferred to use key process variables, plant investment, or product output to characterize performance, so that the result may be used in a subsequent economic analysis.

Monte Carlo simulation is a process of running a model numerous times with a random selection from the input distributions for each variable. The results of these numerous scenarios can give you a "most likely" case, along with a statistical distribution to understand the risk or uncertainty involved. Computer programs make it easy to run thousands of random samplings quickly. Monte Carlo simulation begins with a model, often built in a spreadsheet, having input distributions and output functions of the inputs. The following description is drawn largely from Murtha.[1]
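A minimal spreadsheet-style model of this kind can be sketched in a few lines (hypothetical input ranges; a generic volumetric illustration, not drawn from Murtha): each trial draws every input from its distribution and evaluates the output function, and the collected outputs form the distribution used to judge the "most likely" case and its spread.

```python
import random

rng = random.Random(7)
N = 10_000

# Output function of the inputs: a simple oil-in-place product model (inputs illustrative).
oip = []
for _ in range(N):
    area = rng.uniform(800.0, 1200.0)              # acres
    thickness = rng.triangular(10.0, 30.0, 20.0)   # ft
    porosity = rng.triangular(0.15, 0.30, 0.22)
    oil_saturation = rng.uniform(0.6, 0.8)
    # 7758 bbl per acre-ft of pore space is the standard conversion constant.
    oip.append(7758.0 * area * thickness * porosity * oil_saturation)

oip.sort()
p10, p50, p90 = (oip[int(q * N)] for q in (0.10, 0.50, 0.90))
print(p10, p50, p90)  # percentiles summarizing the uncertainty in barrels
```

The sorted outputs give the percentile summary (P10/P50/P90) that risk analyses typically report, and re-running trials with one input frozen at a time is the basis of the sensitivity (tornado/spider) displays mentioned in this context.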
