
A comparison is made between three 5D reconstruction methods: Projection Onto Convex Sets (POCS), Tensor Completion (TCOM), and Minimum Weighted Norm Interpolation (MWNI). A method to measure the quality of synthetic data reconstructions is defined and applied under various scenarios. Two different measures of performance for real data reconstructions are also provided and applied to a real data example taken from a land dataset acquired in the Western Canadian Sedimentary Basin. We find that TCOM and POCS are better able to reconstruct data in the presence of low SNR. We also find that TCOM provides superior results in most synthetic data scenarios, but in the case of real data reconstruction all three methods perform similarly, with POCS giving slightly better preservation of amplitudes.
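The paper's exact quality measure is not reproduced here; a common choice in the reconstruction literature (an assumption on my part, not necessarily the definition used in the paper) is the quality factor Q = 10 log10(||d_true||^2 / ||d_true - d_rec||^2), in dB. A minimal sketch:

```python
import numpy as np

def reconstruction_quality(d_true, d_rec):
    """Quality of a synthetic-data reconstruction in dB: the ratio of the
    energy of the true data to the energy of the reconstruction error."""
    err = d_true - d_rec
    return 10.0 * np.log10(np.sum(d_true**2) / np.sum(err**2))

# Toy check: a reconstruction with 1% RMS error scores roughly 40 dB.
rng = np.random.default_rng(0)
d_true = rng.standard_normal((50, 20))
d_rec = d_true + 0.01 * rng.standard_normal((50, 20))
q = reconstruction_quality(d_true, d_rec)
```

Higher is better: a perfect reconstruction has infinite Q, and every 20 dB corresponds to a ten-fold reduction in RMS error.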



SPE Disciplines: Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)

We introduce a strategy for beyond-alias interpolation of seismic volumes that uses the Cadzow method. A Hankel matrix is obtained from the spatial samples of low frequency data.
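The abstract names the two core ingredients of any Cadzow-style method: Hankel embedding and rank reduction. A minimal 1D sketch of that step (my own illustration; the multidimensional de-aliased algorithm of the paper is not reproduced here):

```python
import numpy as np

def cadzow_rank_reduce(x, rank):
    """One Cadzow pass on a 1D spatial slice: embed the samples in a
    Hankel matrix, truncate its SVD to `rank`, then average the
    anti-diagonals to map back to a signal."""
    n = len(x)
    m = n // 2 + 1
    H = x[np.arange(m)[:, None] + np.arange(n - m + 1)[None, :]]  # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # rank-reduced Hankel matrix
    y = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Hr.shape[0]):                    # anti-diagonal averaging
        y[i:i + Hr.shape[1]] += Hr[i]
        counts[i:i + Hr.shape[1]] += 1
    return y / counts

# A single cosine yields a rank-2 Hankel matrix, so rank-2 truncation
# removes most of the added noise.
rng = np.random.default_rng(1)
clean = np.cos(0.3 * np.arange(64))
noisy = clean + 0.1 * rng.standard_normal(64)
denoised = cadzow_rank_reduce(noisy, rank=2)
```

In the beyond-alias variant, the low frequency (alias-free) data guide how the reduction is applied at the aliased high frequencies.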


Naghizadeh, Mostafa (CREWES) | Innanen, K.A. (CREWES)

The fast generalized Fourier transform (FGFT) algorithm is extended to two-dimensional data. The algorithm provides a fast and nonredundant alternative for simultaneous time-frequency and space-wavenumber analysis of data with time-space dependencies. The transform decomposes the data based on local slope information, making it possible to extract a weight function based on dominant dips from the alias-free low frequencies. By projecting the extracted weight function onto the alias-contaminated high frequencies and applying a least-squares fitting algorithm, a beyond-alias interpolation method is obtained. Synthetic and real data examples are provided to examine the performance of the proposed interpolation method.
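The dip-projection step can be illustrated without the FGFT itself. For a linear event with dip p, spectral energy lies on k = f*p, so a weight extracted at an alias-free frequency f can be stretched along the wavenumber axis to predict where signal lives at 2f. A sketch under that constant-dip assumption (my illustration, not the FGFT algorithm):

```python
import numpy as np

def stretch_weight(w_low, stretch):
    """Map a wavenumber-domain weight extracted at a low (alias-free)
    frequency f to frequency stretch*f.  For a linear event with dip p,
    energy sits on k = f*p, so the weight at the high frequency is the
    low-frequency weight evaluated at k/stretch."""
    nk = len(w_low)
    k = np.fft.fftshift(np.fft.fftfreq(nk))        # centred wavenumber axis
    w_c = np.fft.fftshift(w_low)
    w_high = np.interp(k / stretch, k, w_c, left=0.0, right=0.0)
    return np.fft.ifftshift(w_high)

# A weight peaked at k = 0.1 at the low frequency should peak at
# k = 0.2 after projection to twice the frequency.
nk = 128
k = np.fft.fftshift(np.fft.fftfreq(nk))
w_low = np.fft.ifftshift(np.exp(-((k - 0.1) / 0.02) ** 2))
w_high = stretch_weight(w_low, stretch=2.0)
k_peak = k[np.argmax(np.fft.fftshift(w_high))]
```

The stretched weight then constrains a least-squares fit at the aliased frequency, favouring the true dips over their aliased copies.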





We propose a robust interpolation scheme for aliased regularly sampled seismic data that uses the curvelet transform. In a first pass, the curvelet transform is used to compute the curvelet coefficients of the aliased seismic data. The aforementioned coefficients are divided into two groups of scales: alias-free and alias-contaminated scales. The alias-free curvelet coefficients are upscaled to estimate a mask function that is used to constrain the inversion of the alias-contaminated scale coefficients. The mask function is incorporated into the inversion via a minimum norm least squares algorithm that determines the curvelet coefficients of the desired alias-free data. Once the alias-free coefficients are determined, the curvelet synthesis operator is used to reconstruct seismograms at new spatial positions. Synthetic and real data examples are used to illustrate the performance of the proposed curvelet interpolation method.

INTRODUCTION

Interpolation and reconstruction of seismic data has become an important topic for the seismic data processing community. It is often the case that logistic and economic constraints dictate the spatial sampling of seismic surveys. Wavefields are continuous; in other words, seismic energy reaches the surface of the earth everywhere in our area of study. The process of acquisition records a finite number of spatial samples of the continuous wavefield generated by a finite number of sources. This leads to a regular or irregular distribution of sources and receivers. Many important techniques for removing coherent noise and imaging the earth's interior have stringent sampling requirements which are often not met in real surveys. In order to avoid information losses, the data should be sampled according to the Nyquist criterion (Vermeer, 1990). When this criterion is not honored, reconstruction can be used to recover the data on a denser distribution of sources and receivers and mimic a properly sampled survey (Liu, 2004). Methods for seismic wavefield reconstruction can be classified into two categories: wave-equation based methods and signal processing methods. Wave-equation methods utilize the physics of wave propagation to reconstruct seismic volumes. In general, the idea can be summarized as follows. An operator is used to map seismic wavefields to a physical domain. Then, the modeled physical domain is transformed back to data space to obtain the data we would have acquired with an ideal experiment. It is basically a regression approach where the regressors are built on wave-equation principles (in general, approximations to kinematic ray-theoretical solutions of the wave equation). The methods proposed by Ronen (1987), Bagaini and Spagnolini (1999), Stolt (2002), Trad (2003), Fomel (2003), Malcolm et al. (2005), Clapp (2006) and Leggott et al. (2007) fall under this category. These methods require knowledge of some sort of velocity distribution in the earth's interior (migration velocities, root-mean-square velocities, stacking velocities). While reconstruction methods based on wave-equation principles are very important, this paper will not investigate this category of reconstruction algorithms. Seismic data reconstruction via signal processing approaches is an ongoing research topic in exploration seismology.



In seismic data reconstruction, algorithms tend to fall into one of two categories, being rooted in either signal processing or the wave equation. Examples of the former include Spitz (1991), Gülünay (2003), Liu and Sacchi (2004), Hennenfent and Herrmann (2006), and Naghizadeh and Sacchi (2007), while examples of the latter include Stolt (2002), Chiu and Stolt (2002), Trad (2003), Ramírez et al. (2006), and Ramírez and Weglein (2009). SPDR2 belongs to the family of wave-equation based methods for data reconstruction. It differs from previous efforts in its parameterization of model space, being based on shot-profile migration (e.g. Biondi, 2003) and de-migration operators. Additionally, it relies on data-fitting methods such as those used in Trad (2003), rather than the direct inversion and asymptotic approximation used in, for example, Stolt (2002). A challenge in data reconstruction is aliasing. In particular, when aliased energy is present and interferes with signal, their separation becomes challenging (but not impossible). A recent example of data reconstruction is Naghizadeh and Sacchi (2007). They use the non-aliased part of the data to aid in the reconstruction of the aliased part. An alternative approach is to transform the data via some operator that maps from data space to a model space in which the representations of signal and alias are separable. This is a common approach in many signal processing methods, and it is also the approach that we take in SPDR2. In particular, the SPDR2 model space is the sum of constant-velocity shot-profile migrated gathers (i.e. a sum of common shot image gathers). This means that the SPDR2 model space is a representation of the earth's reflectors parameterized by pseudo-depth (i.e. depth under the assumption of a constant migration velocity model) and lateral position. We will show that under the assumption of limited dips in the earth's reflectors, the SPDR2 model space allows for the suppression of alias while preserving signal, thus allowing for the reconstruction of aliased data. We begin with a description of shot-profile migration and de-migration built from the Born approximation to the acoustic wavefield and constant-velocity Green's functions. We apply shot-profile migration to an analytic example in order to illustrate its mapping of signal and alias from data space (shot gathers) to model space.


A unified approach for de-noising and interpolation of seismic data in the frequency-wavenumber (f-k) domain is introduced. First, an angular search in the f-k domain is carried out to identify a small number of dominant dips. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal de-noising or interpolation of the data. Synthetic and real data examples are provided to examine the performance of the proposed method.
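The angular search can be sketched as a scan over candidate dips, scoring each by the f-k amplitude it collects along the line k = p*f. This is my simplified illustration of the idea, not the paper's IDDD implementation:

```python
import numpy as np

def dominant_dip(d, dt, dx, slopes):
    """Score each candidate dip p by summing f-k amplitude along the
    line k = p*f (nearest wavenumber bin per frequency) and return the
    best-scoring dip.  An angular mask around the winning line(s) would
    then constrain least-squares de-noising or interpolation."""
    nt, nx = d.shape
    F = np.abs(np.fft.fft2(d))
    f = np.fft.fftfreq(nt, dt)
    k = np.fft.fftfreq(nx, dx)
    scores = [
        F[np.arange(nt),
          np.argmin(np.abs(k[None, :] - p * f[:, None]), axis=1)].sum()
        for p in slopes
    ]
    return slopes[int(np.argmax(scores))]

# One on-grid linear event: f0 = 31.25 Hz, k0 = 0.025 cycles/m,
# i.e. dip p0 = k0/f0 = 8e-4 s/m.
dt, dx = 0.004, 10.0
t = np.arange(64) * dt
x = np.arange(32) * dx
d = np.cos(2 * np.pi * (31.25 * t[:, None] + 0.025 * x[None, :]))
slopes = np.linspace(-0.002, 0.002, 41)
best = dominant_dip(d, dt, dx, slopes)
```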


Autoregressive modeling is used to estimate the multi-dimensional spectrum of aliased data. A region of spectral support is determined by identifying the location of peaks in the estimated spectrum. This information is used to pose a Fourier reconstruction problem that inverts for the few dominant wavenumbers that are required to model the data. Synthetic and real data examples are used to illustrate the method. In particular, we show that the proposed method can accurately reconstruct aliased data and data with gaps. The method provides a unifying thread between band-limited and sparse Fourier reconstruction, and f-x and f-k interpolation methods.

Spitz (1991) showed how one could extract prediction filters from spatial data at low frequencies to reconstruct aliased spatial data. This idea was expanded by Naghizadeh and Sacchi (2007) and used to reconstruct data with an irregular distribution of traces on a grid. The latter is named Multi-Step Auto-Regressive (MSAR) reconstruction. The MSAR reconstruction method is a combination of a Fourier reconstruction method (Liu and Sacchi, 2004) and f-x interpolation (Spitz, 1991). MSAR can be summarized as follows:

1. The low frequency (unaliased) portion of the data is reconstructed using Minimum Weighted Norm Interpolation (MWNI) (Liu and Sacchi, 2004).
2. Prediction filters for all frequencies are extracted from the already regularized low frequency spatial data.
3. The estimated prediction filters are used to reconstruct the missing spatial samples in the aliased portion of the spectrum.

Steps 1) and 2) are estimation stages and step 3) is the reconstruction stage. In this paper we propose a new and robust method to solve the reconstruction stage. In the original formulation of MSAR the reconstruction stage uses prediction filters harvested from low frequencies to reconstruct spatial data in the aliased portion of the spectrum (Spitz, 1991). In this article we propose to use the autoregressive (AR) spectrum of the data to define a region of spectral support. Once the region of spectral support (areas of unaliased energy in the f-k plane) is defined, we turn the reconstruction problem into a Fourier reconstruction algorithm that solves for the unknown spectral components using the least-squares method (Duijndam et al., 1999). Synthetic examples illustrate that the proposed method can handle gaps and extrapolation problems much better than our original formulation of MSAR (Naghizadeh and Sacchi, 2007).
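The reconstruction stage described above reduces to a least-squares Fourier problem restricted to the region of spectral support. A minimal 1D sketch (my illustration; the paper works with the AR-estimated multidimensional spectrum):

```python
import numpy as np

def fourier_reconstruct(d_obs, x_obs, x_all, k_support):
    """Least-squares Fourier reconstruction restricted to a region of
    spectral support: invert only for the wavenumbers in k_support,
    then evaluate the fitted model on the full spatial grid."""
    A = np.exp(2j * np.pi * np.outer(x_obs, k_support))   # sampled Fourier basis
    m, *_ = np.linalg.lstsq(A, d_obs, rcond=None)
    return (np.exp(2j * np.pi * np.outer(x_all, k_support)) @ m).real

# Beyond-alias demo: k0 = 0.3 cycles/unit is aliased on the decimated
# (every-other-trace) grid, but restricting the inversion to the known
# spectral support {-k0, +k0} recovers the dense data exactly.
x_all = np.arange(64.0)
k0 = 0.3
d_full = np.cos(2 * np.pi * k0 * x_all)
x_obs = x_all[::2]
d_rec = fourier_reconstruct(d_full[::2], x_obs, x_all, np.array([-k0, k0]))
```

Because the inversion only sees the wavenumbers in the support set, the aliased copies of the signal are excluded from the model and the decimated data are fit unambiguously.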

The first example is a 2D spatial data set composed of three dipping planes which are aliased in both spatial directions. The data set contains 1200 traces distributed on a 30×40 regular spatial lattice. Figure 1a shows the original data. The top view is a time slice at 0.65 s, the front view is the 21st slice in the Y direction, and the side view is the 17th slice in the X direction. Figure 1b shows the f-k panel of the data from the front view of Figure 1a. The f-k representation shows the presence of aliasing as well as random noise in the original data. A data set with missing traces was simulated by first eliminating every other X slice of the data and then randomly eliminating 75% of the remaining traces (Figure 1c).

We propose an algorithm to compute time- and space-variant prediction filters for signal-to-noise ratio enhancement. Prediction filtering for seismic signal enhancement is, in general, implemented via filters that are estimated from the inversion of a system of equations in the t-x or f-x domain. In addition, prediction error filters are applied in small windows where the data can be modeled via a finite number of plane waves. Our algorithm, on the other hand, does not require the inversion of matrices. Furthermore, it does not require spatio-temporal windowing; the algorithm is implemented via a recursive scheme where the filter is continuously adapted to predict the signal.

We postulate the prediction problem as a local smoothing problem and use a quadratic constraint to avoid solutions that model the noise. The algorithm uses a t-x recursive implementation where the prediction filter for a given observation point is estimated via a simple rule. It turns out that the proposed algorithm is equivalent to the LMS (Least Mean Squares) filter often used for adaptive filtering. It is important to mention, however, that our derivation follows the framework often used to solve underdetermined linear inverse problems. The latter involves the minimization of a cost function that includes a quadratic constraint to guarantee a stable solution.
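The recursive scheme can be sketched with the plain LMS update the text refers to: each sample nudges the filter along the gradient of the squared prediction error. This is a generic 1D LMS predictor (my sketch, not the authors' regularized t-x implementation):

```python
import numpy as np

def lms_predict(x, p=5, mu=0.05):
    """Recursive prediction without matrix inversion: a single LMS
    (Least Mean Squares) filter is nudged at every sample along the
    gradient of the squared prediction error, so it adapts continuously
    instead of being re-estimated in windows."""
    a = np.zeros(p)
    pred = np.zeros_like(x)
    for n in range(p, len(x)):
        u = x[n - p:n][::-1]          # x[n-1], ..., x[n-p]
        pred[n] = a @ u
        e = x[n] - pred[n]
        a += mu * e * u               # stochastic-gradient update
    return pred

# On a predictable signal the filter converges and the one-step
# prediction error becomes small.
x = np.sin(0.2 * np.arange(400))
pred = lms_predict(x)
```

The step size mu plays the role of the quadratic (smoothness) constraint: it limits how far each sample can pull the filter, which is what keeps the solution from modeling the noise.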

Synthetic and real data examples are used to test the algorithm. In particular, a field data test shows that adaptive t-x filtering could offer an efficient and versatile alternative to classical f-x deconvolution filtering.

Prediction filters play an important role in seismic data processing, with applications that include seismic deconvolution, signal-to-noise ratio enhancement (Canales, 1984; Gulunay, 1986; Abma, 1995) and trace interpolation (Spitz, 1991).

Prediction filters are often estimated from small spatio-temporal windows where waveforms can be approximated by events with constant dip. The latter is required for the optimal performance of the prediction filter. We propose to avoid windowing via a recursive algorithm where one prediction filter is estimated for each data sample. We show that the aforementioned problem is underdetermined and, as is well known, admits an infinite number of solutions. A unique and stable solution is found by formulating the problem in terms of a regularization constraint. The filter required to smooth a given data point is constrained to be similar to the filter used to smooth an adjacent data point. Our formulation follows the classical approach used to solve an underdetermined problem. The final algorithm is equivalent to the LMS (Least Mean Squares) algorithm described in Widrow and Stearns (1985) and Hornbostel (1989).

We start to formulate our problem by introducing an auto-regressive (AR), or linear prediction, modeling operator of order p. To avoid notational clutter we first consider a 1D problem with p = 3.

Figure 1 portrays a synthetic data set composed of 5 events with parabolic moveout. The data were contaminated with band-limited Gaussian noise with signal-to-noise ratio SNR = 1. Figure 1a portrays the noisy data.

SPE Disciplines: Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (0.87)

The Exponentially Weighted Recursive Least Squares (EWRLS) method is adopted to estimate adaptive prediction filters for F-X seismic interpolation. Adaptive prediction filters are able to model signals whose dominant wavenumbers vary in space. This concept leads to an F-X interpolation method that does not require windowing strategies for optimal results. Synthetic and real data examples are used to illustrate the performance of the proposed adaptive F-X interpolation method.

Spitz (1991) introduced a seismic trace interpolation method that utilizes prediction filters in the frequency-space (F-X) domain. Spitz's algorithm is based on the fact that linear events in the time-space (T-X) domain map to a superposition of complex sinusoids in the F-X domain. Complex sinusoids can be reconstructed via prediction filters (autoregressive operators); this property is used to establish a signal model for F-X interpolation (Spitz, 1991) and F-X random noise attenuation (Canales, 1984; Soubaras, 1994; Sacchi and Kuehl, 2000). Spitz (1991) showed that prediction filters obtained at frequency f can be used to interpolate data at temporal frequency 2f. Prediction filters estimated from the low-frequency (alias-free) portion of the data are used to interpolate the high-frequency (aliased) data components. Several modifications to Spitz's prediction filtering interpolation have been proposed. For instance, Porsani (1999) proposed a half-step prediction filter scheme that makes the interpolation process more efficient. Gulunay (2003) introduced an algorithm with similarities to F-X prediction filtering with a very elegant representation in the frequency-wavenumber (F-K) domain. Recently, Naghizadeh and Sacchi (2007) proposed a modification of F-X interpolation that allows reconstruction of data with gaps.
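Spitz's central observation can be checked numerically: for linear events the phase advance per trace at frequency f and spacing dx equals that at 2f and spacing dx/2, so a prediction filter estimated at f transfers to 2f on the half-spaced grid. A sketch with two dipping events (my illustration, with made-up dips and geometry):

```python
import numpy as np

dx, f0, nx = 10.0, 20.0, 24
p1, p2 = 2e-4, -1e-4                  # dips (s/m) of two linear events

def fx_slice(freq, spacing):
    """Spatial sequence of two linear events at one temporal frequency."""
    x = np.arange(nx) * spacing
    return (np.exp(-2j * np.pi * freq * p1 * x)
            + 0.7 * np.exp(-2j * np.pi * freq * p2 * x))

# Estimate a 2-tap forward prediction filter at frequency f0, spacing dx.
s = fx_slice(f0, dx)
A = np.column_stack([s[1:-1], s[:-2]])
a, *_ = np.linalg.lstsq(A, s[2:], rcond=None)

# The phase advance per step, 2*pi*f*p*dx, is unchanged when f doubles
# and the spacing halves, so the same filter predicts the 2*f0 slice
# sampled at dx/2 -- the basis of beyond-alias F-X interpolation.
s2 = fx_slice(2 * f0, dx / 2)
err = np.max(np.abs(a[0] * s2[1:-1] + a[1] * s2[:-2] - s2[2:]))
```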

Seismic interpolation algorithms depend on a signal model. F-X interpolation methods are no exception; they assume data composed of a finite number of waveforms with constant dip. This assumption can be validated via windowing. Interpolation methods driven by, for instance, local Radon transforms (Sacchi et al., 2004) and curvelet frames (Herrmann and Hennenfent, 2008) assume a signal model that consists of events with constant local dip. In addition, they implicitly define operators that are local without the necessity of windowing. This is an attractive property, in particular when compared to non-local interpolation methods (operators defined on a large spatial aperture), where optimal results are only achievable when seismic events match the kinematic signature of the operator. Examples of the latter are interpolation methods based on the hyperbolic/parabolic Radon transforms (Darche, 1990; Trad et al., 2002) and migration operators (Trad, 2003).

As we have already pointed out, F-X methods require windowing strategies to cope with continuous changes in dominant wavenumbers (or dips in T-X). In this article we propose a method that avoids the need for spatial windows. The proposed interpolation automatically updates prediction filters as lateral variations of dip are encountered. This concept could be implemented as a somewhat cumbersome process requiring classical F-X interpolation in a rolling window. In this paper we have preferred to use the framework of recursive least squares (Honig and Messerschmidt, 1984; Marple, 1987) to update prediction filters in a recursive fashion.
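A generic EWRLS one-step predictor shows the recursive update the paragraph refers to: the forgetting factor discounts old samples so the filter tracks changing dips. This is a textbook real-valued sketch (my illustration, not the authors' F-X implementation, which operates on complex frequency slices):

```python
import numpy as np

def ewrls_predict(x, p=4, lam=0.98, delta=10.0):
    """One-step linear prediction with Exponentially Weighted Recursive
    Least Squares.  lam < 1 discounts old samples, so the filter
    re-adapts when the local frequency (i.e. the dominant dip) changes,
    with no windowing and no explicit matrix inversion."""
    a = np.zeros(p)                    # adaptive prediction filter
    P = np.eye(p) / delta              # running inverse covariance
    pred = np.zeros_like(x)
    for n in range(p, len(x)):
        u = x[n - p:n][::-1]           # x[n-1], ..., x[n-p]
        pred[n] = a @ u
        e = x[n] - pred[n]
        k = P @ u / (lam + u @ P @ u)  # RLS gain vector
        a = a + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return pred

# A signal whose frequency jumps mid-way: the filter re-adapts after
# the jump, so late-time predictions are accurate again.
n = np.arange(500)
x = np.where(n < 250, np.sin(0.2 * n), np.sin(0.6 * n))
pred = ewrls_predict(x)
```

Compared with the LMS rule, each EWRLS update costs O(p^2) but converges far faster, which is what makes per-sample adaptation competitive with windowed F-X interpolation.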
