A comparison is made between three 5D reconstruction methods: Projection Onto Convex Sets (POCS), Tensor Completion (TCOM), and Minimum Weighted Norm Interpolation (MWNI). A method to measure the quality of synthetic data reconstructions is defined and applied under various scenarios. Two different measures of performance in the case of real data reconstructions are also provided and applied to a real data example taken from a land dataset acquired in the Western Canadian Sedimentary Basin. We find that TCOM and POCS are better able to reconstruct data in the presence of low SNR. We also find that TCOM provides superior results in most synthetic data scenarios, whereas in the case of real data reconstruction all three methods have similar performance, with POCS giving slightly better preservation of amplitudes.
We introduce a strategy for beyond-alias interpolation of seismic volumes that uses the Cadzow method. A Hankel matrix is obtained from the spatial samples of low frequency data in the (f-x) domain. To perform interpolation at a given frequency, the spatial samples are interlaced with zero samples and another Hankel matrix is built from the zero-interlaced data. Next, the rank-reduced eigenstate of the Hankel matrix at low frequencies is used for beyond-alias conditioning of the Hankel matrix at a given frequency. Finally, anti-diagonal averaging of the rank-reduced Hankel matrix gives the final interpolated data. Also, the multidimensional extension of the proposed algorithm is explained. Synthetic and real data examples are provided to examine the performance of the proposed interpolation method.
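The single-frequency building blocks of a Cadzow-style scheme (Hankel embedding of the spatial samples, rank reduction, and anti-diagonal averaging) can be sketched as follows; the function names and the roughly square Hankel shape are our own illustrative choices, not those of the original implementation.

```python
import numpy as np

def hankel_matrix(x):
    """Build a roughly square Hankel matrix H[i, j] = x[i + j] from a 1D vector."""
    n = len(x)
    m = n // 2 + 1                      # number of rows
    k = n - m + 1                       # number of columns
    return np.array([x[i:i + k] for i in range(m)])

def rank_reduce(H, rank):
    """Keep only the largest `rank` singular values of H."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vh

def antidiagonal_average(H):
    """Recover a 1D vector by averaging the anti-diagonals of H."""
    m, k = H.shape
    n = m + k - 1
    x = np.zeros(n, dtype=H.dtype)
    count = np.zeros(n)
    for i in range(m):
        for j in range(k):
            x[i + j] += H[i, j]
            count[i + j] += 1
    return x / count
```

For noise-free data made of a few complex harmonics, the Hankel matrix has low rank, which is what the rank-reduction step exploits.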
The fast generalized Fourier transform algorithm is extended to two-dimensional data. The algorithm provides a fast and nonredundant alternative for simultaneous time-frequency and space-wavenumber analysis of data with time-space dependencies. The transform decomposes the data based on local slope information, making it possible to extract a weight function based on dominant dips from the alias-free low frequencies. By projecting the extracted weight function onto the alias-contaminated high frequencies and utilizing a least-squares fitting algorithm, a beyond-alias interpolation method is accomplished. Synthetic and real data examples are provided to examine the performance of the proposed interpolation method.
We propose a robust interpolation scheme for aliased regularly sampled seismic data that uses the curvelet transform. In a first pass, the curvelet transform is used to compute the curvelet coefficients of the aliased seismic data. The aforementioned coefficients are divided into two groups of scales: alias-free and alias-contaminated scales. The alias-free curvelet coefficients are upscaled to estimate a mask function that is used to constrain the inversion of the alias-contaminated scale coefficients. The mask function is incorporated into the inversion via a minimum norm least squares algorithm that determines the curvelet coefficients of the desired alias-free data. Once the alias-free coefficients are determined, the curvelet synthesis operator is used to reconstruct seismograms at new spatial positions. Synthetic and real data examples are used to illustrate the performance of the proposed curvelet interpolation method.
Interpolation and reconstruction of seismic data have become an important topic for the seismic data processing community. It is often the case that logistic and economic constraints dictate the spatial sampling of seismic surveys. Wave-fields are continuous; in other words, seismic energy reaches the surface of the earth everywhere in our area of study. The process of acquisition records a finite number of spatial samples of the continuous wave field generated by a finite number of sources. This leads to a regular or irregular distribution of sources and receivers. Many important techniques for removing coherent noise and imaging the earth's interior have stringent sampling requirements which are often not met in real surveys. In order to avoid information losses, the data should be sampled according to the Nyquist criterion (Vermeer, 1990). When this criterion is not honored, reconstruction can be used to recover the data on a denser distribution of sources and receivers and mimic a properly sampled survey (Liu, 2004). Methods for seismic wave field reconstruction can be classified into two categories: wave-equation based methods and signal processing methods. Wave-equation methods utilize the physics of wave propagation to reconstruct seismic volumes. In general, the idea can be summarized as follows. An operator is used to map seismic wave fields to a physical domain. Then, the modeled physical domain is transformed back to data space to obtain the data we would have acquired with an ideal experiment. It is basically a regression approach where the regressors are built based on wave equation principles (in general, approximations to kinematic ray theoretical solutions of the wave equation). The methods proposed by Ronen (1987), Bagaini and Spagnolini (1999), Stolt (2002), Trad (2003), Fomel (2003), Malcolm et al. (2005), Clapp (2006) and Leggott et al. (2007) fall under this category.
These methods require knowledge of some sort of velocity distribution in the earth's interior (migration velocities, root-mean-square velocities, stacking velocities). While reconstruction methods based on wave equation principles are very important, this paper will not investigate this category of reconstruction algorithms. Seismic data reconstruction via signal processing approaches is an ongoing research topic in exploration seismology.
We introduce two dimensional shot-profile migration data reconstruction (SPDR2). SPDR2 reconstructs data using least-squares shot-profile migration with a constant migration velocity model. The velocity model is chosen both for efficiency and so that minimal assumptions are made about earth structure. At the core of least-squares migration are forward (de-migration) and adjoint (migration) operators, the former mapping from model space to data space, and the latter mapping from data space to model space. SPDR2 uses least-squares migration to find an optimal model which, in turn, is mapped to data space using de-migration, providing a reconstructed shot gather. We apply SPDR2 to real data from the Gulf of Mexico. In particular, we use SPDR2 to extrapolate near-offset geophones.
In seismic data reconstruction, algorithms tend to fall into one of two categories, being rooted in either signal processing or the wave equation. Examples of the former include Spitz (1991), Gülünay (2003), Liu and Sacchi (2004), Hennenfent and Herrmann (2006), and Naghizadeh and Sacchi (2007), while examples of the latter include Stolt (2002), Chiu and Stolt (2002), Trad (2003), Ramírez et al. (2006), and Ramírez and Weglein (2009). SPDR2 belongs to the family of wave equation based methods for data reconstruction. It differs from previous efforts in its parameterization of model space, being based on shot-profile migration (e.g. Biondi, 2003) and de-migration operators. Additionally, it relies on data fitting methods such as those used in Trad (2003), rather than the direct inversion and asymptotic approximation used in, for example, Stolt (2002). A challenge in data reconstruction is alias. In particular, when aliased energy is present and interferes with signal, their separation becomes challenging (but not impossible). A recent example of data reconstruction is Naghizadeh and Sacchi (2007). They use the non-aliased part of the data to aid in the reconstruction of the aliased part. An alternative approach is to transform the data via some operator that maps from data space to a model space in which the representations of signal and alias are separable. This is a common approach in many signal processing methods, and it is also the approach that we take in SPDR2. In particular, the SPDR2 model space is the sum of constant velocity shot-profile migrated gathers (i.e. a sum of common shot image gathers). This means that the SPDR2 model space is a representation of the earth's reflectors parameterized by pseudo-depth (i.e. depth under the assumption of a constant migration velocity model) and lateral position.
We will show that under the assumption of limited dips in the earth's reflectors, the SPDR2 model space allows for the suppression of alias while preserving signal, thus allowing for the reconstruction of aliased data. We begin with a description of shot-profile migration and de-migration built from the Born approximation to the acoustic wave-field and constant velocity Green's functions. We apply shot-profile migration to an analytic example in order to illustrate its mapping of signal and alias from data space (shot gathers) to model space.
A unified approach for de-noising and interpolation of seismic data in the frequency-wavenumber (f-k) domain is introduced. First an angular search in the f-k domain is carried out to identify a sparse number of dominant dips. Then, an angular mask function is designed based on the identified dominant dips. The mask function is utilized with the least-squares fitting principle for optimal de-noising or interpolation of data. Synthetic and real data examples are provided to examine the performance of the proposed method.
Noise elimination and interpolation of seismic data are important topics of research in the seismic data processing community. Several important signal processing techniques have been utilized for de-noising and interpolation purposes. For instance, prediction filters are used by Canales (1984) and Spitz (1991) in the frequency-space (f-x) domain for de-noising and interpolation of data, respectively. Other methods such as projection filters (Soubaras, 1994), Singular Value Decomposition (Trickett, 2003), Cadzow de-noising (Cadzow and Ogino, 1981; Trickett and Burroughs, 2009), and Singular Spectrum Analysis (Oropeza and Sacchi, 2009) have also been used for random noise attenuation in the f-x domain. All of the f-x de-noising methods are based on the assumption that the spatial signals at each single frequency are composed of a sum of a limited number of complex harmonics. The Fourier transform plays a substantial role in most de-noising and interpolation methods. A frequently used domain for seismic data de-noising is the frequency-wavenumber (f-k) domain. One of the prominent attributes of the f-k domain is the separation of signals according to their dip. This property makes the f-k domain a suitable option for ground-roll elimination due to the distinctive dip information of ground-roll and reflection seismic data. The most commonly used f-k domain de-noising method is dip filtering. The spatial interpolation of seismic records can also be interpreted as a noise elimination problem in the f-k domain. Depending on the sampling function, the noise introduced by decimation of data in the f-k domain can be considered incoherent (for random spatial sampling) or coherent (for regular spatial sampling). f-x domain seismic-trace interpolation methods have been proposed by Spitz (1991) and Porsani (1999). These methods utilize the low frequency portion of data for a robust and alias-free interpolation of high frequencies.
Gulunay (2003) has introduced the f-k equivalent of f-x interpolation methods by creating a mask function from low frequencies. In both f-x and f-k domain interpolation methods, the given frequencies are interpolated independently. Recently, Curry (2009) has introduced an f-k interpolation method using a Fourier-radial adaptive thresholding strategy in an attempt to utilize the continuity of events along the frequency axis. This article introduces another f-k domain method, which utilizes information from all desired frequencies for robust interpolation or de-noising of seismic data. The proposed f-k method consists of three steps. The first step is an angular search, for a range of dips over all of the frequencies, to identify the dominant energy dips.
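As an illustration of the masking idea only (not the angular search or the least-squares fitting step), the sketch below builds a binary dip mask in the f-k plane and applies it for de-noising. The slope convention (time samples per trace) and the mask half-width are illustrative assumptions.

```python
import numpy as np

def fk_dip_mask(nt, nx, slopes, halfwidth=1.5):
    """Binary f-k mask passing energy near the given dominant dips.
    slopes are in time samples per trace (an illustrative convention)."""
    f = np.fft.fftfreq(nt)[:, None]    # temporal frequency, cycles/sample
    k = np.fft.fftfreq(nx)[None, :]    # wavenumber, cycles/trace
    mask = np.zeros((nt, nx))
    for p in slopes:
        # a linear event t = p*x maps onto the line k = -p*f in the f-k plane
        mask[np.abs(k + p * f) <= halfwidth / nx] = 1.0
    return mask

def fk_denoise(d, slopes, halfwidth=1.5):
    """Keep only f-k energy close to the dominant dips."""
    D = np.fft.fft2(d)
    M = fk_dip_mask(d.shape[0], d.shape[1], slopes, halfwidth)
    return np.real(np.fft.ifft2(M * D))
```

Random noise spreads across the whole f-k plane, so passing only a narrow band around each dominant dip attenuates most of it while leaving the coherent events intact.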
Autoregressive modeling is used to estimate the multi-dimensional spectrum of aliased data. A region of spectral support is determined by identifying the location of peaks in the estimated spectrum. This information is used to pose a Fourier reconstruction problem that inverts for the few dominant wavenumbers that are required to model the data. Synthetic and real data examples are used to illustrate the method. In particular, we show that the proposed method can accurately reconstruct aliased data and data with gaps. The method provides a unifying thread between band-limited and sparse Fourier reconstruction, and f-x and f-k interpolation methods.
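A 1D sketch of the spectral-support estimation idea, assuming a Yule-Walker (autocorrelation) fit of the AR coefficients; the paper's multidimensional estimator is not reproduced here, and the peak of the AR spectrum stands in for the region-of-support picking.

```python
import numpy as np

def yule_walker(x, order):
    """AR coefficients from the Yule-Walker (autocorrelation) equations."""
    n = len(x)
    r = np.array([np.vdot(x[:n - lag], x[lag:]) for lag in range(order + 1)]) / n
    # Hermitian Toeplitz autocorrelation matrix
    R = np.array([[r[i - j] if i >= j else np.conj(r[j - i])
                   for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return a                            # model: x[t] ~ sum_j a[j] * x[t-1-j]

def ar_spectrum(a, nfft):
    """AR power spectrum 1/|1 - A(k)|^2 on an nfft-point wavenumber grid."""
    denom = 1.0 - np.fft.fft(np.concatenate(([0.0], a)), nfft)
    return 1.0 / np.abs(denom) ** 2
```

Peaks of the AR spectrum mark the dominant wavenumbers; their locations define the region of spectral support used in the reconstruction stage.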
Spitz (1991) showed how one could extract prediction filters from spatial data at low frequencies to reconstruct aliased spatial data. This idea was expanded by Naghizadeh and Sacchi (2007) and used to reconstruct data with irregular distribution of traces on a grid. The latter is named Multi-Step Auto-Regressive (MSAR) reconstruction. The MSAR reconstruction method is a combination of a Fourier reconstruction method (Liu and Sacchi, 2004) and f-x interpolation (Spitz, 1991). MSAR can be summarized as follows:
1) The low frequency (unaliased) portion of data is reconstructed using Minimum Weighted Norm Interpolation (MWNI) (Liu and Sacchi, 2004).
2) Prediction filters for all frequencies are extracted from the already regularized low frequency spatial data.
3) The estimated prediction filters are used to reconstruct the missing spatial samples in the aliased portion of the spectrum.
1) and 2) are estimation stages and 3) is the reconstruction stage. In this paper we propose a new and robust method to solve the reconstruction stage. In the original formulation of MSAR the reconstruction stage uses prediction filters harvested from low frequencies to reconstruct spatial data in the aliased portion of the spectrum (Spitz, 1991). In this article we propose to use the autoregressive (AR) spectrum of the data to define a region of spectral support. Once the region of spectral support (areas of unaliased energy in the f-k plane) is defined we turn the reconstruction problem into a Fourier reconstruction algorithm that solves for unknown spectral components using the least-squares method (Duijndam et al., 1999). Synthetic examples illustrate that the proposed method can handle gaps and extrapolation problems much better than our original formulation of MSAR (Naghizadeh and Sacchi, 2007).
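The least-squares Fourier reconstruction stage can be sketched in 1D as follows, assuming the region of spectral support is already given as a list of integer wavenumber indices (the function name and normalization are our own choices).

```python
import numpy as np

def fourier_reconstruct(d_obs, x_obs, nx, support):
    """Least-squares Fourier reconstruction restricted to supported wavenumbers.
    d_obs: data at the known trace indices x_obs; nx: full grid size;
    support: integer wavenumber indices allowed in the model."""
    k = np.asarray(support)
    # forward operator: inverse DFT restricted to the supported wavenumbers
    A = np.exp(2j * np.pi * np.outer(x_obs, k) / nx) / nx
    coeffs, *_ = np.linalg.lstsq(A, d_obs, rcond=None)
    # evaluate the model on the full, regular grid
    A_full = np.exp(2j * np.pi * np.outer(np.arange(nx), k) / nx) / nx
    return A_full @ coeffs
```

Restricting the unknowns to the supported wavenumbers is what makes the inversion well posed even when the sampling would otherwise alias the high wavenumbers.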
The first example is a 2D spatial data set composed of three dipping planes which are aliased in both spatial directions. The data set contains 1200 traces distributed on a 30×40 regular spatial lattice. Figure 1a shows the original data. The top view is a time slice at 0.65 s, the front view is the 21st slice in the Y direction, and the side view is the 17th slice in the X direction. Figure 1b shows the f-k panel of the data from the front view of Figure 1a. The f-k representation shows the presence of aliasing as well as random noise in the original data. A data set with missing traces was simulated by first eliminating every other X slice of the data and, in addition, by randomly eliminating 75% of the remaining traces (Figure 1c).
We propose an algorithm to compute time- and space-variant prediction filters for signal-to-noise ratio enhancement. Prediction filtering for seismic signal enhancement is, in general, implemented via filters that are estimated from the inversion of a system of equations in the t-x or f-x domain. In addition, prediction error filters are applied in small windows where the data can be modeled via a finite number of plane waves. Our algorithm, on the other hand, does not require the inversion of matrices. Furthermore, it does not require spatio-temporal windowing; the algorithm is implemented via a recursive scheme where the filter is continuously adapted to predict the signal.
We postulate the prediction problem as a local smoothing problem and use a quadratic constraint to avoid solutions that model the noise. The algorithm uses a t-x recursive implementation where the prediction filter for a given observation point is estimated via a simple rule. It turns out that the proposed algorithm is equivalent to the LMS (Least Mean Squares) filter often used for adaptive filtering. It is important to mention, however, that our derivation follows the framework that is often used to solve underdetermined linear inverse problems. The latter involves the minimization of a cost function that includes a quadratic constraint to guarantee a stable solution.
Synthetic and real data examples are used to test the algorithm. In particular, a field data test shows that adaptive t-x filtering could offer an efficient and versatile alternative to classical f-x deconvolution filtering.
Prediction filters play an important role in seismic data processing, with applications ranging from seismic deconvolution and signal-to-noise ratio enhancement (Canales, 1984; Gulunay, 1986; Abma, 1995) to trace interpolation (Spitz, 1991).
Prediction filters are often estimated from small spatio-temporal windows where waveforms can be approximated by events with constant dip. The latter is required for the optimal performance of the prediction filter. We propose to avoid windowing via a recursive algorithm where one prediction filter is estimated for each data sample. We show that the aforementioned problem is underdetermined and, as is well known, admits an infinite number of solutions. A unique and stable solution is found by formulating the problem in terms of a regularization constraint. The filter required to smooth a given data point is constrained to be similar to the filter used to smooth an adjacent data point. Our formulation follows the classical approach used to solve an underdetermined problem. The final algorithm is equivalent to the LMS (Least Mean Squares) algorithm described in Widrow and Stearns (1985) and Hornbostel (1989).
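A minimal real-valued sketch of the LMS recursion that the final algorithm is equivalent to; the filter length p and step size mu are illustrative choices, and one filter state is carried from sample to sample rather than solved for in a window.

```python
import numpy as np

def lms_predict(x, p=5, mu=0.05):
    """One-sample-ahead LMS adaptive prediction of a 1D signal.
    The filter w is nudged toward the data at every sample (no windows)."""
    n = len(x)
    w = np.zeros(p)
    pred = np.zeros(n)
    for t in range(p, n):
        u = x[t - p:t][::-1]            # most recent samples first
        pred[t] = w @ u                 # predict the current sample
        e = x[t] - pred[t]              # prediction error
        w = w + mu * e * u              # LMS update of the filter
    return pred
```

Because the update at each sample is a small step from the previous filter, adjacent filters are automatically similar, which mirrors the regularization constraint described above.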
ADAPTIVE PREDICTION FILTERS
We start to formulate our problem by introducing an auto-regressive (AR) or linear prediction modeling operator of order p. To avoid notational clutter we first consider a 1D problem with p = 3.
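For the 1D case with p = 3, the modeling operator can be written as a convolution matrix whose rows hold the p samples preceding each target sample; the least-squares filter fit below is a sketch under that convention, not the paper's recursive scheme.

```python
import numpy as np

def prediction_matrix(d, p=3):
    """Convolution matrix D such that D @ a is the order-p prediction of d[p:].
    Row t holds the p samples preceding d[p + t], most recent first."""
    n = len(d)
    D = np.zeros((n - p, p))
    for j in range(p):
        D[:, j] = d[p - 1 - j:n - 1 - j]
    return D

def fit_prediction_filter(d, p=3):
    """Least-squares fit of the order-p prediction filter a."""
    D = prediction_matrix(d, p)
    a, *_ = np.linalg.lstsq(D, d[p:], rcond=None)
    return a
```

For data that exactly obey a low-order AR recursion, the fitted filter predicts every sample from its p predecessors with negligible residual.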
Figure 1 portrays a synthetic data set composed of 5 events with parabolic moveout. The data were contaminated with band-limited Gaussian noise with signal-to-noise ratio SNR = 1. Figure 1a portrays the noisy data.
The Exponentially Weighted Recursive Least Squares (EWRLS) method is adopted to estimate adaptive prediction filters for F-X seismic interpolation. Adaptive prediction filters are able to model signals where the dominant wave-numbers vary in space. This concept leads to an F-X interpolation method that does not require windowing strategies for optimal results. Synthetic and real data examples are used to illustrate the performance of the proposed adaptive F-X interpolation method.
Spitz (1991) introduced a seismic trace interpolation method that utilizes prediction filters in the frequency-space (F-X) domain. Spitz's algorithm is based on the fact that linear events in the time-space (T-X) domain map to a superposition of complex sinusoids in the F-X domain. Complex sinusoids can be reconstructed via prediction filters (autoregressive operators); this property is used to establish a signal model for F-X interpolation (Spitz, 1991) and F-X random noise attenuation (Canales, 1984; Soubaras, 1994; Sacchi and Kuehl, 2000). Spitz (1991) showed that prediction filters obtained at frequency f can be used to interpolate data at temporal frequency 2f. Prediction filters estimated from the low-frequency (alias-free) portion of the data are used to interpolate the high-frequency (aliased) data components. Several modifications to Spitz's prediction filtering interpolation have been proposed. For instance, Porsani (1999) proposed a half-step prediction filter scheme that makes the interpolation process more efficient. Gulunay (2003) introduced an algorithm with similarities to F-X prediction filtering that has a very elegant representation in the frequency-wavenumber (F-K) domain. Recently, Naghizadeh and Sacchi (2007) proposed a modification of F-X interpolation that allows the reconstruction of data with gaps.
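The f-to-2f property can be verified numerically for a single linear event: the one-coefficient prediction filter estimated at frequency f on trace spacing dx also predicts the signal at 2f on the twice-finer spacing dx/2. The slope, spacing, and frequency values below are arbitrary illustrative choices.

```python
import numpy as np

p, dx, f = 0.002, 10.0, 20.0   # slope (s/m), trace spacing (m), frequency (Hz)
n = np.arange(16)

# spatial signal of a linear event t = p*x at temporal frequency f, spacing dx
s_f = np.exp(-2j * np.pi * f * p * n * dx)
# the same event at frequency 2f sampled on the twice-finer grid dx/2
s_2f = np.exp(-2j * np.pi * (2 * f) * p * n * (dx / 2))

# one-coefficient prediction filter estimated at frequency f
a = s_f[1] / s_f[0]

# the filter estimated at f predicts the 2f signal on the finer grid
pred = a * s_2f[:-1]
```

The phase increment per sample is identical in both cases, which is exactly why the alias-free low frequencies can steer the interpolation of the aliased high frequencies.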
Seismic interpolation algorithms depend on a signal model. F-X interpolation methods are not an exception to the preceding statement; they assume data composed of a finite number of waveforms with constant dip. This assumption can be validated via windowing. Interpolation methods driven by, for instance, local Radon transforms (Sacchi et al., 2004) and Curvelet frames (Herrmann and Hennenfent, 2008) assume a signal model that consists of events with constant local dip. In addition, they implicitly define operators that are local without the necessity of windowing. This is an attractive property, in particular, when compared to non-local interpolation methods (operators defined on a large spatial aperture) where optimal results are only achievable when seismic events match the kinematic signature of the operator. Examples of the latter are interpolation methods based on the hyperbolic/parabolic Radon transforms (Darche, 1990; Trad et al., 2002) and migration operators (Trad, 2003).
As we have already pointed out, F-X methods require windowing strategies to cope with continuous changes in dominant wave-numbers (or dips in T-X). In this article we propose a method that avoids the necessity of spatial windows. The proposed interpolation automatically updates prediction filters as lateral variations of dip are encountered. This concept could be implemented via a somewhat cumbersome process that requires classical F-X interpolation in a rolling window. In this paper we have preferred to use the framework of recursive least squares (Honig and Messerschmidt, 1984; Marple, 1987) to update prediction filters in a recursive fashion.
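A real-valued sketch of the exponentially weighted RLS recursion for one-step prediction; the forgetting factor lam and the initialization delta are conventional textbook choices, not the paper's settings, and the complex F-X case would additionally require conjugation in the updates.

```python
import numpy as np

def ewrls_predict(x, p=4, lam=0.99, delta=1e3):
    """Exponentially weighted recursive least-squares one-step prediction.
    lam < 1 down-weights old samples so the filter adapts along the axis."""
    n = len(x)
    w = np.zeros(p)
    P = np.eye(p) * delta              # inverse correlation matrix estimate
    pred = np.zeros(n)
    for t in range(p, n):
        u = x[t - p:t][::-1]           # most recent samples first
        pred[t] = w @ u
        e = x[t] - pred[t]             # a priori prediction error
        Pu = P @ u
        k = Pu / (lam + u @ Pu)        # gain vector
        w = w + k * e                  # filter update
        P = (P - np.outer(k, Pu)) / lam
    return pred
```

Compared with the LMS-style update, the RLS gain uses the running inverse correlation matrix, so convergence is much faster at the cost of O(p^2) work per sample.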