We propose an algorithm to compute time- and space-variant prediction filters for signal-to-noise-ratio enhancement. Prediction filtering for seismic signal enhancement is, in general, implemented via filters that are estimated by inverting a system of equations in the t-x or f-x domain. In addition, prediction error filters are applied in small windows where the data can be modeled by a finite number of plane waves. Our algorithm, on the other hand, does not require the inversion of matrices. Nor does it require spatio-temporal windowing: the algorithm is implemented via a recursive scheme in which the filter is continuously adapted to predict the signal.
We pose the prediction problem as a local smoothing problem and use a quadratic constraint to avoid solutions that model the noise. The algorithm uses a t-x recursive implementation in which the prediction filter for a given observation point is estimated via a simple update rule. It turns out that the proposed algorithm is equivalent to the LMS (Least Mean Squares) filter often used for adaptive filtering. It is important to mention, however, that our derivation follows the framework that is often used to solve underdetermined linear inverse problems. The latter involves the minimization of a cost function that includes a quadratic constraint to guarantee a stable solution.
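The recursive scheme described above can be illustrated with a minimal sketch of the standard LMS update, in which the filter at each sample is obtained from the filter at the previous sample plus a correction proportional to the prediction error. The function name, step size, and filter length below are illustrative assumptions, not values from the text.

```python
import numpy as np

def lms_predict(d, p=3, mu=0.01):
    """Adaptive one-step-ahead prediction of a 1D signal d via LMS.

    A minimal sketch, assuming a length-p filter and step size mu: the
    coefficients f are updated sample by sample, f <- f + 2*mu*e*x, so no
    matrix inversion or spatio-temporal windowing is required.
    """
    d = np.asarray(d, dtype=float)
    f = np.zeros(p)                    # prediction filter, adapted recursively
    pred = np.zeros_like(d)
    for k in range(p, len(d)):
        x = d[k - p:k][::-1]           # the p previous samples, newest first
        pred[k] = f @ x                # one-step-ahead prediction
        e = d[k] - pred[k]             # prediction error at sample k
        f = f + 2.0 * mu * e * x       # LMS (stochastic steepest-descent) step
    return pred, f
```

Because a new filter is produced at every sample, the predictor tracks events whose dip changes with time and offset, which is the behavior the windowed t-x and f-x implementations approximate only piecewise.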
Synthetic and real data examples are used to test the algorithm. In particular, a field data test shows that adaptive t-x filtering could offer an efficient and versatile alternative to classical f-x deconvolution filtering.
Prediction filters play an important role in seismic data processing, with applications ranging from seismic deconvolution and signal-to-noise-ratio enhancement (Canales, 1984; Gulunay, 1986; Abma, 1995) to trace interpolation (Spitz, 1991).
Prediction filters are often estimated from small spatio-temporal windows where waveforms can be approximated by events with constant dip; the latter is required for the optimal performance of the prediction filter. We propose to avoid windowing via a recursive algorithm in which one prediction filter is estimated for each data sample. We show that this problem is underdetermined and, as is well known, admits an infinite number of solutions. A unique and stable solution is found by formulating the problem in terms of a regularization constraint: the filter required to smooth a given data point is constrained to be similar to the filter used to smooth an adjacent data point. Our formulation follows the classical approach used to solve an underdetermined problem. The final algorithm is equivalent to the LMS (Least Mean Squares) algorithm described in Widrow and Stearns (1985) and Hornbostel (1989).
ADAPTIVE PREDICTION FILTERS
We start to formulate our problem by introducing an auto-regressive (AR), or linear prediction, modeling operator of order p. To avoid notational clutter, we first consider a 1D problem with p = 3
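For the order p = 3 case, the AR model expresses each sample as a linear combination of its three predecessors, d[k] ≈ f1*d[k-1] + f2*d[k-2] + f3*d[k-3]. The short sketch below demonstrates this with illustrative coefficient values (an assumption, not values from the text): a noise-free signal built from the recursion is reproduced exactly by one-step prediction with the true coefficients.

```python
import numpy as np

# Illustrative AR(3) coefficients (assumed for this sketch; the poles of the
# corresponding recursion lie inside the unit circle, so the signal is stable).
f = np.array([1.5, -0.9, 0.2])

# Build a signal that satisfies the order-3 recursion exactly.
d = np.zeros(50)
d[:3] = [1.0, 0.5, -0.2]
for k in range(3, len(d)):
    d[k] = f @ d[k - 3:k][::-1]        # f1*d[k-1] + f2*d[k-2] + f3*d[k-3]

# One-step prediction with the true coefficients reproduces the signal.
pred = np.array([f @ d[k - 3:k][::-1] for k in range(3, len(d))])
```

With noisy field data the coefficients are of course unknown, which is what motivates estimating them, here recursively, rather than by matrix inversion in windows.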
Figure 1 portrays synthetic data composed of 5 events with parabolic moveout. The data were contaminated with band-limited Gaussian noise at a signal-to-noise ratio SNR = 1. Figure 1a shows the noisy data.
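A noise contamination of this kind can be sketched as follows. The function name, the smoothing kernel used for band-limiting, and the definition of SNR as the ratio of signal power to noise power are assumptions for illustration, not details taken from the text.

```python
import numpy as np

def add_bandlimited_noise(data, snr=1.0, seed=0):
    """Contaminate data with band-limited Gaussian noise at a given SNR.

    A minimal sketch: white Gaussian noise is smoothed along the time axis
    with a short running-average kernel (an assumed choice of band-limiting),
    then scaled so that signal power / noise power equals snr.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(data.shape)
    kernel = np.ones(5) / 5.0          # simple band-limiting smoother (assumed)
    noise = np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), 0, noise)
    scale = np.sqrt(np.sum(data ** 2) / (snr * np.sum(noise ** 2)))
    return data + scale * noise
```

Passing snr=1.0 makes the noise power equal to the signal power, matching the SNR = 1 contamination described for the synthetic example.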