While conventional migration is regarded as an approximation to the inverse of scattering using the adjoint operator, least squares migration is typically formulated as an iterative approach that estimates the reflectivity using forward and adjoint scattering operators. We provide a matrix-based formulation of elastic scattering using a staggered-grid finite-difference propagator, and derive its adjoint for use in elastic least squares Reverse Time Migration. We solve the system of equations via Conjugate Gradients with a Laplacian preconditioner that mitigates low-frequency artefacts in the images.
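As an illustration of the solver stage, here is a minimal preconditioned conjugate-gradient loop in NumPy. The operators are stand-ins, not the elastic scattering operators of the abstract: a small dense SPD matrix plays the role of the normal operator, and a 1-D discrete negative Laplacian serves as a (crude but SPD, hence valid) preconditioner. The names `pcg` and `apply_laplacian` are our own.

```python
import numpy as np

def apply_laplacian(r):
    """1-D negative Laplacian with Dirichlet ends. It is SPD, so it is a
    valid PCG preconditioner; it boosts high wavenumbers, in the spirit of
    damping low-frequency image artefacts."""
    z = 2.0 * r
    z[:-1] -= r[1:]
    z[1:] -= r[:-1]
    return z

def pcg(apply_A, b, apply_M, n_iter=200, tol=1e-12):
    """Preconditioned conjugate gradients for A x = b, with SPD A and M."""
    x = np.zeros_like(b)
    r = b.copy()
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system standing in for the scattering normal equations.
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 20))
A = B.T @ B + np.eye(20)
b = rng.standard_normal(20)
x = pcg(lambda v: A @ v, b, apply_laplacian)
```

Any SPD preconditioner leaves the CG fixed point unchanged; only the convergence path and the spectral shaping of intermediate iterates differ, which is what makes the Laplacian useful here.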
Presentation Date: Wednesday, October 19, 2016
Start Time: 4:25:00 PM
Presentation Type: ORAL
Ringing artifacts are often observed above the water-bottom when applying POCS interpolation to marine datasets. The ringing is the result of hard thresholding creating sharp cutoffs in the frequency-wavenumber domain. Moreover, it is likely that the artifacts persist below the water-bottom but are masked by adjacent reflections. Modifying the type of thresholding used can mitigate artifacts, but often at the expense of introducing amplitude bias in the interpolated traces. We investigate several thresholding schemes and provide a generalized thresholding function that allows the style of thresholding to be controlled by a single parameter. We find that soft thresholding combined with a debiasing step provides improved results over hard thresholding in both marine and land datasets.
Projection Onto Convex Sets (POCS) is an iterative optimization technique that has found broad application in seismic data processing including interpolation (Abma and Kabir, 2006; Stein et al., 2010), noise attenuation (Gao et al., 2013), and de-blending of simultaneous source data (Abma and Ross, 2013). It is an effective interpolation strategy that often outperforms other methods in the presence of low SNR (Stanton et al., 2012).
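A minimal 1-D sketch of the POCS interpolation loop: transform, threshold, inverse transform, reinsert the observed samples. The linearly decaying threshold schedule and the parameter names (`p_max`, `p_min`) are illustrative choices, not published settings.

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=100, p_max=0.99, p_min=0.0):
    """POCS interpolation of a 1-D signal.
    data: observed signal with zeros at missing samples; mask: 1 = observed."""
    x = data.copy()
    for k in range(n_iter):
        X = np.fft.fft(x)
        # Hard threshold decaying linearly as a fraction of the current max.
        tau = (p_max - (p_max - p_min) * k / (n_iter - 1)) * np.abs(X).max()
        X[np.abs(X) < tau] = 0.0
        x = np.fft.ifft(X).real
        x = data * mask + x * (1 - mask)   # reinsert observed samples
    return x

# A signal that is sparse in the Fourier domain, with 40% of samples removed.
n = 256
t = np.arange(n) / n
clean = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 21 * t)
rng = np.random.default_rng(1)
mask = (rng.random(n) > 0.4).astype(float)
rec = pocs_interpolate(clean * mask, mask)
```

The reinsertion step is the projection onto the data-consistency set; the thresholding is the projection toward the sparse-spectrum set, and it is exactly this thresholding that the abstract above generalizes.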
Interpolation of marine data using POCS often results in a ringing effect above the water-bottom, an effect that is particularly noticeable in areas with large gaps between observed traces. We found that these artifacts are created by the application of the hard thresholding operator to the data in the frequency-wavenumber domain. Soft thresholding can mitigate these artifacts, but this often leads to amplitude bias in the interpolated traces. Peyré (2010) details a number of alternative thresholding strategies that offer a trade-off between these issues.
We investigate several thresholding strategies in POCS interpolation and arrive at a generalized thresholding function that allows the style of thresholding to be controlled by a single parameter. Should the thresholding introduce amplitude bias for interpolated traces, we provide a simple amplitude compensation step using the median RMS amplitude of original traces. Synthetic and real data examples demonstrate the effectiveness of soft thresholding plus a debiasing step.
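One common single-parameter family that interpolates between soft and hard thresholding can be written as below; this particular form, and the simple `debias` helper, are our illustration of the idea, not necessarily the exact functions used by the authors.

```python
import numpy as np

def generalized_threshold(X, tau, p=1.0):
    """Generalized thresholding of (possibly complex) coefficients:
    p = 1 gives soft thresholding; p -> infinity approaches hard thresholding."""
    a = np.abs(X)
    with np.errstate(divide="ignore", invalid="ignore"):
        scale = np.where(a > tau, 1.0 - (tau / a) ** p, 0.0)
    return X * scale

def debias(x, target_rms):
    """Rescale a trace so its RMS matches a target RMS (e.g. the median RMS
    of the observed traces), compensating thresholding-induced amplitude bias."""
    rms = np.sqrt(np.mean(x ** 2))
    return x * (target_rms / rms) if rms > 0 else x
```

For a coefficient of magnitude 2 with `tau = 1`, `p = 1` shrinks it to 1 (soft, biased toward zero) while `p = 100` leaves it essentially untouched (hard); the debiasing step then only needs to repair the residual shrinkage of the kept coefficients.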
Research in the area of data analytics and recommendation systems has led to important efforts toward solving the problem of matrix completion. The latter entails estimating the missing elements of a matrix by assuming a low-rank matrix representation. The aforementioned problem can be extended to the recovery of the missing elements of a multilinear array or tensor. Prestack seismic data in the midpoint-offset domain can be represented by a 5th-order tensor. Therefore, tensor completion methods can be applied to the recovery of unrecorded traces. Furthermore, tensor completion methodologies can also be applied to multidimensional signal-to-noise-ratio enhancement. We discuss the implementation of the Parallel Matrix Factorization (PMF) algorithm, an SVD-free tensor completion method that we applied to 5D seismic data reconstruction. The PMF algorithm expands our first generation of 5D tensor completion codes based on High Order SVD and nuclear norm minimization. We review the PMF method and explore its applicability to processing industrial data sets via tests with synthetic and field data.
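The unfolding (matricization) operation at the heart of PMF, plus one alternating least-squares sweep on an unfolding, can be sketched as follows. The rank-1 toy tensor stands in for a frequency slice of a 5-D volume; `pinv` is used for brevity, whereas PMF solves these small least-squares subproblems without any SVD of the full tensor.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: rows indexed by `mode`, columns by the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A rank-1 4th-order tensor: every unfolding has matrix rank 1, the
# low-rank property PMF exploits by fitting a factorization U_n @ V_n
# to each unfolding in parallel.
a, b, c, d = (np.arange(1, k + 1, dtype=float) for k in (3, 4, 5, 6))
T = np.einsum("i,j,k,l->ijkl", a, b, c, d)
ranks = [np.linalg.matrix_rank(unfold(T, n)) for n in range(4)]

# One alternating least-squares sweep for a rank-1 fit X ~ U @ V.
X = unfold(T, 0)
rng = np.random.default_rng(0)
V = rng.standard_normal((1, X.shape[1]))
U = X @ np.linalg.pinv(V)   # update U with V fixed
V = np.linalg.pinv(U) @ X   # update V with U fixed
```

For an exactly rank-1 unfolding, a single sweep already reproduces the matrix; for noisy, partially observed data the factors are instead updated repeatedly against the observed entries only.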
In recent years, the development of recommendation systems has become an important area of research for data scientists (Koren et al., 2009). A recommendation system (or recommender system) is an algorithm that attempts to predict the rating that a user or customer will give to an item. Recommendation systems have become quite popular in e-commerce for predicting ratings of movies, books, news, research articles, etc. In Figure 1, we provide a simplified example of a data matrix with ratings of a series of movies. In practice, recommendation systems involve thousands of users rating thousands of items; our figure is merely for illustrative purposes. A rating of 5 means that the user liked the movie; a rating of 1 means that he/she did not like the movie. Question marks indicate that the movie has not been rated by the user. From this table (matrix) one can immediately infer that the data could be predicted by simple examination of patterns or relationships between users and movies. For instance, users who liked romantic movies appear not to like action movies. The main task for the recommendation algorithm is to extract patterns that might exist in the data and use them to predict the rating a user would have given to an item he/she did not rate. The unknown ratings can be found by solving the so-called Matrix Completion problem (Recht, 2011). A similar problem is also present in seismic data processing (Kreimer and Sacchi, 2011).
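The matrix completion idea can be demonstrated on a toy rank-1 "ratings" matrix: alternately project onto the set of rank-1 matrices and reinsert the known entries. The numbers and the simple truncated-SVD scheme are illustrative only; production methods (including the nuclear-norm approach discussed below) differ in detail.

```python
import numpy as np

def complete_matrix(M, mask, rank=1, n_iter=500):
    """Fill unobserved entries of M (mask == 1 where observed) by alternating
    rank-`rank` truncation with reinsertion of the observed entries."""
    X = M * mask
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0            # project onto rank-1 matrices
        X = (U * s) @ Vt
        X = M * mask + X * (1 - mask)   # reinsert known ratings
    return X

# Hypothetical rank-1 ratings matrix with two unobserved entries.
u = np.array([5.0, 4.0, 1.0, 2.0])
v = np.array([1.0, 0.8, 0.2])
M = np.outer(u, v)
mask = np.ones_like(M)
mask[0, 2] = mask[3, 0] = 0.0
rec = complete_matrix(M, mask)
```

Because the underlying matrix is exactly rank 1 and the observed pattern determines it uniquely, the iteration recovers the two hidden entries; this is the same structural assumption, low rank plus partial observation, that makes seismic trace reconstruction tractable.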
We explore the tensor completion problem in a dataset obtained over a heavy oil field in the Western Canadian Sedimentary Basin. The fully sampled prestack volume in the frequency-space domain is a low-rank fourth-order tensor. In this context, the reconstruction problem becomes a tensor completion problem that we choose to solve via a nuclear norm minimization approach. The nuclear norm of a tensor is the closest convex approximation to the rank of a tensor. The reconstruction problem entails minimizing the nuclear norm of the tensor subject to data constraints and is implemented with the alternating direction method of multipliers. Our land-data example demonstrates the performance of this reconstruction method.
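The workhorse inside such nuclear-norm/ADMM schemes is the proximal operator of the nuclear norm, i.e. soft thresholding of the singular values. A matrix-case sketch with hypothetical sizes (for tensors, the same operator is applied to unfoldings):

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft thresholding: the proximal operator of
    tau * ||X||_* (nuclear norm), the core ADMM update."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# A noisy rank-1 matrix: shrinking the singular values removes the small
# noise singular values and restores the low-rank structure.
rng = np.random.default_rng(0)
L = np.outer(rng.standard_normal(6), rng.standard_normal(5))
noisy = L + 0.01 * rng.standard_normal((6, 5))
den = svt(noisy, tau=0.5)
```

The shrinkage by `tau` is what makes the nuclear norm a convex surrogate for rank: small singular values are driven exactly to zero while large ones survive.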
Many common processing steps are degraded by static shifts in the data. The effects of static shifts are analogous to noise or missing samples, and therefore can be treated using constraints on sparsity or simplicity. In this paper we show that random static shifts decrease sparsity in the Fourier and Radon transforms, as well as increase the rank of seismic data. We also show that the concepts of sparsity promotion and rank reduction can be used to solve for static shifts as well as to carry out conventional processes in the presence of statics. The first algorithm presented is a modification to the reinsertion step of Projection Onto Convex Sets (POCS) and Tensor Completion (TCOM) that allows for the compensation of residual statics during 5D denoising and interpolation of seismic data. The method can either preserve residual statics during denoising or correct them in the case of simultaneous denoising and interpolation. Examples are shown for a 5D reconstruction of synthetic data with added noise, missing traces and random static shifts, as well as for a 2D stacked section with missing traces and static shifts. While standard reconstruction struggles in the presence of even small static shifts, reconstruction with simultaneous estimation of statics is able to accurately reconstruct the data. The second algorithm presented is a Statics Preserving Sparse Radon transform (SPSR). This algorithm includes statics in the Radon basis functions, allowing for a sparse representation of statics-contaminated data in the Radon domain.
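The claim that random static shifts increase the rank of seismic data is easy to demonstrate: a flat event is rank 1 across traces, while the same wavelet under random integer shifts is not. The Gaussian wavelet and circular `shift` helper below are illustrative stand-ins for real traces and statics.

```python
import numpy as np

def shift(trace, k):
    """Apply an integer-sample static shift (circular, for simplicity)."""
    return np.roll(trace, k)

n_t, n_x = 64, 16
wavelet = np.exp(-0.5 * ((np.arange(n_t) - 32) / 3.0) ** 2)

# A flat event: every trace identical, so the time-by-trace matrix is rank 1.
flat = np.tile(wavelet, (n_x, 1)).T

# The same event with random per-trace statics of up to +/- 8 samples.
rng = np.random.default_rng(0)
shifts = rng.integers(-8, 9, size=n_x)
shifted = np.stack([shift(wavelet, int(k)) for k in shifts], axis=1)

rank_flat = np.linalg.matrix_rank(flat)
rank_shifted = np.linalg.matrix_rank(shifted)
```

This is why rank-reduction reconstruction degrades under statics, and why building the shifts into the reinsertion step (or into the Radon basis functions, as in SPSR) restores a simple model of the data.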
The quaternion Fourier transform is used to obtain a Fourier domain representation of a multicomponent seismic record. Projection Onto Convex Sets (POCS) is used as a means to reconstruct all three components at once. The results of this process are compared with standard component-by-component reconstruction. The algorithm is applied to 2D and 3D 3C synthetic data but it could be easily extended to a higher number of dimensions and components.
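As a simplified stand-in for the quaternion-domain approach (no quaternion Fourier transform is implemented here), the key difference from component-by-component processing can be shown by thresholding all components on their joint Fourier amplitude, so the components share one support. The function name `joint_threshold` and the toy signals are our own.

```python
import numpy as np

def joint_threshold(components, tau):
    """Hard-threshold several components jointly on their combined Fourier
    amplitude: a coefficient is kept in every component if the vector
    magnitude across components exceeds tau."""
    specs = [np.fft.fft(c) for c in components]
    joint = np.sqrt(sum(np.abs(S) ** 2 for S in specs))
    keep = joint >= tau
    return [np.fft.ifft(S * keep).real for S in specs]

# Strong in-line and weak cross-line components sharing one frequency.
n = 64
t = np.arange(n) / n
x_comp = np.sin(2 * np.pi * 5 * t)
y_comp = 0.01 * np.sin(2 * np.pi * 5 * t)
z_comp = np.zeros(n)
tau = 1.0
out = joint_threshold([x_comp, y_comp, z_comp], tau)
```

Thresholded on its own spectrum, the weak component would be zeroed entirely (its peak amplitude is below `tau`); thresholded jointly, it survives because the strong component vouches for the shared support, which is the practical benefit of treating a multicomponent record as a single algebraic object.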