Interferometry has its foundation in Green's theorem. Dual measurements (pressure data and its normal derivative) are required to satisfy this theorem. Interferometry makes one approximation for each normal derivative in the theorem to avoid needing dual measurements, compromising the theory and giving rise to artifacts or spurious multiples. In this paper, an analytic example is provided to explicitly show the creation of spurious multiples. An analogous analytic example using Green's theorem will demonstrate its inner workings and the fact that the information in the normal derivative is necessary to avoid creating artifacts. A different form of Green's theorem, where the normal derivatives are not required, will also be provided.
The attention given by the energy industry over the last decade to methods for wavefield retrieval through interferometry, and to their applications to different seismic exploration problems, together with their concomitant representation in the published literature, has brought about a renewed interest in Green's theorem. The reason is that all the different approaches to what we call seismic interferometry (techniques that synthesize the field propagating between two receivers by correlating the signals recorded at these two receivers) can be derived from this single unifying framework: Green's theorem (1828), and its extensions by Betti (1872) and Rayleigh (1873). In recent publications, Ramírez et al. (2007a,b) used Green's theorem to:
Show that it provides a platform and unifying principle for the field of seismic interferometry. The reader interested in the different derivations of seismic interferometry is referred to Lobkis and Weaver (2001); Derode et al. (2003); Roux and Fink (2003); Wapenaar (2004); Bakulin and Calvert (2004) and references within.
Understand artifacts, or spurious multiples, as fully anticipated errors and as direct violations of the theory, rather than as some mystery or numerical manifestation that ought to be explained with physics.
Provide a systematic approach to understanding, comparing, and improving upon many current concepts and approximations.
Provide an overview of a broad set of seismic applications that recognize Green's theorem as the starting point of their theory and algorithms.
In this manuscript, spurious multiples are closely examined and explained. In addition to the important papers cited above, we note the significant paper by Snieder et al. (2006), in which the term spurious multiples was first introduced to refer to artifacts created by interferometry. Their analysis was performed on data containing only singly reflected events, using the stationary-phase derivation of seismic interferometry. They showed, through analytic examples, how the kinematics of singly reflected waves propagating between two receivers can be reproduced by correlating the primaries produced by all sources and recorded by these two receivers. They also showed that correlating primaries generated at different reflectors produces artifacts, which they call spurious multiples. These artifacts are not true multiples; the word multiples is used because each artifact carries the product of two amplitudes (reflection coefficients) from two different recorded primaries. In seismic interferometry, any two events contained in the total wavefields of two receiver gathers or shot gathers will make a nonzero contribution.
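The cross-term mechanism behind these artifacts can be illustrated numerically. In the minimal sketch below, all values (sample interval, traveltimes, and reflection coefficients) are hypothetical choices for illustration, not taken from Snieder et al. (2006). An idealized trace containing two spike primaries from different reflectors is correlated with itself; besides the zero-lag term, the correlation produces an event at the lag t2 - t1 whose amplitude is the product of two reflection coefficients, corresponding to no physical arrival:

```python
import numpy as np

dt = 0.004                 # sample interval in seconds (assumed)
n = 500                    # number of samples
t1, t2 = 0.40, 0.72        # primary traveltimes from two different reflectors (hypothetical)
r1, r2 = 0.3, 0.2          # corresponding reflection coefficients (hypothetical)

# Idealized trace: two spike primaries, no wavelet, no noise
trace = np.zeros(n)
trace[int(round(t1 / dt))] = r1
trace[int(round(t2 / dt))] = r2

# Correlate the trace with itself, as in interferometric synthesis
xcorr = np.correlate(trace, trace, mode="full")   # lags run from -(n-1) to +(n-1)
zero_lag = n - 1

# Cross-term between the two primaries: an event at lag t2 - t1 whose
# amplitude is r1 * r2, a product of two reflection coefficients --
# the "spurious multiple"
i_spur = zero_lag + int(round((t2 - t1) / dt))
spurious_amplitude = xcorr[i_spur]
```

The correlation contains only three nonzero samples: the zero-lag autocorrelation term and the cross-terms at lags +/-(t2 - t1); it is the latter pair that interferometry can mistake for a physical event.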
Internal multiples are multiply reflected events in the measured wavefield that have experienced all of their downward reflections below the free surface. The order of an internal multiple is defined as the number of downward reflections it experiences, without reference to the locations of those downward reflections. The objective of internal multiple elimination using only recorded data and information about the reference medium is achievable directly through the inverse scattering task-specific subseries formalism. The first term in the inverse scattering subseries for first-order internal multiple elimination is an attenuator, which predicts the correct traveltime and an amplitude always smaller than the true internal multiple's amplitude. The leading- and higher-order terms in the elimination series correct the amplitude predicted by the attenuator, moving the algorithm towards an eliminator. Leading-order as an eliminator means it eliminates a class of internal multiples and further attenuates the rest. Summing the leading-order terms in a closed form provides an algorithm that eliminates all internal multiples generated at the shallowest reflector. The generating reflector is the location where the downward reflection of a given first-order internal multiple took place. The higher-order subseries and its closed form correct the attenuation by incorporating information on the overburden above deeper generating reflectors. A prestack form of the algorithm, which can be extended to a multidimensional form, is given for the leading-order subseries and its closed form.
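The relation between the attenuator's predicted amplitude and the true multiple's amplitude can be sketched with a back-of-the-envelope 1D normal-incidence calculation. The two-interface model below, with its reflection coefficients and traveltimes, is assumed purely for illustration, and only magnitudes are compared (signs are omitted). The recorded deeper primary carries the two-way transmission loss of the shallower interface, so a prediction built from three recorded primary amplitudes undershoots the true multiple by that transmission factor in this simple model; this is the deficit the leading- and higher-order terms restore:

```python
# Hypothetical 1D normal-incidence model with two interfaces
R1, R2 = 0.2, 0.3        # reflection coefficients (assumed)
t1, t2 = 0.5, 0.8        # primary traveltimes in seconds (assumed)

T = 1.0 - R1**2          # two-way transmission loss across interface 1

# Recorded primary amplitudes (the deeper primary crosses interface 1 twice)
a1 = R1
a2 = T * R2

# First-order internal multiple generated at interface 1 (magnitudes only):
# path: down through 1, reflect at 2, downward reflection at 1, reflect at 2, up through 1
t_mult = 2 * t2 - t1               # predicted and true traveltime agree
a_true = T * R2 * R1 * R2          # = R1 * R2**2 * (1 - R1**2)

# Attenuator-style prediction from three recorded primary amplitudes:
# deeper primary, shallower primary, deeper primary
a_pred = a2 * a1 * a2              # = R1 * R2**2 * (1 - R1**2)**2

ratio = a_pred / a_true            # = 1 - R1**2 < 1: attenuation, not elimination
```

In this toy model the prediction is smaller than the true amplitude by exactly the factor 1 - R1**2, consistent with the text's statement that the attenuator always predicts an amplitude below the true one.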
Seismic exploration is an inverse problem. The seismic data are inverted for the properties of the medium that created them. In exploration seismology, the medium properties correspond to the characteristics of the Earth's subsurface, and include the spatial location of the reflectors as well as the density and elastic properties of the layers between reflectors.
Events in recorded seismic data can be classified by the number of reflections they have experienced. Primaries are seismic events that have experienced one reflection, most often upward, whereas multiples are recorded seismic events that have experienced more than one reflection. Multiples are further classified by the spatial location of the downward reflections within their histories. A multiple that has at least one downward reflection at the free surface is a free surface multiple. A multiple that has all of its downward reflections below the free surface is an internal multiple. Source and/or receiver ghost events and direct waves are assumed to be removed before these definitions and classifications are applied.
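This classification can be made concrete as a small decision rule. The event encoding below (a list of reflections, each tagged with a bounce direction and a location) is our own illustrative convention, not a standard data format:

```python
def classify_event(reflections):
    """Classify a seismic event from its reflection history.

    `reflections` is a list of (direction, location) tuples, one per
    reflection, where direction is "up" or "down" (the sense of the
    reflection) and location is "free_surface" or a subsurface
    reflector label.  Ghosts and direct waves are assumed to have
    been removed already, per the definitions in the text.
    """
    if len(reflections) == 1:
        return "primary"
    down_locations = [loc for d, loc in reflections if d == "down"]
    # At least one downward reflection at the free surface
    if "free_surface" in down_locations:
        return "free surface multiple"
    # All downward reflections below the free surface; the number of
    # downward reflections gives the order of the internal multiple
    return "internal multiple"
```

For example, an event with history [("up", "R2"), ("down", "R1"), ("up", "R2")] has its single downward reflection below the free surface and is classified as an internal multiple.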
In addition to the recorded data with the necessary wavefield components, some standard processing algorithms require source wavelet deconvolution, deghosting, seismic data reconstruction (interpolation and extrapolation), regularization and/or redatuming. Seismic data processing is usually accomplished in a sequence of steps, e.g., removal of multiples, depth imaging or migration, and inversion for changes in Earth properties. The standard practice is to perform these steps in a specific order because each step is a pre-processing condition for the next procedure. The removal of multiples is a longstanding problem, of considerable moment and interest, with outstanding theoretical and practical issues yet to be understood and addressed.