Summary

3D general surface multiple prediction (GSMP) is a data-driven 3D SRME algorithm that solves the problem of trace sparseness. Rather than overcoming the sparseness problem by changing the data to fit the algorithm (for example, by means of regularization and interpolation), GSMP changes the SRME algorithm to fit the data. This not only makes GSMP a universal compute engine for the 3D prediction of multiples, but also makes it quite versatile. We illustrate this versatility by showing successful applications of GSMP to narrow-azimuth, wide-azimuth, and rich-azimuth seismic surveys.

Introduction

In recent years, the seismic industry's repertoire of practical marine survey designs has increased dramatically. In particular, an effort to improve subsurface images in complex areas has led to narrow-azimuth (NAZ) streamer surveys being replaced by multi-azimuth (MAZ), wide-azimuth (WAZ), and rich-azimuth (RAZ) surveys. One mechanism by which such surveys are expected to improve images is the inherent multiple attenuation of a many-azimuth stack. Indeed, Kapoor et al. (2007) have reported that WAZ data without multiple attenuation can produce better-quality images than NAZ data with multiple attenuation. On the other hand, the same authors also point out that many-azimuth datasets still benefit from residual multiple attenuation. Thus, in spite of the industry's fondest hopes, it seems that even with MAZ, WAZ, and RAZ surveys, we should not dispense with multiple attenuation.

Ideal 3D surface-related multiple elimination (SRME) is a data-driven process in which seismic traces are manipulated to predict surface multiples (van Dedem and Verschuur, 2001). For each input trace, selected pairs of traces are convolved to obtain a 3D volume called a multiple contribution gather (MCG) (Figure 1a). Stacking an MCG then produces the predicted multiples for the targeted input trace.
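The MCG construction just described can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the assumption that the selected trace pairs arrive as two equal-length arrays, and the truncation of each convolution to the recording length are all illustrative choices.

```python
import numpy as np

def predict_multiples_for_trace(pair_traces_a, pair_traces_b, nt):
    """Sketch of ideal 3D SRME prediction for one target trace.

    pair_traces_a, pair_traces_b: arrays of shape (npairs, nt) holding
    the selected trace pairs covering the surface aperture (illustrative
    layout). Returns the predicted multiples for the target trace.
    """
    mcg = np.zeros((len(pair_traces_a), nt))
    for i, (a, b) in enumerate(zip(pair_traces_a, pair_traces_b)):
        # Convolve each selected pair; keep the first nt samples so the
        # contribution aligns with the recording time of the target trace.
        mcg[i] = np.convolve(a, b)[:nt]
    # Stacking the multiple contribution gather yields the prediction.
    return mcg.sum(axis=0)
```

In practice the MCG is a 3D volume indexed by surface position; collapsing it to a list of pairs, as above, is only a convenience for the sketch.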
Unfortunately, ideal 3D SRME requires far more traces, and a far different distribution of traces, than are recorded in any marine survey. 3D general surface multiple prediction (GSMP) overcomes the sparse trace distribution problem quite effectively (Moore and Dragoset, 2008). The basic concept of GSMP was first published in 2005 (Bisley et al., 2005). Since then, Kurin et al. (2006) have described an approach similar to GSMP, and Ceragioli et al. (2007) have briefly mentioned the basic GSMP concept. Although it was initially envisioned as a way to address feathering issues in NAZ surveys, GSMP is equally applicable to MAZ, WAZ, and RAZ datasets. In this paper, we take a closer look at the GSMP algorithm to explain its versatility. We also show some GSMP results from NAZ, WAZ, and RAZ surveys.

GSMP Methodology

The GSMP algorithm is as follows:

1. Input all recorded traces along with nominal velocity functions.
2. Compute the midpoint, offset, and azimuth of each trace.
3. Select a target trace and define the aperture and computational grid (Figure 1b) for that trace.
4. For each grid node in the aperture, use a nearest-neighbor search to select, from among the input traces, the two best traces for that convolution.
5. Compensate the two selected traces for offset errors using differential normal moveout.
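The nearest-neighbor search in the trace-selection step can be sketched as follows. The function name, the 2D midpoint representation, and the brute-force distance computation are illustrative assumptions; the paper does not specify how the search is implemented.

```python
import numpy as np

def nearest_trace_indices(grid_nodes, trace_midpoints):
    """For each aperture grid node, return the index of the recorded
    trace whose midpoint lies closest to that node.

    grid_nodes:      (nnodes, 2) array of node coordinates.
    trace_midpoints: (ntraces, 2) array of trace midpoint coordinates.
    """
    # Pairwise node-to-midpoint distances, shape (nnodes, ntraces),
    # computed by broadcasting; adequate for a sketch, though a spatial
    # index would be used for survey-sized inputs.
    d = np.linalg.norm(
        grid_nodes[:, None, :] - trace_midpoints[None, :, :], axis=2
    )
    return d.argmin(axis=1)
```

Running the search twice, once for the source-side leg and once for the receiver-side leg of each node, yields the two traces whose convolution contributes to that node; their residual offset errors are then corrected with differential normal moveout, as step 5 describes.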
Simultaneous Source Separation Using Dithered Sources
Moore, Ian (WesternGeco) | Dragoset, Bill (WesternGeco) | Ommundsen, Tor (WesternGeco) | Wilson, David (WesternGeco) | Eke, Daniel (WesternGeco) | Ward, Camille (WesternGeco)
Summary

We describe a new algorithm that uses known firing times to separate data from two or more impulsive, simultaneous seismic sources. Synthetic and field data tests show that the algorithm works well, especially when the data are not spatially aliased. Aliasing effects can be reduced if assumptions, such as that the data are in some sense sparse, are made.

Introduction

In conventional data acquisition, the delay time between the firing of one source and the next is such that the energy from the previous source has decayed to an acceptable level before data associated with the following source arrive. This minimum delay time imposes constraints on the data acquisition rate. For marine data, the minimum delay time also implies a minimum inline shot interval, because the vessel's minimum speed is limited. Acquisition of simultaneous source data, such that signals from two or more sources interfere for at least part of each record, clearly has enormous potential benefits in terms of acquisition efficiency and inline source sampling. For such data to be useful, however, it is necessary to develop processing algorithms that handle the source interference. The simplest methodology is to separate the energy associated with each source as a preprocessing step, and then to proceed with conventional processing. Beasley et al. (1998) describe one method by which this separation may be achieved when the sources are spatially separated. Another method for enabling or enhancing separability is to make the delay times between interfering sources incoherent (Lynn et al., 1987). When traces are then collected into a domain that includes many firings of each source, and are aligned such that time zero corresponds to the firing time of a specific source, the signal from that source appears coherent while the signal from the other sources appears incoherent. This allows the signals to be separated based on coherency. Stefani et al. (2007) have used random noise attenuation to separate the coherent signal from the apparently incoherent signal, with some success. The following section describes a better separation method. The improvement comes from the observation that the apparently incoherent signal is not mathematically incoherent, because the time delays that make it appear incoherent are known. The separation method also has applications for ocean-bottom cable and land data, and for seismic-interference noise removal. These applications offer extra opportunities compared to marine data, but also pose extra problems. For example, land data allow for more flexible geometries and better source-signature control. However, they are generally noisier than marine data, and the effect of statics can be significant.

Separation Method

The separation method can be applied to any number of sources, but is described here for the case of two, denoted S1 and S2. The method works very well on synthetic data (Figure 1), provided certain assumptions are met. The main limitation in the testing so far has been aliasing, which creates leakage of high-frequency, high-dip events between the sources (Figure 2). The use of high-resolution (sparse) Radon transforms (Moore and Kostov, 2002) mitigates, but does not eliminate, this limitation (Figure 3).
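The alignment step that underlies coherency-based separation, shifting each record by the known dither of one source so that its energy lines up while the other source's energy stays scrambled, can be sketched as below. The function name, the per-trace delay array, and the zero-padded integer-sample shift are illustrative assumptions; the paper's actual method exploits the known delays in a more sophisticated way.

```python
import numpy as np

def align_to_source(records, delays, dt):
    """Shift each record by its known firing-time delay (in seconds)
    so that energy from the chosen source aligns across traces, while
    energy from the other source remains misaligned (incoherent).

    records: (ntraces, nt) array of recorded traces.
    delays:  per-trace dither times for the chosen source, in seconds.
    dt:      sample interval in seconds.
    """
    nt = records.shape[1]
    out = np.zeros_like(records)
    for i, (rec, tau) in enumerate(zip(records, delays)):
        shift = int(round(tau / dt))
        if shift >= 0:
            # Advance the trace: samples recorded after the dithered
            # firing time move to time zero of the aligned trace.
            out[i, :nt - shift] = rec[shift:]
        else:
            # Retard the trace for negative dithers.
            out[i, -shift:] = rec[:nt + shift]
    return out
```

Aligning to S1's firing times and applying a coherency filter extracts S1; repeating with S2's delays extracts S2. The key observation in the text is that because the delays are known exactly, this alignment is deterministic rather than statistical.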