The term "simultaneous source" refers to firing several seismic sources so that their combined energy is recorded by the same set of receivers during a single conventional shotpoint timing cycle. The idea is to collect the equivalent of two or more shots' worth of data in the time it takes to collect one. The potential advantages include cost and time savings in field acquisition, which is of renewed interest due to the popularity and expense of WATS data. The work presented here was motivated by observations made on a 3D dataset acquired over the Petronius field in the Gulf of Mexico with two source vessels. The second source was fired with a random delay relative to the first, so that the energy from the secondary source resembles asynchronous noise. While the random nature of the crosstalk, in combination with the two known geometries, had been enough to successfully apply relatively standard processing techniques to other studied datasets, we found that this one required an improvement on those techniques. This paper describes a high-resolution (sparse) Radon-based separation technique with that aim. We find that while the technique does not by itself accomplish the full separation, it separates the data sufficiently to allow subsequent standard noise-attenuation techniques to complete the task.
A more detailed motivation for simultaneous source recording, and several investigations of the idea, can be found in Beasley et al. (1998), de Kok and Gillespie (2002), Stefani et al. (2007), and references therein. This work is based on the 3D data described by Stefani et al. (2007), which were collected over the Petronius field in the Gulf of Mexico with two sources. The second source was fired with a random delay relative to the first, such that the arriving energy from both sources is recorded within a conventional shotpoint timing cycle. Due to the randomness of the delay, the crosstalk between primary and secondary source energy resembles asynchronous coherent noise, such as seismic interference, which can be expected to be fairly effectively attenuated by conventional stacking and migration procedures (Krey, 1987; Stefani et al., 2007).
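As a toy illustration of why the random delay helps, the numpy sketch below (all shot times, dither range, and event times are invented for illustration) aligns records to the firing time of the first source: the primary event stacks coherently, while the dithered secondary energy smears out like random noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_samples, dt = 50, 1000, 0.004

# Hypothetical event times (s): a primary reflection at a fixed time after
# source 1 fires, and a secondary-source arrival at a fixed time after
# source 2 fires, where source 2 is dithered by a random delay.
primary_t, secondary_t = 1.2, 1.5
dither = rng.uniform(0.0, 1.0, n_shots)

records = np.zeros((n_shots, n_samples))
for i in range(n_shots):
    records[i, round(primary_t / dt)] = 1.0
    records[i, round((secondary_t + dither[i]) / dt)] = 1.0

# Aligned to source 1, the primary event stacks coherently, while the
# dithered secondary energy is spread across many time samples.
stack = records.mean(axis=0)
print(stack[round(primary_t / dt)])  # -> 1.0 (coherent primary)
print(stack[round(secondary_t / dt):].max())  # much smaller (smeared crosstalk)
```

Stacking (and, analogously, migration) thus suppresses the crosstalk roughly in proportion to the fold, which is the mechanism the conventional procedures cited above exploit.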
As Beasley et al. (1998) describe, in order to reduce crosstalk between two sources in the final product beyond what is achieved by treating it as random noise, one can make explicit use of the fact that both sources are fired at known locations and with known timing. The resulting distinct geometries of the two wavefields can be used to define "geometry-related filters". Such filtering relies on coherency along predictable trajectories and is included in the technique described here. At least two important features of the Petronius dataset make it more of a challenge than the other datasets presented by Stefani et al. (2007). First, this 3D dataset was collected to investigate simultaneous rich-azimuth recording, so the acquisition geometry was different. In particular, the gun arrays were positioned such that the primary and secondary events are less orthogonal than in previously examined datasets.
Shear-wave splitting estimation and compensation (SEAC) removes the effects of shear-wave splitting from converted-wave data. A locally one-dimensional earth is assumed, where a priori rotation of the field data to radial-transverse coordinates is valid. Subsurface fractures (HTI layers, for example) polarize converted-wave reflection energy onto the transverse component and introduce azimuth-dependent traveltime variations to the radial component. SEAC estimates the fast principal direction of the fractures and the amount of traveltime splitting from input radial and transverse azimuth-sectored stack gathers. A splitting-compensated radial dataset and a data-misfit transverse component are output. Local fracture variations not accounted for in the relatively coarse layer stripping may be interpreted in the data misfit.
Shallow buried heterogeneities act as diffractors to an incoming surface wave. Experimental and numerical analyses are conducted to analyze the problem and to propose a methodology for mapping shallowly embedded objects. A layered system with a high impedance contrast at its base and containing shallowly buried drums is tested experimentally. Stacked maps of frequency, velocity, and lateral position show the presence of the drums. We hypothesize that the most significant impact of the shallow obstacle on the incident wavefield is diffraction of Rayleigh-wave energy. Synthetic seismograms generated through numerical modeling support the hypothesis. The interpretation of the data is complicated by factors such as acquisition footprint and frequency content; these factors remain to be investigated.
The energy angle-distributions in the local image matrix (LIM) for a planar reflector and for a discontinuous point differ: the former exhibits a linear energy concentration along a certain dip direction, while the latter shows a scattered energy distribution. Therefore, the cross-correlation of the local image matrices at adjacent image points can be used to distinguish the two cases. The seismic images of these diffraction points may provide important information about geological discontinuities.
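A minimal numpy sketch of this discriminator, with synthetic stand-ins for the local image matrices (matrix sizes and energy patterns are invented for illustration): adjacent LIMs over a planar reflector correlate strongly, while scattered, diffraction-like distributions decorrelate.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two local image matrices."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
n = 32

# Planar reflector: energy concentrated along one dip direction, so the
# LIMs at adjacent image points are nearly identical.
lim_reflector_a = np.zeros((n, n)); lim_reflector_a[:, 10] = 1.0
lim_reflector_b = np.zeros((n, n)); lim_reflector_b[:, 10] = 1.0

# Diffraction point: scattered energy that decorrelates between adjacent
# image points (modeled here as independent random fields).
lim_diff_a = rng.standard_normal((n, n))
lim_diff_b = rng.standard_normal((n, n))

print(ncc(lim_reflector_a, lim_reflector_b))  # ~1.0 (coherent)
print(ncc(lim_diff_a, lim_diff_b))            # near zero (scattered)
```

Thresholding this correlation value then flags candidate diffraction points for further interpretation.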
We describe a new algorithm that uses known firing times to separate data from two or more impulsive, simultaneous seismic sources. Synthetic and field data tests show that the algorithm works well, especially when the data are not spatially aliased. Aliasing effects can be reduced by making additional assumptions, for example that the data are in some sense sparse.
This paper presents a target-oriented methodology for performing illumination studies with several different acquisition geometries. The Green’s function, used to evaluate the illumination amplitude at specific subsurface points, was estimated using the complete two-way wave equation solved by a finite-difference method.
The proposed methodology requires considerably fewer resources, in terms of processing time and disk space, than the traditional wave-equation procedure, which compares depth images obtained by a migration scheme after modeling the complete dataset for a given acquisition geometry. In complex areas, both methodologies are more accurate than results based on ray-tracing modeling. Numerical results are presented and analyzed on a modified 2D version of the SEG/EAGE Salt Dome. This model was selected because, to obtain good depth images underneath salt bodies with complex geometries, the first necessary condition is an appropriately acquired dataset, and illumination analysis is the first step in that direction.
The acquisition of n shots more or less simultaneously increases acquisition efficiency and collects a wider range of information for imaging and reservoir characterisation. Its success relies critically on the ability to separate the n shots from one recording. Stefani et al. (2007) showed that while some datasets may be easily separated, others are more difficult. Using the more difficult data example from Stefani et al. (loc. cit.), we show that a PEF-based adaptive subtraction (Spitz, 2007) of the estimated wavefield due to a secondary source provides an effective separation of the sources.
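The PEF-based adaptive subtraction of Spitz (2007) is a multichannel prediction-error-filter method; as a conceptual sketch only, the single-channel least-squares matching-filter subtraction below (all signals and filter lengths invented) illustrates the underlying idea of matching the estimated secondary wavefield to the recording and subtracting it.

```python
import numpy as np

def matching_filter_subtract(data, model, nf=11):
    """Estimate a short filter f minimizing ||data - f * model||_2
    (least squares) and subtract the matched model from the data."""
    n = len(data)
    # Convolution matrix of the modeled (estimated) secondary wavefield.
    C = np.zeros((n, nf))
    for k in range(nf):
        C[k:, k] = model[: n - k]
    f, *_ = np.linalg.lstsq(C, data, rcond=None)
    return data - C @ f

rng = np.random.default_rng(2)
primary = rng.standard_normal(500)     # wavefield of source 1 (toy)
secondary = rng.standard_normal(500)   # estimated wavefield of source 2 (toy)
wavelet = np.array([0.4, 1.0, 0.3])    # unknown distortion of the estimate
recorded = primary + np.convolve(secondary, wavelet)[:500]

separated = matching_filter_subtract(recorded, secondary)
misfit = np.linalg.norm(separated - primary) / np.linalg.norm(primary)
print(misfit)  # small residual crosstalk remains
```

The adaptive element is the estimated filter, which absorbs the unknown wavelet distortion between the modeled and recorded secondary energy; the multichannel PEF formulation extends the same idea across traces.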
We present a theory for multiply-scattered waves in layered media which takes into account wave interference. The inclusion of interference in the theory leads to a new description of the phenomenon of wave localization and its impact on the apparent attenuation of seismic waves. We use the theory to estimate the localization length at a CO2 sequestration site in New Mexico at sonic frequencies (2 kHz) by performing numerical simulations with a model taken from well logs. Near this frequency, we find a localization length of roughly 180 m, leading to a localization-induced quality factor Q of 360.

Introduction
Intrinsic seismic attenuation bears the direct imprint of rheological properties and fluid conditions in the subsurface and is thus a valuable parameter to measure in the field. Such a measurement is complicated by the fact that subsurface conditions not necessarily related to rheology or fluids, for example heterogeneity, also attenuate seismic waves. As a result, heterogeneity causes field measurements of attenuation to reflect an apparent instead of an intrinsic attenuation (Gorich and Muller, 1987). In layered media, the apparent attenuation is a weighted combination of intrinsic attenuation and scattering attenuation due to reflection and transmission at interfaces. As famously shown by O'Doherty and Anstey (1971), the multiple scattering of waves must be taken into account to properly gauge the attenuation due to scattering. White et al. (1990) and Shapiro and Zien (1993) have further shown that a particularly strong type of multiple scattering, known as wave localization, is key to the understanding of scattering attenuation in layered media.
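For reference, one common statement of the O'Doherty-Anstey result is given below; this form follows later treatments of stratigraphic filtering and is not quoted from the original paper.

```latex
% O'Doherty-Anstey stratigraphic filtering: the amplitude spectrum of the
% transmitted wave decays with one-way traveltime t as
% |T(\omega)| \approx \exp\!\left[-R(\omega)\, t\right],
% where R(\omega) is the power spectrum of the reflection-coefficient
% series expressed as a function of traveltime.
|T(\omega)| \approx \exp\!\left[-R(\omega)\, t\right]
```

The exponent couples the medium's reflectivity spectrum directly to the apparent (scattering) attenuation, which is why multiple scattering cannot be neglected when estimating intrinsic attenuation.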
We adapt a recently published theory (Haney and van Wijk, 2007) for multiply-scattered waves to describe scattering attenuation in a general layered subsurface model. An example of such a subsurface model is one constructed from well logs. The modifications are needed since the original theory shown in Haney and van Wijk (2007) used a model of identical thin layers randomly located within a homogeneous background medium. Here, this restriction is relaxed and a model consisting of layers of random density, P-wave velocity, and thickness is assumed.
The theory takes into account wave interference and is therefore able to represent wave localization. We find that scattering attenuation is in fact a combination of two distinct scattering mechanisms: one due to scattering out of the main direction of wave propagation which would exist in the absence of interference and the other due to wave localization. The length scale over which the latter mechanism acts is called the localization length and is critical to assessing the amount of scattering attenuation in a particular model. We show an application of the theory to the estimation of the localization length in a 1D model taken from well logs at the West Pearl Queen Field, a CO2 sequestration site in New Mexico.
We consider a simple model of plane waves propagating at normal incidence to planar interfaces. The model is specified in Figure 1 by the density ρ and P-wave velocity α in each layer, as well as the layer thickness L.
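As a small worked sketch of this model setup (layer values invented for illustration), the normal-incidence interface reflection coefficients and one-way layer traveltimes follow directly from ρ, α, and L:

```python
import numpy as np

# Hypothetical three-layer model (values invented for illustration):
rho   = np.array([2000.0, 2300.0, 2100.0])   # density rho (kg/m^3)
alpha = np.array([2500.0, 3200.0, 2700.0])   # P-wave velocity alpha (m/s)
L     = np.array([100.0, 50.0, 200.0])       # layer thickness L (m)

Z = rho * alpha                               # acoustic impedance per layer
# Normal-incidence reflection coefficient at each interface:
r = (Z[1:] - Z[:-1]) / (Z[1:] + Z[:-1])
# One-way vertical traveltime through each layer:
tau = L / alpha

print(r)     # interface reflection coefficients
print(tau)   # layer traveltimes (s)
```

These reflection coefficients and traveltimes are the basic ingredients of any 1D multiple-scattering computation on such a stack.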
Spectral decomposition has proven to be a powerful means of identifying strong amplitude anomalies at specific frequencies that are otherwise buried in the broadband response. We compute the Teager-Kaiser energy for each component of a joint time-frequency representation generated from a 3D survey acquired over a Brazilian deep-water carbonate reservoir. This nonlinear energy-tracking algorithm allows us to differentiate between high-amplitude reservoir reflections and other high-amplitude reflections. We calibrate our algorithm against synthetic seismic traces generated from well logs and then apply it to the real seismic data to reveal important geological features.
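The Teager-Kaiser energy operator itself has a standard discrete form; a minimal sketch (signal parameters invented for illustration):

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

# For a pure sinusoid A*cos(w*n), the operator returns the constant
# A^2 * sin(w)^2, so it tracks both amplitude and frequency at once.
n = np.arange(200)
A, w = 2.0, 0.3
x = A * np.cos(w * n)
psi = teager_kaiser(x)
print(psi[100], A ** 2 * np.sin(w) ** 2)  # approximately equal
```

Its sensitivity to both amplitude and instantaneous frequency is what makes it useful for highlighting reservoir-related anomalies in the individual time-frequency components.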
In marine multi-component data processing, the vertical component of particle velocity (Z) is calibrated against the pressure component (P) to correct for coupling discrepancies and sensitivity differences between hydrophone and geophone. The PZ calibration step is crucial to the success of wavefield decomposition into up- and downgoing waves, as well as into compressional and shear waves. In this paper, we present a new application of the conventional cross-ghosting technique. Our PZ calibration approach is implemented in the x-t domain and considers far-offset reflected or refracted energy and its water-layer reverberations. These events are commonly and easily identified in shallow-water data, as they arrive earlier than the direct arrival. Compared with other published techniques, our method is less sensitive to near-offset noise, source bubbles, and irregular or sparse offset sampling. We successfully apply the proposed approach to an OBS dataset acquired in 150 m water depth over the Britannia field in the North Sea.
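The cross-ghosting calibration described above is considerably more elaborate; the toy sketch below shows only the underlying calibrate-then-decompose idea, with invented signals and sign conventions and with a single scalar standing in for the full calibration filter.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
up = np.zeros(n); up[50:80] = rng.standard_normal(30)   # upgoing event
down = np.zeros(n); down[150:180] = -0.9 * up[50:80]    # its ghost, later

p = up + down           # pressure records up + down (toy convention)
z = 0.4 * (up - down)   # velocity records up - down, with unknown gain 0.4

# Calibration window containing only the upgoing event: there p and the
# true z are identical, so a least-squares scalar fit recovers the gain.
w = slice(40, 120)
s = float(np.dot(p[w], z[w]) / np.dot(z[w], z[w]))  # -> 1 / 0.4 = 2.5

up_est = 0.5 * (p + s * z)      # up/down separation with calibrated Z
print(np.allclose(up_est, up))  # -> True
```

In practice the scalar is replaced by a frequency-dependent operator estimated from the cross-ghosted far-offset events and their water-layer reverberations, but the principle of fitting Z to P before decomposition is the same.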