Summary

This case study shows how new seismic technology allows the recording of good-quality surface seismic data in an area of complex geology that has historically yielded poor seismic data. The enabling technology was 80,000-lb peak-force vibrators using an energetic low-frequency sweep recording into densely sampled single-sensor accelerometers. The study area is in the Emirate of Dubai at the edge of the Oman thrust belt. It is covered by a thick layer of dry sand with relatively large sand dunes. The preferred source, based on legacy data, would have been large dynamite charges or a very high source effort with multiple long sweeps to provide broadband data with adequate low-frequency energy. Instead, this "intelligent" acquisition provided better final sections with respect to signal quality within realistic time frames.

Introduction

The paper is a case study from a 2D exploration/appraisal survey in the onshore part of the Emirate of Dubai, United Arab Emirates, recorded at the end of 2007 and early in 2008. The survey area is located at the edge of the Oman thrust belt near a producing gas/condensate field, around 45 km from the city of Dubai. Substantial reserves are to be found in the area in the Lower Cretaceous Thamama formations. However, these have been difficult to produce because the subsurface is poorly understood, owing to difficulties in acquiring seismic data of adequate quality. The aim of the survey was to explore the area around the producing field, which covers the edge of the thrust belt and part of the foredeep. The formations of interest are structural closures at depths of around 4000 m. Imaging of thrust-belt data throughout the world is generally challenging, as the reflection energy typically does not form clean hyperbolic events in the common-shot domain or the common-receiver domain as it does in a flat layer-cake geology.
Without a migration step, the energy in the common-midpoint domain may show little or no coherency. Making sense of the complex overthrust geology requires seismic data with a good signal-to-noise ratio, especially at the longer offsets and larger traveltimes inherent to thrust-belt imaging. Complex raypaths give rise to longer traveltimes, and hence more signal attenuation, than those in a flat layer-cake geology. Multiple reflections and scattered events further complicate matters. This was confirmed in the study area by a modeling study published by O'Donnell et al. (1994). The key causes of poorer data quality in this area are postulated to be velocity variations in the Aruma Group, complex deformation in that formation, and steep dips in the Hajar Supergroup. Various vintages of seismic data exist in the area of interest, including a 2D dataset recorded in the mid-1980s and a large 3D survey recorded in 2002. The 2D data are, in general, of better quality than the 3D data. As a result, the new survey parameters were loosely based on the best 2D line recorded in the area. This vintage vibroseis line used large areal arrays, but with a trace spacing of 20 m.
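The energetic low-frequency sweep mentioned above can be illustrated with a minimal sketch. The sweep parameters below (start/end frequency, length, sample interval) are assumptions for illustration, not the survey's actual design:

```python
import numpy as np

def linear_sweep(f0, f1, length_s, dt=0.002):
    """Linear vibroseis upsweep from f0 to f1 Hz over length_s seconds."""
    t = np.arange(0.0, length_s, dt)
    # Instantaneous phase of a linear sweep: 2*pi*(f0*t + (f1 - f0)*t**2 / (2*T)),
    # so the instantaneous frequency rises linearly from f0 to f1.
    phase = 2.0 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2.0 * length_s))
    return t, np.sin(phase)

# Hypothetical low-frequency-emphasizing sweep: 1.5-80 Hz over 12 s.
t, s = linear_sweep(f0=1.5, f1=80.0, length_s=12.0)
```

Lowering f0 extends the low-frequency end of the emitted spectrum, which is the key to recording broadband data with adequate low-frequency energy from a vibrator rather than dynamite.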
Summary

The noise-attenuation aspects of single-sensor acquisition are discussed in this paper, with examples from 3D seismic data acquired in West Kuwait in 2006. It is often difficult to make direct comparisons between single-sensor and existing conventional data because of differences in acquisition geometry and processing sequence. To obtain a more direct comparison and a better understanding of the uplift in quality of the recently acquired single-sensor data, this dataset was used to simulate conventional acquisition geometries, and equivalent data-processing sequences were used to make the comparisons.

Introduction

Kuwait Oil Company (KOC) acquired several conventional 3D surveys between 1995 and 1999. In an attempt to improve seismic data quality, and in particular the resolution at reservoir levels, KOC has tested and evaluated single-sensor seismic in one of its producing fields. The final comparison between conventional and single-sensor data was very much in favor of the latter in terms of S/N and resolution (El-Emam, 2006). This new work aims to quantify the improvements by removing both acquisition and processing differences from the comparison. Conventional acquisition geometry is simulated by straight-summing a number of single sensors to form an output group. The array dimensions selected for the simulation can be guided by previous conventional acquisition geometries in the area or by the most likely geometry for a new acquisition; they are also limited by the single-sensor geometry used for the simulation.

Theory and method

Conventional land seismic systems use geophone arrays to record data. These arrays consist of a number of geophones positioned in a linear or areal pattern. Simple summation of the geophone signals from such arrays does not optimally remove noise and can attenuate the seismic signal.
The imperfect response results in leakage of coherent and ambient noise into the wavenumber spectrum of the spatially resampled output space. Single-sensor fine spatial sampling allows more effective noise attenuation and perturbation correction prior to digital group forming (DGF). The result is an optimally sampled 3D wavefield ready for imaging, analysis, and interpretation. This paper demonstrates the impact on noise attenuation of using simulated conventional arrays as opposed to digital group forming. Although the effect of amplitude and time perturbations on single-sensor data improvements should be more significant, it is beyond the scope of this paper.

Acquisition geometry

- Seven macro-receiver lines with four sub-lines each
- 960 receivers, split spread, per sub-line
- 26,880 traces per shot
- 200 m receiver-line interval
- Sensors in a zigzag pattern, 5 m inline, 5 m crossline

The source-line interval is 200 m, orthogonal to the receiver lines, and the source-point interval is 20 m. Figure 1 shows the geometry of one cross-spread of single-sensor data before group forming. After group forming, the output source and group intervals are 50 m. The dimension of the straight-sum group is 50 m inline and 20 m crossline, with a total of 20 single sensors summed together.
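The straight-sum simulation of a conventional group can be sketched in a few lines. This is an illustrative 1D stand-in (sensor counts, noise level, and wavelet are assumptions), not the paper's actual workflow; it shows only why summing uncorrelated noise across a group raises S/N:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples, group_size = 100, 500, 20

# A common coherent signal buried in independent ambient noise per sensor.
signal = np.sin(2 * np.pi * 30 * np.arange(n_samples) * 0.002)
traces = signal + rng.normal(0.0, 1.0, (n_sensors, n_samples))

# Straight sum: every `group_size` adjacent single sensors form one output group,
# mimicking a conventional array (the paper sums 20 sensors per group).
groups = traces.reshape(n_sensors // group_size, group_size, n_samples).sum(axis=1)
```

For random noise the coherent signal grows as N while uncorrelated noise grows as sqrt(N), so the straight sum improves S/N by roughly sqrt(group_size); DGF improves on this by filtering and correcting perturbations before forming the group.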
Summary

It has been observed on multiple occasions that, after full processing, residual organized noise depends on the source- and receiver-line intervals. An explanation of this observation is proposed and illustrated by a recent single-vibrator experiment in Egypt.

Introduction

The seismic acquisition geophysicist must provide his processing colleague with data that meet the double requirement of properly sampling the seismic wavefield and achieving an adequate signal-to-ambient-noise ratio. This must be done under the usual time and cost constraints. Using synthetic data, J. Meunier concluded in 1998 that residual noise amplitude was roughly proportional to the product of the source- and receiver-line intervals when no 3D noise-attenuation routine is applied, but that such routines would reduce the importance of the line intervals. In fact, experience tells us that in the real world there is always some residual noise leaking through the filter: because the noise is aliased, because it shows low velocity discrimination with primaries (intra-bed multiples), or because its nature is too chaotic (diffracted ground roll). The relationship between attenuation of this residual noise and spatial sampling is analyzed in the first section. The second section looks at how to translate the S/N requirement into the survey design. The last section proposes a way to keep acquisition time and cost under control in vibroseis operations.

Wavefield sampling

The sampling requirement extends over five dimensions: time plus the source and receiver coordinates. During processing it is found more convenient to sort the data by time, CMP position (X and Y), and offset vector (X and Y). The stacking fold is the number of samples in the offset dimensions. For poststack imaging, only the CMP domain needs to be properly sampled.
For noise suppression (source-generated noise, multiples and migration noise), the regularity and density of sampling in the offset dimension directly condition the efficiency of noise attenuation. The pre- or post-migration stack can be seen as a 2D noise-attenuation filter. The distances between array elements are twice the source- and receiver-line intervals. The response of such a filter is shown in figure 1, where 1 is represented in red and 0 in blue. Signal is concentrated around the peak at the {Kx, Ky} origin; the other peaks of value 1 will let noise leak through at their Kx, Ky wavenumbers. To reduce noise leakage through the stack operator, the spacing between these peaks must be increased, which can be achieved by reducing the source- and receiver-line intervals. The effect of the migration itself is shown in figure 2: signal is focused around K = 0, whereas the noise leakage decreases as the wavenumber increases. From a noise point of view, migration is close to a space- and time-variant F-K filter that mutes |K| > (F/C)·sin(θ), where C is the migration velocity and θ is the maximum dip. In addition, shorter distances between lines lead to fewer geometry discontinuities in the common-offset-vector tiles, better image gathers, and better velocity determination.
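The "other peaks of value 1" argument can be made concrete with the classic response of an N-element equally spaced summation array. This is an illustrative sketch (element counts and spacings are assumptions chosen to echo the line-interval discussion, not the paper's figures):

```python
import numpy as np

def array_response(k, n_elem, d):
    """|sin(pi*N*k*d) / (N*sin(pi*k*d))|, with the k = m/d singularities set to 1."""
    num = np.sin(np.pi * n_elem * k * d)
    den = n_elem * np.sin(np.pi * k * d)
    with np.errstate(invalid="ignore", divide="ignore"):
        r = np.abs(num / den)
    # At k = m/d both num and den vanish; the true limit of the response is 1.
    return np.where(np.isclose(np.abs(den), 0.0), 1.0, r)

k = np.linspace(0.0, 0.02, 2001)                   # wavenumber, cycles per meter
r_coarse = array_response(k, n_elem=10, d=400.0)   # e.g. 2 x 200 m line interval
r_fine = array_response(k, n_elem=10, d=100.0)     # e.g. 2 x 50 m line interval
```

The replicas of the main peak sit at k = m/d, so halving the element distance d (i.e., halving the line intervals) doubles the spacing between the "ones" and pushes the noise-leakage wavenumbers farther out, which is exactly the reduction-in-line-interval argument.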
Summary

We have developed a new method to correct for wavelet distortion due to NMO correction or prestack time migration and for differential attenuation in converted-wave (PS) data. To correct for these effects, we apply an offset- and time-dependent filter to migrated offset or angle stacks. The filter design is based on an analytical expression for the NMO-stretch factor for PS data and on the P- and S-wave quality factors, Qp and Qs. We also provide a workflow to estimate Qs from the converted-wave data. Using model data, we demonstrate that our method enhances the resolution of far-offset data and mitigates AVA distortions caused by NMO stretch and differential attenuation.

Introduction

Large-offset seismic amplitudes have the following inherent effects that distort their AVA behavior. Offset-dependent stretch and tuning: NMO stretch causes a loss of high frequencies at far offsets, resulting in AVA distortion near and below tuning thickness; this modification of the AVA response is often called offset-dependent tuning. Absorption loss: due to longer ray paths, large-offset data suffer larger attenuation, which also distorts the AVA response. It is important to correct for both of these effects for accurate AVA analysis and inversion. A number of techniques have been proposed to correct for wavelet stretch in PP data (e.g., Corcoran et al., 1991; Swan, 1997; Roy et al., 2005). Lazaratos and Finn (2004) developed a technique to simultaneously correct for differential attenuation and wavelet stretch in PP data. With recent advances in seismic acquisition and processing, converted-wave (PS) data have become a viable and important addition to geophysical reservoir characterization. PS data incorporate an independent measure of seismic information for improved reservoir analysis, either on their own (e.g., Duffaut et al., 2000) or jointly with conventional PP data (e.g., Khare and Rape, 2007). PS data, like PP data, suffer from wavelet stretch and differential attenuation.
As we show in this paper, at a given P-wave incidence angle the stretch effect in PS data is smaller than that for PP data. However, the differential attenuation in PS data can be substantially larger because of the longer traveltime, unless Qp is much smaller than Qs, as in the case of a shallow gas cloud. In this work, we design an offset- and time-dependent filter that corrects for wavelet stretch and differential attenuation and apply it using the method described by Lazaratos and Finn (2004). To this end, we provide an analytical expression for the stretch factor β (Dunkin and Levin, 1973) for PS reflections. Using a generalized spectral-ratio method, we also show how to estimate the quality factor Qs from PS wavelets and a known Qp, since Qs is needed to compensate for differential PS attenuation.

Differential attenuation for PS data

To avoid amplifying noise, Lazaratos and Finn (2004) proposed compensating only for the differential attenuation between near- and far-offset data instead of correcting for total attenuation. Compensating for differential attenuation mitigates AVA distortion while, compared with full attenuation compensation, constraining the amplification of high-frequency noise in the data.
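The spectral-ratio idea behind the Q estimation can be sketched in simplified 1D form. This is an illustrative stand-in for the paper's generalized PS formulation: a constant-Q attenuation model, synthetic flat spectra, and the parameter values are all assumptions:

```python
import numpy as np

def estimate_q(freqs, amp_near, amp_far, delta_t):
    """Fit ln(A_far/A_near) = -pi*f*delta_t/Q + c over frequency; return Q.

    delta_t is the extra traveltime of the far-offset path (s).
    """
    slope, _ = np.polyfit(freqs, np.log(amp_far / amp_near), 1)
    return -np.pi * delta_t / slope

# Synthetic check: attenuate a flat spectrum with a known constant Q.
freqs = np.linspace(5.0, 60.0, 56)
q_true, delta_t = 80.0, 0.4
amp_near = np.ones_like(freqs)
amp_far = np.exp(-np.pi * freqs * delta_t / q_true)

q_est = estimate_q(freqs, amp_near, amp_far, delta_t)
```

Because the log spectral ratio is linear in frequency under constant Q, a straight-line fit recovers Q from its slope; real data require windowing, smoothing, and a usable signal band, which this sketch omits.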
Summary

In the context of an increasing world demand for oil and gas, the untapped reserves of the Arctic are nowadays of great interest. Since the late 1970s, many attempts have been made to carry out seismic exploration in arctic environments. Although many successes have been achieved both onshore and offshore during these years, offshore exploration is still effectively restricted to the very short arctic summer. An acquisition test was conducted in winter 2007 in Alaska by Eni, Shell and Repsol, with CGGVeritas as contractor, to prove the feasibility of acquiring offshore seismic surveys on floating ice. Different source-receiver combinations were tested in order to find one able to resolve the existing tradeoff between data quality and operational constraints. A subsequent phase of processing tests was started to address specific problems related to seismic acquisition in this environment. The most relevant of these is the flexural ice wave (FIW) noise, and a novel approach has been developed to address it. Preliminary FIW noise-suppression results indicate that acquisition on floating ice is feasible in terms of time, cost, HSE and data quality.

Introduction

The exploration of areas located in arctic and sub-arctic regions is constrained by limited access and a short operating-time and weather window. One way to enlarge the time window is to extend operations over the frozen sea during the winter season. Since 1975, when the use of dynamite under the ice layers was forbidden, many attempts have been made to find a new seismic source for on-ice acquisition. The combination of the "Mudgun" in the nearshore and the "Thunderwagon" in the offshore resulted in good data quality but also in processing and HSE issues (Lansley, 1984). In 1979 the use of classical land vibrators was tested, giving acceptable results in imaging the deeper targets (Mertz et al., 1981).
The fundamental benefit of acquiring seismic data on floating ice using conventional land techniques is the possibility of collecting consistent and continuous data from the onshore to the offshore part of the survey, saving time and cost compared with the available sub-ice sources. Nevertheless, beyond all the operational difficulties and HSE issues, it soon became apparent that a big challenge of seismic on ice was the removal of ice-induced noise: the flexural ice wave (Lansley, 1984; Proubasta, 1985).

Acquisition

In March 2007, seismic acquisition tests were conducted to assess the characteristics of the FIW. Other goals of the test were to determine the appropriate source and receiver combinations that could be used to properly sample and process it. The same attention was given to the definition of the acquisition geometry, with dense spatial sampling to avoid, as much as possible, aliasing of the FIW. Another challenge of the test was to prove the feasibility of deploying and retrieving all the equipment in a reasonable time while respecting all the HSE constraints.
ABSTRACT

Ocean-bottom node surveys provide high-quality data but, because of the higher cost of acquisition, lead to low coherency in common-midpoint gathers. This makes the use of prestack depth imaging difficult; node data might even be believed to be of little interest. We show here the opposite: not only is it possible to use node data for prestack depth imaging, but the results obtained are of very high quality. Furthermore, such data require very little preprocessing compared with the more usual ocean-bottom cable data. We present possible tools to image the subsurface from node data. A comparison between cable and node data on a real dataset from the North Sea shows the possible gain provided by node data.
Summary

Multiples are successfully removed from seismic data to increase the signal-to-noise ratio in a 3D land dataset acquired in Abu Dhabi. The anti-multiple process aims at removing, in user-specified spatial-temporal windows, surface and inter-bed multiple energy associated with particular primary events. The anti-multiple operator is designed in the f-xy domain. To emphasize the fact that this process targets particular multiple events, it is sometimes referred to as pattern recognition. In this particular study, the noise attenuation is performed prestack, on constant-offset time-migrated data volumes.

Introduction

When processing seismic data, the ability to remove multiple energy while preserving the primary events is important. Multiples can corrupt the target and lead to incorrect seismic attributes and erroneous interpretations. Multiple attenuation can be described as a two-step process. In the first step, the multiples (surface-related or internal) are predicted via a data-driven process that does not involve any prior information on the seismic velocities. In the second step, the predicted events are matched to the multiples present in the actual data and removed using the derived matching filter. This approach assumes that the kinematics of the multiple events are correctly derived and that the prediction locates them correctly on the time axis, and it ignores any possible interference between the primary and multiple wavefields. While this multiple-attenuation approach is standard on offshore data, its application to land data is rather limited. Offshore seismic datasets have, as a rule, a good signal-to-noise ratio and exhibit generally simple behavior from a multiple-attenuation point of view. In contrast, land seismic datasets are characterized by a poor signal-to-noise ratio and by a possible mixture of surface-related and internal multiple wavefields.
The prediction of the multiple wavefield is complicated by variable near-surface effects such as complex surface reflectivity, poor source and receiver coupling, strong ground roll, or a variable source wavelet. Moreover, land acquisitions are notoriously irregular. When a strong multiple event contaminates the section at the target level, an attractive alternative to constructing the whole multiple wavefield is to predict the particular noise events only in the area of interest. The prediction includes the primary itself and the identification of all multiple events that adversely affect the seismic image in the zone of interest. There is a class of generators for which the prediction is trivial: the rather frequent case in which the interfaces responsible for generating the multiples are horizontal, or quasi-horizontal. For flat generators, the multiples are simply a translated version of the primaries in any constant-offset gather. In that case the primary and its multiple display a similar pattern (the same lateral phase shift and amplitude variation). This prediction step is the basis of a multiple-attenuation technique implemented in the frequency-space domain. Possible interference between primaries and multiples in the working gate, or some imprecision in the predicted multiple, can be handled at the subtraction stage as proposed by Spitz (2007).
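The flat-generator case can be sketched on a single synthetic trace: the multiple is a time-translated, scaled copy of the primary, and a simple least-squares match fits the prediction to the data before subtraction. All numbers (wavelet, delay, multiple strength, window) are illustrative assumptions, and the scalar fit stands in for the full f-xy matching filter:

```python
import numpy as np

n, dt_shift = 400, 60                     # samples; multiple delay in samples
primary = np.exp(-0.5 * ((np.arange(n) - 100) / 3.0) ** 2)   # primary event at sample 100
trace = primary - 0.5 * np.roll(primary, dt_shift)           # primary + its flat-generator multiple

# Flat generator: the multiple is predicted as the translated primary.
prediction = np.roll(primary, dt_shift)

# Least-squares scalar match of the prediction to the data in a window
# around the multiple (a one-coefficient stand-in for a matching filter).
window = slice(100 + dt_shift - 15, 100 + dt_shift + 15)
scale = np.dot(prediction[window], trace[window]) / np.dot(prediction[window], prediction[window])
demultipled = trace - scale * prediction
```

After subtraction, the multiple energy in the window collapses while the primary, which lies outside the working gate, is left untouched; interference between primary and multiple inside the gate is what the more careful subtraction of Spitz (2007) addresses.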
SUMMARY

Internal multiples are multiply reflected events in the measured wavefield that have experienced all of their downward reflections below the free surface. The order of an internal multiple is defined as the number of downward reflections it experiences, without reference to the locations of those downward reflections. The objective of internal-multiple elimination using only recorded data and information about the reference medium is achievable directly through the inverse scattering task-specific subseries formalism. The first term in the inverse scattering subseries for first-order internal-multiple elimination is an attenuator, which predicts the correct traveltime and an amplitude always less than the true internal multiples' amplitude. The leading- and higher-order terms in the elimination series correct the amplitude predicted by the attenuator, moving the algorithm toward an eliminator. Leading order as an eliminator means that it eliminates a class of internal multiples and further attenuates the rest. Adding the leading-order terms in closed form provides an algorithm that eliminates all internal multiples generated at the shallowest reflector. The generating reflector is the location where the downward reflection of a given first-order internal multiple took place. The higher-order subseries and its closed form correct the attenuation using information on the overburden of deeper generating reflectors. A prestack form of the algorithm, which can be extended to a multidimensional form, is given for the leading-order subseries and its closed form.

INTRODUCTION

Seismic exploration is an inverse problem: the seismic data are inverted for the properties of the medium that created them. In exploration seismology, the medium properties correspond to the characteristics of the Earth's subsurface, and include the spatial locations of the reflectors as well as the density and elastic properties of the layers between reflectors.
Events in recorded seismic data can be classified by the number of reflections they have experienced. Primaries are seismic events that have experienced one reflection, most often upward, whereas multiples are recorded seismic events that have experienced more than one reflection. Multiples are further classified by the spatial locations of the downward reflections within their history: a multiple with at least one downward reflection at the free surface is a free-surface multiple, while a multiple with all of its downward reflections below the free surface is an internal multiple. Source and/or receiver ghost events and direct waves are assumed to be removed before these definitions and classifications are applied. In addition to the recorded data with the necessary wavefield components, some standard processing algorithms require source-wavelet deconvolution, deghosting, seismic-data reconstruction (interpolation and extrapolation), regularization, and/or redatuming. Seismic data processing is usually accomplished in a sequence of steps, e.g., removal of multiples, depth imaging or migration, and inversion for changes in Earth properties. The standard practice is to perform these steps in a specific order because each step is a pre-processing condition for the next. The removal of multiples is a longstanding problem, of considerable moment and interest, with outstanding theoretical and practical issues yet to be understood and addressed.
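The kinematic part of the attenuator's prediction can be illustrated in a drastically simplified 1D, zero-offset setting: a first-order internal multiple's traveltime is a lower-higher-lower combination of primary times, t_i + t_j - t_k, where the downward bounce happens at a reflector shallower than both upward bounces. This sketch is an illustration of that traveltime relation only, not the full prestack inverse scattering algorithm:

```python
import itertools

def predict_internal_multiple_times(primary_times):
    """1D zero-offset lower-higher-lower prediction: t_i + t_j - t_k with t_k
    strictly shallower (earlier) than both t_i and t_j."""
    predicted = set()
    for ti, tj, tk in itertools.product(primary_times, repeat=3):
        if tk < ti and tk < tj:   # downward reflection at the shallower reflector
            predicted.add(round(ti + tj - tk, 6))
    return sorted(predicted)

# Two reflectors with primaries at 1.0 s and 1.6 s: the first-order internal
# multiple generated at the shallowest reflector arrives at 1.6 + 1.6 - 1.0 = 2.2 s.
times = predict_internal_multiple_times([1.0, 1.6])
```

The attenuator predicts these times correctly but underestimates the amplitudes, which is exactly what the leading- and higher-order terms of the elimination subseries then correct.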