2009 SEG Annual Meeting
Summary This paper proposes a methodology for illumination analysis based on previous work by Alves et al. (2008), modified to remove the influence of receiver density on the illumination response for a given acquisition configuration. These modifications ensure that the illumination results, as well as the image quality at the illuminated point, are not affected by changes in the receiver interval, provided minimum sampling criteria are already met for a fixed frequency range. The methodology requires much less computational effort than traditional wave-equation illumination studies, which are usually based on comparing depth images obtained by modeling complete datasets for different geometry configurations and then migrating each dataset. The proposed target-oriented methodology avoids this effort and can evaluate hundreds of acquisition geometries at once, as soon as the wavefield has been calculated. In the proposed methodology, the illumination energy at a target location is estimated by propagating the complete two-way wave equation with finite-difference methods (FDM). This brings advantages over methods such as ray tracing, especially in complex models involving, for example, salt domes with corrugated top and base of salt; in such cases ray-tracing methods are less accurate because of the high-frequency approximations involved (Carcione et al., 2002). Numerical results on a modified 3D version of the SEG/EAGE Salt Dome model are presented and analyzed. Introduction The main challenge faced by the oil industry in increasing the quality of depth images in complex geological areas has been addressed with so-called migration techniques, but these schemes alone are not a panacea.
No matter what type of migration technique is applied, it cannot solve imaging problems caused by low illumination or poor seismic data. The seismic data are the fundamental input, so planning their acquisition is critical. Poor illumination can be attributed mainly to two factors: acquisition geometry and geology. Only the first can be modified to better illuminate a specific region. There is an almost infinite number of possible acquisition geometries and parameters for acquiring a seismic dataset; methodologies to compare them in terms of illumination at the target zone are therefore very important, especially once the costs of seismic acquisition are taken into account. The proposed strategy uses finite-difference methods (FDM) to solve the two-way wave equation, calculating the downgoing and upcoming wavefields for a number of points or surfaces in the velocity model. After obtaining these wavefields, the illumination of those regions can be evaluated with a modified version of the image condition, which provides an estimate of depth-image quality for one specific acquisition geometry and parameter set. Methodology The first step of the proposed methodology is the application of the image condition, which calculates the correlation between the downgoing and upcoming wavefields at the image location (Etgen, 1986), as expressed in the equation:
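A zero-lag cross-correlation image condition of this kind can be sketched numerically as below. The array shapes, the synthetic wavefields, and the zero-lag form are illustrative assumptions for this extract, not the authors' implementation:

```python
import numpy as np

def image_condition(down, up):
    """Zero-lag cross-correlation image condition.

    down, up : arrays of shape (nt, nz, nx) holding the downgoing
    (source-side) and upcoming (receiver-side) wavefields per time step.
    Returns an image of shape (nz, nx): the sum over time of D * U.
    """
    return np.sum(down * up, axis=0)

# Illustrative test: the image lights up where the two wavefields
# genuinely overlap in time (here, a single image point).
nt, nz, nx = 100, 10, 10
rng = np.random.default_rng(0)
down = rng.standard_normal((nt, nz, nx))
up = np.zeros((nt, nz, nx))
up[:, 5, 5] = down[:, 5, 5]          # perfect temporal overlap at (5, 5)
img = image_condition(down, up)
print(np.unravel_index(np.argmax(img), img.shape))  # → (5, 5)
```

Evaluating this sum for many candidate geometries reuses the same target-side wavefields, which is what makes the target-oriented approach cheap.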
Summary The cost of conventional wide-azimuth (WAZ) acquisition is generally several times that of narrow-azimuth (NAZ) acquisition. Simultaneous-source acquisition is a direct way to reduce this cost. The main drawback of the technique is interference noise. We present a simulated simultaneous-source experiment to evaluate the impact of this noise on subsalt imaging and sediment model building. Although the results show the noise is negligible for the subsalt image, it is severe on Kirchhoff-migrated gathers. Controlled beam migration (CBM) is able to suppress the noise and yield usable gathers for tomographic updating. Introduction During the last few years, wide-azimuth marine acquisition and processing have gained popularity in areas of complex geology, especially in the deep-water Gulf of Mexico, where salt tectonics creates complicated salt bodies that distort wavefield propagation. In such a setting, the benefits of wider azimuth for subsalt imaging have been widely reported (Michell et al., 2006; Corcoran et al., 2007; Howard, 2007) and can be seen in Figure 1, where we show the change of the subsalt image with various acquisition geometries. Although there are several variations of wide-azimuth acquisition, most of them involve multiple source boats and shooting passes to achieve the desired offset-azimuth coverage. As a result, the cost of high-density wide-azimuth acquisition is several times that of narrow-azimuth acquisition. To reduce the cost, it has become common practice to perform synthetic data modeling to study the effect of shot-density reduction schemes prior to acquisition. On the other hand, simultaneous shooting (Beasley, 1987; Vaage, 2003) is a method to directly cut acquisition cost by up to a factor of 2 by reducing the acquisition turnaround time while maintaining the same shot density. Alternatively, it can be used to increase the shot density and coverage while keeping the same turnaround time.
The main disadvantage of this technique is the interference noise introduced by listening to multiple sources at the same time. It is therefore important to study the impact of this noise, especially on subsalt images and migrated common-image gathers. Method To ensure a fair and realistic comparison between simultaneous-source and conventional acquisition, we perform a simulated simultaneous-shooting experiment in which the data are derived from a real conventional wide-azimuth acquisition. As a result, both datasets have the same trace density, feathering, undershoot, gun drop-off, etc. With this simulated experiment we can attribute any differences solely to the interference noise introduced by the simultaneous sources and thus reach unbiased conclusions. Figure 2a shows the acquisition geometry of a conventional wide-azimuth survey in the Gulf of Mexico associated with the WAZ images shown in Figure 1. The survey was acquired along a 45-degree NE-SW shooting direction by four source boats (two lead boats and two tail boats) and one streamer boat, using a two-pass scheme to achieve a maximum 4-km crossline offset (or four "tiles"), corresponding to the image in Figure 1c.
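The blending step at the heart of such a simulated experiment can be sketched as follows. The trace counts, the dither range, and the simple sample-shift bookkeeping are illustrative assumptions, not the survey's actual parameters:

```python
import numpy as np

def blend_shots(shot_a, shot_b, delay_samples):
    """Blend two conventional shot records into one record, firing the
    second source `delay_samples` samples after the first.

    shot_a, shot_b : arrays of shape (nt, ntrace) recorded separately.
    Returns an array of shape (nt + delay_samples, ntrace) in which the
    two source responses interfere, as in simultaneous shooting.
    """
    nt, ntr = shot_a.shape
    out = np.zeros((nt + delay_samples, ntr))
    out[:nt] += shot_a
    out[delay_samples:delay_samples + nt] += shot_b
    return out

rng = np.random.default_rng(1)
a = rng.standard_normal((500, 8))        # conventional shot record 1
b = rng.standard_normal((500, 8))        # conventional shot record 2
delay = int(rng.integers(1, 100))        # random dither, in samples
blended = blend_shots(a, b, delay)
print(blended.shape)
```

Because the inputs are real conventional records, everything except the interference (trace density, feathering, and so on) is held fixed by construction.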
Summary An automatic velocity picking method for deriving high-resolution anisotropic velocities is presented. The method can be used to pick isotropic and anisotropic velocity fields simultaneously on full-azimuth 3D angle gathers. The same procedure can be used to simultaneously process multiple azimuth sectors of conventional gathers. The resulting velocity field is very detailed both laterally and vertically. Time data as well as depth data can be analyzed using the same procedure, and both offset and angle gathers can be processed with the same approach. For wide-azimuth AVAZ analysis such a procedure is essential, since wide-azimuth gather flattening is a critical element in the process and manual picking alternatives are not practical in this case. Introduction Wide-azimuth seismic datasets are often used for fractured-reservoir analysis. The fractures produce anisotropy (mainly HTI), which causes azimuthal variations in amplitude as a function of offset/reflection angle, as well as moveout variations as a function of azimuth; both are a reflection of the anisotropic velocity field. Wide-azimuth data capture these anisotropic effects and provide the means for fracture characterization. Anisotropic velocity analysis is a critical step when processing wide-azimuth data. Anisotropic velocities are basic elements in fractured-reservoir characterization; these velocities are also required for AVAZ analysis in order to "flatten" the data. The conventional approach for wide-azimuth velocity analysis and gather flattening involves independent velocity analysis for each azimuth sector. We present a method for automatic anisotropic residual velocity analysis and gather flattening that can process full-azimuth 3D angle gathers on a very dense lateral and vertical grid. The same method is applicable to simultaneous analysis of 2D gathers from multiple sectors.
Full-azimuth 3D angle gathers are generated via a new approach for migrating multi-azimuth data developed by Koren et al. (2008). Using this approach, densely sampled full-azimuth 3D angle gathers in depth are available for anisotropic velocity and AVAZ analyses, providing high-resolution, azimuth-angle domain information in true subsurface coordinates. Performing anisotropic velocity analysis using multi-azimuth gathers, whether created directly by migration or constructed after multiple migrations of limited azimuth sectors, is an attractive approach: it enables the construction of a consistent velocity field and ensures flattened gathers for AVAZ work. Manual picking of such datasets is not a practical alternative. Figure 1 shows an example of a migrated full-azimuth 3D depth angle gather, a 3D gather unfolded to a 2D display. There are approximately one thousand traces in each migrated gather, densely sampling the azimuth and reflection-angle axes. Note the non-flatness of the gather, which has two components: a global moveout error that is a function of the reflection angle, and moveout variations with azimuth. Our objective is to flatten these gathers and derive the two components of the velocity field. The method we describe here uses the dynamic properties of the wavefield, namely AVAZ. A complementary method that analyzes such gathers using the kinematic properties of the wavefield is described by Koren and Ravve (2009).
Summary The seismic acquisition technology referred to as "simultaneous sources" records two or more shots (fired with random delay times) in a single shot gather. Although the recorded data are blended among different shot gathers, conventional processing procedures can still produce acceptable images for interpretation. However, separating the blended data into single shot gathers is still desirable for further improving seismic image quality. This paper introduces a new multi-directional vector-median filter (MD-VMF) to separate blended seismic shot gathers. The vector median filter extends the conventional median filter from scalars to vectors. Moreover, it is applied in multiple directions centered at each sample point in a seismic (e.g., common-receiver or CMP) gather, and the filtered result in the most coherent direction is selected as the output. Tests on both synthetic and real marine seismic data simulating blended acquisition confirm the effectiveness of our proposed MD-VMF approach. Introduction The concept of simultaneous-source acquisition can significantly enhance field acquisition efficiency and improve the quality of seismic data. It is not new for vibroseis acquisition: the study of simultaneous shooting using vibratory sources has lasted for around two decades, and various methods have been proposed; for a complete review see Bagaini (2006). These methods commonly employ specially encoded source sweeps, which make it possible to separate the interfering source responses. In marine seismic, Lynn et al. (1987) described occasional interference from a second source, also called crosstalk, which was treated as noise and suppressed by stacking. Beasley et al. (1998) proposed adopting simultaneous sources in marine acquisition to improve efficiency.
They suggested placing the air guns symmetrically off the ends of a 2D marine cable and firing the sources simultaneously, using only a geometry filter in the CMP domain to separate the overlapping source responses. Inspired by encoded vibroseis acquisition, Ikelle (2007) introduced coding and decoding into the marine case. Stefani et al. (2007) and Hampson et al. (2008) applied a small random delay time to the second source. As a consequence, the response of one source appears random in certain geometries such as the common-receiver, common-offset, and CMP domains. Stacking the blended seismic data without any further processing produces acceptable results, since stacking effectively suppresses random energy. However, additional efforts toward source separation have been proposed. Moore et al. (2008) adopted a technique based on conventional Radon transforms, while Akerberg et al. (2008) used sparse Radon transforms for the source separation. Spitz et al. (2008) proposed a prediction-subtraction approach that first estimates the primary wavefield of the second source and then subtracts it from the total wavefield via a PEF-based adaptive subtraction. Berkhout et al. (2008) extended the simultaneous-shooting method to the concept of blended acquisition, which adopts continuous recording and requires neither randomized delay times nor encoded source signatures. In their approach, two processing routes are suggested: (1) process the blended records, or (2) separate the sources and apply conventional processing. In this paper, we capitalize on the randomness in the acquisition and propose a new approach for the separation.
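The core step of a vector median filter, which the MD-VMF applies along multiple trial directions, can be sketched as below. The toy vectors and the Euclidean distance choice are illustrative assumptions; the multi-directional scan and coherence selection of the full method are not shown:

```python
import numpy as np

def vector_median(vectors):
    """Vector median of a set of d-dimensional samples.

    vectors : array of shape (n, d). Returns the member that minimizes
    the summed Euclidean distance to all other members, generalizing
    the scalar median and rejecting isolated outliers.
    """
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=2)
    return vectors[np.argmin(dists.sum(axis=1))]

# A strong interference spike is rejected, where an average would be pulled.
v = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [10.0, 10.0]])
print(vector_median(v))   # → [1. 1.]
```

In the separation context, the randomized delays make the second source's energy appear as such outlier vectors in, e.g., the common-receiver domain, which is why a median-type filter can reject it.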
Summary In 2008, Apache acquired the Lenga / Russfin 3D survey in the Magallanes region of Chile, on La Isla Grande de Tierra del Fuego. The survey was adjacent to Chile's border with Argentina and also to one of Apache's recently acquired 3D surveys in Argentina. It was recorded as an extension of the Argentina 3D through an innovative combination of conventional and wireless seismic recording systems. Introduction Late in 2007 Apache was awarded two exploration licenses in Chile, known as Lenga and Russfin, on the island of Tierra del Fuego. The island is divided politically between Chile and Argentina, and these two blocks added 1 million acres adjacent to Apache's 714,000 acres and extensive exploration and production operations on the Argentine side of the island. Both blocks are bounded to the east by Chile's frontier with Argentina, which also forms the western boundary of the 2007 Los Chorrillos / Uribe 3D in Argentina. Apache explorationists working these areas needed to be able to map seamlessly across the border, as there were identified leads lying close to and directly beneath it. As with any new 3D survey designed adjacent to an existing survey, it was important to achieve continuous sampling and thus avoid unwanted migration edge effects in both surveys; what made achieving this continuity more complicated was that the old and new surveys lay in different countries. From the outset we knew that the solution had to minimize cross-border traffic of personnel and equipment to reduce the risk of delays from border formalities, while avoiding cross-border data transmission, radio communication, and equipment command-and-control; simply connecting a cable across the fence was not an option. Survey Design The primary exploration target of the survey was the Cretaceous Springhill formation, within the same depth range as encountered in Argentina.
Many operational changes made during the acquisition of the Argentina survey had given Apache a good understanding of effective geometry and equipment for year-round acquisition, and the geometry and parameters of the Argentina survey had proved capable of imaging the targets of interest, as described by Yates et al. (2008). Consequently, the Lenga / Russfin 3D was laid out as a westward extension of the Los Chorrillos / Uribe 3D, using the same two layouts developed for and used in Argentina. Contractor and Equipment Selection Once the basic design had been completed, it was necessary to select an acquisition contractor. Two options were considered: select the vibroseis crew already working on the island and re-equip it for year-round explosive acquisition, or bring in a new crew with equipment specified from the extensive experience Apache had gathered while working in Argentina. We had learned the hard way that vibrators are effective only in a limited weather window during the summer, and our intention was to start acquisition as soon as permits were in place rather than be forced to wait for the summer. To this end, Apache selected Global Geophysical to carry out the survey, drilling, and recording work in Chile, using lightweight, low-ground-pressure vehicles, self-propelled mechanized drills, and a 7,000-station Sercel 428 recorder.
Summary We present a new way of improving seismic-array responses. By analyzing the relationship between the covariance matrix formed from the real sensors of seismic arrays and the fourth-order cross-cumulants of the same sensors, we find that artificial sensors can be constructed from the real sensors. We call these artificial sensors virtual sensors, and the combination of real and virtual sensors a virtual seismic array. For example, from an equally weighted linear array of five sensors we can construct a weighted virtual array of nine sensors. Basically, virtual sensors allow us to introduce new sensors into seismic arrays as well as new weightings of the existing real sensors. The key assumption behind this approach is that seismic data are considered non-Gaussian; hence the fourth-order cross-cumulants of the real-sensor responses are nonzero. Introduction A receiver is actually an array of sensors whose number can vary between 6 and 24. Seismic responses are collected at the sensors of the array and then summed to produce the seismic response associated with one receiver (or seismic trace). In a number of acquisition systems, the summation is hardwired in such a way that wavefronts recorded by the sensors at time t are summed directly, irrespective of the data quality or of any sensor malfunctioning. Although very efficient in terms of acquisition turnaround, these acquisition systems are prone to errors ranging from noise leakage due to aliasing to improper summation due to malfunctioning sensors. An alternative acquisition system, more and more commonly adopted today, is to record the whole array of sensors for a certain length of time, filter the noise and aliased data, and correct for any potential sensor malfunctioning before summing the seismic sensors to produce seismic traces. We here present an additional processing step before forming the arrays.
The basic idea is to construct additional sensors from the real sensors. The ability to construct these additional sensors can help us improve the resolution of the array response, reduce the number of sensors used in the seismic array, or both. Mathematics behind the concept of virtual arrays Suppose that we have I statistically independent signals impinging on an array of L sensors (also known as elements). The array response of this array can be written as follows: Our next task is to recast equation (2) into a series of linear equations in which the linear coefficients are independent of time. We found that we can do so by decomposing our seismic data Dl(t) into a series of narrow-band signals using, for example, the filter-bank technique (Harris and Fredric, 2004). A filter bank is an array of band-pass filters that separates the input signal into several components, each carrying a single-frequency subband of the original signal. It is desirable to design the filter bank in such a way that the subbands can be recombined to recover the original signal. The first process is called analysis, and the second synthesis.
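The analysis/synthesis property described above can be sketched with a simple FFT-based filter bank whose subband spectra tile the frequency axis, so their sum reconstructs the input exactly. This rectangular tiling is an illustrative assumption, not the filter-bank design used by the authors:

```python
import numpy as np

def analysis(signal, nbands):
    """Split `signal` into `nbands` narrow-band components whose
    spectra tile the frequency axis (a minimal FFT 'filter bank')."""
    spec = np.fft.fft(signal)
    edges = np.linspace(0, len(signal), nbands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sub = np.zeros_like(spec)
        sub[lo:hi] = spec[lo:hi]          # keep one subband, zero the rest
        bands.append(np.fft.ifft(sub))    # narrow-band time signal
    return bands

def synthesis(bands):
    """Recombine the subbands; with this tiling the sum is exact."""
    return np.sum(bands, axis=0)

t = np.linspace(0, 20 * np.pi, 256)
x = np.sin(t) + 0.3 * np.cos(0.15 * t)
bands = analysis(x, 8)
print(np.allclose(synthesis(bands).real, x))  # → True
```

Each narrow-band component is the Dl(t) decomposition the text calls for; practical designs would use overlapping band-pass filters rather than hard spectral cuts.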
Summary A beam-steering approach to designing source arrays, originally developed for land dynamite acquisition, is applied to marine seismic. The resulting ghost attenuation enhances low frequencies and provides better seismic penetration. In particular, imaging below a highly reflective carbonate layer in the Browse Basin, Australia is dramatically improved. Extensive modelling must be done to properly understand the multi-level source spectrum and its radiation pattern. Introduction The sea-surface reflection generates interference between up- and down-going waves that ultimately limits the bandwidth of marine seismic data. This phenomenon, known as ghosting, actually occurs twice: on the source side and on the receiver side. Ghost attenuation or elimination, to increase the signal bandwidth, has been the focus of extensive research. The receiver ghost can be removed using dual-sensor ocean-bottom devices (Barr and Sanders, 1989), a dual-sensor towed streamer (Carlson et al., 2007), or an over/under streamer acquisition (Brink and Svendsen, 1987). The over/under technique can also be used to remove the source ghost (Moldoveanu, 2000), but it requires flip-flop shooting of two sources at two different depths, which ultimately halves the survey shot-point density. Alternatively, the source ghost can be attenuated using a beam-steering technique originally developed 60 years ago for dynamite land acquisition (Shock, 1950). The principle is to detonate charges at various depths in a sequence that constructively builds the down-going wave at the expense of the up-going wave. This reduces the energy of the ghost (the surface-reflected up-going wave) compared to that of the primary pulse. There are two major drawbacks with this technique: the unknown and variable speed of sound in the near surface, and the accuracy of the detonation timing.
The first hurdle was addressed by in situ measurements prior to the survey, and the second by looping Primacord so that the apparent detonation speed matched the formation velocity (Martner and Silverman, 1962). In this paper we adapt the beam-steering approach to airgun arrays in the marine environment. We place guns, clusters of guns, or sub-arrays at different depths and fire them sequentially. Contrary to the land dynamite case, the speed of sound in water is well known and varies little at the depths considered, and the trigger-time accuracy is on the order of a fraction of a millisecond. The technique is quite straightforward to implement and requires only minor modifications of existing gun arrays. However, the associated radiation patterns are not isotropic and need to be carefully analyzed. Contrary to the over/under source approach, the shot-point density is the same as for a regular survey. Principles of multi-level source acquisition A conventional airgun array is made of several sub-arrays, each containing a number of guns or clusters of guns. All guns are at the same depth (typically between 5 and 10 meters) and fire at the same time. This provides constructive down-going energy but also constructive up-going energy (Figure 1); therefore the ghost has the same energy as the direct wave. The multi-level source concept puts guns, clusters, or sub-arrays at different depths and fires them sequentially so that only the down-going wave builds up constructively.
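The firing sequence can be sketched as a delay calculation: firing the shallowest level first and delaying each deeper level by its extra depth divided by the water velocity aligns the down-going wavefronts while leaving the ghosts misaligned. The depths and the 1500 m/s water velocity below are illustrative assumptions:

```python
C_WATER = 1500.0            # nominal speed of sound in water, m/s

def firing_delays(depths_m, c=C_WATER):
    """Delays (s) that align the down-going wavefronts of a multi-level
    source: fire the shallowest level first, and delay each deeper level
    by its extra depth divided by the water velocity."""
    z0 = min(depths_m)
    return [(z - z0) / c for z in depths_m]

# Three levels, 3 m apart: 0 ms, 2 ms, 4 ms firing delays.
print(firing_delays([6.0, 9.0, 12.0]))
```

Because the surface-reflected (up-going) paths differ from the direct down-going paths, the same delays that stack the primary pulse spread the ghost out in time, which is the source of the ghost attenuation.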
Summary Sensor saturation occurs in seismic sensing systems when signal levels exceed the system dynamic range. Typically this saturation is limited to the highest-amplitude, broadest-bandwidth arrivals such as the direct arrival and near-surface reflections. This can pose problems for ocean-bottom seismic recording, where the direct arrival may be required for sensor orientation, matching, and demultiple processing schemes. This paper presents a novel approach to overcoming sensor saturation which is particularly applicable to highly multiplexed fibre-optic sensing systems. The approach enables the accurate recording of direct-arrival signatures from a typical airgun array at zero offset by extending the system's effective dynamic range to 180 dB whilst maintaining an efficient, high multiplexing rate. Results of field trials utilizing the approach illustrate that the zero-offset signal at close ranges from a typical airgun array is faithfully recovered. Introduction Sensor saturation is a known problem for all seismic sensing systems and results in loss of information for high-amplitude, high-frequency signals such as those associated with the direct arrival or near-surface reflections at short ranges. The problem manifests as 'clipping' or other distortions of the signal. For ocean-bottom seismic systems, where the direct-arrival/near-surface signal at near offsets may be required for sensor orientation, hydrophone/accelerometer matching, and demultiple, this loss of signal is particularly problematic. For systems with a broadband response, the problem of sensor saturation is exacerbated by energy in the airgun source signal above the normal seismic recording band (Hatton, 2008). Recorded energy at frequencies of ones to tens of kHz requires a correspondingly high sample rate to avoid sensor saturation. The degree of sensor saturation for a given signal is determined by the system dynamic range, which is itself linked to the fundamental signal sampling rate.
In fibre-optic sensing systems, the fundamental signal sampling is determined by the system multiplexing rate as dictated by the system optical architecture. In general it is desirable to maintain a high multiplexing rate, since this maximises the number of channels per fibre and hence improves reliability and reduces system cost (Nash and Strudley, 2007). For such highly multiplexed broadband recording systems, the degree of sensor saturation is potentially very large. Thus a solution is required which maintains the desirable features of high multiplexing efficiency whilst retaining the fidelity of the potentially saturated signal. Problem Description Sensor saturation for a broadband phase-based optical sensing system can be thought of as a sampling problem. Energy beyond the Nyquist limit, as determined by the fundamental system architecture, will 'wrap' around into the signal pass band. This is illustrated in Figure 1, which shows the aliasing effect in instantaneous frequency for a hypothetical signal (shown as the black curve). The Nyquist limit in this example is at π/2. Two instances of 'wrapping' are illustrated by the red curves: the first a single 'wrap' and the second a double 'wrap' around the Nyquist limit. This aliasing effect can result in large distortions of the recorded signal unless some form of 'clipping' is employed. Such 'clipped' signals are not useful in subsequent processing steps, since they do not recover the 'wrapped' energy but simply result in truncation at the Nyquist boundary.
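The folding behaviour described above can be sketched as a small helper that computes where a tone beyond Nyquist reappears after sampling. The sample rate and test frequencies are illustrative assumptions, not the system's actual parameters:

```python
def apparent_frequency(f, fs):
    """Frequency (Hz) at which a tone of true frequency `f` appears
    after sampling at rate `fs`: repeated folding ('wrapping') about
    the Nyquist limit fs/2."""
    f = f % fs                    # remove whole sample-rate multiples
    return fs - f if f > fs / 2 else f

fs = 1000.0                       # Nyquist limit at 500 Hz
print(apparent_frequency(300.0, fs))    # → 300.0 (in band)
print(apparent_frequency(700.0, fs))    # → 300.0 (single 'wrap')
print(apparent_frequency(1700.0, fs))   # → 300.0 (double 'wrap')
```

All three tones land on the same in-band frequency, which is why the wrapped energy is indistinguishable from signal unless the effective dynamic range (and hence the usable bandwidth) is extended.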
Summary The general spatial resolution depends on offset, so offset should be taken into account in seismic survey design. In this paper we determine the maximum offset by studying the decreasing rate of spatial resolution and analyze some application results. The principle of determining the maximum offset from the decreasing rate of spatial resolution can serve as a reference in seismic survey design. Introduction The general spatial resolution was first introduced by Ma et al. (2002) as a generalization of the well-known vertical and horizontal resolution. Following that, several papers focused on general spatial resolution. Based on a study of general resolution, Chen et al. (2003) suggested limiting the maximum offset for higher imaging resolution, and Ma et al. (2004) put forward a technology for high-resolution stacking. Chen et al. (2009) studied in detail the relationship between the decreasing rate of resolution, the target-layer depth, and offset. In this paper, we apply these results to seismic survey design to determine the maximum offset and show how effective and necessary this is. The theoretical foundation For a 2D seismic geometry, suppose there are two underground scattering points, A and B, as shown in figure 1 (Ma et al., 2002). The resolvable distance between the two points is r, the depth of A is z0, S and R are the shot and receiver points, M is the midpoint between source and receiver, O is the origin, d is half the distance between S and R, β is the angle between line segment AB and the vertical, and x is the distance between M and O. Equation (1) is the generalized spatial-resolution formula for 2D seismic traces. This formula indicates the relationship between vertical resolution and offset for a given depth and wavelength. Maximum Offset for a target layer Chen et al.
(2009) studied the relationship between maximum offset and resolution based on equation (2) and introduced the decreasing rate of resolution, which is useful in applications. The decreasing rate of resolution is defined as follows, where c is the decreasing rate of resolution, r is the resolution at a non-zero offset, and r0 is the resolution at zero offset. Figure 3 shows that the decreasing rate of resolution is in the range 0–1 for layer depths from 0 to 4000 m when the apparent wavelength λ is 80 m. Figure 4 shows how the natural logarithm of the offset changes with the decreasing rate of resolution and depth. Application We give an application example from the seismic design for a CNPC project. The target-layer depth is 2600 m. By analyzing test shot records, the apparent wavelength was estimated to be 70 m. Figure 5 shows a test shot. We plotted the d–c relationship curve from equation (4), shown in figure 6. We took the maximum decreasing rate as 0.5, giving a corresponding maximum offset of 2800 m. Once the offset exceeds this maximum value, the resolution decreases faster, as shown in figure 7.
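The offset-selection step can be sketched as below. The normalization c = 1 - r0/r is our assumption (equations (2)–(4) are not reproduced in this extract), chosen only so that c stays in the quoted 0–1 range as the resolvable distance r grows with offset; the resolution values are illustrative, not the CNPC survey data:

```python
def decreasing_rate(r, r0):
    """Assumed form of the decreasing rate of resolution: 0 at zero
    offset, approaching 1 as the resolvable distance r grows."""
    return 1.0 - r0 / r

def max_offset(offsets, resolutions, r0, c_max=0.5):
    """Largest offset whose decreasing rate stays at or below c_max."""
    ok = [d for d, r in zip(offsets, resolutions)
          if decreasing_rate(r, r0) <= c_max]
    return max(ok) if ok else None

# Illustrative resolvable distances (m) growing with offset (m).
offsets = [0, 1000, 2000, 2800, 3500]
res = [35.0, 40.0, 52.0, 70.0, 95.0]
print(max_offset(offsets, res, r0=35.0))  # → 2800
```

With a threshold of c = 0.5, the selected maximum offset matches the paper's 2800 m under these assumed values.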
Summary 2D finite-difference (FD) modeling and 2D reverse-time migration (RTM) conducted over two lines of the Ziggy model confirm the potential of long-offset seismic data to improve subsalt imaging. The maximum offset of the FD modeling was 20 km, and RTM was done with maximum offsets of 7 km, 14 km, and 20 km separately. The imaging below a complex salt structure improved significantly with increasing maximum offset. The combination of wide azimuth with longer offsets should bring another step improvement in subsalt imaging. Introduction Wide- and rich-azimuth surveys have provided a step improvement in subsalt imaging (Kapoor et al., 2007). These surveys improve signal-to-noise ratio and illumination in complex subsalt geology and provide natural attenuation of some multiples. However, subsalt imaging below some thick or extremely complex salt structures is often still poor. Long-offset seismic acquisition has been successfully applied in academic geophysics for studying crustal and upper-mantle structure (Li et al., 2008) and in industry for sub-basalt imaging (White and Fliedner, 2001). Long-offset seismic data could be valuable for subsalt imaging by improving complex salt imaging and subsalt illumination. Moreover, long-offset data could provide more natural multiple attenuation, better determination of anisotropic parameters, and more accurate velocity-model updates with full-wave inversion. FD modeling can imitate wave propagation in complicated geological areas much better than ray-tracing modeling, and therefore can be used to determine design parameters for field acquisition. RTM is a two-way wavefield migration, which can image steep dips, turning waves, overhangs, and complex structures much better than Kirchhoff migration and WEM (Biondi, 2006). The Ziggy model is a Gulf of Mexico salt model developed by the SMAART JV (Salt Multiple Attenuation and Reduction Team, 1998–2002; http://delphi.tudelft.nl/smaart).
2D FD modeling and 2D RTM over two lines of the mirrored Ziggy model confirmed the advantage of long-offset data over conventional-cable data for steep-dip salt boundaries and complex subsalt imaging. Method 2D FD modeling was conducted over two lines of the Ziggy model. The original Ziggy velocity and density model (26 km long) was mirrored twice to 104 km (Figure 1), so as to generate and migrate long-offset data with a full 2D modeling aperture. In order to study the illumination along different directions at different locations, circles were added to the density model at a depth of 6 km. The radius of the circles is 50 m, the circles are spaced at 200 m, and their density is 4.2 g/cm3. The velocity and density grid is 20 m (inline) × 10 m (depth), with a maximum depth of 10 km. The reflectivity model (Figure 1c) was obtained by multiplying the velocity model with the density model. The inline grid number spans from 1 to 5201 for the whole model. The maximum modeling frequency is 30 Hz with a Ricker wavelet, the shot spacing is 100 m, the receiver spacing is 20 m, the offset spans 0 to 20 km, and the trace length is set to 20 s to capture long-offset reflections and diffractions.
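The geometry bookkeeping implied by these parameters can be checked with a small helper; the counting convention (inclusive endpoints on a regular grid) is our assumption, but it reproduces the 5201 inline grid points quoted above:

```python
def n_positions(length_m, spacing_m):
    """Number of regularly spaced positions covering `length_m`,
    endpoints included."""
    return int(length_m / spacing_m) + 1

model_length = 104_000   # mirrored Ziggy model length, m
shot_spacing = 100       # m
rcv_spacing = 20         # m
max_offset = 20_000      # m

print(n_positions(model_length, rcv_spacing))    # inline grid points → 5201
print(n_positions(model_length, shot_spacing))   # shot positions along the line
print(n_positions(max_offset, rcv_spacing))      # receivers per shot, one side
```

Checks like this are a cheap sanity test before committing to a long FD modeling run, since a mismatch between grid, shot, and receiver counts usually indicates a mis-specified geometry file.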