BP recently acquired wide-offset ocean-bottom-node data with conventional airguns over the Atlantis Field in the deep-water Gulf of Mexico. Careful attention to acquisition parameters was rewarded with usable signal at lower frequencies than previously achieved. Full-waveform inversion (FWI) was then applied to the dataset, and the resulting velocity model was used, unmodified, to reverse-time migrate the seismic data. This produced some of the best subsalt images ever seen at Atlantis. Furthermore, the FWI velocity model clearly revealed several major interpretation errors in the legacy salt model, and thus also offered an excellent basis for updating the conventional salt-model-building workflow. These results demonstrate that, with appropriate seismic data to support it, FWI may offer a paradigm shift in model building and imaging in areas of complex salt.
Presentation Date: Thursday, September 28, 2017
Start Time: 9:45 AM
Presentation Type: ORAL
In an effort to better understand imaging challenges in the deep-water Gulf of Mexico, in 2011 we constructed a large 3D model loosely based on the complex salt geology of the Garden Banks protraction area. We then simulated a regional wide-azimuth towed-streamer (WAZ) seismic survey over this model, producing data that on casual inspection could be mistaken for real. We then performed velocity-model building and imaging on this dataset as if it were real; in particular, the team performing the analysis never saw the correct model.
For much of the model, especially where the salt was relatively simple, we found that the resulting velocity model was quite accurate, even if lacking in fine detail. Reverse-time migration of the seismic data through these parts of the velocity model produced an imperfect but usable image. In other places the salt structures were misinterpreted, causing large-scale errors in the migration velocity model, which resulted in an unusable, shattered image below the salt.
We conclude that traditional velocity-model-building techniques can miss features that occur at too large a scale. Reliably imaging under complex salt in the Gulf of Mexico may require new velocity-model-building methodologies specifically designed to handle large-scale velocity heterogeneities.
A previous study (Etgen et al., 2014) showed that fine-scale features associated with abrupt velocity heterogeneities (in particular, top of salt) can damage the seismic image. In this study our goal was to examine the other end of the velocity heterogeneity size spectrum, by simulating acquisition and processing of a large 3D WAZ dataset in an area of complex salt containing large structural features.
We found that existing models available to us were unsuitable for this purpose. We concluded that:
1) The model needed to be 3D, because complex salt geology is intrinsically 3D. A 2D model cannot adequately represent the challenge.
2) The model needed to be very large, big enough to contain modern wide-azimuth long-offset acquisition geometries with plenty of room to spare, to avoid the results being contaminated by edge effects and to allow us to test multiple novel acquisition strategies using the same synthetic dataset. It also needed to be deep enough to contain diving waves out to wide offsets. The final model design was 18 km deep by 85.5 km by 106 km wide.
The stationary wavelet transform (SWT) is a variation of the traditional discrete wavelet transform (DWT) that preserves the original length of the data trace at each wavelet scale. With an appropriate choice of wavelet basis, a seismic signal can have a sparse, impulsive representation in the wavelet domain. Signal spiking above some picked noise-floor threshold can then be boosted, or noise below a threshold attenuated. The compact time-frequency localization of the wavelet domain provides more opportunity for signal and noise to cleanly separate, while the oversampled transform domain of the SWT makes this application practical. Thresholding to remove noise can zero out noisy wavelet scales where the signal never rises above the noise floor. Fortunately, events on seismic traces typically encompass multiple wavelet scales. In the SWT, the original time sampling is preserved at every wavelet scale, making the scales directly comparable. We can thus estimate the missing signal at a strongly noise-contaminated wavelet scale from adjacent scales that are less contaminated by noise, resulting in an increase in the effective signal bandwidth. We demonstrate signal enhancement in the SWT domain using synthetic data. We expect the method to be more robust against noise than traditional Fourier-domain techniques, which boost signal and noise at the same frequency by the same amount.
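The length-preserving transform and thresholding steps described above can be sketched with a minimal undecimated (à trous) Haar-style transform in NumPy. This is an illustrative stand-in, not the authors' implementation: the wavelet, threshold value, number of levels, and synthetic spike-train trace are all assumptions, and only the attenuate-below-threshold branch is shown (no boosting or inter-scale signal estimation).

```python
import numpy as np

def swt_atrous(x, levels):
    """Undecimated (stationary) Haar-style transform via the a-trous
    algorithm: every scale keeps the original trace length."""
    approx = x.astype(float)
    details = []
    for j in range(levels):
        shift = 2 ** j  # filter dilated by 2**j at each level
        smoothed = 0.5 * (approx + np.roll(approx, shift))
        details.append(approx - smoothed)  # detail (high-pass) at this scale
        approx = smoothed
    return details, approx

def iswt_atrous(details, approx):
    # perfect reconstruction: residual approximation plus all detail scales
    return approx + sum(details)

def soft_threshold(c, t):
    # shrink coefficients toward zero; anything below the noise floor vanishes
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# synthetic trace: two impulsive "events" plus band-unlimited noise (assumed)
rng = np.random.default_rng(0)
n = 512
clean = np.zeros(n)
clean[[100, 300]] = 1.0
noisy = clean + 0.1 * rng.standard_normal(n)

details, approx = swt_atrous(noisy, levels=4)
den = [soft_threshold(d, 0.15) for d in details]  # threshold is a made-up pick
recon = iswt_atrous(den, approx)
```

Because every scale retains the original sampling, the thresholded scales can be compared (and, in the full method, infilled) sample-by-sample, which is exactly what the decimated DWT does not allow.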
“Vector infidelity” is the failure of multicomponent ocean-bottom-seismic geophones to respond isotropically to incoming seismic energy. We model vector infidelity as an unknown linear convolutional filter that the geophone and its environment apply to the “true data” to produce the “corrupted” data that we actually record. Our goal is to determine the inverse of this filter, so that we can correct the recorded data. A perfect vector-infidelity correction filter, when applied to the data, should cause the horizontal components of the direct arrival to become linearly polarized in the radial direction for all source azimuths. We show how to calculate the linear filter that optimally accomplishes this, and we show the results of applying the method to a real OBS dataset.
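The filter-estimation step can be illustrated with a toy least-squares deconvolution: model the unknown instrument response as a short convolutional filter and solve for the filter that maps the recorded trace back to the desired one. This is a deliberately simplified single-component sketch under stated assumptions; the real problem couples the horizontal components across many source azimuths, and the distortion filter, filter length, and signal below are all made up for illustration.

```python
import numpy as np

def design_correction_filter(recorded, desired, nfilt):
    """Least-squares filter f such that (recorded * f) approximates desired."""
    n = len(recorded)
    # causal convolution matrix: column j is the recorded trace delayed by j
    A = np.zeros((n, nfilt))
    for j in range(nfilt):
        A[j:, j] = recorded[:n - j]
    f, *_ = np.linalg.lstsq(A, desired, rcond=None)
    return f

# toy example: apply a known (hypothetical) instrument distortion,
# then recover an approximate inverse by least squares
rng = np.random.default_rng(1)
n = 256
true_signal = rng.standard_normal(n)
distortion = np.array([1.0, -0.4, 0.1])  # assumed minimum-phase response
recorded = np.convolve(true_signal, distortion)[:n]

f = design_correction_filter(recorded, true_signal, nfilt=32)
corrected = np.convolve(recorded, f)[:n]
```

In the actual method the "desired" output is not known directly; instead, the optimality criterion is that the corrected direct arrival be radially polarized at every azimuth, which leads to a similar linear least-squares system.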