ABSTRACT Multiple removal is a crucial step in seismic data processing prior to velocity model building and imaging. After prediction, adaptive multiple subtraction is used to suppress multiples (considered noise) in seismic data, thereby highlighting primaries (considered signal). In practice, conventional adaptive subtraction methods fit the predicted and recorded multiples in the least-squares sense within a sliding window, formulating a localized adaptive matched filter. The filter is then applied to the prediction to remove multiples from the recorded data. However, such a strategy runs the risk of over-attenuating the useful primaries under the energy-minimization constraint. To avoid damage to valuable signals, we develop a novel approach that replaces the conventional matched filter with a structure-oriented version. From the predicted multiples, we extract the structural information used to derive the adaptive matched filter. Our structure-oriented matched filter emphasizes the structures of the predicted multiples, which helps to better preserve primaries during the subtraction. Synthetic and field data examples demonstrate the effectiveness of our structure-oriented adaptive subtraction approach, highlighting its superior performance in multiple removal and primary preservation compared with conventional methods on 2D regularly sampled data.
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (0.66)
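For context, a minimal sketch of the conventional windowed least-squares adaptive subtraction that the abstract above contrasts against (not the structure-oriented filter itself) is shown below. It works trace by trace; the function names and the window and filter lengths (`matched_filter_subtract`, `win_len`, `filt_len`) are illustrative assumptions.

```python
import numpy as np

def shift(x, lag):
    """Shift a 1D array by `lag` samples, padding with zeros (no wrap-around)."""
    y = np.zeros_like(x)
    if lag >= 0:
        y[lag:] = x[:x.size - lag]
    else:
        y[:x.size + lag] = x[-lag:]
    return y

def matched_filter_subtract(recorded, predicted, filt_len=11, win_len=200):
    """Windowed least-squares adaptive subtraction for a single trace.

    In each window a short filter f is estimated so that the filtered
    prediction best matches the recorded data in the least-squares sense;
    the filtered prediction is then subtracted to estimate the primaries.
    """
    primaries = recorded.astype(float).copy()
    half = filt_len // 2
    for start in range(0, len(recorded), win_len):
        stop = min(start + win_len, len(recorded))
        if stop - start < filt_len:
            break  # leave a too-short final window unchanged
        d = recorded[start:stop].astype(float)
        m = predicted[start:stop].astype(float)
        # Columns are time-shifted copies of the predicted multiples,
        # so M @ f is the prediction convolved with the short filter f.
        M = np.column_stack([shift(m, lag) for lag in range(-half, half + 1)])
        f, *_ = np.linalg.lstsq(M, d, rcond=None)   # minimize ||d - M f||^2
        primaries[start:stop] = d - M @ f
    return primaries
```

Because the filter is chosen purely to minimize the residual energy in each window, any primary energy that resembles the prediction can be attenuated along with the multiples, which is the failure mode the structure-oriented filter is designed to avoid.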
The implementation of Big Data and Machine Learning (ML) processes as part of the Fourth Industrial Revolution (4IR) is already having a significant impact in the Oil & Gas Industry. Where direct access of streaming data to Big Data Systems is possible, ML algorithms are able to improve operational efficiency with a corresponding significant reduction in costs. Today, the identification and reduction of both Invisible Lost Time (ILT) and Non-Productive Time (NPT) is more critical than ever, as many projects strive to maximise the rate of return on investment. However, many remote sites are linked to Big Data Systems through lower-bandwidth satellite links, resulting in loss of data granularity and time lags that make the response times of ML-based analytical solutions too slow to be of any immediate value to the well construction process. As a consequence, it is vital to move the ML algorithms as close to the source of the data as possible.
ABSTRACT Seismic data interpolation is essential in a seismic data processing workflow, recovering data from sparse sampling. Traditional and deep-learning-based methods have been widely used in the seismic data interpolation field and have achieved remarkable results. In this paper, we develop a seismic data interpolation method through the novel application of diffusion probabilistic models (DPMs). DPM transforms the complex end-to-end mapping problem into a progressive denoising problem, enhancing the ability to reconstruct complex situations of missing data, such as large proportions and large-gap missing data. The interpolation process begins with a standard Gaussian distribution and seismic data with missing traces and then removes noise iteratively with a UNet trained for different noise levels. Our DPM-based interpolation method allows interpolation for various missing cases, including regularly missing, irregularly missing, consecutively missing, noisy missing, and different ratios of missing cases. The ability to generalize to different seismic data sets is also discussed in this paper. Numerical results of synthetic and field data indicate satisfactory interpolation performance of the DPM-based interpolation method in comparison with the f-x prediction filtering method, the curvelet transform method, the low-dimensional manifold method (LDMM), and the coordinate attention-based UNet method, particularly in cases with large proportions and large-gap missing data. Diffusion is all we need for seismic data interpolation.
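The abstract above describes interpolation as iterative denoising that starts from a standard Gaussian distribution. A minimal sketch of one common way to run such a DDPM-style reverse process conditioned on the recorded traces (re-imposing a forward-diffused copy of the known traces at every step) is given below; the `model(x, t)` noise-prediction interface, the mask convention, and this particular conditioning strategy are assumptions and may differ from the paper's implementation.

```python
import torch

@torch.no_grad()
def dpm_interpolate(model, observed, mask, betas):
    """Sketch of diffusion-based trace interpolation (DDPM-style reverse process).

    observed: (1, 1, nt, nx) section with missing traces set to zero
    mask:     (1, 1, nt, nx) binary mask, 1 where traces were recorded
    betas:    (T,) noise-schedule tensor; model(x, t) predicts the added noise
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(observed)                      # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(x, t_batch)                          # predicted noise at level t
        # Standard DDPM posterior mean, then add noise except at the final step
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        # Re-impose the recorded traces, diffused to the current noise level
        if t > 0:
            known = torch.sqrt(alpha_bar[t - 1]) * observed + \
                    torch.sqrt(1.0 - alpha_bar[t - 1]) * torch.randn_like(observed)
        else:
            known = observed
        x = mask * known + (1.0 - mask) * x
    return x
```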
Combining unsupervised deep learning and Monte Carlo dropout for seismic data reconstruction and its uncertainty quantification
Chen, Gui (China University of Petroleum-Beijing) | Liu, Yang (China University of Petroleum-Beijing, China University of Petroleum-Beijing at Karamay)
ABSTRACT Many methods, such as multichannel singular spectrum analysis (MSSA) and deep seismic prior (DSP), have been developed for seismic data reconstruction, but they do not quantify the uncertainty of the reconstructed traces, so assessment relies on subjective visual inspection of the results. Our goal is to quantify the reconstruction uncertainty while recovering missing traces. We develop a framework combining an unsupervised deep-learning-based seismic data reconstruction method with the existing Monte Carlo dropout method to achieve this goal. The only information required by our framework is the original incomplete data. A convolutional neural network trained on the original nonmissing traces can simultaneously denoise and reconstruct seismic data. For uncertainty quantification, the Monte Carlo dropout method treats the well-known dropout technique as Bayesian variational inference: dropout can be regarded as an approximation to the probabilistic Gaussian process and thus can be used to obtain an approximate (Bernoulli variational) distribution of the posterior. The reconstructed result and the uncertainty of the trained model are obtained through multiple Monte Carlo dropout simulations. Analysis of the reconstructed uncertainty quantifies the confidence with which reconstructed traces can be used. Tests on synthetic and field data illustrate that our framework outperforms the MSSA and DSP methods in reconstruction accuracy and quantifies the reconstruction uncertainty as an objective benchmark to guide decision making.
- Asia > China (0.30)
- North America > Canada > Alberta > Woodlands County (0.24)
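A minimal sketch of the Monte Carlo dropout inference step described in the abstract above (keep dropout active at test time, run the trained network repeatedly, and report the sample mean and standard deviation) could look as follows; the model interface and the number of samples are illustrative assumptions.

```python
import torch

def mc_dropout_reconstruct(model, incomplete, n_samples=50):
    """Monte Carlo dropout inference: the sample mean serves as the
    reconstruction and the sample standard deviation as an uncertainty map."""
    model.train()  # keep dropout layers stochastic; no weights are updated below
    with torch.no_grad():
        samples = torch.stack([model(incomplete) for _ in range(n_samples)])
    mean = samples.mean(dim=0)   # reconstructed section
    std = samples.std(dim=0)     # per-sample reconstruction uncertainty
    return mean, std
```

Traces with large standard deviation are the ones whose reconstruction should be treated with caution, which is the objective benchmark the abstract refers to.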
Improving prestack time migration by introducing a new velocity-related parameter: Parameter picking and 3D real data application
Xu, Jincheng (Guangdong Provincial Key Laboratory of Geophysical High-resolution Imaging Technology, Southern University of Science and Technology, National Engineering Research Center of Offshore Oil and Gas Exploration) | Zhang, Jianfeng (Guangdong Provincial Key Laboratory of Geophysical High-resolution Imaging Technology, Southern University of Science and Technology, National Engineering Research Center of Offshore Oil and Gas Exploration)
ABSTRACT Prestack time migration (PSTM), a commonly used tool for seismic imaging, has been widely applied in 3D seismic data processing. However, conventional PSTM algorithms use only one effective velocity parameter (i.e., the root-mean-square [rms] velocity) for each imaging point, which may not be accurate when strong lateral variations occur in the seismic velocities. In this paper, we introduce a new parameter, called the velocity variation factor, that accounts for velocity variations in inhomogeneous media to improve PSTM. This new parameter, together with the rms velocity, describes the propagation Green’s function at an imaging point with two effective parameters rather than the single parameter used in conventional PSTMs. This provides a more accurate traveltime calculation for waves propagating through media with moderate lateral velocity variation. Unlike conventional bending-ray PSTM, the additional effective parameter is fully independent of the rms velocity. We estimate the two effective parameters at each imaging point by flattening the neighboring image gathers with a global optimization algorithm. The objective function is built at each imaging point using a selective crosscorrelation-based time shift, which can quantitatively describe the slight bending of events in the local migrated gathers regardless of the quality of the gathers. We estimate the two effective parameters using the very fast simulated annealing algorithm and a multiscale approach, thus avoiding the local minima caused by noise in the migrated gathers. We apply our two-parameter PSTM to a real 3D land data set to demonstrate its industrial applicability. A comparison of the new imaging result with conventional prestack depth migration is also presented.
- Information Technology > Knowledge Management (0.40)
- Information Technology > Communications > Collaboration (0.40)
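As a rough illustration of a crosscorrelation-based flatness measure of the kind the PSTM abstract above builds its objective function on (the paper's selective crosscorrelation-based time shift is more elaborate), one could score a candidate parameter pair by how small the trace-to-reference time shifts in a migrated common-image gather become; the function names and the simple maximum-lag picking are assumptions.

```python
import numpy as np

def crosscorr_time_shift(ref, trace, max_lag, dt):
    """Time shift (in seconds) that best aligns `trace` with `ref`, taken as the
    lag of the maximum normalized crosscorrelation (wrap-around edge effects of
    np.roll are ignored for simplicity)."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.dot(ref, np.roll(trace, lag)) for lag in lags])
    cc /= np.linalg.norm(ref) * np.linalg.norm(trace) + 1e-12
    return lags[np.argmax(cc)] * dt

def gather_flatness_misfit(gather, dt, max_lag=20):
    """Sum of squared trace-to-reference shifts across a common-image gather
    (shape nt x noffsets); a candidate (rms velocity, variation factor) pair
    that flattens the gather drives this misfit toward zero."""
    ref = gather[:, 0]
    shifts = [crosscorr_time_shift(ref, gather[:, i], max_lag, dt)
              for i in range(1, gather.shape[1])]
    return float(np.sum(np.square(shifts)))
```

A global optimizer such as very fast simulated annealing would then search the two-parameter space for the pair that minimizes this misfit at each imaging point.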
Stripping is a method of interpreting refraction data by removing the effect of upper layers, the removal being accomplished by reducing the traveltimes and distances so that, in effect, the source and geophones are located on the interface at the base of the "stripped" layer. Stripping can be accomplished by calculation, graphically, or by a combination of the two. We wish to compare our results with those of problem 11.5, so we use the same measurements, namely $V_1 = 2.02$ km/s and the other values given there. We start by using equations (4.24f) to get $V_2$. Equations (4.24b,d) can then be written in the same form as those in problem 11.5. Next we calculate the distances perpendicular to the first refractor at $A$ and $B$ (Figure 11.6a). These results are identical with those in problem 11.5.
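As an illustrative reminder of what the stripping correction does (a generic statement of the idea for a single horizontal layer, with assumed notation; not the book's numbered equations (4.24)), the traveltime and offset of a refraction arrival can be reduced as follows, where $h_S$ and $h_G$ are the perpendicular depths to the refractor below the source and geophone and $\theta_c$ is the critical angle:

```latex
% Illustrative stripping relations for one horizontal layer (assumed notation).
% h_S, h_G : perpendicular depths to the refractor below source and geophone
% \theta_c : critical angle at the base of the stripped layer
\begin{align}
  \sin\theta_c &= \frac{V_1}{V_2}, \\
  t' &= t - \frac{h_S + h_G}{V_1 \cos\theta_c}, \\
  x' &= x - (h_S + h_G)\tan\theta_c .
\end{align}
```

With these reductions the stripped arrival satisfies $t' = x'/V_2$, i.e., the source and geophone behave as if they sat on the refractor, which is exactly the effect described in the paragraph above.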