Results
PINNslope: seismic data interpolation and local slope estimation with physics-informed neural networks
Brandolin, F. (King Abdullah University of Science and Technology (KAUST)) | Ravasi, M. (King Abdullah University of Science and Technology (KAUST)) | Alkhalifah, T. (King Abdullah University of Science and Technology (KAUST))
Interpolation of aliased seismic data constitutes a key step in a seismic processing workflow to obtain high-quality velocity models and seismic images. Building on the idea of describing seismic wavefields as a superposition of local plane waves, we propose to interpolate seismic data by using a physics-informed neural network (PINN). In the proposed framework, two feed-forward neural networks are jointly trained using the local plane-wave differential equation as well as the available data as two terms in the objective function: a primary network assisted by positional encoding is tasked with reconstructing the seismic data, while an auxiliary, smaller network estimates the associated local slopes. Results on synthetic and field data validate the effectiveness of the proposed method in handling aliased (coarsely sampled) data and data with large gaps. Our method compares favorably against a classic least-squares inversion approach regularized by the local plane-wave equation as well as a PINN-based approach with a single network and precomputed local slopes. We find that introducing a second network to estimate the local slopes while at the same time interpolating the aliased data enhances the overall reconstruction capabilities and convergence behavior of the primary network. Moreover, an additional positional encoding layer embedded as the first layer of the wavefield network allows the network to converge faster, improving the accuracy of the data term.
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (0.34)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Neural networks (1.00)
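The core physical constraint in the abstract above is the local plane-wave equation, which in x-t coordinates reads d_x + sigma * d_t = 0 for local slope sigma. A minimal NumPy sketch (finite differences on a gridded gather; the paper itself differentiates its networks automatically, and all grid values here are hypothetical) illustrates why the PDE residual can serve as a training loss: it vanishes for a constant-slope event.

```python
import numpy as np

def plane_wave_residual(d, sigma, dx, dt):
    """Residual of the local plane-wave equation d_x + sigma * d_t = 0,
    evaluated with finite differences on a gridded gather d[x, t].
    (The paper differentiates its networks automatically instead; this
    finite-difference version is only an illustration.)"""
    d_x = np.gradient(d, dx, axis=0, edge_order=2)
    d_t = np.gradient(d, dt, axis=1, edge_order=2)
    return d_x + sigma * d_t

# Toy gather: a single linear event d(x, t) = f(t - p*x) with constant slope p.
dx, dt, p = 10.0, 0.004, 0.002                       # m, s, s/m (hypothetical)
x = np.arange(64)[:, None] * dx
t = np.arange(256)[None, :] * dt
f = lambda tau: np.exp(-((tau - 0.5) / 0.05) ** 2)   # smooth wavelet
d = f(t - p * x)

r = plane_wave_residual(d, sigma=p, dx=dx, dt=dt)
print(np.max(np.abs(r)))  # near zero: the PDE term rewards the correct slope
```

When sigma is wrong, the residual grows in proportion to the slope error, which is exactly the signal the auxiliary slope network can learn from.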
Sparse seismic data regularization in shot and trace domains using a residual block autoencoder based on the fast Fourier transform
Campi, Alexandre L. (Darcy Ribeiro Northern Rio de Janeiro State University) | Missagia, Roseane Marchezi (Darcy Ribeiro Northern Rio de Janeiro State University, Instituto Nacional de Ciência e Tecnologia – Geofísica do Petróleo (INCT-GP))
ABSTRACT The increasing use of sparse acquisition geometries in seismic surveying offers advantages in cost and time savings. However, it results in irregularly sampled seismic data, adversely impacting the quality of the final images. In this paper, we develop the residual block fast Fourier transform-convolutional autoencoder (ResFFT-CAE) network, a convolutional neural network with residual blocks based on the Fourier transform. Incorporating residual blocks allows the network to extract high- and low-frequency features from seismic data. The high-frequency features capture detailed information, whereas the low-frequency features integrate the overall data structure, facilitating superior recovery of irregularly sampled seismic data in the trace and shot domains. We evaluate the performance of the ResFFT-CAE network on synthetic and field data. On synthetic data, we compare the ResFFT-CAE network with the compressive sensing method using the curvelet transform. On field data, we conduct comparisons with other neural networks, such as the CAE and U-Net. The results demonstrate that the ResFFT-CAE network consistently outperforms the other approaches in all scenarios. It produces images of superior quality, characterized by lower residuals and reduced distortions. Furthermore, when evaluating model generalization, tests using models trained on synthetic data also exhibit promising results. In conclusion, the ResFFT-CAE network shows great promise as a highly efficient tool for regularizing irregularly sampled seismic data. Its excellent performance suggests potential applications in the preconditioning of seismic data analysis and processing flows.
- North America > United States (0.93)
- Europe > United Kingdom > North Sea (0.16)
- Europe > Norway > North Sea (0.16)
- Geophysics > Seismic Surveying > Surface Seismic Acquisition (1.00)
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Neural networks (1.00)
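The residual-block-plus-Fourier idea above can be sketched without any deep-learning framework: filter the input in the 2-D frequency domain with separate low- and high-frequency gains (stand-ins for learned weights, an assumption of this sketch) and add the result back through an identity skip path.

```python
import numpy as np

def fft_residual_block(x, low_gain=1.0, high_gain=0.5, cutoff=0.25):
    """Framework-free sketch of a Fourier-domain residual block: the input
    is filtered in the 2-D frequency domain with separate gains for low and
    high frequencies (standing in for learned convolutions) and added back
    to the identity path. Gains and cutoff are illustrative assumptions."""
    X = np.fft.fft2(x)
    fy = np.fft.fftfreq(x.shape[0])[:, None]
    fx = np.fft.fftfreq(x.shape[1])[None, :]
    low = (np.abs(fy) < cutoff) & (np.abs(fx) < cutoff)
    H = np.where(low, low_gain, high_gain)           # frequency-selective filter
    return x + np.real(np.fft.ifft2(H * X))          # residual (skip) connection

patch = np.random.default_rng(0).normal(size=(64, 64))
out = fft_residual_block(patch)
print(out.shape)  # shape is preserved, as a residual block requires
```

The skip connection is what lets low-frequency structure pass through unchanged while the spectral filter reshapes the detail, mirroring the high/low split the abstract describes.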
The 2-D scalar wave equation (D-1) given in Section D.1 can be adapted to three dimensions as ∂²P/∂x² + ∂²P/∂y² + ∂²P/∂z² = (1/v²) ∂²P/∂t². This equation describes propagation of a 3-D compressional zero-offset wavefield P(x, y, z, t) in a medium with constant material density and compressional wave velocity v(x, y, z), where x is the horizontal spatial axis in the inline direction, y is the horizontal spatial axis in the crossline direction, z is the depth axis (positive downward), and t is time. Given the upcoming seismic wavefield P(x, y, z = 0, t), which is recorded at the surface, we want to determine the reflectivity P(x, y, z, t = 0). This requires extrapolating the surface wavefield to depth z, then collecting it at t = 0. In equations (G-4a, G-4b), kx and ky are the wavenumbers in the inline and crossline directions, respectively, and ω is the temporal frequency in units of radians per unit time. Note that the inline and crossline normalized wavenumbers X and Y are coupled in equation (G-3). Next, suppose that we perform 2-D migration on all the crosslines.
- Information Technology > Knowledge Management (0.76)
- Information Technology > Communications > Collaboration (0.76)
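The extrapolation step described above (continue the surface wavefield down to depth z, then image at t = 0) can be sketched as a single constant-velocity phase shift in the (ω, kx, ky) domain. The function below is a hedged illustration, not the book's code: the exponent's sign follows numpy's FFT convention, and evanescent components (X² + Y² > 1) are simply muted.

```python
import numpy as np

def phase_shift_step(P, v, dz, dx, dy, dt):
    """One downward-continuation step for a 3-D zero-offset cube P[t, y, x]
    with constant velocity v, applying exp(i * kz * dz) in the Fourier
    domain, where kz = (w/v) * sqrt(1 - X**2 - Y**2) and X = v*kx/w,
    Y = v*ky/w are the normalized wavenumbers. Evanescent components
    (X**2 + Y**2 > 1) and the w = 0 plane are muted. Illustrative sketch."""
    nt, ny, nx = P.shape
    Pf = np.fft.fftn(P)
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)[:, None, None]
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)[None, :, None]
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        X2Y2 = (v * kx / w) ** 2 + (v * ky / w) ** 2
    prop = np.nan_to_num(X2Y2, nan=np.inf) <= 1.0    # propagating region only
    kz = np.where(prop, (w / v) * np.sqrt(np.where(prop, 1.0 - X2Y2, 0.0)), 0.0)
    Pf = np.where(prop, Pf * np.exp(1j * kz * dz), 0.0)
    return np.real(np.fft.ifftn(Pf))

P = np.random.default_rng(3).normal(size=(32, 8, 8))   # toy [t, y, x] cube
Pz = phase_shift_step(P, v=2000.0, dz=5.0, dx=10.0, dy=10.0, dt=0.004)
print(Pz.shape)
```

Because |exp(i kz dz)| = 1 on the propagating region, the step is energy-preserving there; only the muted evanescent part is lost.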
Due to inherent limitations in data acquisition, seismic data reconstruction is an important procedure to recover missing data or improve the observation density. A number of conventional methods exist to solve the reconstruction task. Reconstruction has been shown to be challenging, especially in the case of complex seismic data. Recently, convolutional neural networks (CNNs) have been applied in seismic data processing. In most cases, the architectures of these CNN-based methods are relatively simple, without sufficient feature interaction, limiting their performance. To improve reconstruction results, a multi-cascade self-guided network (MSG-Net) is presented. In general, MSG-Net is inspired by the self-guided scheme, and a multi-cascade architecture is designed to accomplish informative-feature extraction from the analyzed seismic data at different resolutions. On this basis, a parallel spatial attention module is applied to further refine and enhance the primary features, thereby improving the reconstruction accuracy. To test and verify the new approach, a training dataset is generated based on synthetic records obtained by forward modelling methods. Experimental results demonstrate that MSG-Net is a promising approach for performing seismic data interpolation.
- Research Report > New Finding (0.66)
- Research Report > Experimental Study (0.48)
- North America > United States > West Virginia > Appalachian Basin > Marcellus Shale Formation (0.99)
- North America > United States > Virginia > Appalachian Basin > Marcellus Shale Formation (0.99)
- North America > United States > Texas > Permian Basin > Central Basin > Nelson Field > Ellenburger Formation (0.99)
- (4 more...)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems (1.00)
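A spatial attention module of the kind mentioned above can be gated, in its simplest form, by channel-wise average and max pooling followed by a sigmoid. The learned convolution of the real module is omitted here, so this is only an illustrative stand-in, not MSG-Net's actual layer.

```python
import numpy as np

def spatial_attention(features):
    """Minimal sketch of a spatial attention gate: pool across channels,
    squash to (0, 1) with a sigmoid, and re-weight every channel by the
    resulting spatial map. `features` has shape [channels, h, w]. The
    learned convolution of a real attention module is deliberately
    replaced by plain pooling, an assumption of this sketch."""
    avg_map = features.mean(axis=0)                  # channel-wise average pool
    max_map = features.max(axis=0)                   # channel-wise max pool
    attn = 1.0 / (1.0 + np.exp(-(avg_map + max_map) / 2.0))  # sigmoid gate
    return features * attn[None, :, :]               # re-weight all channels

feats = np.random.default_rng(1).normal(size=(8, 16, 16))
out = spatial_attention(feats)
print(out.shape)  # attention preserves the feature-map shape
```

Because the gate lies strictly in (0, 1), the module can only suppress or pass features, never amplify them, which is the refinement role the abstract assigns it.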
ABSTRACT Seismic data interpolation is essential in a seismic data processing workflow, recovering data from sparse sampling. Traditional and deep-learning-based methods have been widely used in the seismic data interpolation field and have achieved remarkable results. In this paper, we develop a seismic data interpolation method through the novel application of diffusion probabilistic models (DPMs). DPM transforms the complex end-to-end mapping problem into a progressive denoising problem, enhancing the ability to reconstruct complex situations of missing data, such as large proportions and large-gap missing data. The interpolation process begins with a standard Gaussian distribution and seismic data with missing traces and then removes noise iteratively with a UNet trained for different noise levels. Our DPM-based interpolation method allows interpolation for various missing cases, including regularly missing, irregularly missing, consecutively missing, noisy missing, and different ratios of missing cases. The ability to generalize to different seismic data sets is also discussed in this paper. Numerical results of synthetic and field data indicate satisfactory interpolation performance of the DPM-based interpolation method in comparison with the f-x prediction filtering method, the curvelet transform method, the low-dimensional manifold method (LDMM), and the coordinate attention-based UNet method, particularly in cases with large proportions and large-gap missing data. Diffusion is all we need for seismic data interpolation.
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems (1.00)
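The progressive-denoising interpolation loop described above can be caricatured in a few lines: start from Gaussian noise, repeatedly apply a denoising step (a trained UNet in the paper; an arbitrary callable here), and re-insert the observed traces after every step so the reconstruction stays consistent with the data. The toy denoiser below is purely hypothetical.

```python
import numpy as np

def dpm_interpolate(observed, mask, denoise_step, n_steps=50, seed=0):
    """Sketch of diffusion-based trace interpolation: begin at N(0, 1),
    iteratively denoise, and keep the known traces pinned to the data
    after each step. `denoise_step` stands in for the trained UNet."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=observed.shape)      # start from a standard Gaussian
    for k in range(n_steps):
        x = denoise_step(x, k)               # progressive denoising
        x = np.where(mask, observed, x)      # re-insert observed traces
    return x

# Toy stand-in denoiser: shrink towards zero (NOT a trained model).
toy_denoiser = lambda x, k: 0.8 * x
obs = np.ones((16, 32))
m = np.zeros((16, 32), bool)
m[::2] = True                                # every other trace is observed
rec = dpm_interpolate(obs, m, toy_denoiser)
print(rec.shape)
```

The re-insertion step is what turns an unconditional generator into an interpolator: the missing traces are free to evolve while the observed ones anchor each iteration.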
Combining unsupervised deep learning and Monte Carlo dropout for seismic data reconstruction and its uncertainty quantification
Chen, Gui (China University of Petroleum-Beijing) | Liu, Yang (China University of Petroleum-Beijing; China University of Petroleum-Beijing at Karamay)
ABSTRACT Many methods, such as multichannel singular spectrum analysis (MSSA) and deep seismic prior (DSP), have been developed for seismic data reconstruction, but they do not quantify the uncertainty of reconstructed traces, relying on the subjective visual inspection of results. Our goal is to quantify the reconstruction uncertainty while recovering missing traces. We develop a framework including an unsupervised deep-learning-based seismic data reconstruction method and the existing Monte Carlo dropout method to achieve this goal. The only information required by our framework is the original incomplete data. A convolutional neural network trained on the original nonmissing traces can simultaneously denoise and reconstruct seismic data. For uncertainty quantification, the Monte Carlo dropout method treats the well-known dropout technique as Bayesian variational inference. This refers to the fact that the dropout technique can be regarded as an approximation to the probabilistic Gaussian process and thus can be used to obtain an approximate distribution (Bernoulli variational distribution) of the posterior distribution. The reconstructed result and uncertainty of the trained model are obtained through multiple Monte Carlo dropout simulations. The analysis of the reconstructed uncertainty quantifies the confidence to use reconstructed traces. Tests on synthetic and field data illustrate that our framework outperforms the MSSA and DSP methods in reconstruction accuracy and quantifies the reconstructed uncertainty as an objective benchmark to guide decision making.
- Asia > China (0.30)
- North America > Canada > Alberta > Woodlands County (0.24)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Neural networks (1.00)
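The Monte Carlo dropout recipe in the abstract reduces, at inference time, to running the same stochastic forward pass many times and summarizing the predictions by their mean (reconstruction) and standard deviation (uncertainty). The toy "network" below is a hypothetical linear map with Bernoulli dropout, not the authors' CNN.

```python
import numpy as np

def mc_dropout_predict(forward, x, n_samples=100, seed=0):
    """Monte Carlo dropout at inference time: keep dropout active, run the
    stochastic forward pass many times, and report per-output mean
    (reconstruction) and standard deviation (uncertainty)."""
    rng = np.random.default_rng(seed)
    preds = np.stack([forward(x, rng) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy stand-in "network": a linear map whose units are dropped with
# probability p_drop (hypothetical, just to make the sketch runnable).
W = np.linspace(0.5, 1.5, 8)
def toy_forward(x, rng, p_drop=0.2):
    keep = rng.random(W.shape) >= p_drop         # Bernoulli dropout mask
    return x @ (W * keep / (1.0 - p_drop))       # inverted-dropout scaling

x = np.ones((4, 8))
mean, std = mc_dropout_predict(toy_forward, x)
print(mean.shape, std.shape)
```

The spread of `std` is the objective benchmark the abstract refers to: outputs whose predictions vary strongly across dropout masks are the ones to trust least.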
ABSTRACT Projection over convex sets (POCS) is one of the most widely used algorithms to interpolate seismic data sets. A formal understanding of the underlying objective function and the associated optimization process is, however, lacking to date in the literature. Here, POCS is shown to be equivalent to the application of the half-quadratic splitting (HQS) method to the norm of an orthonormal projection of the sought-after data, constrained on the available traces. Similarly, the apparently heuristic strategy of using a decaying threshold in POCS is revealed to be the result of the continuation strategy that HQS must use to converge to a solution of the minimizer. In light of this theoretical understanding, another method able to solve this convex optimization problem, namely the Chambolle-Pock primal-dual algorithm, is shown to lead to a new POCS-like method with superior interpolation capabilities at nearly the same computational cost of the industry-standard POCS method.
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems (1.00)
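The industry-standard POCS loop the abstract analyzes alternates two projections: hard thresholding of the Fourier coefficients with a decaying threshold (the continuation strategy of HQS), and re-insertion of the observed traces. A minimal NumPy version, with an assumed exponential threshold schedule:

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100, t_max=0.9, t_min=1e-3):
    """POCS with a decaying threshold: (1) hard-threshold the 2-D Fourier
    coefficients relative to the current maximum, (2) re-insert the
    observed traces. The exponential schedule values are assumptions."""
    x = observed * mask
    for k in range(n_iter):
        tau = t_max * (t_min / t_max) ** (k / (n_iter - 1))  # decaying threshold
        X = np.fft.fft2(x)
        X *= np.abs(X) >= tau * np.abs(X).max()              # hard thresholding
        x = np.real(np.fft.ifft2(X))
        x = mask * observed + (1 - mask) * x                 # data re-insertion
    return x

# Toy example: a sparse-spectrum section with ~30% of traces removed at random.
nx, nt = 64, 64
ix, it = np.meshgrid(np.arange(nx), np.arange(nt), indexing="ij")
d = np.cos(2 * np.pi * (3 * ix + 5 * it) / nx)               # one Fourier component
m = (np.random.default_rng(4).random(nx) < 0.7)[:, None] * np.ones(nt)
rec = pocs_interpolate(d * m, m)
print(np.max(np.abs(rec - d)))   # reconstruction error on the full section
```

Random (rather than regular) trace removal is used in the toy example because it spreads the sampling artifacts as weak noise in the spectrum, which is exactly what the decaying threshold progressively rejects.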
ABSTRACT Seismic data interpolation plays a crucial role in obtaining dense and regularly sampled data, contributing to improving the quality of seismic data in seismic exploration. Sparsity-promoting methods use a two-step iteration to gradually recover missing traces, by exploiting the sparse representation of seismic data in transform domains, such as the Fourier, wavelet, and curvelet transforms, within the framework of the projection onto convex sets (POCS). In the first step, the missing traces are restored by applying threshold shrinkage to the transform coefficients. In the second step, the observed data are inserted into the updated result. However, this method relies on a preselected transform and lacks the capability to adaptively capture sparse representations. In addition, determining the optimal threshold parameters can pose difficulties. These limitations yield unsatisfactory reconstruction results. To address this issue, we propose a novel approach called sparse prior-based seismic interpolation network (SP-net) that combines the sparsity-promoting method with a deep neural network. Unlike traditional end-to-end networks, our proposed neural network integrates the widely used POCS method into its architecture, enabling automatic learning of the sparse transform and threshold parameters from the training data set. By combining the merits of the sparsity-promoting techniques and data-driven deep-learning approaches, SP-net achieves enhanced adaptability and more accurate interpolation results. Through experiments conducted on synthetic and field seismic data, we demonstrate the effectiveness of our proposed method.
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Neural networks (1.00)
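Where classic POCS hard-thresholds with a hand-picked schedule, an unrolled network like the one described above can instead learn the threshold of a soft-shrinkage, the proximal operator of the L1 norm. A sketch of one such iteration, with an arbitrary tau standing in for the learned parameter:

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Complex soft-thresholding (shrinkage) of transform coefficients:
    the proximal operator of the L1 norm that an unrolled POCS network
    can apply with a learned tau instead of a hand-picked schedule."""
    mag = np.abs(coeffs)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * coeffs, 0.0)

# One unrolled iteration on toy data: shrink in the Fourier domain, then
# re-insert the observed samples (tau here is an arbitrary choice, standing
# in for a parameter the network would learn from training data).
rng = np.random.default_rng(2)
d = rng.normal(size=(32, 32))
mask = rng.random((32, 32)) > 0.5
C = np.fft.fft2(d * mask)
x = np.real(np.fft.ifft2(soft_threshold(C, tau=0.5 * np.abs(C).max())))
x = np.where(mask, d, x)                     # data re-insertion step
print(x.shape)
```

Stacking several such iterations, each with its own learnable transform and tau, is the unrolling idea the abstract describes; the data re-insertion step is kept exactly as in classic POCS.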