Phase decomposition is applied to low-impedance hydrocarbon-bearing sands in a clastic section where sand thicknesses vary from the vicinity of tuning to well below tuning. In order to properly interpret seismic phase changes caused by the introduction of hydrocarbons, it is useful to artificially "thin" the targets by high-cut filtering the data, thereby increasing the tuning thickness and making more layers seismically thin. Once this seismic thinning is performed, the amplitudes separate into the expected phase components, resulting in a different spatial distribution of mapped amplitudes than on the original seismic data. A useful method to determine what frequencies are required to obtain proper phase separation in a section with stacked interfering sands is to apply spectral decomposition to a synthetic seismogram, followed by phase decomposition.
Presentation Date: Wednesday, October 19, 2016
Start Time: 3:10:00 PM
Presentation Type: ORAL
As seismic data respond primarily to the elastic properties of the earth, one way to assess the best-case ability of seismic data to discriminate lithologies is to test the procedure on compressional- and shear-wave impedances derived from acoustic logs. A methodology for lithological discrimination using Genetic Programming (GP) and elastic attributes derived from well logs is found to classify lithology in Upper Albian deep-water tight gas sand reservoirs of the Santos Basin, Brazil, more accurately than Naive Bayesian Classification (NB), Linear Discriminant Analysis (DA), and Multilayer Perceptron (RN) classification. Three different GP methodologies, Genetic Programming Classifier Expression (GPCE), Genetic Programming with Multiple Output (GPMO), and Genetic Programming using Gaussian Distribution (GPGD), are found to require fewer calibration data to achieve a better success rate on a blind-test dataset than the NB, DA, or RN methods. Three experiments were run to evaluate these methodologies: in situ conditions; in situ plus extension to a 100% water saturation condition; and in situ plus extension to both 100% water and 100% gas saturation conditions. GP exhibits an over 70% success rate when applied to lithology classification and is the most accurate and robust of the methods compared: GPGD achieved an accuracy of 72.00% for the first experiment, 71.50% for the second, and 72.83% for the third, followed by GPMO (69.33%, 75.33%, 72.00%), GPCE (67.83%, 72.50%, 68.00%), RN (58.67%, 69.50%, 67.83%), DA (56.83%, 64.67%, 62.67%), and NB (55.50%, 66.17%, 64.67%).
Presentation Date: Monday, October 17, 2016
Start Time: 1:50:00 PM
Presentation Type: ORAL
Spectral decomposition data are sorted and visualized in a variety of ways. Useful sorting schemes include:
(1) pseudo-spectral volumes or “tuning cubes”, (2) time-frequency gathers, (3) space-frequency gathers, and (4) common frequency volumes. Sorting spectral data into maps and cross sections has useful application in visualizing a combination of time, frequency, and offset dimensions.
Introduction
Spectral decomposition data are multivalued at a given time and spatial position. Interpretation of subtle stratigraphic features from spectra may thus involve multidimensional visualization challenges. Consequently, the spectral data are sorted and visualized in a variety of ways (e.g., Partyka et al., 1999). Useful sorting schemes include: (1) pseudo-spectral volumes, such as “tuning cubes”, which are essentially frequency-dependent amplitude maps that can be loaded into common interpretation platforms for visualization; (2) time-frequency gathers, which show frequency spectra as a function of time for a single seismic trace; (3) space-frequency gathers, which show lateral variation of spectra for a given time window or along a horizon; and (4) common frequency volumes.
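The four sorting schemes listed above can be sketched with a short-time Fourier transform; in the minimal numpy/scipy sketch below, the volume dimensions, the 30 Hz frequency pick, and the trace and time-window selections are arbitrary stand-ins, not values from the paper.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical 3D stack: (n_inlines, n_xlines, n_samples), 2 ms sampling.
rng = np.random.default_rng(0)
data = rng.standard_normal((4, 5, 256))
fs = 500.0  # Hz (2 ms sample interval)

# STFT along the time axis gives, at every trace, amplitude as a
# function of both time window and frequency.
freqs, times, S = stft(data, fs=fs, nperseg=64, axis=-1)
amp = np.abs(S)  # shape: (n_il, n_xl, n_freq, n_windows)

# (1)/(4) Tuning cube / common-frequency volume: fix a frequency, keep
#         amplitude over (inline, xline, time window).
i30 = np.argmin(np.abs(freqs - 30.0))  # nearest bin to 30 Hz
common_freq_vol = amp[:, :, i30, :]

# (2) Time-frequency gather: fix one trace, keep (frequency, time).
tf_gather = amp[2, 3, :, :]

# (3) Space-frequency gather: fix a time window, keep (xline, frequency)
#     along one inline.
sf_gather = amp[2, :, :, times.searchsorted(0.25)]
```

Each sorted view is just a different slice of the same four-dimensional amplitude array, which is what makes the schemes interchangeable on an interpretation platform.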
A composite fault detection attribute is produced by Adaptive Principal Component Analysis of attributes derived from spectral decomposition. The composite fault detection attribute looks similar in time slice view to conventional attributes such as coherency and curvature but is far more readily interpretable in vertical cross-section view. Detailed interpretation of time slices reveals that window-based attributes such as coherence can exhibit discontinuities in the incorrect spatial position if the time slice does not correspond to the strongest event in the window. This problem is less severe on time slices of the composite fault detection attribute, and faults are thus more correctly located. These ideas are demonstrated in a case study in the Hitts Lake Field, where faults are verified by missing section in well logs.
We compare the spectra of the Short Time Window Fourier Transform (STFT) and Constrained Least Squares Spectral Analysis (CLSSA) for spectral-minima periodicity. Using the relation that time thickness equals 1/df, where df is the frequency spacing between successive spectral minima, we show that the spectral-minima approach using CLSSA gives more accurate time thicknesses than the STFT for the same analysis window.
Starting with broadband synthetic seismic data generated from a wedge model, we extract selected traces at known temporal thicknesses for time-frequency analysis. We cross-plot apparent time thicknesses derived from CLSSA and STFT line spectra using the approach above against the true time thicknesses of the wedge model. The result shows that apparent CLSSA thicknesses are strongly correlated with true time thicknesses.
We extend this study to real seismic data from Hitts Lake Field, onshore Texas and show that the results are consistent with the results from model studies.
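The thickness-from-notch-spacing relation used above can be illustrated with a minimal FFT example: a plain Fourier spectrum of a two-spike reflectivity stands in here for the STFT or CLSSA spectra of the paper, and the sampling and thickness values are arbitrary.

```python
import numpy as np

# Two opposite-polarity reflection spikes separated by dt seconds; the
# amplitude spectrum has periodic notches whose spacing df satisfies
# dt = 1/df, the relation used to estimate time thickness.
fs = 1000.0          # sample rate, Hz (assumed)
dt_true = 0.020      # true time thickness, s (assumed)
n = 1024
trace = np.zeros(n)
trace[100] = 1.0
trace[100 + int(round(dt_true * fs))] = -1.0

spec = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Locate spectral notches as local minima of the amplitude spectrum.
interior = (spec[1:-1] < spec[:-2]) & (spec[1:-1] < spec[2:])
notch_freqs = freqs[1:-1][interior]

# Average notch spacing df, then apparent thickness dt = 1/df.
df = np.mean(np.diff(notch_freqs))
dt_apparent = 1.0 / df
```

On real data the spectra come from a windowed transform, so the sharper (less smoothed) the spectrum, the more precisely the notch spacing, and hence the thickness, can be picked; this is the motivation for preferring CLSSA over the STFT.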
Although the S-Transform (ST) has better time resolution than the Continuous Wavelet Transform (CWT) at low frequencies, its resolution may still not be adequate for certain seismic interpretation purposes. A modified S-Transform (MST), which is a special case of the Generalized S-Transform (GST), is proposed as a simple means of controlling temporal and frequency resolution as a function of frequency. This is accomplished by replacing the frequency in the normalized Gaussian window of the S-transform with a linear frequency function. With an appropriate choice of slope and intercept for this linear function, greatly improved temporal resolution can be achieved at low frequencies while retaining the temporal resolution of the ST at high frequencies. Synthetic examples show that low-frequency ST spectra may suffer from interference from nearby reflectors, and that this effect can be greatly reduced or eliminated using the MST. Application to real seismic data indicates that the MST can be useful in direct hydrocarbon indication.
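The window substitution described above can be sketched directly. In this illustration the slope and intercept of the linear frequency function are assumed values chosen only for demonstration, not the parameters of the paper; the effect shown, a narrower window (better temporal resolution) at low frequency, is the one the abstract describes.

```python
import numpy as np

# Standard S-transform Gaussian window: its width scales as 1/|f|, so
# low frequencies get long windows and poor temporal resolution.
def st_window(t, f):
    return (abs(f) / np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * (t * f) ** 2)

# Modified S-transform window (illustrative form): replace the frequency
# in the Gaussian with a linear function g(f) = a*|f| + b. Slope a and
# intercept b here are assumptions for demonstration.
def mst_window(t, f, a=0.5, b=15.0):
    g = a * abs(f) + b
    return (g / np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * (t * g) ** 2)

t = np.linspace(-0.5, 0.5, 2001)
dt = t[1] - t[0]

def temporal_width(w):
    # Standard deviation of the normalized window: effective time resolution.
    p = w / (w.sum() * dt)
    return np.sqrt((t ** 2 * p).sum() * dt)

w_st = temporal_width(st_window(t, 10.0))    # ~1/10 s at 10 Hz
w_mst = temporal_width(mst_window(t, 10.0))  # ~1/(0.5*10 + 15) = 1/20 s
```

At high frequencies a*|f| dominates the intercept and the MST window approaches a scaled ST window, which is how the ST's high-frequency behavior is retained.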
This paper demonstrates a method for performing spectral decomposition of a non-stationary signal with Heisenberg minimum-uncertainty wavelets. By Heisenberg's principle, one cannot obtain arbitrarily narrow time and frequency resolution simultaneously; wavelets with optimal joint time-frequency localization therefore give the best decomposition of a signal. This paper also introduces a new family of wavelets, discovered during the research, that have optimal time-frequency localization and are termed µ-pseudo wavelets. Conventional decomposition approaches use the Ricker or Morlet wavelet for analysis, whose time-frequency uncertainty products are larger than those of the minimum-uncertainty wavelets. Because of their minimal uncertainty product, these new wavelets open up a new dimension of application for various decomposition techniques. A synthetic example demonstrates the robustness of the minimum-uncertainty wavelet decomposition.
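The claim that conventional wavelets exceed the minimum uncertainty can be checked numerically. The sketch below compares a Gaussian envelope, which attains the Heisenberg bound sigma_t * sigma_omega = 1/2, against a 30 Hz Ricker wavelet; the wavelet parameters are arbitrary, and the µ-pseudo wavelets of the paper are not reproduced here.

```python
import numpy as np

# Numerically evaluate the uncertainty product sigma_t * sigma_omega,
# the std devs of the energy densities |s(t)|^2 and |S(omega)|^2.
def uncertainty_product(sig, dt):
    n = sig.size
    t = (np.arange(n) - n // 2) * dt
    e = np.abs(sig) ** 2
    e /= e.sum() * dt                      # normalize time energy density
    t0 = (t * e).sum() * dt
    var_t = ((t - t0) ** 2 * e).sum() * dt

    S = np.fft.fftshift(np.fft.fft(sig))
    w = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, dt))
    dw = w[1] - w[0]
    E = np.abs(S) ** 2
    E /= E.sum() * dw                      # normalize spectral energy density
    w0 = (w * E).sum() * dw
    var_w = ((w - w0) ** 2 * E).sum() * dw
    return np.sqrt(var_t * var_w)

dt = 0.0002
t = (np.arange(16384) - 8192) * dt
gauss = np.exp(-0.5 * (t / 0.02) ** 2)     # Gaussian: attains the bound
a = (np.pi * 30.0 * t) ** 2
rick = (1.0 - 2.0 * a) * np.exp(-a)        # 30 Hz Ricker

up_gauss = uncertainty_product(gauss, dt)  # ~0.5, the Heisenberg bound
up_rick = uncertainty_product(rick, dt)    # ~1.71, well above the bound
```

For the two-sided spectrum of a real Ricker the product works out analytically to sqrt(35/12) ≈ 1.71, confirming that it is not a minimum-uncertainty wavelet.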
Principal Component Analysis (PCA) has been found to be an effective way of combining a variety of fault detection attributes exhibiting discontinuities that line up along particular 3D orientations. The Barnes (2006) discontinuity-filter method can be used to further enhance these results. Fault detection is improved using phase discontinuities obtained with Constrained Least Squares Spectral Analysis (CLSSA; Puryear et al., 2012), a higher resolution spectral analysis method than the short time Fourier transform (STFT). As CLSSA has reduced spectral smoothing relative to the STFT, it can potentially produce a higher resolution discontinuity attribute. Based on this feature, CLSSA is applied in the Barnett Shale to detect faults. Compared to coherence and curvature, the resulting PCA fault attribute better resolves minor tectonic and karsting-related faults.
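The core PCA-combination idea can be sketched with synthetic stand-in attributes (this is ordinary PCA on random data for illustration, not the paper's Adaptive PCA or its CLSSA-derived inputs): several noisy attributes sharing one discontinuity pattern are reduced to a single composite that tracks that shared pattern.

```python
import numpy as np

# Synthetic stand-ins: 4 fault attributes = shared fault signal + noise.
rng = np.random.default_rng(1)
n_voxels = 1000
common = rng.standard_normal(n_voxels)           # shared "fault" signal
attrs = np.stack([common + 0.3 * rng.standard_normal(n_voxels)
                  for _ in range(4)], axis=1)

# Standardize each attribute, then take the first principal component
# (top eigenvector of the covariance matrix) as the composite attribute.
X = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)
cov = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(cov)               # ascending eigenvalues
composite = X @ evecs[:, -1]                     # first principal component
```

The composite correlates more strongly with the shared signal than any single noisy input, which is the rationale for combining attributes before fault interpretation.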
Inversion of seismic reflection data for reflection coefficients can be formulated using predefined reflectivity patterns as basis functions. When basis pursuit decomposition with a wavelet dictionary of seismic layer responses is employed, the process is a sparse-layer, rather than a sparse-spike, inversion and consequently has improved resolution of thin layers. This is demonstrated on synthetic and real seismic data.
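A toy illustration of sparse-layer inversion with a dictionary of predefined layer responses follows. The Ricker wavelet, the candidate thickness set, and the FISTA solver used to minimize the L1-regularized misfit are assumptions standing in for the paper's basis pursuit decomposition; the point illustrated is that the basis functions are whole layer responses, not individual spikes.

```python
import numpy as np

# Illustrative 30 Hz Ricker wavelet (assumed source wavelet).
def ricker(f0, t):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

n, dt = 128, 0.002
w = ricker(30.0, np.arange(-32, 33) * dt)

# Dictionary: each basis function is the response of a thin layer, i.e.
# an opposite-polarity reflector pair at a candidate thickness (samples).
thicknesses = [1, 2, 3, 4, 6, 8]
atoms = []
for k in thicknesses:
    for i in range(n - k):
        r = np.zeros(n)
        r[i], r[i + k] = 1.0, -1.0
        atoms.append(np.convolve(r, w, mode="same"))
D = np.array(atoms).T                       # columns are layer responses

# Synthetic trace: one thin layer, 3 samples thick, top at sample 60.
r_true = np.zeros(n)
r_true[60], r_true[63] = 1.0, -1.0
trace = np.convolve(r_true, w, mode="same")

# Basis pursuit denoising: min 0.5||Dx - trace||^2 + lam*||x||_1,
# solved with FISTA (gradient step + soft-thresholding + momentum).
lam = 0.05
L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of gradient
x = z = np.zeros(D.shape[1])
t_k = 1.0
for _ in range(2000):
    g = z - D.T @ (D @ z - trace) / L
    x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2))
    z = x_new + ((t_k - 1.0) / t_next) * (x_new - x)
    x, t_k = x_new, t_next
```

Because each atom already encodes a layer of a specific thickness, the sparse solution identifies layers directly, which is why a sparse-layer formulation resolves thin beds better than a sparse-spike one.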
A method is proposed to estimate density from other well logs, such as sonic, gamma ray, and resistivity logs, in a unified rock model. An empirically derived rock model is an efficient way to compute density information, including pore pressure and diagenetic effects such as cementation and dissolution. The method provides a direct correlation between changes in density and changes in lithology and fluid content. One immediate benefit of estimating pore-fluid density from well-log data in very low porosity rocks is its value in verifying the integrity of the log data. Another benefit is that performing fluid replacement in the rock model provides an effective means of predicting zones prone to gas accumulation, sometimes by-passed because of incorrect values in, or absence of, the density logs. The rock model is constructed as a mixture of two lithologies, sand and shale, with variable proportions of each mineral and brine as the pore fluid. For very low porosity rocks under certain temperature and pressure conditions, the mineral composition can be related to grain mineral velocities by bounding equations. Lithology and porosity are then empirically related to velocity by the time-average equation or Issler's transform, with an estimated diagenesis factor applied to porosity. Once the relationship between density and velocity is established from the available well-log data, the transform from velocity to density can be performed.
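The velocity-to-density chain described above can be sketched with the time-average (Wyllie) equation and a simple mass balance. All matrix and fluid constants below are illustrative assumptions, and the diagenesis correction of the paper is omitted.

```python
# Hedged sketch: sonic velocity -> porosity via the Wyllie time-average
# equation, then porosity + sand/shale mix -> bulk density.

def wyllie_porosity(v, v_matrix, v_fluid):
    """Invert 1/V = phi/Vf + (1 - phi)/Vm for porosity phi."""
    return (1.0 / v - 1.0 / v_matrix) / (1.0 / v_fluid - 1.0 / v_matrix)

def bulk_density(phi, vsh, rho_sand=2.65, rho_shale=2.70, rho_brine=1.05):
    """Mass balance: rho = phi*rho_fluid + (1 - phi)*rho_matrix (g/cc)."""
    rho_matrix = vsh * rho_shale + (1.0 - vsh) * rho_sand
    return phi * rho_brine + (1.0 - phi) * rho_matrix

v = 4800.0                # measured P velocity, m/s (illustrative)
vm, vf = 5500.0, 1500.0   # assumed matrix and brine velocities, m/s
phi = wyllie_porosity(v, vm, vf)   # low porosity, consistent with tight rock
rho = bulk_density(phi, vsh=0.3)   # assumed 30% shale fraction
```

Fluid replacement for gas prediction would follow the same mass balance with a gas density substituted for brine, lowering the predicted bulk density in gas-prone zones.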