ABSTRACT Poynting-vector-based reverse time migration (RTM) can migrate seismic data into angle-domain common-image gathers (ADCIGs). The quality of a common-image gather is related to the migration velocity used. However, in seismic data processing we usually cannot obtain a very accurate depth migration velocity. The difference between the true depth velocity and the estimated migration velocity introduces image artifacts, often observed as non-flat events in the ADCIGs. These non-flat events degrade the final image that is linearly stacked over all angles. To mitigate such image artifacts caused by an inaccurate migration velocity, we can use both quantitative and qualitative methods to optimize the stacking. The quantitative method uses a numerical procedure to obtain an optimal velocity model for migration; however, residual moveout error is still left in the gathers. Instead, we propose a qualitative method to assure a geologically meaningful result: a segmentation method using the Moore neighborhood algorithm, which can decompose events in the ADCIGs into isolated signal groups. In this sequestered image domain, signals are easier to handle because we deal only with local signals that do not interfere with adjacent ones. We can automatically align concave events and focus stretched amplitudes at far angles. Additionally, we do not need cross-correlation to measure the moveout of non-flat events; consequently, we avoid mismatching among events at different depths. We test the proposed method on one numerical dataset. The numerical results show that the method can correct the non-flat events in the ADCIGs from small to large angles. Compared with traditional RTM with Laplace filtering and small-angle stacking, our new method produces a superior migration image with fewer artifacts. In practice, the proposed method can reduce the required accuracy of the migration velocity in depth velocity model building and provide a first look at the image.
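The segmentation step described in the abstract can be sketched as a connected-component search over a thresholded gather, where each pixel's Moore neighborhood (its eight surrounding pixels) defines connectivity. This is a minimal illustration, not the authors' implementation; the image layout, threshold, and function name are hypothetical.

```python
def moore_segments(image, threshold):
    """Group above-threshold pixels into isolated signal groups
    using 8-connectivity (the Moore neighborhood)."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    groups = []
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] > threshold and labels[r0][c0] == 0:
                # Flood-fill a new group from this seed pixel.
                group = []
                stack = [(r0, c0)]
                labels[r0][c0] = len(groups) + 1
                while stack:
                    r, c = stack.pop()
                    group.append((r, c))
                    # Visit the eight Moore neighbors.
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            nr, nc = r + dr, c + dc
                            if (0 <= nr < rows and 0 <= nc < cols
                                    and image[nr][nc] > threshold
                                    and labels[nr][nc] == 0):
                                labels[nr][nc] = len(groups) + 1
                                stack.append((nr, nc))
                groups.append(group)
    return groups
```

Each returned group is a local set of pixels that can be aligned or muted independently before stacking, which is the isolation property the abstract relies on.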
Presentation Date: Tuesday, October 16, 2018 Start Time: 1:50:00 PM Location: Poster Station 20 Presentation Type: Poster
- Geophysics > Seismic Surveying > Seismic Processing > Seismic Migration (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (1.00)
Joint inversion for microseismic event positions and velocity structures by combining multiscale deformable-layer tomography and master station earthquake location method
Wei, Zhili (University of Houston) | Zheng, Yingcai (University of Houston) | Zhou, Hua-Wei (University of Houston) | Hu, Hao (University of Houston) | Lau, August (University of Houston)
ABSTRACT Hypocenter determination is an evergreen research topic in earthquake seismology, and it has gained renewed interest for monitoring microseismicity in unconventional petroleum exploration. In microseismic monitoring, both the seismic event positions and the velocity model of the production area are critical for understanding the changes in the subsurface caused by hydraulic fracturing. However, there are uncertainties in all velocity models due to limited data quality and coverage, and such uncertainties degrade the accuracy of the event-position inversion. We address these issues by simultaneously inverting for the velocity model and the microseismic event positions, incorporating the master station earthquake location method with equal differential time (EDT) surfaces into multi-scale deformable-layer tomography (DLT). We tested this joint inversion on several synthetic datasets, which yielded satisfactory results: both the average traveltime residual and the average position misfit decreased dramatically over the iterations. These synthetic examples show that our joint inversion is capable of improving the accuracy of both event location and velocity model building. Presentation Date: Monday, October 17, 2016 Start Time: 1:25:00 PM Location: 144/145 Presentation Type: ORAL
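The equal-differential-time idea can be illustrated with a toy grid search: for every pair of stations, the difference of observed arrival times should match the difference of computed traveltimes at the true hypocenter, independent of the unknown origin time. The homogeneous velocity, station layout, and function names below are illustrative assumptions, not the authors' setup.

```python
import itertools
import math

def traveltime(src, sta, v=3.0):
    """Straight-ray traveltime in a homogeneous medium (km, km/s)."""
    return math.dist(src, sta) / v

def edt_misfit(candidate, stations, t_obs):
    """Sum of equal-differential-time residuals over station pairs.
    The unknown origin time cancels in the differences."""
    m = 0.0
    for i, j in itertools.combinations(range(len(stations)), 2):
        dt_obs = t_obs[i] - t_obs[j]
        dt_cal = (traveltime(candidate, stations[i])
                  - traveltime(candidate, stations[j]))
        m += abs(dt_obs - dt_cal)
    return m

def locate(stations, t_obs, grid):
    """Pick the grid node minimizing the EDT misfit."""
    return min(grid, key=lambda p: edt_misfit(p, stations, t_obs))
```

A synthetic check: generate arrival times from a known source with an arbitrary origin-time shift (which EDT ignores) and recover the source on a grid containing it.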
- Well Completion > Hydraulic Fracturing (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
Deepwater fold-and-thrust imaging: A case study with joint inversion, tomography, and dense velocity analysis
Shih, Chung-Chi (Schlumberger) | Masnaghetti, Luca (Schlumberger) | Mantovani, Marco (Schlumberger) | Luca, Luciana (Schlumberger) | Hayo, Kate (Schlumberger) | Wang, Tien-Huei (Schlumberger) | Chen, Zengbao (Schlumberger) | Reid, Eva (OMV) | Langdale, Scott (OMV) | Lau, August (Project Consultant)
ABSTRACT It is a challenge to derive a detailed velocity model for seismic imaging in complex geological terranes. For salt-prone areas, there has been good progress on short-wavelength velocity, e.g., full-waveform inversion (FWI) (Sun et al., 2012), the seismic adaptive optics approach (Etgen et al., 2014), and surface warping by reverse time migration time-lag gathers (Hu et al., 2015), used to achieve small-scale, detailed salt geometry. Deepwater fold-and-thrust imaging faces similar challenges in defining local structures in the Gulf of Mexico. This case study shows further complexity in that the deep-water thrusts are made of both younger Neogene deposits and older Cretaceous sedimentary sequences, in contrast to the older rocks typical of US onshore thrust zones, e.g., the Arkoma Basin (Alrefaee et al., 2012). Hence, we cannot assume that velocity within a thrust sheet is approximately constant. Our approach was modified from successful onshore workflows, designed around a processing flow for an offshore deep-water fold-and-thrust area using 2D seismic data. The main velocity-modeling components of this workflow are joint inversion of seismic and gravity data for long wavelengths, traveltime tomography for intermediate wavelengths, and dense normal-moveout-based (NMO) velocity analysis for local short-wavelength structural anomalies. To reduce the time needed for dense velocity analysis, geological-interpretation-based velocity functions were derived first. These guiding functions were then used to constrain automatic, densely spaced velocity analysis. The overall result was a better high-resolution velocity model that resolved the shorter-wavelength structural distortions. Presentation Date: Tuesday, October 18, 2016 Start Time: 11:35:00 AM Location: Lobby D/C Presentation Type: POSTER
- North America > United States > Oklahoma (0.25)
- North America > United States > Arkansas (0.25)
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (1.00)
Summary In depth migration of marine seismic data, water-layer-related multiples cause severe artifacts in migration images, sometimes misguiding interpretation. For a deep-water case, we typically use surface-related multiple elimination (SRME) to suppress the multiples. In recent years, the shallow-water multiple elimination (SWME) method has also been developed to tackle multiples where the water bottom is relatively shallow (<100 m). Furthermore, a poststack wave-equation-based multiple prediction and subtraction methodology can be utilized to remove residual multiples. In this study, we applied SWME to the original shot gathers, followed by depth migration using the demultipled data as input. After stacking, we also applied wave-equation-based poststack multiple prediction and subtraction to the stacked image. Comparing the results before and after multiple removal, we see dramatic improvement in the subsalt area. Introduction One of the major challenges for subsalt imaging in the Gulf of Mexico is the poor signal-to-noise ratio and the lack of signal content at lower frequencies. For deeper prospecting, the application of sophisticated demultiple techniques is thus very important, as the signal-to-noise ratio in subsalt areas is inherently poor. It is crucial that the primary seismic signal be well preserved and relatively free from interfering multiples so that subtle subsalt geologic information can be properly imaged. In this case study, we used shallow-water multiple elimination (SWME) in the prestack domain, followed by a wave-equation-based poststack demultiple, with the end goal of improving the quality and reliability of the migrated image in a subsalt prospect area for the interpreter. We will show that the improved image quality leads to significant improvements in the interpretation of the subsalt prospect.
In the following sections we first give a brief description of the demultiple workflow that was used, and then we show the resulting improvements in the interpretation of the subsalt prospect. Prestack SWME The data-driven surface-related multiple elimination (SRME) method proposed by Berkhout et al. (1997), along with its 3D implementation (Moore et al., 2004), has been widely used in seismic processing to suppress surface-related multiples. However, conventional SRME (2D as well as 3D) has severe difficulties in a shallow-water environment due to the short-period nature of the water-bottom multiples. For such shallow-water environments, the problem with the multiple model is even more severe in deeper areas. This leads to poor attenuation of water-bottom-related multiples, which in turn makes it difficult to interpret subsalt prospects with a high degree of confidence. In recent years, model-based methods have been proposed to handle such shallow-water situations (Wang et al., 2011). TGS showed that such a model-based method combined with conventional SRME can be an effective tool to attenuate multiples in broadband data (Zhai et al., 2015). The proposed shallow-water multiple elimination (SWME) approach predicts the multiple model accurately by using a broadband wavelet for the Green's function of the water bottom and enforces adequate aperture control on the Green's function as well. Such aperture control (based on the critical angle of the water-bottom reflection) is key to obtaining a clean, artifact-free multiple model by reducing contributions from spurious post-critical water-bottom reflections to the convolution process. In this study, the SWME workflow is applied to the prestack data. The demultipled data are then passed to the depth migration engine. The comparison of the depth migration images before and after SWME is shown in a later section.
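At its core, SRME-style prediction convolves the recorded data with itself: a primary arriving at time t produces a first-order surface multiple near time 2t. The toy zero-offset trace below illustrates only this timing property; real SRME performs a multidimensional convolution over surface positions plus an adaptive subtraction, neither of which is shown, and the function names are made up for illustration.

```python
def convolve(a, b):
    """Discrete time-domain convolution of two traces."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def predict_multiples(trace):
    """First-order surface-multiple model for a single zero-offset
    trace: the trace convolved with itself (sign/scaling ignored)."""
    return convolve(trace, trace)
```

With a single primary spike at sample 30, the predicted multiple lands at sample 60, i.e., at twice the primary traveltime.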
Summary We present a newly developed method of velocity inversion that does not require explicit picking of either common-image gathers or first breaks. Seismic data are first decomposed into Gaussian packets. The early-arrival packets are selected and used as input to an anisotropic traveltime tomography solver. Using the information contained in the packets, we can correctly model head waves and diving waves. The workflow is fully automated and can be used in a batch-processing environment with minimal human intervention. We demonstrate the method with two synthetic examples and one real-data example. In the synthetic examples, we are able to use super-long-offset data and reduce traveltime residuals from 400 ms to 5 ms or less in three iterations. In the real-data example, we achieved an accurate velocity model that yields both an improved image and flatter common-image gathers. This method can also be combined with traditional post-migration tomography.
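A Gaussian packet of the kind used in the decomposition above is essentially a Gabor atom: a sinusoidal carrier under a Gaussian envelope, localized in both time and frequency. The sketch below builds one such atom; the sampling interval, center time, frequency, and width are illustrative assumptions, not values from the paper.

```python
import math

def gaussian_packet(n, dt, t0, freq, sigma):
    """Sample a Gabor atom: a cosine carrier under a Gaussian
    envelope centered at time t0 with width sigma (seconds)."""
    out = []
    for k in range(n):
        t = k * dt
        env = math.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))
        out.append(env * math.cos(2.0 * math.pi * freq * (t - t0)))
    return out
```

Because the envelope and carrier both peak at t0, the atom's largest sample sits at the center time, which is what makes early-arrival packets easy to select by their time localization.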
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic modeling (0.92)
Seismic Depth Imaging From High to Low Signal: A Case History
Lau, August (Apache Corporation) | Yin, Chuan (Apache Corporation) | Tiwari, Dhananjay (TGS) | Guo, Zhiqiang (TGS) | Rodriguez, Gary (TGS) | Ji, Jean (TGS)
Summary The ultimate goal for seismic depth imaging is to find new hydrocarbon prospects or improve existing ones. High quality seismic data and an accurate velocity model are the main drivers for good imaging. Prospects may be identified on seismic data in areas which can range from high to low signal. Roughly speaking, prospects generated for shallow targets will be on good signal data and prospects for deeper targets will be in low signal areas, with the signal strength somewhere in between for the mid-range depths. Different imaging approaches, including migration algorithms and velocity model building techniques, are needed for the different situations of varying geology and signal-to-noise ratios. In areas of high signal, tomography is useful for refining the velocities for three-way (e.g. fault traps) or four-way depth closures. Where signal-to-noise is lower, more modern tools will be required. Areas of medium signal might include salt overhangs and fold-and-thrust belts, where we might want to define closure under a high angle thrust fault. These areas could benefit from Reverse Time Migration (RTM) based Delayed Imaging Time (DIT) scans. For deeper targets where the signal is often low, efficient RTM layer stripping can be very effective for improving the imaging of plays below salt or beneath a detachment or unconformity. We are presenting a case study showing improvement in the overall imaging in terms of fault closure, subsalt sediments truncating against the salt flanks, and better focusing around the salt overhang in an area of the Gulf of Mexico (GOM). This paper will demonstrate the benefit of tomography for sediment velocity model building and updating for depth imaging, along with the improvements gained by using RTM based DIT scans and layer stripping RTM.
- North America > Mexico (0.34)
- North America > United States > Kentucky > Butler County (0.24)
- Geology > Structural Geology > Tectonics > Compressional Tectonics > Fold and Thrust Belt (0.75)
- Geology > Structural Geology > Fault (0.69)
- Geophysics > Seismic Surveying > Seismic Processing > Seismic Migration (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (1.00)
Summary In many areas, available seismic acquisition data may not be optimal, and purely data-driven velocity modeling methods are inadequate to resolve specific imaging problems. In this case study to image beneath a fault shadow offshore in the Gulf of Mexico, limitations of the available data were the original motivation to incorporate various interpretation-derived constraints in conjunction with data-driven tomographic velocity updates to improve the overall velocity modeling process. Automated tomographic modeling is increasingly relied upon to produce high-quality models in cases of robust data input. However, in the case of sub-optimal acquisition that limits illumination, and noise and multiples that affect the ability to pick residual moveout, interpretation constraints can enhance data-driven modeling. Strategies incorporated here include "manually seeded" velocity analysis, explicit interpretation-guided pick weighting, and implicit geologic steering filters based on dip fields. The goal of the study is to highlight effective strategies for interpretation-constrained modeling to complement data-driven modeling in an effective holistic workflow.
- North America > United States (0.36)
- North America > Mexico (0.24)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic modeling (1.00)
Summary During the last five years, elastic prestack waveform inversion has been applied in a number of case studies. Recently, isotropic elastic prestack waveform inversion has been extended to VTI anisotropic prestack waveform inversion. The case studies in this presentation refer to the development of onshore oil and gas fields in North America. Seismic inversion has been deployed to derisk drill locations and to generate improved reserve estimates prior to drilling. Prestack waveform inversion estimates elastic parameters by matching the seismic waveforms on each gather, with the NMO correction removed, over the complete zone of interest. In one particular case study we compare a prestack waveform inversion with the results of an angle-stack inversion.
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic modeling (1.00)
- Management > Professionalism, Training, and Education > Communities of practice (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Knowledge management (1.00)
Summary We present a nonlinear processing methodology designed to separate out various structures in the seismic data. This methodology builds on the strategy of decomposing seismic data into simple and complex parts (see Lau et al., 2008), in which a variational minimization approach extracts the simple and complex structures. This paper presents an alternative construction using diffusion semigroups. Like the variational method, it is nonlinear, but it has the advantage of controlling the components in the nonlinear decomposition. We show that by using various diffusion-geometric tools we can build nonlinear filters that enable a decomposition of the data into various intrinsic substructures designed to facilitate interpretation tasks. Introduction It is very useful to decompose seismic data (prestack or poststack) into different components. Applications of such decompositions can be found in signal processing, data compression, coherent-energy extraction useful for structural interpretation, complex stratigraphy, etc. Seismic decompositions can roughly be divided into two categories of algorithms. The first category uses a predetermined basis; examples are the Fourier decomposition and the wavelet transform. The basic building blocks have definite geometric shapes, and the decomposition is fast. In general, however, these are linear decompositions, where the superposition of the components yields the original input data. The second category comprises projection methods like SVD (singular value decomposition), which define the projections (basis elements) based on the input data. The geometric shape of each projection is not predetermined. The view here is that complex objects cannot be fitted into a predetermined basis: the connectivity and conductivity of objects like networks or images determine the projections (basis elements) to be used. The geometry of each projection is data dependent.
Diffusion geometry using the Markov semigroup approach belongs to the second category. It employs the concept of affinity in the seismic data to determine the local geometry. The local geometry then determines the diffusion semigroup, which in turn determines the global geometry of the data. Methodology To motivate diffusion geometry and semigroups, we start with a 2-D image. The left side of Figure 1 shows a crossplot that looks like an inner dense circle and an outer ring of dense points. The input crossplot is nonlinear in the sense that no line can separate the inner and outer circles. The first step in diffusion geometry is to define an affinity, which is analogous to a generalized distance function. Once that is done, we use a transformation of the affinity to a Euclidean geometry (see the right side of Figure 1); the mathematics can be found in Coifman et al. (Proceedings of the National Academy of Sciences, 2005). Notice that in the transformed space on the right, the inner and outer circles can be roughly separated into linear components (left and right sides of the transformed plot of the first three eigenfunctions). We show another crossplot example in Figure 2 to further illustrate the change in geometry between the original input data and the transformed diffusion coordinates: the long-term diffusion of heterogeneous material is remapped.
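The affinity-to-geometry idea can be sketched numerically: build a Gaussian affinity matrix over the points, row-normalize it into a Markov (diffusion) matrix, and compare points by how similarly probability mass spreads from them after a few diffusion steps. This is a bare-bones illustration of diffusion distances, not the Coifman et al. eigenfunction embedding; the point set and kernel width are invented for the example.

```python
import math

def markov_matrix(points, eps):
    """Gaussian affinity between 2-D points, row-normalized into a
    Markov (diffusion) transition matrix."""
    n = len(points)
    W = [[math.exp(-((points[i][0] - points[j][0]) ** 2 +
                     (points[i][1] - points[j][1]) ** 2) / eps)
          for j in range(n)] for i in range(n)]
    return [[w / sum(row) for w in row] for row in W]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def diffusion_distance(P, t, i, j):
    """Distance between rows i and j of P^t: two points are close
    when probability mass diffuses from them in the same way."""
    Pt = P
    for _ in range(t - 1):
        Pt = matmul(Pt, P)
    return math.sqrt(sum((Pt[i][k] - Pt[j][k]) ** 2
                         for k in range(len(P))))
```

On a dense inner cluster plus widely spaced outer points, two inner points end up far closer in diffusion distance than an inner point and an outer point, mirroring how the crossplot clusters separate in the transformed coordinates.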
Summary More sophisticated algorithms like reverse-time migration and waveform inversion require a closer examination of how we preprocess the input seismic data. Too much signal processing could remove energy that used to be called "noise", since earlier notions of noise reflected an incomplete understanding of the signal in terms of the wave equation. Too little preprocessing could cause migration artifacts when the migration operator is presented with data that does not fit its assumptions. We introduce a new criterion for migration based on geometric simplicity. Most migration- or tomography-optimization methods are based on amplitude measures such as RMS, semblance, or correlation. Geometric constraints are usually difficult to define for migration or inversion, since geometry is in general a global concept. In this paper, we use Betti numbers from computational topology to describe geometric simplicity: the larger the Betti numbers in an area, the more complex the seismic response to be migrated. B0 (the Betti number of the zeroth homology group) measures connectivity, B1 (first homology group) measures one-dimensional holes, and B2 (second homology group) measures two-dimensional holes. This gives a geometric criterion for migration and for inversion methods like tomography. Introduction It is common to see migration artifacts, especially when we zoom in close to the reservoir of interest. They look like little smiles or frowns in a 2-D section and like little circles in a time or depth slice. These artifacts are detrimental when we try to interpret detailed features at the reservoir level. It is also common to see "bull's eyes" in tomography when viewing depth slices. Tomographic artifacts are also evident as V-shaped artifacts in 2-D profiles of the velocity model. Regularization methods are usually used to mitigate these problems.
For tomography, a common regularization method is to use a multi-scale approach or to change the damping parameter. Usually these methods cannot comprehend the geometry of the problem; they mitigate it but can create non-geologic-looking shapes. Our strategy is to identify geometric complexity when there are many crossing events in the unmigrated data. One source of many crossing events is the complex diffraction pattern of many point scatterers close together: the diffraction limbs interfere with each other in a nonlinear way. One might think that we can undo the diffractions with migration once we obtain the proper velocity field, but the interactions between closely spaced scatterers cannot be easily unraveled by a migration operator in real field data. So we want to reduce the complexity of the diffractions before migration. Methodology To motivate geometric simplicity, we first describe simple situations on synthetic data to illustrate the concept of a Betti number. For the mathematical definitions of Betti numbers, see the survey by Ghrist and the book by Kaczynski et al. Intuitively, the Betti numbers can be thought of as counting the number of holes in a space. The time slice of a complicated diffraction pattern will have high Betti numbers.
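For a 2-D binary image such as a thresholded time slice, the first two Betti numbers can be computed cheaply: B0 is the number of connected components, and B1 follows from the Euler characteristic of the pixel complex via χ = V − E + F = B0 − B1. The code below is a generic illustration on a small binary mask, not the authors' implementation; it assumes 4-connectivity, with each pixel a vertex, each adjacency an edge, and each filled 2×2 block a face.

```python
def betti_2d(mask):
    """Betti numbers (B0, B1) of a binary image under 4-connectivity.
    B0 = connected components; B1 = B0 - chi, where
    chi = vertices - edges + faces of the pixel complex."""
    pixels = {(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v}

    # B0 via union-find over 4-adjacent pixels.
    parent = {p: p for p in pixels}
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p
    for (r, c) in pixels:
        for q in ((r + 1, c), (r, c + 1)):
            if q in pixels:
                parent[find(q)] = find((r, c))
    b0 = len({find(p) for p in pixels})

    # Euler characteristic: pixels - adjacencies + filled 2x2 blocks.
    edges = sum((r + 1, c) in pixels for (r, c) in pixels) \
          + sum((r, c + 1) in pixels for (r, c) in pixels)
    faces = sum(all(q in pixels for q in
                    ((r + 1, c), (r, c + 1), (r + 1, c + 1)))
                for (r, c) in pixels)
    chi = len(pixels) - edges + faces
    return b0, b0 - chi
```

A hollow square ring, for instance, is one connected component with one hole (B0 = 1, B1 = 1), whereas two isolated pixels give B0 = 2, B1 = 0; a complicated diffraction slice would score much higher on both counts.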
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (0.88)
- Geophysics > Seismic Surveying > Seismic Processing > Seismic Migration (0.71)