We introduce an inversion-driven free surface multiple modelling scheme based on multi-order Green’s functions. The approach optionally combines surface related multiple modelling with source designature and receiver deghosting. We demonstrate the effectiveness of the approach for peg-leg multiple suppression and highlight the benefits of combined receiver deghosting and demultiple. In addition, we show how the use of multiples can provide uplift for cable interpolation.
Attenuation of shallow water peg-leg multiples remains a challenge for a number of reasons. Although successful in deep water settings, surface-related multiple attenuation (Verschuur et al., 1992) is compromised in shallow water by missing near offsets and insufficient spatial sampling. While τ-p deconvolution is broadly effective, it assumes a locally 1D multiple generator, and wavelet truncation is often unavoidable. More recent approaches based on multichannel prediction operators estimate shallow-section multiple generators from the multiples themselves (Biersteker, 2001; Hargreaves, 2006; Hung et al., 2010). This reduces the impact of missing near offsets and of the poorly recorded water bottom primary reflection. However, the operators can be contaminated by noise and other reflectors.
More recently, Wang et al. (2011) proposed a model-based water-layer demultiple approach based on a known Green’s function. For a sufficiently accurate Green’s function, the method can predict multiples with a high level of temporal precision. The strategy was extended by Cooper et al. (2015) to improve the amplitude integrity of the model and by Huang et al. (2015) to improve data consistency.
We introduce an inversion-driven free surface multiple modelling technique (MOGIN) using multi-order Green’s functions. The approach optionally combines multiple modelling, source designature, receiver deghosting, and cable interpolation.
We propose an efficient and accurate plane wave reverse time migration (RTM) imaging condition using frequency domain plane wave Green’s functions. The proposed imaging condition can be applied in anisotropic media, as well as in isotropic media. Based on the discrete Fourier transform (DFT), a procedure is developed to extract frequency plane wave Green’s functions from time domain wave fields that include anisotropy. The rapid expansion method (REM) is implemented for the decoupled P wave equation in an acoustic vertical transversely isotropic (VTI) medium to compute time wave fields. Only plane wave Green’s functions need to be computed for imaging, which leads to a fast migration scheme. Furthermore, the proposed imaging condition can generate ray-parameter common image gathers (CIGs). Therefore, the migration approach can be a useful tool for anisotropic parameter analysis. The proposed method is demonstrated to be efficient and effective through one synthetic example.
RTM (Baysal et al., 1983; McMechan, 1983; Zhang and Sun, 2008) is a powerful tool for imaging complex subsurface areas. In areas where the effect of anisotropy cannot be ignored, isotropic wave propagation used for RTM may result in mis-positioned structures. Acoustic transversely isotropic (TI) media, including VTI and tilted transverse isotropy (TTI), have proven to be good approximations for many subsurface areas. Implementing RTM under VTI (Du et al., 2008; Liu et al., 2009; Pestana et al., 2012; Zhou et al., 2006a) and TTI (Zhan et al., 2012; Zhang and Zhang, 2008; Zhou et al., 2006b) assumptions has been shown to recover structures correctly in anisotropic media.
In this study, we first derive the plane wave imaging condition using plane wave Green’s functions for double plane wave (DPW) datasets under the Born approximation (Bleistein et al., 2001; Stolt and Weglein, 2012). Tatalovic et al. (1990) introduced the DPW transform and DPW imaging in the coupled ray-parameter and frequency domain. DPW full waveform modeling was introduced and implemented by Sen and Frazer (1991) and Sen and Pal (2009). Stoffa et al. (2006) implemented DPW migration by computing vertical delay times for plane wave components with the aid of asymptotic ray theory (ART). DPW-based time domain RTM (Zhao, Stoffa, et al., 2014) and frequency domain RTM (Zhao, Sen, et al., 2014) have also been realized.
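The extraction of frequency-domain wavefields from a time-stepping simulation, as mentioned above, amounts to an on-the-fly DFT accumulation, G(ω) ≈ Σₙ u(tₙ) e^(−iωtₙ) Δt. The following is our own minimal sketch of this standard idea, not the authors' implementation; the function name and snapshot interface are assumptions:

```python
import numpy as np

def extract_frequency_greens(snapshots, dt, freqs):
    """Accumulate frequency-domain wavefields from time-domain snapshots
    via a running discrete Fourier transform, one frequency at a time.

    snapshots : iterable of 2-D arrays u(x, z) at times n*dt
    freqs     : frequencies (Hz) at which Green's functions are wanted
    Returns a dict {f: complex array} of frequency-domain wavefields.
    """
    G = {f: None for f in freqs}
    for n, u in enumerate(snapshots):
        t = n * dt
        for f in freqs:
            # DFT weight for this time sample
            phase = np.exp(-2j * np.pi * f * t) * dt
            G[f] = u * phase if G[f] is None else G[f] + u * phase
    return G
```

Because only a few plane-wave frequencies are needed, the accumulation adds one complex multiply-add per frequency per time step, which is what makes the frequency-domain scheme inexpensive relative to storing full time histories.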
Multidimensional seismic data reconstruction can be viewed as a low rank matrix or tensor completion problem. Different rank-reduction approaches can be employed to perform seismic data interpolation and denoising. For these methods, the computational cost and reconstruction quality are two important aspects that must be carefully considered. In this paper, we present a new, fast, and economical tensor completion method named Parallel Square Matrix Factorization (PSMF). We apply the algorithm to the ubiquitous 5D seismic data regularization problem. 5D reconstruction entails reconstructing a series of 4th-order multilinear arrays (tensors) in the frequency domain. For this purpose we transform the data to the frequency domain, and the 4D spatial volumes in midpoint-offset are reshaped into matrices. Rank reduction of these matrices is at the core of our reconstruction algorithms. We show that properly reshaping the data tensor into almost square matrices leads to an improved tensor completion algorithm. We demonstrate the effectiveness of the proposed approach via synthetic examples and with a data set from the Western Canadian Sedimentary Basin.
In the frequency-space domain, properly sampled seismic data can be represented by a low rank matrix or tensor. Decimation of traces and additive noise increase the rank of the matrix or tensor. Hence, matrix and tensor completion methods, which are widely applied in computer vision and recommendation systems (Liu et al., 2013), can be adopted to recover missing traces and to enhance the SNR of the seismic volume. Different reduced-rank methods have been adopted for seismic data processing. For instance, Trickett et al. (2010) proposed a matrix rank-reduction method based on Cadzow Filtering (CF) to interpolate missing traces. Similarly, Oropeza and Sacchi (2011) proposed a Multichannel Singular Spectrum Analysis (MSSA) method to reconstruct 3D data and adopted a randomized SVD algorithm to speed up the rank reduction filter required by their algorithm. Gao et al. (2013) extended the MSSA method to reconstruct 4D spatial data and adopted fast multilevel Toeplitz matrix-vector multiplication algorithms to improve the computational efficiency of the original MSSA algorithm. Kreimer and Sacchi (2012) introduced a low rank tensor completion method, named High Order SVD (HOSVD), to reconstruct 5D seismic volumes. Kreimer et al. (2013) proposed a nuclear norm minimization method which does not require a priori rank estimates. What these methods have in common is that they all reduce the rank of the tensor via the SVD algorithm or via the Lanczos bidiagonalization technique. Kumar et al. (2013) proposed a robust nuclear norm minimization method and adopted iterative low rank matrix factorization as an alternative to the SVD. Recently, Gao et al. (2015) introduced a fast SVD-free tensor completion approach, named the Parallel Matrix Factorization (PMF) method, to reconstruct multidimensional seismic data.
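To make the "almost square" reshaping idea concrete, here is a generic sketch, our own illustration rather than the PSMF algorithm itself: a 4th-order frequency-slice tensor is unfolded into its most balanced matrix form by grouping pairs of modes, and rank reduction is then applied via a truncated SVD (the pairing heuristic and function names are assumptions):

```python
import numpy as np

def square_unfold(tensor):
    """Unfold a 4th-order tensor into the most nearly square matrix
    obtainable by grouping two modes into rows and two into columns."""
    dims = tensor.shape
    assert len(dims) == 4
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    size_of = lambda pair: dims[pair[0]] * dims[pair[1]]
    # pick the pairing whose row and column sizes are most balanced
    best = min(pairings, key=lambda p: abs(size_of(p[0]) - size_of(p[1])))
    perm = best[0] + best[1]
    return tensor.transpose(perm).reshape(size_of(best[0]), -1), perm

def rank_reduce(matrix, rank):
    """Truncated SVD: keep only the leading `rank` singular triplets."""
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
```

A nearly square unfolding tends to give the best trade-off between the achievable rank and the amount of redundancy available for completion, which is the intuition the abstract appeals to.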
Diffraction imaging aims to image small subsurface objects, such as faults, fractures, and channels. As in classical reflection processing, velocity analysis is crucially important for diffraction imaging. Path-integral calculation provides a practical and efficient imaging method, which produces a velocity-independent image of the subsurface. We propose a direct analytical formula to improve the efficiency of path-integral evaluation. We also develop an adaptive filtering approach and a Gaussian weighting scheme for removing artifacts in path-integral images.
Path-integral formulation provides an efficient imaging method, which does not require subjective and time-consuming velocity picking and produces a velocity independent image of the subsurface (Landa et al., 2006; Burnett et al., 2011). The concept of path integral is based on the fact that an image can be obtained from summing (stacking) of wavefields along wavefronts: the image accumulates contributions from only those events that are nearly in phase and which correspond to real media features (Keydar, 2004; Landa, 2004).
A path-integral image can be calculated through a summation of a set of constant velocity images, generated by velocity continuation (VC), which accounts for both lateral and vertical shifts of the event under velocity perturbation. The transformation is continuous and analytical: a velocity step between constant velocity images can in theory be infinitely small (Claerbout, 1986; Fomel, 1994, 2003a,b; Burnett and Fomel, 2011; Decker and Fomel, 2014). Diffractions are sensitive to velocity perturbation (Novais et al., 2006): hyperbolae’s flanks change their shape under VC transformation whereas their apexes remain stationary. Therefore, by stacking constant velocity images one can superimpose diffraction apexes constructively, cancel out the hyperbolae’s flanks, and generate a path-integral image (Burnett et al., 2011). If the constant velocity images are weighted before stacking by their corresponding velocities, they produce a double path-integral image, which can be used for velocity estimation (Schleicher and Costa, 2009).
The aforementioned approach to path-integral calculation requires a sequence of VC steps to generate a set of constant velocity images and therefore may consume significant computational resources. Moreover, the produced images appear to be contaminated by artifacts compared to images produced by a regular VC workflow (Burnett et al., 2011). The artifacts come from incomplete cancellation of under- and overmigrated flanks of hyperbolas; we refer to these flanks as tails. A weighting scheme for path-integral calculation (Landa, 2004; Landa et al., 2006) can reduce the artifacts, but at the additional expense of determining the weights.
A time-domain seismic simulator used to compute either the forward simulation of a source or the adjoint simulation of a recorded wave field is often implemented using a time stepping algorithm based upon a selected explicit or implicit finite difference approximation to either a first or a second time derivative. Any finite-order approximation provides a solution that suffers from some degree of temporal dispersion, particularly at higher frequencies. The temporal dispersion errors can be corrected via a resampling operation in the frequency domain when using this type of simulator for forward simulation, Reverse Time Depth Migration (RTM), and Full Waveform Inversion (FWI) gradients. This correction can impact how well broad-band RTM or FWI inversion results tie to well logs.
An impulsive source is simulated at zero time, and the receiver data computed with a finite difference algorithm are shown in Fig. 2. The acoustic finite difference algorithm applied to the model is second order in time and 14th order in space. The vertical axis is time in seconds; the horizontal axis is trace number, with a 10 m increment between trace locations. A large time step is used in the simulation, and the events are distorted by temporal numerical dispersion: high-frequency energy arrives early, while very low-frequency energy arrives at roughly the correct time. This variation of propagation speed with frequency is called dispersion. The exact acoustic wave equation should provide a solution in which all frequency components propagate at the same speed; the numerical solution is dispersive because the temporal finite difference operator is inexact. The temporal numerical dispersion is a larger effect at higher frequencies and later times.
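For a scheme that is second order in time, inserting a temporal harmonic into the central difference shows that a true angular frequency ω oscillates numerically at Ω = (2/Δt)·arcsin(ωΔt/2), which is what the frequency-domain resampling correction exploits. The following is our own minimal sketch of such a correction under this second-order assumption, not the paper's implementation:

```python
import numpy as np

def remove_time_dispersion(trace, dt):
    """Undo temporal dispersion of a second-order-in-time FD simulation by
    spectral resampling: energy belonging to true angular frequency w was
    recorded at W = (2/dt) * arcsin(w*dt/2), so the corrected spectrum at w
    is read (by interpolation) from the recorded spectrum at W."""
    n = len(trace)
    D = np.fft.rfft(trace)
    f = np.fft.rfftfreq(n, dt)          # recorded frequency axis (Hz)
    w = 2.0 * np.pi * f
    arg = w * dt / 2.0
    # frequencies whose mapped location exceeds the valid range are zeroed
    W = np.where(arg < 1.0, (2.0 / dt) * np.arcsin(np.minimum(arg, 1.0)), np.inf)
    F = W / (2.0 * np.pi)
    re = np.interp(F, f, D.real, right=0.0)
    im = np.interp(F, f, D.imag, right=0.0)
    return np.fft.irfft(re + 1j * im, n)
```

Because the mapping is a pure resampling of the spectrum, the correction is cheap and can be applied once to source wavelets or recorded traces, which is how it becomes usable for forward simulation, RTM, and FWI gradients alike.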
The Xiashihezi Formation in the Ordos Basin, China, is a quartz-sandstone reservoir having low porosity, low permeability, and low pressure. Its quality is controlled primarily by depositional facies and secondarily by diagenesis. The principal targets of horizontal drilling, completion, and production are the thick massive sands and the vertically stacked sand beds. In this study, we acquired 266 km² of three-dimensional (3-D) seismic data and data from 20 vertical and 25 horizontal wells to show the potential of seismic attribute analyses in locating geological and engineering sweet spots for the placement of horizontal wells. The joint analyses of amplitude, texture, curvature, and frequency attributes demonstrate that the geology, geophysics, and horizontal drilling properties of the reservoir are linked. The study results have improved the placement of horizontal wells in tight gas reservoirs of the Sulige gas field.
Tight gas-bearing sand formations are often characterized by permeability as low as 0.1 millidarcies (Law and Curtis, 2002). Tight gas plays have received considerable attention in the past few years. The Sulige gas field (Figure 1a), Ordos Basin, China, is one such play and is the subject of this study.
The Lower Permian Xiashihezi Formation (P1h8) of the Sulige gas field is one of the largest and most actively developed tight gas sand plays in China. It extends across northern Shaanxi and southwestern Inner Mongolia. The reservoir quality is controlled primarily by depositional facies and secondarily by diagenesis. The Xiashihezi Formation is a large fluvial–deltaic complex (Figure 1b). Most of the gas-bearing intervals in the P1h8 reservoir correspond to point-bar sandstones, which are isolated within continuous distributary channel-fill sandstones. Wang et al. (2015) demonstrated the feasibility of applying seismic attributes to identify the higher-productivity sandy reservoirs in the Xiashihezi Formation. Seismic attribute analyses can greatly enhance gas recovery in tight gas reservoirs. In the Sulige gas field, He et al. (2013) suggested that thick massive sands (net productive thickness > 6 m and gas-bearing sand length > 500 m) and vertically stacked sand beds (6 m < net productive thickness < 15 m and gas-bearing sand length > 1000 m) are the primary targets for horizontal well drilling, completion, and production (Figure 2).
We propose enhanced microseismic event localization methods based on the representation theorem that do not require picking the P- and S-wave first arrivals. Both acoustic and elastic versions of the method are introduced in this extended abstract. The new method incorporates not only the wavefield but also its spatial gradient in the back-propagation process to obtain an enhanced image of the microseismic event. In the acoustic representation theorem, the three-component particle velocity or displacement field is combined with the pressure wavefield; in the elastic representation theorem, it is combined with the three-component rotational wavefield. Both the acoustic and elastic representation theorem based methods can provide a promising estimate of the true event location; the latter can furthermore image the source mechanism given good receiver coverage. The method also requires a robust focusing criterion to automatically determine microseismic event locations and origin times. A Hough transform based method is used for wavefront detection and magnitude summation in the time-space domain.
Microseismic monitoring involves locating and characterizing microseismic events triggered by human activity (Van der Baan et al., 2013). The most common application of the technology in the hydrocarbon industry is to monitor hydraulic fracturing treatments. Accurate microseismic event locations aid in understanding the fracture development, estimating simulated rock volumes and determining future drainage strategies (Maxwell, 2009).
Traditional travel-time based microseismic event localization methods require picking of P- and S-wave first arrivals, which can be a challenging and time-consuming task for low signal-to-noise ratio data (Artman et al., 2010). Mispicks and missing picks will lead to incorrectly located microseismic events.
Migration based methods are suitable for low signal-to-noise ratio data because they avoid picking first arrivals. Reverse time extrapolation is one of the most important methods in this category.
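The Hough transform mentioned in the abstract detects coherent wavefronts without picking. As a generic sketch of the idea (our own minimal illustration, not the authors' implementation), one can accumulate signal magnitude, rather than mere point counts, along candidate straight lines ρ = x·cosθ + y·sinθ in a time-space panel and take the strongest cell:

```python
import numpy as np

def hough_line_detect(panel, n_theta=180):
    """Minimal amplitude-weighted Hough transform: sum |amplitude| along
    candidate lines rho = x*cos(theta) + y*sin(theta) and return the
    (rho, theta) cell with the largest accumulated magnitude."""
    ny, nx = panel.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(nx, ny)))       # bound on |rho|
    acc = np.zeros((2 * diag, n_theta))
    ys, xs = np.nonzero(panel)
    vals = np.abs(panel[ys, xs])
    for ti, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, ti), vals)        # unbuffered accumulation
    best = np.unravel_index(np.argmax(acc), acc.shape)
    return best[0] - diag, thetas[best[1]]
```

Summing magnitudes rather than binary votes is what makes the accumulator double as a focusing measure: the maximum grows with both wavefront coherence and amplitude, which suits automatic origin-time and location picking.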
Seismic waves contain both travel-time information and waveform information. Traditional imaging methods mainly use the travel-time information to image reservoir structure and cannot easily be applied to attribute inversion. On the other hand, seismic waveform methods such as Full Waveform Inversion (FWI) and Least-Squares Migration (LSM) have demonstrated the ability to compute high resolution images. Based on the full wave equation, we present a strategy for high resolution imaging that implements joint inversion imaging with FWI and LSM. Numerical experiments on a synthetic dataset prove the efficiency of the method.
Least-Squares Migration (LSM) is an effective way to generate an amplitude-preserved imaging result (Nemeth et al., 1999). Based on the minimization of a least-squares objective function, and through the inversion of a Hessian kernel, amplitude distortion in the migration process caused by the limited frequency band, limited acquisition geometry, and complex overburden can be compensated by LSM (Shin et al., 2001; Plessix and Mulder, 2004; Symes, 2008; Ren et al., 2011). In particular, by using the full wave operator, Least-Squares Reverse Time Migration (LSRTM) has attracted attention from researchers in recent years and is proving its effectiveness for amplitude-preserved imaging. Meanwhile, Full Waveform Inversion (FWI) has fully demonstrated its ability to update the velocity model (Tarantola, 1984; Virieux and Operto, 2009).
Actually, FWI and LSRTM share a unified mathematical basis: both fit the seismic waves in amplitude and phase. Starting from a background velocity, FWI and LSRTM update the velocity model and the imaging result, respectively. The literature has shown that FWI has many limitations in application, partly because it uses the “full” waveform. Strategies have been proposed to address this problem, such as envelope FWI (Wu et al., 2013) and reflection-data FWI (Xu et al., 2012). All these strategies try to pick out a given character of the seismic wave and mitigate the nonlinearity of the misfit function with respect to the model.
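The shared least-squares core of FWI and LSRTM can be made concrete with a toy example. The sketch below (our own illustration, with a small generic linear operator G standing in for the wave-equation Born operator) runs steepest descent on the misfit J(m) = ½‖Gm − d‖²; applying the adjoint Gᵀ to the residual is the analogue of the migration/gradient step:

```python
import numpy as np

def least_squares_fit(G, d, n_iter=200):
    """Steepest descent on J(m) = 0.5 * ||G m - d||^2, the misfit form
    shared by FWI and LSRTM.  G stands in for the forward (Born) operator;
    G.T applied to the residual plays the role of migration."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        r = G @ m - d                         # data residual
        g = G.T @ r                           # gradient = adjoint of residual
        Gg = G @ g
        alpha = (g @ g) / (Gg @ Gg + 1e-30)   # exact line search (linear case)
        m -= alpha * g
    return m
```

FWI differs in that the operator itself depends on the velocity model, making the problem nonlinear, which is precisely the nonlinearity the envelope and reflection-data strategies above try to mitigate.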
The velocities and densities of heavy oil mixed with a hydrocarbon solvent (diluent 12 wt% and propane 88 wt%) were measured and analyzed over a wide range of temperatures from 8°C to 90°C, pressures from 1 MPa to 48 MPa, and mixed hydrocarbon solvent concentrations from 5 wt% to 80 wt%. Velocities and densities of the mixture decrease with increasing temperature, decreasing pressure, and increasing weight fraction of the solvent. The first 6 wt% of solvent shows the strongest effect in reducing velocity and density; thereafter, the effect diminishes gradually.
Solvent-based recovery methods for heavy oil can significantly reduce its viscosity and have been recognized as suitable for deeper reservoirs; they can achieve high recovery rates while avoiding the high-temperature reactions and/or large water requirements that often occur in steam-based methods (Jiang, 1997). In addition, the solvent is reusable. Solvents mainly include toluene and hydrocarbon gas/liquid (HC solvent); recently, CO2 has also been recognized as an additional solvent candidate. In order to design and optimize solvent-based processes, we need to monitor how solvents interact with heavy oil. The seismic method is the first choice for monitoring reservoir performance during solvent-based processing, and the acoustic properties of solvent–heavy oil mixtures are therefore a fundamental issue to be resolved. We have focused on different solvents, mainly propane (C3H8) and light-hydrocarbon-based solvents. In this abstract we present experimental results on the properties of heavy oil mixed with hydrocarbon solvent, including P-wave velocity and density, and an analysis of solvent efficiency in reducing heavy oil viscosity.
Experimental design and methodology
In order to investigate the solvent effect over a wide range of in-situ conditions, the samples were prepared to cover the range from the heavy-oil-rich end to the solvent-rich end.
We used weight percentages to prepare the samples. For the measurements, weight percentages were converted to volume percentages at the given pressure and temperature conditions.
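Assuming the component densities are known at the measurement pressure and temperature, the weight-to-volume conversion follows the standard mixing relation vᵢ = (wᵢ/ρᵢ) / Σⱼ(wⱼ/ρⱼ); the sketch below is our own illustration (function name and interface are assumptions):

```python
def weight_to_volume_fraction(weights, densities):
    """Convert weight fractions of mixture components to volume fractions,
    given component densities (any consistent units) at the measurement
    pressure and temperature:
        v_i = (w_i / rho_i) / sum_j (w_j / rho_j)
    Assumes ideal (volume-additive) mixing."""
    specific_volumes = [w / rho for w, rho in zip(weights, densities)]
    total = sum(specific_volumes)
    return [sv / total for sv in specific_volumes]
```

Note the ideal-mixing assumption: any excess-volume effects in the solvent–heavy oil mixture would require a correction beyond this simple relation.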
He, Conghui (Tsinghua University) | Fu, Haohuan (Tsinghua University) | Liu, Bangtian (Tsinghua University) | Ruan, Huabin (Tsinghua University) | Yang, Guangwen (Tsinghua University) | Yang, Hui (Statoil (Beijing) Technology Services) | Osen, Are (Statoil (Beijing) Technology Services)
Kirchhoff beam-stack migration is quite popular in production, offering both better image quality and faster speed compared to Kirchhoff migration. Meanwhile, continuous HPC developments offer new opportunities for the industry to further enhance the efficiency of beam migration methods. We present a design of a highly efficient GPU-based beam migration. By parallelizing both the ray tracing and the beam mapping kernels with millions of GPU threads and using an asynchronous IO scheme, we derive a parallel beam migration design that fits current CPU-GPU hybrid clusters. We then test our GPU-based beam migration on the SEG/EAGE salt model and the SEAM salt model for different generations of GPU architectures, obtaining accurate imaging results with 2-6 times speedup compared to a parallel 16-core CPU design.
Although Kirchhoff migration has difficulties imaging complex geologic structures (Etgen et al., 2009), it is still widely used for 3D prestack depth imaging in production due to its robustness and ease of use. It also offers the flexibility of target-oriented processing, which helps minimize processing turn-around time (Sun et al., 2000). Therefore, efforts to improve the image quality and to accelerate Kirchhoff migration remain worthwhile.
By making full use of the directional and spatial properties of adjacent traces, 3-D prestack Kirchhoff beam migration (Sun et al., 2000) was proposed as an alternative to Kirchhoff migration, with both better image quality and faster speed. It works by first stacking the traces into a τ-p beam volume and then mapping the beams, rather than individual traces, into the image volume.
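The first step, forming the τ-p beam volume, can be sketched with a simple nearest-sample slant stack; this is our own minimal illustration of the transform, not the production kernel:

```python
import numpy as np

def slant_stack(gather, dt, dx, p_values):
    """Local slant stack (tau-p transform): for each slowness p, delay each
    trace by p*x and sum, turning a gather of traces (nt, nx) into a beam
    volume indexed by intercept time tau and ray parameter p."""
    nt, nx = gather.shape
    x = np.arange(nx) * dx
    taup = np.zeros((nt, len(p_values)))
    for ip, p in enumerate(p_values):
        for ix in range(nx):
            shift = int(round(p * x[ix] / dt))   # nearest-sample delay
            if 0 <= shift < nt:
                taup[:nt - shift, ip] += gather[shift:, ix]
            elif -nt < shift < 0:
                taup[-shift:, ip] += gather[:nt + shift, ix]
    return taup
```

A linear event with slowness p₀ stacks coherently into a single (τ, p₀) beam, which is why mapping beams instead of traces both compresses the data and suppresses incoherent noise before imaging.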
Beam migration has been widely used for seismic imaging due to its high performance on CPU clusters with either distributed or shared memory architectures. Recently, the rapid development of GPU architectures has provided a high performance platform for geophysical processing and has brought significant speedups for Kirchhoff and RTM methods (Shi et al., 2011; Liu et al., 2013). In contrast, the variations in beam sizes and the irregular memory access patterns make it difficult to derive an efficient mapping of beam migration onto the GPU platform (Jang et al., 2011), and few GPU-based beam migration designs have been proposed.