AVO processing workflows are designed to preserve relative amplitudes across offset gathers, making them ideal for reservoir characterization. When the traces are not aligned, however, post-processing techniques based on windowed cross-correlation are employed to condition the seismic data. We present a new application of the dynamic time-warping algorithm, a technique originally developed for speech recognition that matches similarities between two discrete time series that are out of phase, to non-linearly correct for time misalignment. A case study with a time-lapse dataset from the Norne field, Norwegian Sea, shows significant improvement in Bayesian inversion for elastic properties when the traces are warped prior to the inversion process.
Reservoir depletion gives rise to changes in both seismic amplitude and reflection timing between the base and monitor surveys that can be associated with property changes in the reservoir. In addition, compaction can occur (a change in reservoir thickness). Time-lapse AVO inversion techniques utilize the change in seismic amplitude as a function of offset to quantitatively decouple reservoir production effects into pressure and saturation attributes (Landro, 2001). The fidelity of AVO, either as a hydrocarbon indicator or as a tool for deciphering production effects, can be improved by realignment of offset gathers and by warping of 4D surveys.
AVO studies for fluid delineation and production effects require reasonably flat gathers. The premise behind fitting AVO equations (Zoeppritz and its many linearized forms) to data in a seismic inversion scheme is that the traces are relatively flat (horizontal reflectors). This may be far from true in practice, hence the need for AVO post-processing techniques. Singleton (2009) classified these methods as either velocity-based or static-based (Hinkley, 2004; Gulunay et al., 2007). Static methods employ windowed cross-correlation between traces to compute shifts; re-alignment done in this manner may not suffice to flatten the events, because the earth is complex and nonlinear and therefore calls for a non-linear technique. A dynamic time-warping algorithm (Roberto, 2012; Hale, 2013), which is a static method of gather realignment, is applied to flatten prestack angle stacks before inversion, improving the seismic inversion results. The importance of gather flattening by dynamic time-warping will be demonstrated by the improvement in time-lapse inversion results implemented in a Bayesian framework, with a case study from the Norne field, Norwegian Sea.
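The core of a dynamic time-warping alignment of the kind cited above (Hale, 2013) can be sketched in a few lines: accumulate squared alignment errors over a limited lag range, constraining the lag to change by at most one sample per time step, then backtrack the minimizing lag sequence. The following is a minimal illustrative sketch under our own assumptions (function name, integer lags, ±1-sample strain limit), not the implementation from the cited papers.

```python
import numpy as np

def dtw_shifts(f, g, max_lag=10):
    """Estimate sample shifts aligning trace g to trace f by dynamic
    time warping: accumulate squared alignment errors over a limited
    lag range (lag may change by at most one sample per step), then
    backtrack the minimizing lag sequence."""
    n = len(f)
    lags = np.arange(-max_lag, max_lag + 1)
    nl = len(lags)
    # alignment error for every (sample, lag) pair
    e = np.full((n, nl), np.inf)
    for j, lag in enumerate(lags):
        idx = np.arange(n) + lag
        ok = (idx >= 0) & (idx < n)
        e[ok, j] = (f[ok] - g[idx[ok]]) ** 2
    # accumulate errors forward in time
    d = e.copy()
    for i in range(1, n):
        for j in range(nl):
            lo, hi = max(0, j - 1), min(nl, j + 2)
            d[i, j] = e[i, j] + d[i - 1, lo:hi].min()
    # backtrack the minimizing lag sequence
    shifts = np.empty(n, dtype=int)
    j = int(np.argmin(d[-1]))
    shifts[-1] = lags[j]
    for i in range(n - 2, -1, -1):
        lo, hi = max(0, j - 1), min(nl, j + 2)
        j = lo + int(np.argmin(d[i, lo:hi]))
        shifts[i] = lags[j]
    return shifts
```

Applying the recovered shifts (e.g. by interpolation of g) flattens the event non-linearly, which is what windowed cross-correlation with a single shift per window cannot do.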
We present a simple method for eliminating surface-related multiples in shallow-water environments. After a brief technical discussion, the procedure is demonstrated on an OBC survey imaging a producing field in the central North Sea. The North Sea's shallow bathymetry and well-known strong water-bottom reflectivity make it both a challenging and an ideal survey for testing the removal of shallow-water-layer multiples. We demonstrate with CDP stacks and autocorrelations that surface-related multiples are accurately estimated and effectively removed from the dataset.
The shortcomings of the industry-standard data-driven surface-related multiple elimination (SRME) in shallow-water environments are well documented. Moreover, as a surface-consistent technique, SRME requires the shot and receiver to be near the surface, so the method is not straightforward to extend to the OBC case. As an alternative approach, tau-p deconvolution has been widely used on shallow-water OBC data to remove water-bottom-related multiple reflections based on the multiples' periodicity in tau-p space. This is, however, potentially harmful, as primaries with similar periodicity are likely to be attenuated as well.
In this paper we discuss a workflow for successful attenuation of shallow-water multiples in a time-lapse 4C OBC dataset using a wavefield extrapolation method. Our study shows that this wavefield demultiple methodology can act as a complementary tool which, when applied after a regular tau-p deconvolution, successfully attenuates the strong remnant water-bottom-related multiples. In the following sections, we first briefly review the methodology of multiple model estimation using a wavefield extrapolation approach and then present the results of our study on a shallow-water OBC dataset.
The shallow water demultiple method we propose is model driven and uses the bathymetry of the water bottom to estimate a model of the water-layer multiples. After the multiples are estimated, successful multiple elimination relies on subsequent adaptive subtraction of the model from the input. The engine behind the method is a one-way wavefield propagation and extrapolation in the water column.
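As an illustration of the engine described above, a highly simplified sketch of one-way phase-shift extrapolation through the water column is given below: the recorded shot record makes one extra round trip (down to the water bottom, reflect, back up). The function name, the constant water-bottom depth and velocity, and the fixed reflectivity are our simplifying assumptions; a real implementation would honour the bathymetry and follow the multiple estimation with adaptive subtraction.

```python
import numpy as np

def water_layer_multiple_model(d, dt, dx, z_wb, v_w=1500.0, r_wb=-1.0):
    """Model water-layer multiples of a shot record d(t, x) by one extra
    round trip through a flat water column of depth z_wb, using one-way
    phase-shift extrapolation in the f-k domain."""
    nt, nx = d.shape
    D = np.fft.fft2(d)                                # to f-k domain
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)[:, None]   # angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, :]  # horizontal wavenumber
    kz2 = (w / v_w) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))                # vertical wavenumber
    # causal two-way delay through the water layer; evanescent energy zeroed
    phase = np.exp(-1j * 2.0 * kz * z_wb * np.sign(w)) * (kz2 > 0)
    return r_wb * np.real(np.fft.ifft2(D * phase))
```

At vertical incidence this reduces to a delay of 2 z_wb / v_w with a sign flip from the free surface, which is the periodicity that tau-p deconvolution exploits; the extrapolation handles the dip-dependent part of that delay.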
Compared with conventional reservoirs, complex reservoirs often have a greater variety of pore types and more complex pore shapes. Carbonate rocks, for example, can have moldic, vuggy, interparticle, intraparticle and crack porosity. The essence of rock physics modelling of complex reservoirs is therefore to characterize this complicated pore structure; that is, the rock physics model must accommodate a variety of pore types. Taking the critical porosity model as the starting point and combining it with the Kuster-Toksöz equations, this paper establishes the relationship between the critical porosity of a rock and its pore structure (i.e., the pore aspect ratio), and proposes a new critical porosity model for multiple-porosity rock that can accommodate various pore types and can be used to model complex reservoirs.
The elastic properties of a rock depend significantly on its pore structure. Most rocks have two or more different pore types, such as pores, cracks and cavities, and this complex pore system makes the relationship between velocity and porosity highly scattered (Sayers, 2008). It is therefore necessary to establish a multiple-porosity rock physics model that accurately characterizes how the elastic moduli of porous rock vary with porosity.
Effective medium theory is often used to study the elastic properties of porous rock; the Kuster-Toksöz theory is one example. Kuster and Toksöz (1974) derived expressions for the bulk and shear moduli of multiple-porosity rock using wave scattering theory, in which the effects of the elasticity, volume content and pore shape of inclusions are taken into account. There are also several empirical models for calculating the dry-frame bulk and shear moduli. Nur (1992) proposed the concept of critical porosity, which establishes a linear relationship between the bulk and shear moduli of the rock matrix and those of the dry frame, with the critical porosity value depending on the rock type.
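Nur's critical porosity concept referenced above reduces to a simple linear scaling of the mineral moduli, falling to zero at the critical porosity. A minimal sketch (the function name and the quartz-like default values are ours):

```python
import numpy as np

def dry_moduli_critical_porosity(K_min, G_min, phi, phi_c=0.4):
    """Nur's critical porosity model: dry-frame bulk and shear moduli
    decrease linearly from the mineral values at phi = 0 to zero at
    the critical porosity phi_c (which depends on rock type)."""
    scale = np.clip(1.0 - np.asarray(phi) / phi_c, 0.0, None)
    return K_min * scale, G_min * scale
```

For example, with quartz-like mineral moduli (K = 37 GPa, G = 44 GPa) and phi_c = 0.4, a porosity of 0.2 gives dry-frame moduli of exactly half the mineral values; it is this single, fixed phi_c that the multiple-porosity extension in this paper replaces with a pore-structure-dependent value.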
In 2014 we proposed a new technique that used well information to correlate anisotropy with velocity for localized lithology-dependent anomalies (Birdus et al., 2014). It is based on the assumption that, in appropriate geological settings, localized variations in both velocity and anisotropy are caused by changes in lithology. This results in some correlation between anisotropy and velocity anomalies. We used well information to establish such a correlation for tomographic PSDM anisotropic imaging velocity models. In this paper we extend our approach to high-resolution FWI depth velocity modeling. We use a real 3D seismic dataset from the NW Australian shelf to illustrate how our technique produces more realistic anisotropic velocity models and reduces depth misties.
Having an accurate anisotropy model is very important for depth-velocity modeling, in particular for correct positioning of seismic reflectors in depth. The effective seismic (moveout) velocity V_seis in VTI media depends on the true vertical velocity V_vert and the anisotropy parameter δ (Thomsen, 2002):
V_seis = V_vert · √(1 + 2δ)    (1)
The same effect is present in more complex anisotropy models (TTI, orthorhombic, etc.). If we use only seismic data, we are not able to separate the two factors on the right-hand side of equation (1), i.e., we cannot unambiguously distinguish arrival-time variations due to velocity from those due to anisotropy. This leads to uncertainties in velocity/anisotropy estimation. In this paper, we focus on the elliptical component of anisotropy (δ = ε), which is responsible for the errors in depth estimation. We concentrate on small and medium-size anomalies where the major global trends are known.
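The ambiguity expressed by equation (1) translates directly into a depth error whenever the moveout velocity is used as if it were the vertical velocity. A small illustrative calculation (the function name and the example numbers are ours):

```python
import numpy as np

def depth_mistie(t_twt, v_seis, delta):
    """Depth error when the moveout velocity V_seis of equation (1)
    is used as if it were the vertical velocity, i.e. delta ignored.
    t_twt: two-way vertical traveltime (s)."""
    v_vert = v_seis / np.sqrt(1.0 + 2.0 * delta)
    z_wrong = v_seis * t_twt / 2.0   # delta ignored
    z_true = v_vert * t_twt / 2.0    # delta honoured
    return z_wrong - z_true
```

With δ = 0.1, a moveout velocity of 3000 m/s and 2 s two-way time, the reflector images roughly 260 m too deep, which is the kind of mistie that the well-based velocity-anisotropy correlation is meant to remove.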
Uncertainties and depth misties in anisotropic depth-velocity modelling and imaging
We use seismic data, available well information and a priori geological knowledge to build anisotropic imaging depth-velocity models. The problem of uncertainties is resolved by creating the simplest model that satisfies all input data (Artemov and Birdus, 2014). Traditionally we put all detected small-scale anomalies into the imaging velocities and set the anisotropy values using simplified smoothed models.
Converted-phase (CP) imaging produces high resolution images and can be used effectively for updating both P- and S-wave speed models with an optimization scheme that is formulated in the extended image domain. This optimization is referred to as source-independent converted-phase WEMVA (SICP-WEMVA). However, the convergence of the optimization scheme depends on the selection of parameters and the formulation of the objective functions and their gradients. In this study, we investigate the sensitivity of the extended images for SICP-WEMVA to the domain in which the objective function is formulated. We derive analytically the behavior of the seismic energy (i.e., seismic moveout) in the extended horizontal and vertical subsurface space-lag images as a function of P- and S-wave speed variations, and compare it with numerical results. The moveout analysis demonstrates that the extended vertical subsurface space-lag images have higher sensitivity to background P- and S-wave speed variations than the horizontally extended images, which may have significant implications for the resolution and convergence of SICP-WEMVA.
In recent years, full waveform velocity analysis methods have become standard and the use of elastic waves is drawing more attention. Converted-phase (CP) waves are an integral part of the recorded elastic seismic signal and have been investigated in numerous studies of VSP data (e.g., Esmersoy, 1990; Stewart, 1991; Xiao and Leaney, 2010), surface reflection data (e.g., Purnell, 1992; Stewart et al., 2003; Hardage et al., 2011) and transmission seismic data (e.g., Vinnik, 1977; Vinnik et al., 1983; Bostock et al., 2001; Rondenay et al., 2001; Shang et al., 2012; Brytic et al., 2012; Shabelansky et al., 2014). In particular, Xiao and Leaney (2010) and Shang et al. (2012) show that converted-phase seismic images can be calculated with a single elastic wave propagation without source information (i.e., location, mechanism, time function), and may have higher resolution and fewer artifacts than reflection-type imaging (Shabelansky et al., 2012). Shabelansky et al. (2013, 2015) presented an analysis for updating elastic P- and S-wave speed models based on the source-independent converted-phase imaging framework in the so-called extended image gather domain. This analysis is referred to as source-independent converted-phase WEMVA (SICP-WEMVA). For this technique, the misfit function is calculated in the extended image gather domain from the interference between P and converted S (and/or S and converted P) waves; we call the distribution of this wave interference its moveout.
Time/amplitude warping is an automatic way to align two seismic traces even when time shifts between them are larger than a quarter of a wave period. The results of time/amplitude warping have been qualitative and less informative in predicting underlying physical properties. We demonstrate that pure time/amplitude warping without background velocity information can produce quantitative relative velocity changes for time-lapse applications. Our warping method uses total variation norms for regularization, and piecewise linear basis functions for the time warping function. Although the method can detect and quantify variations in thin beds, accuracy may not be sufficient for reservoir monitoring. We demonstrate that predictions of velocity variations over thicker intervals can be quite accurate, making the approach useful for other applications such as monitoring time-lapse changes in the overburden, or estimating Vp/Vs ratios when registering PP and PS data. We also point out that time/amplitude warping does not destroy 4D signals.
Time/amplitude warping can match corresponding reflection events separated by more than a wave period. From the optimization point of view, the basin of attraction is much larger and the cycle-skipping problem is mitigated (Baek et al., 2014b). From the reservoir monitoring point of view, however, warping gives more or less qualitative results (Baek et al., 2014a) that are not easy to integrate into reservoir management, e.g., reservoir model updates (Vasco et al., 2015). Moreover, the fundamental resolution limit of seismic data makes the data integration even more difficult. Hence, we focus on the resolution and accuracy that time/amplitude warping is able to provide.
The previous warping algorithm in Baek et al. (2014b) assumes continuous warping functions; piecewise cubic Hermite polynomials are used to represent warping solutions. The choice of cubic Hermite polynomials, however, was based on the assumption that large time shifts are created by large discrepancies in smooth background velocities, which is reasonable in velocity estimation. This assumption does not seem to hold in time-lapse analysis. Hence, we consider different basis functions for representing warping functions, since reservoir events and subsurface physical properties are more or less discrete and patchy. The relation between velocity changes and time shifts gives us a clue about the smoothness and continuity of warping functions, and it also suggests new types of regularization term. We therefore propose a new optimization method that allows less smooth optimal solutions.
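The clue mentioned above is the standard small-perturbation relation between time shifts and velocity changes, dv/v ≈ -d(Δt)/dt: a piecewise-linear time-shift function therefore corresponds to a piecewise-constant relative velocity change between knots, which is exactly the "discrete and patchy" behavior expected of reservoir intervals. A minimal sketch of this mapping (the function name is ours):

```python
import numpy as np

def relative_velocity_change(knot_times, knot_shifts):
    """Map a piecewise-linear time-shift (warping) function, given by
    its knots, to the relative velocity change in each interval using
    dv/v ~ -d(dt_shift)/dt; the result is constant between knots."""
    dshift = np.diff(knot_shifts)   # change in time shift per interval
    dtime = np.diff(knot_times)     # interval durations
    return -dshift / dtime
```

For example, a shift that grows by 4 ms over a 200 ms interval and then stays flat implies a 2% velocity decrease in that interval and no change below it; the sign convention is that slower rock accumulates positive delay.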
The inverse Laplace transform is one of the methods used to obtain time-domain electromagnetic responses in geophysics. The Gaver-Stehfest algorithm has so far been the particular technique used to compute the inverse Laplace transform in the context of transient electromagnetics. However, the accuracy of the Gaver-Stehfest algorithm, even in double-precision arithmetic, is relatively low at late times due to round-off errors. We therefore turned our attention to two other algorithms for computing inverse Laplace transforms, namely the Euler and Talbot algorithms. Using as examples a rectangular loop source and a horizontal electric dipole source over layered half-spaces, these two algorithms, implemented in ordinary double-precision arithmetic, are shown to efficiently yield more accurate time-domain responses at late times than the standard Gaver-Stehfest algorithm.
In electromagnetic geophysics, computing synthetic data for specific acquisition systems and configurations and for prospective Earth models is essential for feasibility analysis, survey design, and data inversion and interpretation.
Synthesizing data for the time-domain method is typically more complex than for the frequency-domain case. For some simple models and particular configurations of sources and receivers, analytic methods do exist. For example, Goldman and Fitterman (1987) present a technique for computing the transient response excited by a rectangular loop on the surface of a two-layer medium directly in the time domain. For more general scenarios, however, the only option is numerical solution of the time-dependent partial differential equation. This can be done by time-stepping, or by the so-called spectral method, in which the partial differential equation is transformed to the frequency or Laplace domain, solved numerically there, and the time-domain responses obtained by inverse Fourier or Laplace transformation.
The most common algorithms used to compute inverse Fourier transforms in electromagnetic geophysics are the cosine and sine transforms implemented with a digital filtering method (Anderson, 1983; Newman et al., 1986). The fast Fourier transform (FFT) has also been used (Jang et al., 2013); however, the number of frequencies required for the FFT is very large, and simple interpolation over a sparsely sampled frequency response does not give accurate results (Hohmann, 1983). Typically, for the digital filter method, dozens of frequency-domain responses distributed evenly in the logarithm of frequency over a broad spectral bandwidth are computed directly, and cubic spline interpolation is then used to obtain the few hundred frequency-domain response estimates needed for the filtering. The lagged convolution approach of Anderson (1983) enables the same frequency-domain responses to be reused in the calculation of the responses at multiple times. The digital filtering approach using cosine and sine transforms has been shown to provide both good accuracy and efficiency in many geophysical contexts (e.g. Newman et al., 1986; Key, 2012).
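For reference, the Gaver-Stehfest algorithm whose late-time accuracy this paper examines is compact: f(t) is approximated from samples of F(s) at s = k ln2/t with fixed rational weights. The sketch below uses the standard weight formula with N = 12 terms (our choice) in double precision:

```python
import math

def stehfest_coeffs(N=12):
    """Standard Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def gaver_stehfest(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) by the
    Gaver-Stehfest algorithm: f(t) ~ (ln2/t) * sum_k V_k F(k ln2/t)."""
    a = math.log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(Vk * F((k + 1) * a) for k, Vk in enumerate(V))
```

Because the weights V_k alternate in sign and grow rapidly with N, the sum suffers severe cancellation: increasing N beyond roughly 14-16 in double precision makes the result worse, which is the round-off limitation at late times that motivates the Euler and Talbot alternatives.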
A true amplitude reverse time migration (RTM) algorithm is formulated for common shot prestack depth migration (PSDM). The formulation borrows from past efforts to design true amplitude Kirchhoff/Born PSDM in the framework of asymptotic theory and is adapted here to the RTM imaging method. 2D/3D RTM numerical experiments on synthetic cases show the impact and benefit of using the true amplitude imaging condition. Compared with classical RTM, the resulting images are better in terms of slowness perturbation recovery, have higher spectral resolution, and contain fewer migration artefacts.
In conventional depth migration, seismic images are usually obtained by applying Claerbout's imaging principle, which consists of correlating the incident wavefield with the backpropagated wavefield coming from the receiver data at every image point (Claerbout, 1971). The resulting images may suffer from a lack of resolution and from migration artefacts, and may not be correct in terms of amplitude response. The deconvolution imaging condition (IC), introduced originally by Claerbout for common shot PSDM, has recently been revived by several authors, mainly because it provides a better approximation that reduces the geometrical spreading loss and compensates energy in areas of weak illumination. However, this comes at the expense of enhanced migration artefacts in the depth images. To reduce those artefacts, different techniques applied during RTM or as post-processing tools have been proposed. In this work, we develop a common shot true amplitude IC to be used within an RTM method. Our formulation is framed in general stochastic inverse problem theory (Tarantola, 1984), and we use high-frequency asymptotic analysis to obtain a closed form of the RTM solution. The first goal of this approach is to establish an RTM formulation with quantitative recovery of model reflectivity for migrated common shot gathers. Those RTM shot gathers may be mapped to the classical offset domain (Giboli et al., 2012) for post-processing enhancement, velocity model updating or amplitude-versus-offset analysis; this yields true amplitude common offset RTM results, since all wave propagation effects are accounted for during RTM. The second goal is better insight into which factors are missing in the classical approach to common shot RTM and what the consequences are.
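The difference between Claerbout's cross-correlation imaging condition and the stabilized deconvolution IC mentioned above can be illustrated per shot in the frequency domain. The sketch below (function name, array layout and stabilization choice are ours) shows that the deconvolution IC divides by the source illumination |S|², so the image amplitude approaches the reflectivity regardless of source strength, whereas the cross-correlation image scales with it:

```python
import numpy as np

def imaging_conditions(S, R, eps=1e-3):
    """Single-shot imaging conditions from frequency-domain wavefields.
    S, R: source and receiver wavefields, shape (nfreq, nz, nx).
    Returns the cross-correlation image and the stabilized
    deconvolution image (source illumination compensated)."""
    # Claerbout's cross-correlation IC: sum over frequency of conj(S) R
    xcor = np.sum(np.conj(S) * R, axis=0).real
    # deconvolution IC: divide by |S|^2, stabilized against weak illumination
    illum = np.abs(S) ** 2
    decon = np.sum(np.conj(S) * R / (illum + eps * illum.max()), axis=0).real
    return xcor, decon
```

In poorly illuminated zones the denominator is dominated by the stabilization term, which is precisely where the deconvolution IC amplifies noise into the migration artefacts that the true amplitude formulation aims to control.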
Another objective is to get a framework of iterative least-squares RTM with faster convergence. This faster convergence should be the consequence of using a better approximation of inverse modeling operator instead of the adjoint modeling operator used widely in iterative least-squares RTM method.
The goal of Marchenko redatuming is to reconstruct, from single-sided reflection data, wavefields at virtual subsurface locations containing transmitted and reflected primaries and internal multiples, while relying on limited or no knowledge of discontinuities in subsurface properties. Here, we address the limitations of the current Marchenko scheme in retrieving waves in highly heterogeneous media, such as subsalt or subbasalt. We focus on the initial focusing function that plays a key role in the iterative scheme, and propose an alternative focusing function that uses an estimate of the inverse transmission operator from a reference model that contains sharp contrasts (e.g., salt boundaries). Using a physics-driven estimate of the inverse transmission operator, we demonstrate that the new approach retrieves improved subsurface wavefields, including enhanced amplitudes and internal multiples, in a subsalt environment.
The retrieval of wavefields within the earth's subsurface where no receivers or sources are available is a key component of wave-equation imaging and inversion; however, retrieving full-wave responses containing internal multiples with improved amplitudes has long presented a challenge to imaging practice. The method of Marchenko redatuming or autofocusing (Broggini et al., 2011; Wapenaar et al., 2013) retrieves such wave responses inside the subsurface while using relatively little information about the earth's properties. The fields retrieved by Marchenko redatuming can, in principle, be used to improve imaging beyond current capabilities, as discussed by Behura et al. (2012), van der Neut et al. (2013), Broggini et al. (2014), Slob et al. (2014), Wapenaar et al. (2014a) and Vasconcelos et al. (2014). Recently, Ravasi et al. (2015) validated the imaging capabilities of the method on ocean-bottom field data. While indeed capable of retrieving internal multiples and correcting amplitudes, recent studies in highly complex media brought forth some limitations of the current Marchenko scheme (van der Neut et al., 2014a; Wapenaar et al., 2014b). With the aim of applying Marchenko redatuming in geologically complex media such as subsalt, we review the limitations of the existing approach and propose an alternative scheme capable of accounting for higher medium complexity.
A bright spot (either a stacked seismic bright spot or a traditional AVO anomaly) does not necessarily indicate high-gas-saturation hydrocarbons (it may instead be non-reservoir rock, brine, or low-gas-saturation hydrocarbons), and high-gas-saturation hydrocarbons may not exhibit bright-spot behavior, especially in deep-water areas or deep gas settings. However, high-gas-saturation hydrocarbons do exhibit anomalously low elastic properties (Poisson's ratio, density, etc.) in the northern South China Sea and the Gulf of Mexico. Quantitative elastic prediction is therefore an effective solution to the problems of non-gas bright-spot and non-bright-spot gas identification in those areas.
In past decades, bright spots in stacked seismic amplitude have been used to predict gas (high-gas-saturation hydrocarbons) in the northern South China Sea and the Gulf of Mexico. Since the 1980s it has been commonly accepted that identifying commercial gas with the bright-spot technique carries great risk, owing to non-gas bright spots caused by low-gas-saturation hydrocarbons.
In recent years, multiple exploratory wells based on the bright-spot technique have failed to find gas in the deep-water area of the northern South China Sea, because those bright spots were caused by non-reservoir rock, brine, or low-gas-saturation hydrocarbons. A few exploratory wells in the northern South China Sea have also encountered gas zones that did not exhibit bright-spot behavior.
Allen and Peddy (1993) noted that many seismic amplitude anomalies are not caused by economic hydrocarbon (gas) accumulations and found a close relationship between Poisson's ratio and water saturation (Sw). It is commonly accepted that density may reveal crucial information about fluid saturation. Van Koughnet et al. (2001, 2003) found that low density is the main signature of gas sands with high gas saturation in the deep-water Gulf of Mexico.
In this paper, we present multiple examples of non-gas bright spots and non-bright-spot gas in the northern South China Sea, analyze the elastic properties of both, and give a solution for their identification and prediction. Finally, a case study is presented to show that elastic prediction is an effective solution to the non-gas bright-spot identification problem in the northern South China Sea.