The Earth model parameters are essential to hydrocarbon exploration. In particular, the velocity representation of the subsurface continually motivates research into deriving more accurate fields. One of these tools is full waveform inversion, a compute-intensive method that uses acquired seismic data and forward modeling to obtain a velocity field iteratively. Since the mid-eighties, the geophysical community has devoted considerable research to waveform inversion. The inversion can be implemented in either the time or the frequency domain. In this paper, we investigate the pros and cons of each domain, along with a discussion of their suitability for 3D surveys.
The challenge for waveform inversion is to produce an improved velocity model that can be used to predict seismic data that more closely resemble the acquired, observed data and to demonstrate that the updated velocity model is a better representation of the true model than that derived by various model building methods.
The full-waveform inversion based on the finite difference approach was originally introduced in the time-space domain (e.g. Tarantola, 1984; Gauthier et al., 1986; Crase et al., 1990, 1992; Pica et al., 1990; Sun & McMechan, 1992). The full-waveform inversion described in this paper is solved by an iterative local, linearized approach using a gradient method (e.g. Tarantola, 1987). At each iteration, the residual wavefield (the difference between the observed data and the wavefield predicted by the starting model) is minimized in a least-squares sense. The process is iterated non-linearly by using the updated model from the previous iteration for the subsequent iteration.
The inverse problem can also be implemented in the frequency domain (Pratt et al., 1998, 1999; Operto et al., 2007). The seismic data are decomposed via a Fourier transform into monochromatic wavefields, which permits the inversion to be performed on a limited set of discrete frequency components at a time. The inversion proceeds from low to high frequencies to insert progressively higher wavenumbers into the velocity model. A single frequency is equivalent to a sinusoidal component in the time domain (Sirgue and Pratt, 2004), and when the same range of frequencies is used, the frequency-domain method is equivalent to the time-domain method.
Below we describe the advantages and disadvantages of both the time- and frequency-domain implementations.
Parallel computation efficiency: The time-domain method can easily be parallelized by distributing the shots or plane waves (Vigh and Starr, 2007) and then summing the results to form the gradient. This yields a coarse-grain parallel application that can run on commodity hardware. Frequency-domain implementations are typically solved for a large number of shots using LU decomposition of a large sparse matrix. This approach is difficult to parallelize on distributed-memory architectures, which leads to using more expensive large-scale SMP machines and/or high-speed interconnects such as InfiniBand, Myrinet, or Quadrics. The iterative frequency-domain and hybrid approaches, however, can be parallelized efficiently.
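The coarse-grain shot parallelism described above can be sketched as follows. Here `shot_gradient` is a hypothetical stand-in for the real per-shot work (forward-modeling the shot, forming the residual wavefield, and back-propagating it); the point is that the shots are independent and only the final sum requires communication.

```python
from multiprocessing import Pool

import numpy as np


def shot_gradient(shot_id):
    # Hypothetical stand-in for the per-shot kernel: forward-model the shot,
    # form the residual wavefield, and back-propagate it to obtain a gradient
    # contribution on the velocity grid. Seeded noise keeps it deterministic.
    rng = np.random.default_rng(shot_id)
    return rng.standard_normal((100, 100))


def full_gradient(shot_ids, workers=4):
    # Each shot is independent, so the map step needs no communication;
    # only the final sum gathers the per-shot contributions.
    with Pool(workers) as pool:
        parts = pool.map(shot_gradient, shot_ids)
    return np.sum(parts, axis=0)


if __name__ == "__main__":
    g = full_gradient(range(8))
    print(g.shape)
```

The same pattern maps directly onto MPI-style clusters: replace the process pool with one rank per shot group and the final sum with a reduction.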
We consider the linearized constant density viscoacoustic wave equation, which involves simultaneous inversion for both velocity and attenuation contrasts or perturbations. The medium parameters can be characterized by a complex-valued velocity that includes wave speed as well as attenuation. The least-squares error measures the squared norm of the difference between modeled and observed data. Its gradient with respect to the medium parameters represents a migration image. We can use a gradient-based minimization algorithm to invert for the model parameters. Convergence rates improve with a suitable preconditioner, which usually is some approximation of the Hessian. For the linearized, constant density viscoacoustic wave equation we derive an exact Hessian that differs from the more conventional Hessian by including the complex-valued part. For the inverse problem, we consider a single point scatterer and investigate a one-dimensional vertical-line preconditioner. We observe that this preconditioner improves both convergence and accuracy. However, the question remains whether the suggested preconditioner is feasible, since computing and inverting it is computationally costly.
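The preconditioned gradient iteration can be sketched on a toy real-valued linear least-squares problem (not the viscoacoustic equations themselves); here the diagonal of F^T F stands in for the approximate Hessian used as preconditioner, and all names are illustrative.

```python
import numpy as np


def invert(F, d_obs, m0, n_iter=200, step=1.0):
    """Preconditioned steepest descent for min 0.5 * ||F m - d_obs||^2."""
    # The Hessian of this quadratic misfit is F^T F; its diagonal gives a
    # cheap preconditioner that rescales each gradient component.
    precond = 1.0 / np.diag(F.T @ F)
    m = m0.copy()
    for _ in range(n_iter):
        residual = F @ m - d_obs      # predicted minus observed data
        grad = F.T @ residual         # misfit gradient (a "migration image")
        m -= step * precond * grad    # preconditioned model update
    return m


rng = np.random.default_rng(0)
# Well-conditioned toy forward operator: random matrix plus a strong diagonal.
F = rng.standard_normal((20, 5)) + 10.0 * np.eye(20, 5)
m_true = rng.standard_normal(5)
m_est = invert(F, F @ m_true, np.zeros(5))
print(np.max(np.abs(m_est - m_true)))
```

In practice the Hessian is never formed explicitly at this scale; the sketch only illustrates how a diagonal approximation rescales the gradient.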
This paper aims at clarifying the true nature of the AVO gradient and the root of its ability as a DHI. It is demonstrated that the coefficient B in R(θ) ≈ A + B sin²θ may not always be an acceptable approximation to the gradient of amplitude variation with offset (or, in other words, with incidence angle). With large-offset data included in AVO inversion, the so-called AVO gradient estimated on the basis of the three-term approximation cannot be the coefficient B defined by Shuey in equation (3). Nor can it be the gradient of amplitude versus sin²θ. It is a mixture of coefficient B and the term C(2tan²θ + tan⁴θ).
The root of the ability of the AVO gradient as a DHI might be buried in AVO inversion practice, where large-offset data are included in the inversion. This ability, and the success of the AVO technique, might mainly depend upon how closely the estimated AVO gradient approximates the Poisson reflectivity defined by Verm and Hilterman (1995).
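The mixing described above can be checked directly: writing the three-term approximation as a function of s = sin²θ (with tan²θ = s/(1−s)) and differentiating gives a slope of B + C(2tan²θ + tan⁴θ), not B alone. A minimal numerical check, with purely hypothetical coefficients A, B, C:

```python
import numpy as np

A, B, C = 0.05, -0.10, 0.08   # hypothetical coefficients, for illustration only


def reflectivity(s):
    # Three-term approximation written as a function of s = sin^2(theta),
    # using tan^2(theta) = s / (1 - s).
    t = s / (1.0 - s)
    return A + B * s + C * (t - s)


s = np.sin(np.radians(30.0)) ** 2          # s = sin^2(30 deg) = 0.25
h = 1e-6
slope_numeric = (reflectivity(s + h) - reflectivity(s - h)) / (2 * h)
t = s / (1.0 - s)
slope_mixture = B + C * (2 * t + t ** 2)   # B plus the C(2tan^2 + tan^4) term
print(slope_numeric, slope_mixture)
```

At 30 degrees the slope already differs noticeably from B, which is why a gradient fit to wide-angle data absorbs part of the C term.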
Seismic attributes are derived from seismic records, and many seismic attributes can find counterparts in gravity and magnetic transformations. In this work, I explain some quantities computed from seismic, gravity and magnetic data that use the same mathematics but may bear different names. In particular, I compare the development history and physical meaning of (a) the complex trace attributes of seismic data and the complex analytic signal of gravity and magnetics and (b) the seismic dip and azimuth and the horizontal gravity gradient vector. I also show differences of the curvature attributes in applications to seismic and gravity and magnetic data. Such comparisons may be beneficial to the use and further development of seismic attributes and gravity and magnetic transformations.
This paper presents a new method for segmenting channel features from 3D seismic volumes. Anisotropic diffusion using Gaussian-smoothed first-order structure tensors is conducted along the strata of seismic volumes in a way that filters across discontinuous regions caused by noise or faulting, while preserving channel edges. The eigenstructure of the second-order structure tensor is used to estimate orientation and channel curvature. Gaussian smoothing of second-order tensor orientations accounts for noisy vectors from imprecise finite-difference calculations and generates a stable tensor across the image. Analysis of the confidence and direction of the second-order eigenvectors is used to enhance depositional curvature in channel features by generating a confidence and curvature attribute. The tensor-derived attribute forms the terms of a PDE, which is iteratively updated as an implicit surface using the level set process. This technique is tested on two 3D seismic volumes with results that demonstrate the effectiveness of the approach.
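The structure-tensor construction at the heart of this workflow can be sketched in 2-D (the paper works on 3-D volumes, but the tensor algebra is the same): smooth the outer products of the image gradient, then read orientation and a confidence measure off the tensor's eigenstructure. This is a generic sketch, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def structure_tensor_orientation(image, sigma=2.0):
    """Dominant local orientation and confidence from a structure tensor (2-D sketch)."""
    gy, gx = np.gradient(image.astype(float))   # axis 0 = rows, axis 1 = columns
    # Outer products of the gradient, Gaussian-smoothed to stabilize noisy vectors.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form eigen-analysis of the symmetric 2x2 tensor: orientation of
    # the eigenvector with the largest eigenvalue.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    # Confidence: anisotropy (eigenvalue difference over eigenvalue sum);
    # near 1 for coherent structure, near 0 for isotropic noise.
    trace = jxx + jyy
    diff = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    confidence = np.where(trace > 1e-12, diff / (trace + 1e-12), 0.0)
    return theta, confidence
```

For a layered image whose amplitude varies only with depth, the recovered orientation is vertical (gradient normal to the strata) with confidence near one.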
Candidate time-lapse down-hole gravity measurands are compared for monitoring water front advancements at the inter-well scale. The ability to sense small temporal changes to already minimal density contrasts presented by typical OWC problems drives performance requirements of new down-hole technology being developed by Lockheed Martin (Meyer, 2007).
The question naturally arises regarding which component of gravity is best observed, as the miniature interferometric device can be configured to sense one of several gravity components. The concept of measuring differential gravity is introduced and simulated data are compared to respective inline and cross gravity gradients.
Distributed density contrasts associated with gradual water saturation changes due to homogeneous sweep show 50 m advancement increments are observable beginning at 200 m proximity to monitoring wells if differential gravity is measured to 1 µGal. Likewise, a change from ideal sweep to breakout is observed at 130 m proximity. Corresponding sub-Eötvös-level inline and cross gradient signals are deemed unobservable.
The differential gravity measurement is nominally drift-free and immune to invaded-zone irregularities, and so can be collected as part of a periodic wireline service without undue concern about accurate downhole replacement. Permanent emplacement in i-fields is also possible. By virtue of common-mode rejection, differential gravity surveys are also free of the multiple environmental corrections typically required of surface micro-gravity acquisitions (Hare et al., 1999; Ferguson et al., 2007).
An inversion methodology for marine controlled-source electromagnetic (MCSEM) data with approximate Hessian-based optimization and a fast finite-difference time-domain forward operator is presented. Using data from a synthetic hydrocarbon reservoir, we demonstrate that models are reproduced with a spatial resolution determined by the skin depth of the frequencies included in the inversion. Both single and multiple resistive bodies can be resolved in the subsurface. Using reciprocal treatment and multiple frequencies at each receiver position, the comprehensive inversion sequence of a typical MCSEM survey, which should match the acquired data to within the measurement error, executes within ~100 iterations, with about 30 iterations per day, requiring at most a few hundred nodes on a parallel cluster.
A new integrated basement study of the Fort Worth basin (FWB) that includes high-resolution aeromagnetic (HRAM) data, its derivatives, 3-D seismic data, and well data reveals a highly segmented and complex basement. Preliminary results of structural mapping of the basement using HRAM derivatives reveal correlations with features seen on seismic-attribute images. This correlation enables us to establish a relationship between basement lineaments and intra-sedimentary faults. In addition, new depth estimates from Euler deconvolution provide a basis for comparison with depth-converted seismic data.
Since many geophysical inverse problems are ill-posed, implementing constraints is effective in reducing solution ambiguity. This paper presents a simple transform method for enforcing lower and upper bounds on the solution, restricting model-parameter updates during the inversion so that non-realistic results are suppressed. The lower and upper bounds are realized by a logarithmic or an inverse hyperbolic tangent transformation of the model parameters. The width of the two bounds reflects the reliability of a priori information, which may be obtained from well logging and surface geological surveys. It is straightforward to recast the gradient of the cost functional in terms of the transformed variable.
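The inverse-hyperbolic-tangent variant of the bounding transform can be sketched as follows; `m_lo` and `m_hi` denote the a priori bounds, and the names are illustrative. The chain-rule factor is what recasts the cost-functional gradient in terms of the transformed variable.

```python
import numpy as np


def to_unbounded(m, m_lo, m_hi):
    # Map a bounded parameter m in (m_lo, m_hi) to an unconstrained x.
    return np.arctanh(2.0 * (m - m_lo) / (m_hi - m_lo) - 1.0)


def to_bounded(x, m_lo, m_hi):
    # Inverse map: any real x yields m strictly inside (m_lo, m_hi), so
    # gradient updates on x can never produce non-physical model values.
    return m_lo + 0.5 * (m_hi - m_lo) * (np.tanh(x) + 1.0)


def chain_rule_factor(x, m_lo, m_hi):
    # dm/dx, used to recast the misfit gradient: grad_x = grad_m * dm/dx.
    return 0.5 * (m_hi - m_lo) / np.cosh(x) ** 2
```

A narrow (m_lo, m_hi) interval encodes confident a priori information (e.g. from well logs); a wide interval leaves the inversion nearly unconstrained.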
Seismic imaging in depth is limited by the accuracy of velocity-model estimation. Slope tomography uses the slowness components and traveltimes of picked reflection or diffraction events for velocity-model building. The unavoidable incompleteness of the data requires additional information to ensure a stable inversion. One natural constraint for ray-based tomography is a smooth velocity model. We propose a new, reflection-angle-based smoothness constraint as regularization in slope tomography and compare its effects with those of three other, more conventional constraints. The effects of these constraints are evaluated by comparing the inverted velocity models as well as the corresponding migrated images. We find that the smoothness constraints have a distinct effect on the velocity model but a weaker effect on the migrated data. In numerical tests on synthetic data, the new constraint leads to geologically more consistent models.
Slope tomography is one of the many methods that try to determine a macrovelocity model for time or depth imaging. It uses slowness vector components to improve and stabilize the traveltime inversion. Slope tomography was initially proposed by Billette and Lambaré (1998) as a robust tomographic method for estimating velocity macro models from seismic reflection data. They had recognized the potential efficiency of traveltime tomography (Bishop et al., 1985; Farra and Madariaga, 1988) but also the difficulties associated with a highly interpretative picking. The selected events have to be tracked over a large extent of the pre-stack data cube, which is quite difficult for noisy or complex data. The idea is to use locally coherent events characterized by their slopes in the pre-stack data volume. Such events can be interpreted as pairs of ray segments and provide independent information about the velocity model.
However, the data for slope tomography are incomplete (Bishop et al., 1985). This causes depth and velocity ambiguities that depend strongly on the size of the acquisition aperture (Bube et al., 2005). Therefore, stability and convergence can only be achieved if additional information is prescribed. This additional information contains desirable properties for the solution, reducing ambiguity (Menke, 1989). It can be shown that stability is obtained only if we try to determine a smooth model of the subsurface (Delprat-Jannaud and Lailly, 1992, 1993). Moreover, for ray based inversion, smoothness is a requirement, because rough models cause the forward problem to break down during linear iterations. The use of combined smoothness constraints enables an interpretation-oriented inversion while keeping solutions consistent with the data.
We investigate the effect of different kinds of smoothness constraints in slope tomography, prescribing lateral, vertical and isotropic smoothing constraints in different combinations. Moreover, we propose a structurally motivated smoothing constraint in the direction of a potential reflector. This regularization is based on information that is contained in the data, in contrast to standard regularizations that impose global smoothness constraints. We test the different regularizations on the Marmousoft data set (Billette et al., 2003).
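A structurally oriented smoothness term of this kind can be sketched as a penalty on the model derivative along a chosen direction. This is a generic illustration, not the paper's formulation: `angle` stands in for a hypothetical local reflector dip, which the paper would derive from the picked data rather than prescribe globally.

```python
import numpy as np


def directional_roughness(model, angle):
    """Squared norm of the model derivative along a chosen direction.

    angle = 0 penalizes lateral variation only; angle = pi/2 penalizes
    vertical variation only; intermediate angles follow a dipping structure.
    """
    dz, dx = np.gradient(model)                  # axis 0 = depth, axis 1 = lateral
    d_dir = np.cos(angle) * dx + np.sin(angle) * dz
    return np.sum(d_dir ** 2)
```

Added to the tomographic misfit with a trade-off weight, such a term smooths along the structure while leaving variation across it unpenalized, in contrast to isotropic regularization.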
Slope tomography differs from conventional reflection tomography by the data that are used for the inversion (Billette et al., 2003).