We propose a method for decomposing a seismic record into atomic events defined by a smooth phase and a smooth amplitude. The method uses an iterative refinement-expansion tracking scheme to minimize the highly nonconvex objective function. We demonstrate the proposed method on a noisy synthetic record from the shallow Marmousi model. Finally, we show an application of our method to low frequency extrapolation on the same record. This note is a short version of Li and Demanet (2015).
We address the problem of decomposing a seismic record into elementary, or atomic, components corresponding to individual wave arrivals. Letting t denote time and x receiver location, we seek to decompose a shot profile d into a small number r of atomic events v_j as
d(t, x) ≈ Σ_{j=1}^{r} v_j(t, x). (1)
Each vj should consist of a single wave front – narrow yet bandlimited in t, but coherent across different x – corresponding to an event of direct arrival, reflection, refraction, or corner diffraction.
In the simplest convolutional model, we would write v_j(t, x) = a_j(x) w(t - τ_j(x)) for some wavelet w, amplitude a_j(x), and time shift τ_j(x). In the Fourier domain, this model would read
v̂_j(ω, x) = ŵ(ω) a_j(x) e^{iωτ_j(x)}. (2)
This model fails to capture frequency-dependent dispersion and attenuation effects, phase rotations, inaccurate knowledge of w, and other distortion effects resulting from near-resonances. To restore the flexibility to encode such effects without explicitly modeling them, we consider instead throughout this paper the expression
v̂_j(ω, x) = ŵ(ω) a_j(ω, x) e^{i b_j(ω, x)}, (3)

where the amplitudes a_j and the phases b_j are smooth in x and ω, and b_j deviates little from an affine (linear + constant) function of ω.
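As a concrete illustration, an atomic event of this form can be synthesized directly: choose a smooth amplitude a_j(ω, x), a nearly affine phase b_j(ω, x) = -ωτ_j(x), multiply by a wavelet spectrum, and inverse Fourier transform. The sketch below does this with a Ricker wavelet and hyperbolic moveout; all sizes and parameter values are illustrative, not taken from the Marmousi example, and the phase sign follows NumPy's FFT convention.

```python
import numpy as np

# Hypothetical grid sizes and parameters (illustrative only)
nt, nx, dt = 256, 32, 0.004
x = np.arange(nx) * 25.0                        # receiver spacing in meters
freqs = np.fft.rfftfreq(nt, dt)
omega = 2.0 * np.pi * freqs

# Ricker wavelet spectrum with an illustrative 20 Hz peak frequency
f0 = 20.0
w_hat = (freqs / f0) ** 2 * np.exp(-(freqs / f0) ** 2)

# Smooth amplitude a_j and nearly affine phase b_j = -omega * tau(x)
# (hyperbolic moveout tau, geometrical-spreading-like amplitude decay)
tau = np.sqrt(0.3 ** 2 + (x / 2000.0) ** 2)     # arrival time in seconds
a = np.exp(-0.5 * x / x[-1])                    # smooth in x

# v_hat_j(omega, x) = w_hat(omega) * a_j(omega, x) * exp(i b_j(omega, x))
v_hat = w_hat[:, None] * a[None, :] * np.exp(-1j * omega[:, None] * tau[None, :])
v = np.fft.irfft(v_hat, n=nt, axis=0)           # the atomic event in (t, x)
print(v.shape)
```

The event is narrow and bandlimited in t yet coherent across x, matching the description above.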
Finding physically meaningful, smooth functions a_j and b_j to fit a model such as Equations 1 and 3 is a hard optimization problem. Its nonconvexity is severe: it can be seen as a remnant, or cartoon, of the difficulty of full waveform inversion from high-frequency data. We are unaware that an authoritative solution to either problem has been proposed in the geophysical imaging community.
Electrical anisotropy has a strong effect on CSEM data (Ramananjaona et al., 2011), and understanding this effect is key to ensuring robust survey design and well-constrained data analysis (MacGregor & Tomlinson, 2014). Electrical anisotropy can also provide key information that can be used to understand regional variations in rock physics properties, as well as possible indications of geological drivers in an area, such as uplift. To date there have been no systematic regional studies of electrical anisotropy in background geological structure. Addressing this need by investigating electrical anisotropy variations across the Barents Sea is one of the main goals of the industry-funded ERA consortium.
Bulk anisotropy values were derived from CSEM data for each of the major stratigraphic units across the Barents Sea. This was achieved by performing 1D anisotropic inversion of CSEM data acquired around well bores and tying the horizontal resistivity to the induction log measurements from these wells. Results were then mapped and regional trends investigated. The modelling confirms the presence of high electrical anisotropy ratios in the Barents Sea area and a correlation between anisotropy ratio and formation age: in general, the older the formation, the higher the anisotropy ratio. Although resistivity varies regionally, the variation in anisotropy ratio is less pronounced.
The anisotropy analysis covers multiple Barents Sea areas and includes 20 drilled wells. The wells included in this study have been subdivided into 10 different groups based on their geographical location (Table 1). Note that in area 10 (Hoop) no wells were available, and results are based solely on CSEM data. For each area, CSEM data were inverted to determine resistivity and anisotropy values.
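For reference, the bulk anisotropy ratio tabulated per unit is simply the ratio of vertical to horizontal resistivity (some authors instead quote its square root). The sketch below uses purely illustrative values, not consortium results:

```python
def anisotropy_ratio(rv_ohm_m, rh_ohm_m):
    """Bulk electrical anisotropy ratio Rv/Rh of a stratigraphic unit.
    Rv would come from the CSEM inversion, Rh from tying to the
    induction log; the numbers used here are hypothetical."""
    return rv_ohm_m / rh_ohm_m

# Hypothetical units: an older formation vs. a younger one, illustrating
# the age trend described in the text
older = anisotropy_ratio(rv_ohm_m=12.0, rh_ohm_m=2.0)
younger = anisotropy_ratio(rv_ohm_m=4.0, rh_ohm_m=2.0)
print(older, younger)
```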
Geophysical seismic methods are used to monitor subsurface variations when carbon dioxide (CO2) is injected into a reservoir. Besides saturation and pressure changes, there is also alteration to the rock microstructure in the presence of CO2-water mixtures. In this petrophysical study we compare the expected changes due to fluid substitution and those due to mineral dissolution for carbonate-cemented sandstones at Pohokura Field, New Zealand. Firstly, a suite of well logs and petrographic analysis are used to determine the applicability of the constant-cement model in describing the elastic behaviour of the cemented sandstones. Secondly, we study the effects of fluid substitution of brine for CO2 and of carbonate cement dissolution on the seismic properties of the reservoir rocks. Velocity changes due to cement dissolution are much greater than those due to fluid substitution alone (more than twice in this study). Our analysis suggests that changes to the rock frame cannot be ignored when interpreting time-lapse seismic data in a CO2 injection field.
In recent years, carbon capture and storage (CCS) has become an increasingly popular process to reduce the amount of carbon dioxide (CO2) emitted into the earth’s atmosphere. CO2 geosequestration in sedimentary rocks has been successfully applied and seismically monitored at Weyburn Field (Canada), Cranfield (USA) and Sleipner (Norway) (Chadwick et al., 2006). Recent studies analyze time-lapse wave velocity changes in CO2 injection fields as a result of either fluid substitution (Daley et al., 2008), pore pressure change (Eiken and Tøndel, 2005; Tura and Lumley, 1998) or both (Grude et al., 2013). Geophysicists have rarely examined time-lapse seismic wave signatures resulting from fluid-rock interactions that can change the rock frame (Ivandic et al., 2014), yet there is evidence that CO2 injection procedures can lead to bulk dissolution of dolomite, siderite and calcite (Worden and Smith, 2004; Vialle and Vanorio, 2011; Grombacher et al., 2012; Vanorio, 2015). Here we quantify how fluid substitution and frame dissolution affect time-lapse velocity change for carbonate-cemented sandstones by combining laboratory experiments, geophysical well logs and rock physics models.
The Mangahewa Formation in Pohokura Field (New Zealand) produces gas-condensate from the carbonate-cemented sandstones (Higgs, 2014). This field has been proposed as a potential CO2 injection site once it is deemed non-productive (King et al., 2009). In this abstract we present the analysis for one of the wells at Pohokura (Pohokura-3), focusing on the three producing sandstone units (orange boxes in Figure 2 a-c) which contain a shale volume of less than 40% and are saturated with hydrocarbon gas (Figure 2 c). Units that have a shale volume greater than 40% are not included in this study. Laboratory CO2-water reaction experiments were performed on core samples from these three intervals, and time-lapse wave velocity data collected. Detailed laboratory procedures are described in Adam et al. (2015). We focus on characterizing how fluid substitution and cement dissolution due to CO2 injection affect time-lapse velocity changes.
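The fluid-substitution part of such an analysis typically rests on the Gassmann equation. Below is a minimal sketch; the moduli and porosity are chosen for illustration and are not the Pohokura-3 log values.

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the Gassmann equation (moduli in GPa):
    K_sat = K_dry + (1 - K_dry/K_min)^2
                    / (phi/K_fl + (1-phi)/K_min - K_dry/K_min^2)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative values: quartz-dominated sandstone, 20% porosity,
# substituting brine for a CO2-rich fluid (fluid moduli assumed)
k_min, k_dry, phi = 37.0, 12.0, 0.20
k_brine, k_co2 = 2.8, 0.1

k_sat_brine = gassmann_ksat(k_dry, k_min, k_brine, phi)
k_sat_co2 = gassmann_ksat(k_dry, k_min, k_co2, phi)
print(k_sat_brine > k_sat_co2)  # True: the stiffer fluid raises K_sat
```

Note that Gassmann keeps the dry frame fixed; cement dissolution changes K_dry itself, which is why frame alteration produces the larger velocity change reported above.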
We present a simple method for eliminating surface-related multiples in shallow water environments. After a brief technical discussion, the procedure is demonstrated on an OBC survey imaging a producing field in the central North Sea. The North Sea's shallow bathymetry and well-known strong water bottom reflectivity make for both a challenging and ideal survey for testing removal of shallow-water layer borne multiples. We demonstrate with CDP stacks and autocorrelations that surface-related multiples are accurately estimated and effectively removed from the dataset.
The shortcomings of the industry-standard data-driven surface-related multiple elimination (SRME) in shallow water environments are well documented. Moreover, the method is not straightforward to extend to the OBC case because SRME, as a surface-consistent technique, requires both shot and receiver to be near the surface. As an alternative approach, tau-p deconvolution has been widely used on shallow water OBC data to remove water-bottom related multiple reflections based on the multiples’ periodicity in the tau-p space. This is, however, potentially harmful, as primaries with similar periodicity are likely to be attenuated as well.
In this paper we discuss a workflow for successful attenuation of shallow-water multiples for a time lapse OBC-4C dataset using a wavefield extrapolation method. Our study shows that application of this wavefield demultiple methodology can act as a complementary tool, which when applied after a regular tau-p deconvolution can successfully attenuate the strong remnant water-bottom related multiples. In the following sections, we first briefly review the methodology of multiple model estimation using a wavefield extrapolation approach and then present the results of our study on a shallow water OBC dataset.
The shallow water demultiple method we propose is model driven and uses the bathymetry of the water bottom to estimate a model of the water-layer multiples. After the multiples are estimated, successful multiple elimination relies on subsequent adaptive subtraction of the model from the input. The engine behind the method is a one-way wavefield propagation and extrapolation in the water column.
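The one-way extrapolation engine can be sketched as a phase shift in the frequency-wavenumber domain through the constant-velocity water column. The code below is a schematic illustration, not the production implementation: grid sizes, water velocity, and the sign convention of the phase shift are assumptions.

```python
import numpy as np

def phase_shift_extrapolate(d, dt, dx, dz, c=1500.0):
    """Extrapolate a wavefield d(t, x) across a water layer of thickness
    dz by a constant-velocity phase shift in the (f, kx) domain, dropping
    evanescent energy. With this sign choice the output is delayed by the
    vertical travel time, as for one leg of a water-layer multiple."""
    nt, nx = d.shape
    D = np.fft.fft2(d)
    w = 2.0 * np.pi * np.fft.fftfreq(nt, dt)[:, None]   # angular frequency
    k = 2.0 * np.pi * np.fft.fftfreq(nx, dx)[None, :]   # horizontal wavenumber
    kz2 = (w / c) ** 2 - k ** 2
    prop = kz2 > 0.0                                    # propagating region only
    kz = np.sqrt(np.where(prop, kz2, 0.0))
    shift = np.where(prop, np.exp(-1j * np.sign(w) * kz * dz), 0.0)
    return np.real(np.fft.ifft2(D * shift))

# Toy input: a spike at t-sample 20 on the middle trace, pushed one water
# depth (80 m) deeper; its arrival shifts ~13 samples (80 m / 1500 m/s) later
d = np.zeros((128, 64))
d[20, 32] = 1.0
once = phase_shift_extrapolate(d, dt=0.004, dx=12.5, dz=80.0)
print(once.shape)
```

A water-layer multiple model would chain such extrapolations (down, reflect, up) before the adaptive subtraction step.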
We present a workflow for rock physics-guided velocity modeling for depth imaging and applications in real case studies from the Gulf of Mexico. The rock physics model used in this method contains both mechanical and chemical compaction that depends on thermal history and controls the effective stress variation in depth. We highlight the capability of this method in predicting the velocity for unknown areas such as places that lack well control or beyond the total depth of a well. The assumptions and source of error for this method are also discussed.
Since the development of depth migration, earth model building, including P-wave velocity (Vp) and rock properties such as pore pressure, has been an important aspect of imaging. The accuracy of seismic event positioning strongly depends on the accuracy of the final earth model, which must not only flatten the common-image-point gathers but also be plausible and physical. Currently, seismic-data-based methods such as tomography have become the primary engine for migration velocity analysis (Woodward et al., 2008). Nevertheless, the seismic data quality and pitfalls in the analysis process can strongly affect the velocity inversion result: low signal-to-noise ratio, low incident angles at deep targets, shadow zones below complex geology such as salt bodies, uncertainty in anisotropy parameters, etc. This makes further contributions from non-seismic data, such as rock physics or pore pressure models, valuable resources for image improvement.
One of the commonly used rock physics principles that can be helpful to depth imaging is the relation between effective stress and Vp. The effective stress in the subsurface can be estimated as the difference between the overburden stress and the pore pressure. The effective stress can then be converted to Vp using a thermal and burial history-dependent rock physics model. The resulting velocity model captures some large-scale geomechanical relationship between effective stress and velocity and thus benefits the velocity model building. For example, the effective stress below a salt body will be lower than in non-salt areas since the salt density is lower than that of the surrounding shale. Therefore, the subsalt velocity is usually lower than a regional trend derived from non-salt wells, and to quantify the velocity drop one must model the subsalt effective stress. Dutta et al. (2015) and Liu et al. (2014) described applications of this approach to building subsalt velocity models in detail.
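The chain from overburden and pore pressure to effective stress to Vp can be sketched in a few lines. The density profile, the hydrostatic pore-pressure assumption, and the stress-to-velocity power law below are illustrative stand-ins for the thermal and burial history-dependent model used in practice.

```python
import numpy as np

g = 9.81                                        # m/s^2
z = np.arange(0.0, 5000.0, 10.0)                # depth below mudline, m
rho = 1800.0 + 0.12 * z                         # hypothetical bulk density, kg/m^3

overburden = np.cumsum(rho * g * 10.0)          # overburden stress, Pa
p_pore = 1025.0 * g * z                         # hydrostatic pore pressure, Pa
sigma_eff = overburden - p_pore                 # effective stress, Pa

# Illustrative stress-to-Vp transform (coefficients are assumptions, not a
# calibrated model; the real transform also depends on thermal history)
vp = 1480.0 + 450.0 * (sigma_eff / 1e6) ** 0.35  # m/s, stress in MPa
print(round(vp[0]), round(vp[-1]))
```

In a subsalt setting one would replace the hydrostatic pore pressure with a modeled overpressure, lowering sigma_eff and hence the predicted Vp, which is the effect described above.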
We propose two ideas to reduce ocean noise generated by seismic surveys. One is a new low-pressure source: an evolution of conventional airguns with some significant mutations. We estimate that low-pressure sources will reduce high-frequency noise and strengthen weak low-frequency signal, providing significant reductions in ocean noise from seismic surveys, improved sub-salt and sub-basalt imaging, and better blocky earth models. The other idea is to use more receivers and fewer shots per area in wide-azimuth long-offset surveys: in essence, to shoot less and make each shot count more. We propose the utilization of swarms of motorized unmanned surface vessels towing streamers. Such swarms will provide the wide azimuths and the far offsets that are required to image deep targets under complex overburden with less shooting per area compared to conventional wide-azimuth solutions that use additional sources rather than additional receivers.
One obvious idea to reduce ocean noise is to use seismic sources that generate less unusable high-frequency energy. The airguns that we use today are very inefficient; only a few percent of the energy they release generates acoustic waves at useful frequencies. Much energy is wasted as heat, and cavitation causes much of the remaining energy to generate acoustic waves at frequencies too high to be useful to us. Waves above 220 Hz are removed by the anti-aliasing high cut of our 2-millisecond analog-to-digital converters. We could use 1-millisecond and sub-millisecond sampling, but we do not, except in site surveys, which anyway need only a small source to image shallow hazards. There is little sense in using higher sampling rates because, depending on the target depth and the overburden's attenuation and complexity, waves above 25-150 Hz are attenuated and scattered. Although we do not know the actual impact of high-frequency energy on marine life, it is generally believed that reducing it as much as possible is a step in the right direction.
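The sampling argument above is just the Nyquist relation f_N = 1/(2Δt); for instance:

```python
# Nyquist frequency f_N = 1 / (2 * dt) for common seismic sampling
# intervals; at 2 ms sampling the Nyquist is 250 Hz, and the anti-alias
# filter cuts somewhat below it (the 220 Hz figure quoted in the text).
nyquist_hz = {dt_ms: 1.0 / (2.0 * dt_ms * 1e-3) for dt_ms in (4, 2, 1, 0.5)}
for dt_ms, f_n in nyquist_hz.items():
    print(f"{dt_ms} ms sampling -> Nyquist {f_n:.0f} Hz")
```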
The seismic industry is attempting to address the issue of potential environmental impact from seismic sources. Marine Vibroseis may be an excellent long-term solution; no cavitation, just turbulence, and controlled sweeps that generate very little energy at useless high frequency harmonics. Marine vibroseis has been in development and testing for a long time (Dragoset, 1988; and many more publications) but it still remains a future solution.
Born waveform inversion is a partially linearized version of full waveform inversion based on Born (linearized) modeling, in which the earth model is separated into a smooth background model and a short-scale reflectivity, and both are updated to fit observed trace data. Because kinematic variables (velocity) are updated, the possibility of cycle-skipping and consequent trapping at local minimizers exists for Born waveform inversion, just as it does for full waveform inversion. Extended Born waveform inversion allows reflectivity to depend on additional parameters, potentially reducing the likelihood of cycle skipping by permitting data fit throughout the inversion process. Extended or not, the Born waveform inversion objective function is quadratic in the reflectivity, so that a nested optimization approach is available: minimize over reflectivity in an inner stage, then minimize the background-dependent result in a second, outer stage. This paper uses 2D acoustic modeling, reflectivity permitted to depend on shot coordinates (shot record extension), a differential semblance penalty to control this dependence, and the variable projection variant of nested optimization. Our examples suggest that neither extended modeling nor variable projection alone is sufficient to enable convergence to a global best-fitting model, but the two together are quite effective.
Seismic full waveform inversion (FWI) is used to infer the interior structure of the earth from observed seismic waves by posing model-based data fitting as a nonlinear least squares problem. Studied in the 1980’s by Tarantola and others (Tarantola, 1984), it has recently become a viable model building strategy (Virieux and Operto, 2009). Because of the bandlimited feature of seismic data, the FWI objective function may exhibit many local minima sharing few features with a best-fitting model (Gauthier et al., 1986).
Replacement of full waveform modeling by linearized or Born modeling in the formulation of FWI yields a partly linear least squares problem, in fact underlying much of seismic imaging theory and practice. The Born approximation is most accurate when the background is smooth on the wavelength scale, hence transparent, and the perturbation contains all short scale or oscillatory features of the earth model. Therefore we will adopt the convention that the linearization is based on such a long/short scale decomposition. The linearized forward model is linear, hence the mean square error objective function is quadratic, in the perturbation or reflectivity component of the Born model. The objective is still quite non-convex in the background model, hence in principle as likely to suffer from cycle-skipping and local minima as the FWI objective.
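The nested, variable projection structure can be seen on a toy separable least squares problem: for each candidate background parameter, the inner minimization over the linear variable (the reflectivity analogue) has a closed-form solution, leaving a reduced objective in the background alone. The operator A and all parameter values below are hypothetical, chosen only to expose the structure, not the paper's 2D acoustic setup.

```python
import numpy as np

def A(m):
    """Hypothetical linearized modeling operator: depends nonlinearly on
    the background parameter m but acts linearly on the reflectivity."""
    t = np.linspace(0.0, 1.0, 50)
    return np.column_stack([np.sin(m * t), np.cos(m * t), t])

# Synthetic "observed" data from a known background and reflectivity
m_true = 4.0
r_true = np.array([1.0, -0.5, 2.0])
d_obs = A(m_true) @ r_true

def reduced_misfit(m):
    """Variable projection: solve the inner (linear, hence quadratic-
    objective) problem for the reflectivity exactly, then return the
    remaining data misfit as a function of the background alone."""
    Am = A(m)
    r_opt, *_ = np.linalg.lstsq(Am, d_obs, rcond=None)
    res = Am @ r_opt - d_obs
    return 0.5 * float(res @ res)

# Coarse outer scan over the background parameter
ms = np.linspace(2.0, 6.0, 81)
best = ms[int(np.argmin([reduced_misfit(m) for m in ms]))]
print(best)
```

Because the inner problem is solved exactly at every outer iterate, the data are fit throughout, which is the mechanism the extended formulation exploits against cycle skipping.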
Amestoy, P. (INPT-IRIT) | Brossier, R. (ISTERRE-LJK-CNRS) | Buttari, A. (_) | L’Excellent, J.-Y. (INRIA-LIP) | Mary, T. (UPS-IRIT) | Métivier, L. (ISTERRE-LJK-CNRS) | Miniussi, A. (Geoazur-CNRS-UNSA) | Operto, S. (Geoazur-CNRS-UNSA) | Virieux, J. (ISTERRE-UJF) | Weisbecker, C. (INPT-IRIT)
Three-dimensional frequency-domain full waveform inversion (FWI) of fixed-spread data can be efficiently performed in the visco-acoustic approximation when seismic modeling is based on a sparse direct solver. We present a parallel algebraic Block Low-Rank (BLR) multifrontal solver which provides an approximate solution of the time-harmonic wave equation with a reduced operation count, memory demand, and volume of communication relative to the full-rank solver. We analyze the parallel efficiency and the accuracy of the solver with a realistic FWI case study from the Valhall oil field.
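The core of the Block Low-Rank idea, replacing an off-diagonal block of a frontal matrix by a low-rank product within a prescribed tolerance, can be illustrated with a truncated SVD. This is only a sketch: practical BLR solvers use cheaper rank-revealing compression, and the kernel, sizes, and tolerance below are illustrative.

```python
import numpy as np

def blr_compress(B, eps):
    """Compress one off-diagonal block to low rank: keep singular values
    above eps relative to the largest, return factors with B ~ Us @ Vt."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = max(1, int(np.sum(s >= eps * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :]

# A smooth interaction kernel is numerically low-rank, as off-diagonal
# blocks coupling well-separated groups of unknowns typically are
n = 200
xi = np.linspace(0.0, 1.0, n)
B = 1.0 / (2.0 + xi[:, None] + xi[None, :])
Us, Vt = blr_compress(B, eps=1e-8)

full_entries = n * n                     # dense storage of the block
blr_entries = Us.size + Vt.size          # storage of the low-rank factors
print(Us.shape[1], blr_entries < full_entries)
```

The reduced entry count is what translates into the lower operation count, memory demand, and communication volume claimed for the BLR solver.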
Seismic modeling and full waveform inversion (FWI) can be performed either in the time domain or in the frequency domain (e.g., Virieux and Operto, 2009). In the frequency domain, seismic modeling consists of solving an elliptic boundary-value problem, which can be recast in matrix form as a system of linear equations where the solution (i.e., the monochromatic wavefield) is related to the right-hand side (i.e., the seismic source) through a sparse impedance matrix, whose coefficients depend on frequency and subsurface properties (e.g., Marfurt, 1984). One distinct advantage of the frequency domain is to allow for a straightforward implementation of attenuation in seismic modeling (e.g., Toksöz and Johnston, 1981). Second, it provides a suitable framework to implement multi-scale FWI by frequency hopping, which is useful to mitigate the nonlinearity of the inversion (e.g., Pratt, 1999). Third, monochromatic wavefields can be computed quite efficiently for multiple sources by forward/backward substitutions if the linear system can be solved with a sparse direct solver based on the multifrontal method (Duff and Reid, 1983). However, the LU factorization of the impedance matrix that is performed before the substitution step generates fill-in, which makes this preprocessing step memory demanding. Dedicated finite-difference stencils of local support (Operto et al., 2014) and fill-reducing matrix ordering based on nested dissection (George and Liu, 1981) are commonly used to minimize this fill-in.
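The factor-once, solve-many pattern can be sketched with a sparse direct solver. The example below uses a 1D visco-acoustic Helmholtz operator as a stand-in for the 3D impedance matrix, and SciPy's SuperLU interface in place of the parallel BLR multifrontal solver; all parameters are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Illustrative 1D Helmholtz problem: (d^2/dx^2 + omega^2/c^2) u = b,
# with a small imaginary part in k^2 standing in for attenuation
n, dx = 400, 10.0
f, c = 5.0, 2000.0
k2 = (2.0 * np.pi * f / c) ** 2 * (1.0 + 0.05j)

main = (-2.0 / dx ** 2 + k2) * np.ones(n, dtype=complex)
off = np.ones(n - 1) / dx ** 2
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

lu = splu(A)                 # LU factorization of the impedance matrix, done once

# Monochromatic wavefields for many sources: each solve is only a pair of
# cheap forward/backward triangular substitutions
wavefields = []
for isrc in (50, 150, 250, 350):
    b = np.zeros(n, dtype=complex)
    b[isrc] = 1.0 / dx       # point source
    wavefields.append(lu.solve(b))

print(len(wavefields), wavefields[0].shape)
```

The memory-demanding step is the factorization itself, because of fill-in; the BLR approximation described in this paper compresses exactly that step.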
We use the term critical zone to denote the region near the surface of the earth that extends from the soil to the base of weathered rock. Geophysical studies designed to investigate this region have increased in recent years, and one of the commonly used tools is seismic refraction tomography (SRT). While SRT provides valuable information, it is important to recognize the limitations of this approach. We examine the ability of SRT to distinguish between blocky and smooth velocity models in a critical zone-like setting. We simulate a seismic refraction experiment over smooth, semi-blocky and blocky velocity models of the critical zone, then pick traveltimes and invert with a commonly used commercial SRT package. The synthetic inverted velocity profiles were found to be virtually indistinguishable, and in the blocky model the depth to consolidated bedrock was overestimated by 25%. The inability to distinguish between blocky and smooth gradients must be considered a significant source of uncertainty when utilizing such data, for example, to inform hydrologic models.
As interest in the Critical Zone continues to grow, so too does the need for large-scale measurements of its depth and physical properties. Increasingly, geophysical methods such as seismic refraction are being used to make these large-scale measurements. Seismic refraction surveys have many benefits: they are minimally invasive, can be applied over a large area, and give information about the physical characteristics of the rock. The method has been particularly useful in identifying the depth to unaltered bedrock, which acts as the lower bound for water movement in the critical zone and so is of key importance (Freer et al., 2002). Because of this, there are many examples of seismic refraction surveys in studies of the critical zone in mountainous watersheds (e.g. Holbrook et al., 2014; Befus et al., 2011; Yamakawa et al., 2012). In all of these studies, the authors were able to interpret their inverted seismic lines to produce meaningful observations about the critical zone. However, these papers often make assumptions about whether the velocity grades smoothly or discretely across the various layers in the weathering profile. Whether seismic refraction inversion algorithms are sensitive to a discrete versus a smooth velocity gradient has yet to be explored, creating a source of uncertainty in the interpretation of seismic velocity profiles for critical zone studies.
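What the picked traveltimes actually constrain can be seen from the closed-form first arrivals of the blocky end-member, a soil layer over bedrock. The velocities and depth below are illustrative, not from any of the cited studies.

```python
import numpy as np

# Two-layer (blocky) critical-zone model: soil velocity v1 over bedrock
# velocity v2 at depth h (illustrative values)
v1, v2, h = 800.0, 3000.0, 20.0                # m/s, m/s, m
x = np.linspace(1.0, 300.0, 300)               # source-receiver offsets, m

t_direct = x / v1                              # direct wave in the soil
t_intercept = 2.0 * h * np.sqrt(1.0 / v1 ** 2 - 1.0 / v2 ** 2)
t_head = x / v2 + t_intercept                  # head wave refracted at bedrock
t_first = np.minimum(t_direct, t_head)         # picked first arrival

# Crossover offset beyond which the head wave arrives first
x_cross = 2.0 * h * np.sqrt((v2 + v1) / (v2 - v1))
print(round(x_cross, 1))
```

A smooth vertical-gradient model produces a continuously curved traveltime curve that, over a finite spread with picking noise, can closely resemble this two-segment curve, which is the ambiguity the study quantifies.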
The production of underwater sound is more and more considered to be an environmental risk. This has already been the case for military sonar for more than a decade, as sonar was identified as a possible cause of marine mammal strandings. The approach we adopted for military sonar is the following. The risk is characterized by computing the exposure (sound produced by the sonar) in an area around the source and by coupling that information to the effects it causes on a certain animal species. The risk is then quantified by taking into account the probability of the presence of that species in the area. If the risk is too large, it can be mitigated. We observe a trend of shifting the focus from individual disturbance to more general population consequences.
A similar approach is advised to characterize the risks involved in the use of airguns in seismic acquisition.
In recent years the awareness of possible negative effects of underwater sound on the marine environment has grown considerably. This includes the effects on marine mammals and other species being exposed to the underwater sound produced by seismic sources such as airguns. To quantify and mitigate the resulting risk is not an easy task. Elements to consider are for instance:
• exposure-effect assessment: how to assess the effect of seismic sources on individual animals, how to separate the influence of seismic sources from the influence of other sources (both man-made and natural), what measures of underwater sound are relevant, how to translate the short-term behaviour of individual animals to the long-term consequences for the population of a certain species.
• exposure assessment: how to determine the exposure quantitatively, i.e., how to model and quantify the underwater soundscape related to seismic acquisition.
• mitigation: what mitigation measures can be taken, such as marine mammal observers and ramp-up schemes, what can be done in the planning stage.
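The exposure-assessment element above often starts from a source level and a transmission-loss model. The simplest sketch uses spherical spreading, TL = 20 log10(r); the source level and behavioural threshold below are purely illustrative, not regulatory values.

```python
import math

def received_level(source_level_db, range_m):
    """Received level (dB re 1 uPa) at range r under spherical spreading,
    RL = SL - 20 log10(r). Real soundscape models add absorption,
    bathymetry, and waveguide effects."""
    return source_level_db - 20.0 * math.log10(range_m)

SL = 240.0         # hypothetical airgun-array source level, dB re 1 uPa @ 1 m
threshold = 160.0  # hypothetical behavioural-response threshold, dB re 1 uPa

# Range at which the received level falls to the threshold
r_threshold = 10 ** ((SL - threshold) / 20.0)
print(round(r_threshold))  # 10000 m under these assumptions
```

Coupling such an exposure footprint to species presence probabilities gives the risk quantification described for the sonar case.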
We have participated in several experiments related to the effects of military sonar on marine mammals in recent years. These have led to new insights into how to deal with the risk assessment, see Figure 1. In this paper we would like to share these insights with the seismic community.