Chen, Yangkang (The University of Texas at Austin) | Jiao, Shebao (China University of Petroleum, Beijing) | Gan, Shuwei (China University of Petroleum, Beijing) | Yang, Wencheng (China University of Petroleum, Beijing)
Bandpass filtering is a common way to estimate ground rolls in land seismic data because of their low-frequency character. However, frequency mixing between ground rolls and reflections makes it inconvenient or even impossible for bandpass filtering alone to remove ground rolls successfully. In this paper, we propose a novel band-limited orthogonalization approach for removing ground rolls without harming useful primary reflections. Using local signal-and-noise orthogonalization, we orthogonalize the initially denoised signal, obtained by bandpass filtering with a relatively high low-bound frequency (LBF), and the corresponding noise section, which contains leaked primary reflections. The local orthogonalization guarantees that a minimum of coherent primary-reflection energy is lost in the noise section. The proposed approach is very convenient to implement because it requires only a bandpass filter and a regularized division between the initially denoised signal and the initial noise. The OZ-25 field dataset demonstrates the successful performance of the proposed approach.
Ground rolls are a type of seismic noise with high amplitude, low frequency, and low velocity. They are the main type of coherent noise in land seismic surveys, and ocean-bottom-node (OBN) marine seismic surveys may also contain this type of noise (Chen et al., 2014). Ground rolls usually mask shallow reflections at short offsets and deep reflections at larger offsets (Claerbout, 1983; Saatilar and Canitez, 1988; Henley, 2003), and must be removed before subsequent processing tasks. Unlike random noise, which is incoherent along the spatial direction (Yang et al., 2015; Chen et al., 2015), ground rolls are coherent and behave much like the primary reflections, which makes their removal very difficult using simple signal-processing methods. Much research on removing ground rolls has been published, and many researchers have proposed different methods for handling the problem (Shieh and Herrmann, 1990; Brown and Clapp, 2000). Most ground-roll removal approaches either fail to remove all the ground rolls or remove much useful primary-reflection energy. An efficient and effective technique for removing ground rolls is therefore always in demand.
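As a rough illustration of the two-step procedure described in the abstract (a sketch under stated assumptions, not the authors' implementation), the fragment below uses a high-pass filter to stand in for bandpass filtering with a high low-bound frequency, and a locally smoothed regularized division to form the orthogonalization weight. The filter corners, smoothing radius, and function names are illustrative assumptions.

```python
# Sketch of band-limited orthogonalization for ground-roll removal.
# Leaked reflections in the noise section are estimated as w * s0 and
# moved back to the signal; the weight w comes from a locally smoothed
# regularized division of <s0*n0> by <s0*s0>.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.ndimage import uniform_filter

def bandlimited_orthogonalization(data, dt, low_cut=15.0, radius=20, eps=1e-3):
    """data: 2D array (time x trace). low_cut: low-bound frequency in Hz."""
    # Step 1: initial denoising (high-pass standing in for a bandpass
    # with a relatively high low-bound frequency).
    sos = butter(4, low_cut, btype="highpass", fs=1.0 / dt, output="sos")
    s0 = sosfiltfilt(sos, data, axis=0)   # initial signal estimate
    n0 = data - s0                        # initial noise: ground roll + leaked signal

    # Step 2: local orthogonalization weight via regularized division.
    num = uniform_filter(s0 * n0, size=radius)
    den = uniform_filter(s0 * s0, size=radius) + eps
    w = num / den

    signal = s0 + w * s0   # recovered signal with leaked energy restored
    noise = n0 - w * s0    # orthogonalized noise section
    return signal, noise
```

By construction the two outputs still sum to the input data, so no energy is created or destroyed; only the split between signal and noise sections changes.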
Ellis, Michelle (RSI) | MacGregor, Lucy (RSI) | Ackermann, Rolf (RSI) | Newton, Paola (RSI) | Keirstead, Robert (RSI) | Rusic, Alberto (RSI) | Bouchrara, Slim (RSI) | Alvarez, Amanda Geck (RSI) | Zhou, Yijie (RSI) | Tseng, Hung-Wen (RSI)
In this study we use controlled-source electromagnetic (CSEM) data, well-log data, and rock physics to investigate the drivers of electrical anisotropy in the Snøhvit area of the Barents Sea. Results show that, for the shale-dominated sediments, electrical anisotropy varies systematically with porosity, depth, and elastic properties. However, there is little systematic trend with clay content.
CSEM data can provide higher sensitivity to hydrocarbon saturation than is possible with conventional seismic reflection data (MacGregor & Tomlinson, 2014). In CSEM's infancy, anisotropy was ignored; however, disregarding resistivity anisotropy leads to misleading CSEM survey feasibility studies, inaccurate CSEM data analysis, inaccurate estimates of hydrocarbon saturation and, consequently, erroneous interpretations (Ellis et al., 2011). To improve our interpretation of CSEM data, we need to understand what drives the anisotropy for a given rock type. The aim of rock physics is to understand the relationship between geophysical observations and the underlying physical properties of the rock (Mavko et al., 2009), such as porosity, mineral composition, pore-fluid composition, and sediment microstructure. By using rock physics we can begin to understand the controls on electrical resistivity and anisotropy in a given area. The aim of this project is to determine the controls on electrical anisotropy in the Snøhvit area of the Barents Sea; it forms part of a wider study of Barents Sea electrical properties (Bouchrara et al., 2015). The Barents Sea was chosen as a study area because of the current interest in the region and the rich dataset, which includes well logs and CSEM surveys (Figure 1). The Barents Sea is also geologically complex, stratigraphically, structurally, and historically (Gabrielsen et al., 1990). One component of this complexity is the presence of strong anisotropy in measured and derived electrical resistivity (Fanavoll et al., 2012).
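For readers unfamiliar with how electrical anisotropy is quantified, a common convention is the anisotropy coefficient, the square root of the ratio of vertical to horizontal resistivity. The minimal sketch below uses illustrative values, not numbers from the Snøhvit dataset.

```python
# Minimal sketch: the electrical anisotropy coefficient commonly used in
# CSEM analysis, lambda = sqrt(Rv / Rh), from vertical and horizontal
# resistivities in ohm-m. A value of 1.0 means the rock is isotropic.
import math

def anisotropy_ratio(rv_ohmm, rh_ohmm):
    """Return the anisotropy coefficient sqrt(Rv/Rh)."""
    if rv_ohmm <= 0 or rh_ohmm <= 0:
        raise ValueError("resistivities must be positive")
    return math.sqrt(rv_ohmm / rh_ohmm)

# Illustrative example: a shale with Rv = 4 ohm-m and Rh = 1 ohm-m
print(anisotropy_ratio(4.0, 1.0))  # -> 2.0
```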
It may superficially seem that, given all the logistics of marine seismic exploration, the permitting process involving marine mammals, sea turtles, and other protected marine resources should be a fairly simple activity. As it turns out, it is not, especially under new and changing regulations, in new regions, and even with new or changing agency personnel. Permitting and its concomitant mitigation, monitoring, and reporting requirements, in the United States in particular, are a complex interaction among multiple laws, agency jurisdictions, and associated agency personnel levels. Successfully receiving and implementing permits requires more than simply applying. It requires prudent advance planning, at a minimum to (1) accommodate the often long agency and public review process (and its delays), and (2) understand each agency's specific information requirements for processing and evaluating associated environmental impacts. For those conducting seismic exploration, it is important to (1) be aware of agency expectations for environmental protection and the monitoring required for the equipment used, and (2) provide sufficient activity details to inform design and modeling that will result in agency acceptance. For industry applicants, it is imperative to be prepared for the time and the level of due diligence and background work required to process and ultimately obtain permits and to meet regulation and guideline standards. It is paramount to develop reasonable expectations for what the combined permitting, mitigation, and monitoring will entail in terms of document preparation and agency and public review time. We provide flowcharts and other visual aids, describe basic permitting processes for marine seismic exploration, and briefly discuss regional differences and future directions in the policies associated with these processes.
A thorough understanding of the processes involved in permitting seismic exploration relative to marine mammal and other protected marine resources is critical to timely and successful permit receipt and implementation. Permitting processes are the framework for addressing marine environmental issues associated with seismic exploration and the technologies applied to it. For experts in technical aspects of seismic exploration, it is critical to have a sense of how the techniques (1) are evaluated by reviewing agencies, (2) are potentially limited by laws, regulations, and guidelines, and (3) relate to the importance and meaning of required application of “best available science” regarding environmental impacts. This process requires effective merging of goals, knowledge, and overlapping aspects of environmental regulations with those of exploration and development activities. Achieving this merge facilitates the most efficient and effective marine seismic exploration program for those in industry. There are ways to streamline this process through increased awareness.
Navigating the rough waters of federal permitting for marine seismic exploration is a challenge. There are many applicable laws and a myriad of alphabet-soup agencies engaged in enforcing them through often confusing and changing regulations and guidelines. To avoid delays and bureaucracy, the agencies involved must balance permitting requests with constituent and other-agency concerns that may arise during the permitting process. Our presentation synthesizes and provides pathway guidance to improve understanding of the overall permitting process, including the steps, relationships, involved entities, and procedures of agency consultation. The goal is to provide a cumulative, one-stop-shop approach to understanding basic permitting requirements for conducting marine seismic exploration in the United States. The information comes from laws, regulations, agency websites, personal communications with agency staff, and our combined years of professional experience on both sides of the fence: as agency and academic resource managers and permit issuers, and in our current role with Smultea Environmental Sciences providing permitting, mitigation, and monitoring support for industry.
We present a novel method that optimizes CMP stacking and reduces the effects of “NMO stretch” by replacing conventional NMO and stack with a regularized inversion to zero offset. We use shaping regularization to achieve a stack that has a denser time sampling and contains higher frequencies than the conventional stack. The resulting stack is a model that best fits the data using additional constraints imposed by shaping regularization. Numerical tests demonstrate that “stretching effects” caused by NMO are reduced and the resulting stacked section contains higher frequencies and preserves shallow reflectors better compared to the conventional stacked section.
NMO correction and stacking are among the most fundamental routine processes applied to seismic data (Yilmaz, 2001). Since NMO correction is not an exact solution, it distorts the seismic trace ("NMO stretch"), and the corrected trace always differs from the ideal zero-offset trace (Shatilo and Aminzadeh, 2000). The traditional stacking process is based on the assumption that signal is coherent while noise is random. In real seismic data, imperfectly aligned reflections, noise bursts, and coherent noise commonly occur (Rashed, 2014). Conventional stacking is also flawed when the NMO correction or stretch muting is inaccurate, resulting in lower-amplitude and lower-resolution stacks.
Several algorithms have been developed to improve NMO and stacking in an attempt to recover higher-resolution stacks. Claerbout (1992) described the Inverse NMO Stack, which recasts NMO correction and stacking as an inversion process in the constant-velocity case. This approach combines conventional NMO and stack into one step by solving a set of simultaneous equations using conjugate-gradient iterations. Sun (1997) extended Claerbout's idea to the case of depth-variable velocity. The inverse NMO stack operator depends on the hyperbolic moveout relation and can be employed to remove non-hyperbolic events and random noise. Trickett (2003) introduced stretch-free stacking, a method for computing the stacked trace directly from a CMP gather in a way that avoids NMO stretch. This method uses a variation of Claerbout's Inverse NMO Stack to replace NMO and stack with an inversion to zero offset. The results tend to be of higher frequency, but noisier, than a conventional stack. Wisecup (1998) proposed random sample interval imaging (RSI2), which maps the CMP gather into the "after-NMO space" using the exact moveout times and no interpolation. The NMO-corrected values are then collected in the stack, rather than summed, with the input sample values mapped to their correct time positions in the stack. Shatilo and Aminzadeh (2000) introduced Constant Normal Moveout (CNMO) correction, which applies a constant NMO shift within a finite time interval equal to the wavelet length of a trace.
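To make the conventional baseline concrete, the sketch below implements textbook hyperbolic NMO correction by interpolation followed by a simple mean stack, the process whose stretch artifacts the methods above try to avoid. It assumes a constant velocity and is not any of the cited algorithms.

```python
# Sketch of conventional hyperbolic NMO correction by linear
# interpolation, the step that introduces "NMO stretch" at shallow
# times and far offsets, followed by a conventional mean stack.
import numpy as np

def nmo_correct(gather, offsets, velocity, dt):
    """gather: (nt, noffset) array. For each output time t0, read the
    input trace at t = sqrt(t0^2 + x^2 / v^2) via linear interpolation."""
    nt, _ = gather.shape
    t0 = np.arange(nt) * dt
    out = np.zeros_like(gather)
    for ix, x in enumerate(offsets):
        t = np.sqrt(t0 ** 2 + (x / velocity) ** 2)   # hyperbolic moveout times
        out[:, ix] = np.interp(t, t0, gather[:, ix], left=0.0, right=0.0)
    return out

def stack(gather):
    """Conventional stack: average over offsets after NMO correction."""
    return gather.mean(axis=1)
```

Because the mapping t0 -> t compresses more of the input trace into the output at small t0 and large x, a wavelet there is stretched toward lower frequencies, which is exactly the distortion the inversion-to-zero-offset approaches address.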
Electromagnetic (EM) methods are used to characterize the electrical conductivity distribution of the earth. EM geophysical surveys are increasingly being simulated and inverted in 3D, due in part to computational advances. However, the availability of computational resources does not invalidate the use of lower-dimensional formulations and methods, which can be useful depending on the geological complexity as well as the survey geometry. Because of their computational speed, simulations in 1D or 2D can also be used to quickly gain geologic insight. For example, this insight can be used in an EM inversion by starting with a 1D inversion and then progressively building higher dimensionality into the model. As such, we require a set of tools that allow geophysicists to easily explore various model dimensionalities, such as 1D, 2D, and 3D, in an EM inversion. In this study, we suggest a mapping methodology that transforms the inversion model to a physical property for use in the forward simulations. Using this general methodology, we apply an EM inversion to a suite of models in one, two, and three dimensions, and highlight the importance of choosing an appropriate model space based on the goal of the EM inversion.
Electromagnetic (EM) fields and fluxes can be used to excite the earth, and in a geophysical survey, we measure and interpret the resulting signals. These signals are sensitive to the conductivity distribution of the earth. By numerically solving Maxwell’s equations, we can compute the forward response for a system with a known conductivity distribution. To conduct a forward simulation for a 3D conductivity distribution, we require the property to be discretized numerically, and we typically employ a voxel-based mesh to discretize the earth. Once we have a mechanism to simulate EM fields and fluxes, we can consider approaching the inverse problem. The aim of an EM inversion is to recover a model that is consistent with the measured EM data and prior knowledge of the geologic setting.
Three-dimensional EM inversion techniques using gradient-based optimization have been actively developed and applied for various survey types and geologic settings (Oldenburg et al., 2013; Gribenko and Zhdanov, 2007; Chung et al., 2014). A gradient-based inversion approach requires defining an objective function to be minimized in the optimization. Equally important, yet often overlooked, is the definition of the model over which we minimize. Our focus in this paper is the construction of this inversion model in a flexible framework.
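The mapping idea described above can be sketched as composable objects that translate an inversion model into the physical property the forward simulation needs, together with the chain-rule derivative a gradient-based optimization requires. The class and method names below are illustrative (the idea follows frameworks such as SimPEG, but this is not that library's API).

```python
# Sketch of model-to-physical-property mappings: the inversion works
# with m = log(sigma), while the forward simulation receives
# sigma = exp(m); a surjection lets a 1D layered model drive a mesh
# with more cells.
import numpy as np

class ExpMap:
    """Map inversion model m to conductivity sigma = exp(m)."""
    def __call__(self, m):
        return np.exp(m)
    def deriv(self, m):
        # d(sigma)/d(m) is diagonal: diag(exp(m)); needed for the
        # chain rule in a gradient-based inversion.
        return np.diag(np.exp(m))

class SurjectVertical1D:
    """Map a 1D layered model onto a finer vertical discretization,
    so a low-dimensional inversion model can feed a forward mesh."""
    def __init__(self, n_cells_per_layer):
        self.reps = n_cells_per_layer
    def __call__(self, m):
        return np.repeat(m, self.reps)
```

Composing such maps (e.g., surjection then exponentiation) is what lets the same inversion machinery run over 1D, 2D, or 3D model spaces without changing the forward simulation.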
A novel particle swarm optimization (PSO) method for discrete parameters, and its hybridization with multipoint geostatistics, are presented. This stochastic algorithm is designed for complex geological models, which often require discrete facies modeling before continuous reservoir properties are simulated. In this paper, we first develop a new PSO method for discrete parameters (Pro-DPSO) in which particles move in the probability-mass-function (pmf) space instead of the parameter space. Pro-DPSO is then hybridized with the single normal equation simulation algorithm (SNESIM), one of the popular multipoint geostatistics algorithms, to honor prior geological features. This hybridized algorithm (Pro-DPSO-SNESIM) is evaluated on a synthetic seismic-inversion example and compared with a Markov chain Monte Carlo (McMC) method. The results show that the new algorithm generates multiple optimized models with a convergence rate much faster than that of the McMC method.
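As a heavily simplified sketch of the core idea, the fragment below implements discrete PSO in probability space as we read it from the abstract: each particle carries a pmf over the K discrete values of each variable, samples a realization, and nudges its pmf toward the one-hot encodings of its personal and global bests. All update rules, constants, and names here are illustrative assumptions, not the published Pro-DPSO algorithm, and the SNESIM coupling is omitted.

```python
# Illustrative sketch of discrete PSO with particles moving in pmf space.
import numpy as np

def pro_dpso_sketch(fitness, n_vars, n_values, n_particles=20, n_iter=50,
                    c1=0.5, c2=0.5, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # Each particle holds a pmf over n_values categories per variable.
    pmf = np.full((n_particles, n_vars, n_values), 1.0 / n_values)
    pbest = [None] * n_particles
    pbest_f = np.full(n_particles, np.inf)
    gbest, gbest_f = None, np.inf
    for _ in range(n_iter):
        for p in range(n_particles):
            # Sample a discrete realization from this particle's pmf.
            x = np.array([rng.choice(n_values, p=pmf[p, v])
                          for v in range(n_vars)])
            f = fitness(x)
            if f < pbest_f[p]:
                pbest_f[p], pbest[p] = f, x
            if f < gbest_f:
                gbest_f, gbest = f, x
        for p in range(n_particles):
            # Move the pmf toward one-hot encodings of the bests.
            onehot_p = np.eye(n_values)[pbest[p]]
            onehot_g = np.eye(n_values)[gbest]
            pmf[p] += c1 * rng.random() * (onehot_p - pmf[p]) \
                    + c2 * rng.random() * (onehot_g - pmf[p])
            pmf[p] = np.clip(pmf[p], 1e-6, None)
            pmf[p] /= pmf[p].sum(axis=1, keepdims=True)
    return gbest, gbest_f
```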
Previous VSP diagnoses of surface seismic velocity models in the GOM (Li and Hewett, 2014) indicated that shallow velocities were typically poorly constrained by VSP because of ringing caused by multiple casing strings. This ringing also hampered direct measurement of the seawater average velocity (SWAV) at a rig site from ZVSP direct arrivals. We propose to measure SWAV directly at a rig site with a known water depth by using differential times between primary water-bottom multiples (WBM) and direct first arrivals acquired in a marine ZVSP survey. A procedure was developed to process ZVSP WBM signals for SWAV measurement. This WBM method was applied to 17 rig sites in deep-water environments in North and South America. We recommend that VSP processors add the SWAV measurement to their future velocity-survey reports. Assuming that there is little lateral variation of SWAV in the rig vicinity, we also applied the WBM method to estimate the seawater-bottom depth profile along the well-path in the vicinity of a rig site.
An accurate velocity model is critical for obtaining high-quality images of subsurface formation structures for oil and gas exploration and production. Li and Hewett (2014) used multifarious vertical seismic profile (VSP) data to diagnose, calibrate, and update velocity models derived from surface seismic surveys in the Gulf of Mexico (GOM) to improve the quality of surface seismic imaging. They found that shallow velocity uncertainties (Fig. 1a) may play a large role in distorting wave-fields, resulting in poor and blurred images of subsurface targets. The velocity-model diagnosis also indicated that the seismic velocity model matched the VSP data better when an approximate time-shift correction was applied (Fig. 1b) by increasing the seawater velocity by 2%. Larger errors in shallow model velocities (both water and sediments) are partially due to the lack of shallow VSP data constraints, which can mostly be attributed to ringing caused by multiple casing strings.
It is well known that the P-wave velocity in seawater varies from 1440 to 1570 meters per second (m/s), with an average value of about 1500 m/s. The seawater velocity changes with temperature, salinity, and pressure or depth (e.g., Chen and Millero, 1977). Depth sounding with a sonar system allows the measurement of underwater distances using the two-way travel time of an acoustic pulse. An accurate seawater velocity is critically important for correct depth-migration images, since an inaccurate seawater average velocity could result in a predicted target-depth error of hundreds of meters.
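The differential-time idea behind the WBM method reduces to simple arithmetic: the first water-bottom multiple travels one extra two-way pass through the water column relative to the direct arrival, so with known water depth D the average velocity is V = 2D / (t_WBM - t_direct). The sketch below is a worked illustration with made-up numbers, not values from the surveyed rig sites.

```python
# Worked sketch of SWAV from the WBM/direct differential time, and the
# inverse use with SWAV known to estimate water depth along a well-path.

def swav_from_wbm(t_direct_s, t_wbm_s, water_depth_m):
    """Seawater average velocity V = 2*D / (t_WBM - t_direct)."""
    dt = t_wbm_s - t_direct_s
    if dt <= 0:
        raise ValueError("the WBM must arrive after the direct arrival")
    return 2.0 * water_depth_m / dt

def water_depth_from_swav(t_direct_s, t_wbm_s, swav_m_s):
    """With SWAV assumed laterally invariant, estimate water depth."""
    return swav_m_s * (t_wbm_s - t_direct_s) / 2.0

# Illustrative example: D = 1500 m, direct at 1.20 s, WBM at 3.20 s
print(swav_from_wbm(1.20, 3.20, 1500.0))  # -> 1500.0 m/s
```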
Hydrocarbon deposits are often associated with higher-than-usual attenuation, which is generally ignored during amplitude-versus-offset (AVO) analysis. The moduli of the standard linear solid model were substituted into the Zoeppritz equation to analyze the influence of attenuation and dispersion. A new forward-modeling method for seismic reflectivity is developed that convolves reflection coefficients and wavelets in the frequency domain. This method can generate angle gathers that carry attenuation information and is highly efficient. On the basis of the reflection coefficients in viscoelastic media, we use the frequency-decomposition convolutional model (FCDM) to illustrate the impact of attenuation on prestack seismic angle gathers. It allows us to account for viscoelasticity in seismic gathers and has the potential to extend AVA/AVF analysis to real data.
In the past decades, low-frequency seismic anomalies associated with the presence of hydrocarbons have been investigated by many researchers (Taner et al., 1979; Castagna et al., 2003; Korneev et al., 2004; Chapman et al., 2006). Hydrocarbon-saturated reservoir zones often exhibit significant velocity dispersion (Castagna et al., 2003), which has also been confirmed by laboratory measurements in the seismic frequency range (Batzle et al., 2006).
Regarding velocity dispersion and attenuation, many physical mechanisms in rocks have been proposed and modeled, such as viscous fluids (Biot, 2005a,b), local-flow or squirt mechanisms (Mavko and Nur, 1975, 1979; Budiansky and O'Connell, 1976; O'Connell and Budiansky, 1977), the patchy-saturation model (White, 1975), and the double-porosity model (Pride and Berryman, 2003a,b; Pride et al., 2004), among others. Despite these distinct mechanisms, we use viscoelasticity theory here, in particular the standard linear solid model, to describe velocity attenuation and dispersion through the low- and high-frequency limiting moduli and a characteristic frequency. Our intent is not to discuss the interior mechanisms of poroelasticity but to present a concise way to depict dispersion.
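The standard linear solid (Zener) model referred to above has a compact closed form: with relaxed (low-frequency) modulus M0, unrelaxed (high-frequency) modulus Minf, and relaxation time tau, the complex modulus is M(w) = M0 + (Minf - M0) * i*w*tau / (1 + i*w*tau), from which phase velocity and Q follow. The sketch below is a generic implementation of this textbook model; the numerical values in the usage are illustrative, not the paper's.

```python
# Sketch of the standard linear solid (Zener) complex modulus and the
# resulting phase velocity and quality factor Q.
import numpy as np

def sls_modulus(omega, m0, minf, tau):
    """Complex modulus: -> m0 as omega -> 0, -> minf as omega -> inf."""
    iwt = 1j * omega * tau
    return m0 + (minf - m0) * iwt / (1.0 + iwt)

def phase_velocity_and_q(omega, m0, minf, tau, rho):
    m = sls_modulus(omega, m0, minf, tau)
    v = 1.0 / np.real(np.sqrt(rho / m))   # phase velocity
    q = np.real(m) / np.imag(m)           # quality factor
    return v, q
```

The velocity increases monotonically from sqrt(M0/rho) to sqrt(Minf/rho) across the relaxation band around w = 1/tau, where attenuation (1/Q) peaks; this is the dispersion signature substituted into the Zoeppritz equation.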
Most studies focus on the poroelastic mechanisms of velocity dispersion and attenuation; however, the plane-wave reflection coefficients in attenuative media have not been fully discussed. A weak-contrast, weak-attenuation approximation of the reflection and transmission coefficients in a thinly layered viscoelastic isotropic medium was proposed by Ursin and Stovas (2002). Ren et al. (2009) investigated the characteristics of the normal-incidence reflection coefficient as a function of frequency at an interface between a nondispersive medium and a patchy-saturated dispersive medium. Innanen (2011) studied the problem of determining earth properties from the frequency signatures of anelastic AVF and AVA. Zhao (2014) developed a Zoeppritz-style equation for effective Biot media.
Because the grid spacing is fixed in the finite-difference method (FDM), numerical dispersion usually imposes a restrictively small grid spacing (dictated by the lowest velocity) over the whole model. The number of grid points per wavelength then becomes too large in deep layers, reducing the efficiency of the FDM. Although high-order operators allow larger grid spacing and could make the FDM more efficient, they do not solve the problem. In this paper, we develop a new 3D explicit scheme based on the finite-volume method (FVM) that addresses this problem in an efficient way. The algorithm is constructed using a general formalism with sparse matrices and allows both the use of different grid spacings in layers with different velocities and refinement in regions of interest using OcTree meshes. OcTree meshes are easy to generate from regular grids, avoiding additional meshing complications. Computational tools for parallel processing on the GPU were used for the algebraic manipulation of the sparse matrices. The numerical scheme can be seen as an extension of the staggered-grid scheme; indeed, it reduces to the classical staggered-grid scheme in the case of regular grids. Finally, an example is presented to show the effectiveness of the method.
Offshore salt-tectonic regions have become common exploration targets in the oil industry, and it is now well known that a significant proportion of the world's hydrocarbon reserves lie in structures related to these geological formations. In many places around the world, great interest has been focused on presalt geologic formations. These areas are hard to illuminate and the geological features are very complex, with high-velocity layers lying alongside lower-velocity layers, all thousands of meters below the sea surface. On the other hand, technological developments in acquisition and processing have allowed images of increasing resolution.
The huge challenges in exploration and reservoir characterization have put wave-equation-based algorithms in the spotlight, making full waveform inversion (FWI) and reverse time migration (RTM) important tools in the industry. In this context, developing powerful and efficient numerical schemes for seismic modeling is of fundamental importance. The finite-difference method (FDM) is the most popular method for modeling seismic waves, mainly because of its robustness: it is applicable in a simple way to complex regions and, at the same time, it is relatively accurate and computationally efficient. Additionally, it is relatively simple to implement in computer codes and is very amenable to parallelization. Of course, the FDM is not free of inherent limitations. Some of the main limitations are the difficulty of handling accurately irregular interfaces between discontinuous material layers, the fixed and overly small grid spacing imposed by numerical dispersion (unnecessary in high-velocity layers), and the impossibility of local refinement in regions of interest, e.g., a thin oil-filled rock layer or salt flanks.
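For reference, the classical staggered-grid scheme that the proposed FVM reduces to on regular grids can be written compactly in 1D. The sketch below is a minimal second-order acoustic velocity-stress update with illustrative parameters; it is not the 3D OcTree scheme of the paper.

```python
# Minimal sketch of the classical 1D staggered-grid velocity-stress
# update (second order in space and time): pressure p lives at integer
# grid points, particle velocity v at half points, and the two fields
# are leapfrogged in time.
import numpy as np

def staggered_step(p, v, rho, kappa, dt, dx):
    """One leapfrog step of rho*dv/dt = -dp/dx, dp/dt = -kappa*dv/dx."""
    # update velocity from the pressure gradient
    v[:-1] -= dt / (rho[:-1] * dx) * (p[1:] - p[:-1])
    # update pressure from the velocity divergence
    p[1:-1] -= dt * kappa[1:-1] / dx * (v[1:-1] - v[:-2])
    return p, v
```

The explicit update is stable only under a Courant condition (roughly c*dt/dx < 1), which is why the lowest velocity and finest spacing in a fixed regular grid constrain the time step and cost of the whole simulation.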
In this paper, we investigate the use of analysis in the spectral domain to overcome the limitations imposed by limited aperture, a common disadvantage of the typical monitoring geometry in hydraulic fracturing. A typical microseismic monitoring configuration contains two horizontally drilled boreholes: one treatment well and one observation well. This configuration, while cost-effective, makes it impossible to perform moment tensor inversion through traditional means. However, through careful analysis in the spectral domain, parameters such as center frequency and bandwidth can be used in tandem with knowledge of the process parameters to better understand microseismic source characteristics.
There has been a significant increase in the amount of hydraulic fracturing projects in the United States as a result of a number of technical and economic factors. One of the main technological advances that has enabled hydraulic fracturing projects to be completed that were previously economically infeasible is the ability to drill horizontal boreholes. There are many advantages to this approach over the traditional vertical borehole method. For example, a much larger treatment zone in an area of interest can be produced as a direct result of the project geometry. Specifically, due to the orientation of shale formations, a much larger pay zone can be realized by drilling for a greater distance within a horizontal formation.
To monitor the microseismic activity resulting from these types of hydraulic fracturing processes, surface arrays or crosswell monitoring arrays are utilized. Surface arrays can be an effective monitoring tool since they provide large azimuthal coverage. However, given that the magnitudes of events resulting from hydraulic fracturing typically range from -4 Mw to -1 Mw, and that the depth of fracturing is usually a mile or more below the surface, the signal-to-noise ratio can become a difficult problem to overcome. As such, coherent monitoring of microseismic events usually requires both a very large number of acoustic sensors (6,000-24,000 geophones) and a large surface area (1-3 miles across) (Duncan and Eisner 2010).
Downhole monitoring with a horizontal observation well requires significantly fewer acoustic sensors to achieve good signal-to-noise; however, this approach also has a number of disadvantages. For example, there is increased uncertainty in determining microseismic event locations, a direct result of the survey geometry: since the monitoring array is parallel to the treatment well, location estimates rely on the hodogram angle of inclination for depth determination (Maxwell 2014). The main disadvantage of crosswell monitoring, however, is the inability to perform moment tensor inversion with a single monitoring well (Vavryčuk 2007). This constraint is due to the small solid angle subtended by the closely spaced geophones and the accompanying limited azimuthal coverage of the treatment zone, referred to as the limited aperture problem. In an effort to overcome this restriction, we turn to the spectral domain.
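The spectral parameters mentioned above are cheap to compute from a single trace. The sketch below shows one common convention, the power-weighted centroid frequency and RMS bandwidth of the amplitude spectrum; the definitions are standard, but we do not claim they are the exact estimators used in this study.

```python
# Sketch: centroid (center) frequency and RMS bandwidth of a trace,
# computed from the power spectrum via the FFT.
import numpy as np

def spectral_centroid_bandwidth(trace, dt):
    """Return (center frequency, RMS bandwidth) in Hz for a sampled trace."""
    spec = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    power = spec ** 2
    fc = np.sum(freqs * power) / np.sum(power)                        # centroid
    bw = np.sqrt(np.sum((freqs - fc) ** 2 * power) / np.sum(power))  # RMS width
    return fc, bw
```

Because these quantities need only a single-well recording, they remain available under the limited-aperture geometry that rules out traditional moment tensor inversion.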