The forced oscillation method was used to measure Young's modulus (E) and the corresponding attenuation (1/QE) in Fontainebleau sandstone over the seismic frequency range (1–100 Hz). Measurements on the sandstone were performed under confining pressures of 5, 10 and 15 MPa. The sample was fully saturated with water-glycerin mixtures, yielding a range of pore fluid viscosities (8 cP to 1414 cP). A typical frequency-dependent attenuation curve, peaking at ~7 Hz, and a corresponding dispersion in the Young's modulus were observed at all confining pressures when the rock was fully saturated with glycerin (1414 cP). A decrease in fluid viscosity shifted the attenuation curve to higher frequencies, suggesting a fluid flow mechanism in which the peak frequency is inversely proportional to fluid viscosity. A simple squirt flow model fit the observed behavior relatively well: although the magnitude of attenuation does not match exactly, the peak frequency changed with viscosity as predicted by the theory. Moreover, an increase in confining pressure decreased the magnitude of attenuation, also in line with squirt flow theoretical predictions.
propagates. Intrinsic attenuation, the focus of this study, is the anelastic absorption of energy in the medium. It is widely represented by the dimensionless inverse quality factor (1/Q), which indicates the fraction of energy lost per cycle (O'Connell and Budiansky, 1978). Using the forced oscillation technique (e.g., Subramaniyan et al., 2014), attenuation can be obtained from the phase shift between the applied sinusoidal stress and the resulting strain, while the ratio of the stress and strain signals yields the corresponding rock modulus. Attenuation is studied in the laboratory in an effort to understand the controlling role of rock and fluid properties, roles that are described by different attenuation mechanisms (e.g., White, 1975; Mavko and Nur, 1979).
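The phase-lag measurement described above can be sketched numerically. The frequency, amplitudes, and phase lag below are invented for illustration, and tan(φ) is used as the standard small-loss estimate of 1/Q; this is a minimal sketch, not the authors' acquisition code.

```python
import numpy as np

# Sketch (invented numbers): estimate 1/Q from the phase lag between an
# applied sinusoidal stress and the resulting strain, via 1/Q = tan(phi).
f = 7.0                        # oscillation frequency, Hz
fs = 1000.0                    # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)  # exactly 14 cycles of the 7 Hz signal
phi_true = 0.05                # assumed phase lag, radians
stress = np.sin(2 * np.pi * f * t)
strain = 0.8 * np.sin(2 * np.pi * f * t - phi_true)

def phase_at(x, f, t):
    # Phase p of x ~ A*sin(2*pi*f*t + p), by projection onto a quadrature pair.
    c = np.mean(x * np.cos(2 * np.pi * f * t))
    s = np.mean(x * np.sin(2 * np.pi * f * t))
    return np.arctan2(c, s)

phi = phase_at(stress, f, t) - phase_at(strain, f, t)  # stress leads strain
inv_Q = np.tan(phi)                                    # attenuation estimate
```

The projection onto the quadrature pair acts as a single-bin Fourier transform at the drive frequency, so broadband noise averages out over the record length.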
Several experiments performed on sandstones agree that dry rocks exhibit low, frequency-independent attenuation, whereas fluid-saturated rocks exhibit higher, frequency-dependent attenuation (e.g., Tisato and Quintal, 2014). Different mechanisms have been invoked for studies performed under different experimental conditions, providing continuing insight into the frequency and saturation dependence of attenuation in fluid-saturated sandstones (Tisato and Quintal, 2013; Pimienta et al., 2015).
Yao, Xingmiao (University of Electronic Science and Technology of China) | Wang, Qian (University of Electronic Science and Technology of China) | Liu, Zhining (University of Electronic Science and Technology of China) | Hu, Guangmin (University of Electronic Science and Technology of China) | Huang, Dongshan (CNPC Chuanqing Drilling Engineering Company Limited Geophysical Exploration Company) | Zou, Wen (CNPC Chuanqing Drilling Engineering Company Limited Geophysical Exploration Company)
Kriging is an optimal linear unbiased interpolation method applied in many fields, especially geology. As a local algorithm, the choice of neighbor points is an important part of kriging. Moreover, with the increasingly wide use of kriging interpolation in recent years, higher efficiency is required, and the traditional algorithm can no longer meet this requirement. Because kriging is defined over a limited spatial domain, a calculation area must be selected, which is done by choosing neighboring known points as input for interpolating each unknown point. Existing 3D kriging methods, however, often do not perform this neighbor selection and instead use all known points as input, which becomes infeasible for large datasets. Considering these problems, this paper puts forward a fast kriging interpolation algorithm based on the Delaunay tetrahedralization with a CUDA-enabled GPU, which establishes a spatial index on the 3D Delaunay tetrahedralization to quickly search for neighbor points. In addition, an effective parallel interpolation strategy on the CUDA platform exploits the computing power of the GPU to improve the efficiency of interpolation.
Kriging, named after D. G. Krige, is an important interpolation method that provides optimal linear unbiased estimates from discrete points. In recent decades it has been extensively applied in geological research. As an algorithm defined over a limited spatial domain, kriging inevitably faces the problem of selecting the calculation area. Especially when the number of known points is large, the algorithm needs to select neighboring known points as input rather than all known points, for reasons of computational complexity or feasibility. In 2D kriging interpolation, Hessami et al. (2001) used a Delaunay triangulation to improve computational efficiency. Most 3D kriging implementations, however, use all known points in the interpolation, and there are few studies on neighbor-point search in 3D kriging, resulting in either low-accuracy interpolation or heavy computational workload. As an optimized simplicial subdivision, the 3D Delaunay tetrahedralization provides a good solution to this problem. This paper therefore uses the 3D Delaunay tetrahedralization as the rule for a fast search of neighboring known points.
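As a rough illustration of the kriging step itself (not the paper's GPU implementation), a minimal ordinary-kriging estimate from a handful of neighbors can be sketched as follows. The k-nearest search is only a simple stand-in for the Delaunay-tetrahedron index described above, and the exponential variogram model and its parameters are assumptions.

```python
import numpy as np

# Minimal ordinary-kriging sketch (not the paper's implementation).
# Neighbor selection uses a k-nearest search as a stand-in for the
# Delaunay-tetrahedron spatial index; the variogram is assumed.
def variogram(h, sill=1.0, rng=10.0):
    # Exponential semivariogram model (parameters are illustrative).
    return sill * (1.0 - np.exp(-h / rng))

def krige_point(pts, vals, q, k=4):
    d = np.linalg.norm(pts - q, axis=1)
    nb = np.argsort(d)[:k]                 # neighbor search (stand-in)
    P, v = pts[nb], vals[nb]
    n = len(nb)
    # Ordinary kriging system with a Lagrange-multiplier row/column
    # enforcing that the weights sum to one (unbiasedness).
    A = np.zeros((n + 1, n + 1))
    H = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    A[:n, :n] = variogram(H)
    A[n, :n] = A[:n, n] = 1.0
    b = np.append(variogram(np.linalg.norm(P - q, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]          # kriging weights
    return w @ v

pts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [5, 5, 5]])
vals = np.array([1.0, 2.0, 2.0, 2.0, 9.0])
est = krige_point(pts, vals, np.array([0.2, 0.2, 0.2]))
```

Because ordinary kriging is an exact interpolator, querying at a known point returns its value, which is a quick sanity check for the linear system above.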
Swinbourne, Michael (University of Adelaide) | Hatch, Michael (University of Adelaide) | Taggart, David (University of Adelaide) | Sparrow, Elisa (Zoos SA) | Ostendorf, Bertram (University of Adelaide)
Ground penetrating radar (GPR) was used to map the warrens of southern hairy-nosed wombats (Lasiorhinus latifrons) at four sites in South Australia with different soil types: sandy loam, clay and calcrete limestone. Although GPR is used extensively for many geophysical applications, it has had limited use in wildlife research to date, with most previous studies not making full use of the available post-survey processing features. We successfully produced 3D maps of the wombat warrens at three of the four sites surveyed; the heavy clay soil at the fourth site was too conductive for any useful radar data. The study provided the first data on wombat warrens obtained by non-invasive means, and was the first time a warren under a layer of calcrete limestone has been mapped. The information obtained on warren morphology expands our knowledge of wombat population dynamics and demonstrates the effectiveness of GPR as a useful tool for wildlife research.
The burrows and warrens of fossorial species are important to the animal for a range of reasons; including protection from predators, a safe location for raising young, and shelter from environmental extremes (Reichmann and Smith, 1990). The number of burrows in an area is an important predictor of population abundance (Tiver, 1981; Hubbs et al., 2000; Styrsky et al., 2010), with abundance also being influenced by the subterranean extent and complexity of the warren systems (Hickman, 1989). Consequently, a thorough knowledge of warren structure is an essential precursor to developing an understanding of such characteristics as population abundance, social structure, habitat use, life-history characteristics, and home range (Cortez et al., 2013).
Mapping the subterranean morphology of warrens has always presented a major challenge for wildlife researchers. Numerous methods have been tried, including excavation (Kolb, 1985), inserting vertical tubes through the roof of the warren (Shimmin et al., 2002), filling the warren with expanding polyurethane foam (Felthauser and McInroy, 1983), using remotely operated cameras (Smith et al., 2005), and tracking collared animals using magneto-inductive localisation (Markham et al., 2012). All of these approaches have their limitations, not least of which is that the destruction of a warren whether by excavation or filling with foam is likely to be fatal to the animal (Johnson, 1998), and it prevents any follow-up research on changes to the warren over time. Other approaches such as the use of vertical tubes are not always practical due to the rocky nature of some soils and they can interfere with the warren micro-climate (Ganot et al., 2012), whilst remote cameras and radio-collars do not necessarily reveal the full extent of complex warren systems (Vercauteren et al., 2002; Markham et al., 2012). What is needed is a nonintrusive research tool that can reveal the full extent of a warren without damaging it or harming its occupants.
The magnetotelluric (MT) method can be effectively applied for depth-to-basement estimation, because there exists a strong contrast in resistivity between a conductive sedimentary basin and a resistive crystalline basement. Conventional inversions of MT data are usually aimed at determining the volumetric distribution of the conductivity within the inversion domain. By the nature of the MT method, the recovered distribution of the subsurface conductivity is typically diffusive, which makes it difficult to select the sediment-basement interface. This paper develops a novel approach to 3D MT inversion for the depth-to-basement estimate. The key to this approach is selection of the model parameterization with the depth to basement being the major unknown parameter. In order to estimate the depth to the basement, the inversion algorithm recovers both the thickness and the conductivities of the sedimentary basin. The forward modeling is based on the contraction integral equation approach. The inverse problem is solved using a regularized conjugate gradient method. The Fréchet derivative matrix is calculated based on quasi-Born approximation. The developed method and the algorithm for MT inversion for the depth-to-basement estimate are illustrated on several realistic geoelectrical models.
There is a strong interest in developing effective geophysical methods for depth-to-basement estimation. It is well known that seismic imaging is characterized by the highest resolution of the subsurface structures. However, in the case of complex near-surface heterogeneity (e.g., shallow, high-velocity, highly heterogeneous basalt sills), typical for many frontier exploration regions, interpretation of seismic data represents a significant challenge, while using 3D seismic surveys is very expensive. These circumstances stimulated growing interest in using nonseismic geophysical methods, which could provide reasonable resolution but with lower cost (Tournerie and Chouteau, 2005).
Among the passive-source geophysical methods, potential field surveys have been widely used to estimate the depth to basement for decades (e.g., Barbosa et al., 1997; Gallardo-Delgado et al., 2003; Martins et al., 2010; Silva et al., 2001; Cai and Zhdanov, 2015a, b). Modern approaches to solving this problem are mostly based on the 3D inversion of gravity and magnetic data to recover the thickness of the columns, which are used to discretize the sedimentary basin. In the inversion, the horizontal dimensions of the columns are fixed and the column thickness is updated to fit the observed data. The low resolution of potential field inversion in this application can be compensated by joint inversion with seismic refraction data collected at some sparsely distributed receivers, with minor extra cost.
Dutta, Gaurav (King Abdullah University of Science and Technology) | Giboli, Matteo (TOTAL Exploration and Production) | Williamson, Paul (TOTAL Exploration and Production) | Schuster, Gerard T. (King Abdullah University of Science and Technology)
We present a least-squares reverse time migration (LSRTM) method using a factorization-free priorconditioning approach to overcome the low signal-to-noise ratio (SNR) arising from severely undersampled data. Priorconditioning is a technique in which prior information is incorporated directly into the forward operator and into the solution space of the problem. The prior information used in this work is that the inverted reflectivity is sparse in the Radon domain. The proposed method is factorization-free since the forward mapping is defined through the action of a sparse operator on a vector. The priorconditioning method is shown to produce reliable images with good SNR, free from aliasing artifacts, when using very sparse shots for both synthetic and field data.
Least-squares migration (LSM) has been shown to produce images with balanced amplitudes, better resolution and fewer artifacts than standard migration (Lailly, 1984; Nemeth et al., 1999; Duquet et al., 2000; Plessix and Mulder, 2004; Dai and Schuster, 2009; Tang, 2009; Wong et al., 2011). Besides a reverse time migration (RTM) of the data residual, every iteration of LSRTM involves Born modeling to compute the predicted data from the reflectivity image and to estimate the step length. Each iteration of LSRTM is therefore approximately 2-3 times more computationally expensive than standard RTM, and the total cost is proportional to the number of iterations carried out, making LSRTM very expensive for practical 3D problems.
To reduce the cost of standard RTM, Morton and Ober (1998) and Romero et al. (2000) proposed multisource phase-encoded migration, in which several shot gathers are blended into one supergather using encoding functions with random time shifts and random source polarities, and the supergather is then migrated. Later, Dai et al. (2010) extended this idea to multisource LSRTM and showed that, by iterative migration of supergathers, multisource LSRTM can produce more accurate reflectivity images than standard RTM at a fraction of the computational cost. Herrmann and Li (2012) adopted a similar approach, using a combination of randomized dimensionality reduction and divide-and-conquer techniques to decompose the LSM problem into a series of smaller sub-problems, each involving iteration on a small randomized subset of the data. They also combined their approach with compressive sensing and curvelet-domain sparse recovery (Candes et al., 2006) to obtain crosstalk-free images from multisource LSM.
A simpler way of reducing the computational cost of LSRTM is to migrate very sparse shots, since the cost is also proportional to the number of shot gathers migrated. However, using very sparse shots during migration has its pitfalls: the final image is degraded by low SNR and by migration artifacts that are not cancelled out by insufficient stacking. For incomplete or undersampled data, it therefore becomes important to incorporate some form of regularization into the inversion that allows a more accurate representation of the subsurface model parameters. Relaxing the sampling requirements also reduces the cost of data acquisition and processing.
Lu, Wenkai (Tsinghua University) | Xu, Ziqiang (CNOOC EnerTech-Drilling & Production Co.) | Fang, Zhongyu (CNOOC EnerTech-Drilling & Production Co.) | Wang, Rui Liang (China National Offshore Oil Corp.) | Yan, Chengzhi (CNOOC Ltd.)
It is well known that seismic signals follow super-Gaussian distributions, i.e., they are sparse. In this paper, we present a super-Gaussianity-based deghosting method (SGDG) for variable-depth streamers in the time-space domain. In our method, the ghosts received by a variable-depth streamer are modeled by two time-space-variant parameters: the sea surface reflection coefficient and the time shift between the upgoing wave and its ghost. In the SGDG method, these two parameters are estimated in the time-space domain by a 2D scanning method based on maximization of the super-Gaussianity of the deghosting outputs. According to the estimated parameters, the selected deghosting outputs are merged to obtain the final result. Applications of the proposed method to synthetic and real seismic data give promising results.
In a marine seismic survey, the receivers record not only the desired signals (upgoing waves) but also their ghosts (downgoing waves), which arise from the strong reflection at the sea surface. The sea-surface ghosts constructively and destructively interfere with the desired signals, leading to frequency notches and low-frequency attenuation that reduce the resolution of the recorded seismic data (Jovanovich et al., 1983). Soubaras (2010) introduced the variable-depth streamer, which incorporates variable receiver depths along offset, as a marine broadband solution. The varying receiver depth leads to diversity in the receiver ghosts. Post-stack and pre-stack deghosting methods based on joint deconvolution (Soubaras, 2010; Soubaras, 2012) have been proposed to remove the receiver ghosts for variable-depth streamers.
For one upgoing wave, its ghost can be derived directly from the wave itself, whatever the streamer shape, provided two ghost parameters are known: the sea surface reflection coefficient and the time shift between the desired signal and its ghost. Since the recorded seismic signals are mixtures of the desired signals and their ghosts, we can separate them given a sufficiently accurate estimate of these ghost parameters. Mo and Lu (2009) proposed a method for estimating these parameters in the f-x domain based on maximization of non-Gaussianity, under the assumption that the ghost parameters are constant for each seismic trace. In this paper, we present a super-Gaussianity-based deghosting method (SGDG) for variable-depth streamers, which extends the method of Mo and Lu (2009) by allowing the ghost parameters to vary in the time-space domain.
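The scanning idea can be illustrated with a toy example. Everything below (the spike signal, reflection coefficient, delay, and scan grids) is invented; kurtosis is used as a simple super-Gaussianity measure. For each candidate pair, the recorded trace is divided by the ghost operator in the frequency domain, and the pair giving the most super-Gaussian (spikiest) output is kept.

```python
import numpy as np

# Toy sketch (all parameters invented): the ghost is modeled as
# -r * s(t - tau); candidate (r, tau) pairs are scanned, keeping the
# pair whose deghosted trace has the largest kurtosis.
n = 1000
rng = np.random.default_rng(1)
s = np.zeros(n)
idx = rng.choice(400, size=6, replace=False) + 50
s[idx] = rng.standard_normal(6) * 2 + 3        # sparse spiky "signal"
r_true, shift_true = 0.8, 20                   # coefficient, delay in samples
rec = s - r_true * np.roll(s, shift_true)      # recorded = signal + ghost

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

def deghost(rec, r, shift):
    # Inverse of the ghost operator (1 - r z^-shift), applied via FFT.
    f = np.fft.rfftfreq(len(rec))              # cycles per sample
    D = 1.0 - r * np.exp(-2j * np.pi * f * shift)
    return np.fft.irfft(np.fft.rfft(rec) / D, n=len(rec))

best = (None, None, -np.inf)
for r in np.arange(0.5, 0.96, 0.05):           # reflection-coefficient grid
    for shift in range(10, 31):                # delay grid, samples
        k = kurtosis(deghost(rec, r, shift))
        if k > best[2]:
            best = (round(r, 2), shift, k)
```

With the correct pair the inverse operator restores the sparse signal exactly, so its kurtosis dominates all mismatched candidates; that is the super-Gaussianity criterion at work.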
In standard seismic full waveform inversion updates (e.g., of Gauss-Newton type), small-angle backscattered amplitudes are incorporated linearly. Making an effort to include nonlinearity in each update may nevertheless be useful, both for estimation of difficult-to-discriminate parameters such as density, and for improvement of convergence rates. We consider, in a theoretical scalar environment, one possible approach to including nonlinearity, wherein sensitivities at iteration n are computed by varying the field associated with the (n+1)th, rather than nth, model iterate. This produces an extended, series-form sensitivity expression. To understand the basic character of updates based on these revised sensitivities, the expression is truncated at second order, and the resulting Gauss-Newton-like updates are analyzed to expose their use of first- and second-order reflectivity information. Differences between standard and nonlinear updates suggest that the latter may hold promise for the effective incorporation of high-angle and high-contrast reflectivity information in FWI.
Seismic full waveform inversion updates (Lailly, 1983; Tarantola, 1984; Virieux and Operto, 2009) can be constructed so as to respond to small-angle backscattered data, e.g., pre-critical specular reflections, in a manner consistent with linearized inverse scattering and AVO inversion (Innanen, 2014). This means a multi-parameter reflection FWI updating scheme can be protected against parameter cross-talk to the same degree as AVO and linear inverse scattering. However, it also means that linearization error will be present to the same degree as it is in those other methods, and concern registered in those domains (e.g., Weglein et al., 1986) is equally applicable to FWI.
Backscattered wave amplitudes are generally nonlinear in medium property variations. In the special case of two elastic half-spaces, for instance, the Zoeppritz equations define a highly nonlinear relationship between reflection coefficients and relative changes in elastic properties across a reflecting boundary. The relationship is often linearized; in the two half-space example, the Aki-Richards approximation, which is linear in the relative changes (Aki and Richards, 2002; Castagna and Backus, 1993; Foster et al., 2010), is commonly used in AVO inversion. Linearization error takes the form of a decrease in accuracy with increasing incidence angle and/or magnitude of the relative changes. This is one reason why, in AVO/AVAZ inversion, density is difficult to separate from VP (e.g., Stolt, 1989), and why certain anisotropic parameters are difficult to determine (e.g., Mahmoudian et al., 2013). These parameters are best distinguished at high angle, where the Aki-Richards formula and its extensions are insufficiently accurate.
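For reference, the standard three-term form of the Aki-Richards approximation, linear in the relative property changes, is

```latex
R(\theta) \;\approx\; \frac{1}{2}\left(1 - 4\frac{V_S^2}{V_P^2}\sin^2\theta\right)\frac{\Delta\rho}{\rho}
\;+\; \frac{1}{2\cos^2\theta}\,\frac{\Delta V_P}{V_P}
\;-\; 4\frac{V_S^2}{V_P^2}\sin^2\theta\,\frac{\Delta V_S}{V_S}.
```

At small θ the density and V_P coefficients are both close to 1/2, so the two contributions are nearly degenerate at near offsets; they separate only at high angles, which is exactly where the linearization loses accuracy.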
In FWI, nonlinearity is primarily accommodated through iteration. Nevertheless, mitigating linearization error within individual FWI updates could play an important role, in principle making available to FWI both (1) the improved parameter discrimination known to be possible at large scattering angles, and (2) an uplift in convergence rate.
Linearization error has been mitigated in inverse scattering environments (Weglein et al., 2003; Zhang and Weglein, 2009a,b) and AVO environments (e.g., Stovas and Ursin, 2001; Innanen, 2011, 2013). In this paper we consider means by which nonlinearity can also be at least partially accommodated during FWI iterations—specifically, if it is possible to make changes to the basic ingredients of FWI such that an update naturally accommodates nonlinearity in the reflectivity/ step length relationship. General nonlinear sensitivity formalisms have been discussed in the literature, in a seismic context by Wu and Zheng (2014) and elsewhere (e.g., in optical tomography by Kwon and Yazici, 2010). Here we will discuss the construction of very particular, analyzable FWI update formulas that have direct expression in terms of nonlinear operations on the residuals.
Value of information analysis is used for evaluating and comparing geophysical data gathering schemes in the petroleum industry. Applications related to seismic amplitude data processing and electromagnetic data acquisition and processing are considered, for reservoir characterization, and resolving drilling and development decisions. We advocate a unified framework integrating statistical and geophysical modeling, and decision analysis.
Making decisions in the petroleum industry can be challenging: there is often notable uncertainty pertinent to the decision, investments may be considerable, and potential financial losses can be large. Reliable geophysical information that resolves some of the uncertainty can go a long way towards improving the overall quality of decisions. What is the value of the geophysical data, and how much data is enough? Information almost always comes at a price, so when is the information worth its price? The decision-theoretic notion of value of information (VOI) is useful for evaluating and analyzing various sources of data.
The power of analyzing information sources using VOI is that: i) it allows the decision maker to perform a reasonable evaluation before the information is purchased and therefore revealed, and ii) if the decision maker can model value using monetary units, then VOI is also in monetary units. Compared with the traditional use of VOI in the petroleum industry, we stress the spatial multivariate aspects of the uncertain reservoir variables, the alternatives, and the potential information gathering schemes. We describe and motivate these spatial characteristics, and conduct VOI analysis for two examples.
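The basic VOI calculation can be sketched with a deliberately simple drill/do-nothing example; all the numbers below are invented for illustration and omit the spatial, multivariate aspects stressed in the paper.

```python
# Toy VOI calculation (all numbers invented): a drill / do-nothing
# decision on a prospect that holds oil with prior probability p.
# VOI of a perfect test = expected value deciding AFTER the test
# minus the value of the best decision made WITHOUT it.
p = 0.3            # prior probability of oil
revenue = 100.0    # value if oil is found (monetary units)
cost = 40.0        # drilling cost

# Without information: choose the better alternative now.
v_prior = max(p * revenue - cost, 0.0)

# With perfect information: decide after learning the truth,
# drilling only when oil is present.
v_perfect = p * max(revenue - cost, 0.0) + (1 - p) * max(-cost, 0.0)

voi = v_perfect - v_prior   # value of perfect information
```

Because value is modeled in monetary units, the resulting VOI is directly comparable with the price of acquiring the data, which is the point made above.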
Accurate first-arrival information is needed for microseismic source location. Although manual picking has higher accuracy, its poor efficiency makes real-time processing difficult. In this paper, we first discuss the frequency characteristics of microseismic signal and noise, then use the continuous wavelet transform (CWT) to analyze the microseismic signal at multiple scales, and finally apply higher-order statistics (HOS) to pick the P-arrivals. We propose a W-S/L_K first-arrival picking method, which to some extent overcomes the inability of traditional methods to pick accurately in the presence of noise. Tests on synthetic examples and real data show that the W-S/L_K method can pick microseismic P-arrivals accurately from low-S/N data.
Microseismic monitoring is a geophysical technique for observing underground fractures, and it is important for achieving high production in oilfield development. However, given the low S/N and weak energy of microseismic signals, many methods developed for conventional seismic data perform poorly on microseismic data, which directly affects the quality and accuracy of microseismic monitoring. Finding an appropriate method to pick weak effective signals from microseismic data is therefore the key to microseismic data processing and interpretation.
In microseismic monitoring and analysis, accurate and fast P-wave arrival picking is important. The P-wave arrival carries a large amount of geophysical and seismological information and is central to microseismic source location, fracture prediction, source mechanism analysis and structure description (Duncan and Eisner, 2010). Although manual picking has relatively high accuracy, it cannot satisfy the need for real-time processing because of its low efficiency. Picking P-wave first arrivals quickly and accurately is therefore of great significance.
Currently, P-wave arrival picking exploits differences between signal and noise in amplitude, frequency, polarization, etc. Many methods have been proposed, such as the ratio of short-term average to long-term average (STA/LTA) (Baer and Kradolfer, 1987; Chen and Stewart, 2005), autoregressive methods (Morita and Hamaguchi, 1984; Leonard, 2000), wavelet transform (Anant and Dowla, 1997; Yung and Ikelle, 1997), and hybrid methods (Saragiotis et al., 1999; Diehl et al., 2009; Li et al., 2014). However, these methods were developed for natural earthquakes and are sensitive to noise, so applying any single one of them to microseismic data gives poor results.
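Of the methods listed, STA/LTA is the simplest to sketch. The implementation below is a generic textbook version (not this paper's W-S/L_K method); the window lengths, threshold, and synthetic trace are all invented for illustration.

```python
import numpy as np

# Generic STA/LTA first-arrival picker (a textbook version of the
# method cited above, not the W-S/L_K algorithm of this paper).
def sta_lta_pick(x, ns=20, nl=200, threshold=4.0):
    e = x.astype(float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))    # cumulative energy
    sta = (c[ns:] - c[:-ns]) / ns    # sta[i]: mean energy of x[i : i+ns]
    lta = (c[nl:] - c[:-nl]) / nl    # lta[j]: mean energy of x[j : j+nl]
    # ratio[k]: short window starting at sample nl+k over the long
    # window that ends just before it.
    ratio = sta[nl:] / (lta[: len(sta) - nl] + 1e-12)
    return nl + int(np.argmax(ratio > threshold))

rng = np.random.default_rng(2)
n, onset = 2000, 1200
trace = 0.1 * rng.standard_normal(n)             # background noise
tw = np.arange(n - onset)
trace[onset:] += np.sin(2 * np.pi * 0.03 * tw) * np.exp(-tw / 300.0)
pick = sta_lta_pick(trace)                       # lands near sample 1200
```

Because the short window sits just after the long one, the ratio jumps as soon as the short window captures the arriving energy, which is why the pick lands within a few samples of the true onset on clean data, and why the method degrades at the low S/N levels typical of microseismic records.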
Rough topography encountered in controlled-source electromagnetic (CSEM) surveys significantly distorts electromagnetic (EM) fields. However, there have been few studies on the accurate incorporation of topography into modeling and inversion algorithms. We have developed a three-dimensional (3D) CSEM inversion algorithm based on a forward modeling algorithm that rigorously incorporates topography using an edge-based finite element method (EFEM). We focused on the strict representation of topography in the modeling algorithm, and the accuracy of the algorithm was verified through comparison with semi-analytic solutions. In numerical experiments, we successfully performed the inversion of data obtained from a ridge-and-valley model. This implies that real data acquired over rough terrain can be handled reasonably by the inversion algorithm developed in this study.
Controlled-source electromagnetic (CSEM) method has been used to image the distribution of physical properties, such as electrical resistivity. In the mineral exploration which is one of the main applications of CSEM method, it is important to consider rough topography because many surveys are conducted in mountainous areas. Katayama and Zhdanov (2004) carried out inversion of real data for exploration of kimberlite using a topographic correction based on the integral equation (IE) method. However, the correction could not be applied to data acquired rough terrain because it considered the effect of topography as the linear part of electromagnetic (EM) fields. These problems could be handled by incorporating topography into the modeling and inversion. Sasaki and Nakazato (2003) rigorously incorporated topography into their algorithms based on a staggered-grid finite difference method (FDM). They showed that strict treatment of topography is essential to avoid the misinterpretation due to the distortion caused by topographic effects. However, their algorithms had the limitation in simulating topography precisely because of intrinsic limitation of FDM. Although this limitation had almost no effect on their study because they used airborne EM method, it can cause significant distortion o