Seismic imaging in deep-water slope belts has long been a challenge for marine seismic data processing. Steep slopes induce strong diffraction multiples, complicate velocity modeling, and weaken illumination. This paper introduces three key methods for imaging dual-direction acquisition seismic data in slope-belt areas. The first is joint multiple attenuation, which combines 3D SRME, vertex-drift (apex-shifted) high-precision Radon multiple attenuation, and super-CDP-gather median filtering. The second is geologically constrained velocity modeling, which uses geological information to constrain the initial velocity and thereby reduces the impact of rapid seabed relief on the velocity model. The third is dual-direction data merging: after PSDM, fast- and slow-wave velocity corrections are applied and the better-imaged data are selected to produce a merged dual-direction stack section. Joint multiple attenuation effectively eliminates diffraction multiples and improves the signal-to-noise ratio, which benefits velocity analysis and final imaging. Constrained velocity modeling improves the accuracy of the velocity model in steep-slope areas, helping the final image avoid distortion beneath the steep seabed. Merging dual-direction data is a reliable way to increase fold and illumination and to improve the imaging of faults. The successful application of these three methods shows that dual-direction acquisition data processing provides an effective way to overcome the imaging difficulties of steep slopes in deep water.
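The super-CDP median-filter step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a moveout-corrected gather in which primaries are flat, so that a median taken across neighboring traces passes flat events and attenuates residual multiples that still move across traces. The function name and window size are ours.

```python
import numpy as np

def supergather_median_filter(gather, window=5):
    """Median-filter each time sample across traces of a flattened gather.

    gather : 2-D array (n_traces, n_samples). Primaries are assumed flat
    after moveout correction, so the trace-direction median preserves them
    while attenuating residual multiples that dip across traces.
    """
    n_traces, _ = gather.shape
    half = window // 2
    out = np.empty_like(gather)
    for i in range(n_traces):
        lo, hi = max(0, i - half), min(n_traces, i + half + 1)
        out[i] = np.median(gather[lo:hi], axis=0)
    return out

# toy example: a flat primary plus a dipping residual multiple
g = np.zeros((11, 100))
g[:, 50] = 1.0                      # flat event (kept by the filter)
for i in range(11):
    g[i, 20 + 2 * i] += 1.0         # dipping event (attenuated)
filtered = supergather_median_filter(g, window=5)
```

The flat event at sample 50 survives unchanged, while the dipping event is suppressed because it occupies a different time sample on every trace inside the median window.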
The existence of elastic anisotropy in the reservoir is obtained through equivalent media theory. Isotropic elastic theory explains reservoir modeling or characterization fairly well, but it does not account for the reservoir's anisotropic character, which makes reservoir characterization extremely challenging without a self-consistent effective equivalent media theory. In this research, equivalent media theory is explained and applied to well-log data from a producing well with consistent Vp, Vs, density, and other parameters. Instead of Voigt averaging, equivalent media theory is used to estimate the effective stiffness parameters, which are compared with Thomsen's parameters; the resulting effective anisotropy parameters are then compared with the gamma-ray log. The results show the effectiveness of equivalent media theory for future application to reservoir modeling and characterization.
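The comparison with Thomsen's parameters mentioned above can be made concrete. A minimal sketch, assuming a VTI medium, converting effective stiffnesses (however they were estimated) to Thomsen's (1986) anisotropy parameters; the function name and example values are ours:

```python
import math

def thomsen_parameters(c11, c33, c44, c66, c13, rho):
    """Thomsen (1986) parameters from VTI stiffnesses (Pa) and density (kg/m^3)."""
    vp0 = math.sqrt(c33 / rho)   # vertical P-wave velocity
    vs0 = math.sqrt(c44 / rho)   # vertical S-wave velocity
    eps = (c11 - c33) / (2.0 * c33)
    gam = (c66 - c44) / (2.0 * c44)
    dlt = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2.0 * c33 * (c33 - c44))
    return vp0, vs0, eps, gam, dlt

# sanity check: an isotropic medium must give zero anisotropy
lam, mu, rho = 2e10, 1e10, 2500.0
vp0, vs0, eps, gam, dlt = thomsen_parameters(
    lam + 2 * mu, lam + 2 * mu, mu, mu, lam, rho)
```

For isotropic stiffnesses (C11 = C33, C44 = C66, C13 = λ) all three parameters vanish, which is a useful check before applying the conversion to the effective stiffnesses from the equivalent-media calculation.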
The wave velocity is defined theoretically by the Newton-Laplace equation, which relates the wave velocity, V, to the square root of the ratio of the elastic modulus, M, to the density, ρ. Taken at face value, the equation indicates that the velocity is inversely proportional to the square root of density. However, in-situ field measurements and laboratory experiments of compressional wave velocity through different rocks show otherwise: the velocity is directly proportional to approximately the 4th power of density, as implied by Gardner's numerical approximation. To resolve this apparent contrast between theory and observations, a new expression for the elastic modulus, M, is derived using Wyllie's time-average equation and the Newton-Laplace equation.
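The apparent contradiction can be made explicit by combining the two relations. Gardner's relation ρ = a·V^0.25 inverts to V = (ρ/a)^4, and substituting into the Newton-Laplace equation M = ρV² gives M = ρ⁹/a⁸: the modulus grows so much faster than density that V increases with ρ despite the ρ in the denominator. A small numeric check (the constant a = 0.31 is the usual empirical value for ρ in g/cc and V in m/s; the function names are ours):

```python
# Gardner's relation: rho = a * V**0.25  =>  V = (rho / a)**4
# Newton-Laplace:     V = sqrt(M / rho)  =>  M = rho * V**2
# Combining:          M = rho * (rho / a)**8 = rho**9 / a**8

def gardner_velocity(rho, a=0.31):
    """Velocity (m/s) implied by Gardner for density rho (g/cc)."""
    return (rho / a) ** 4

def implied_modulus(rho, a=0.31):
    """Elastic modulus consistent with both relations (relative units)."""
    v = gardner_velocity(rho, a)
    return rho * v ** 2
```

A 10% increase in density thus raises the velocity by a factor of 1.1⁴ ≈ 1.46 and the implied modulus by 1.1⁹ ≈ 2.36, which is the sense in which theory and observation are reconciled.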
I examine the cause of the slow convergence of tomographic full waveform inversion (TFWI) and find that it lies in the unbalanced contributions of amplitude and phase in the design of the regularization term. This imbalance makes the kinematic updates rely strongly on the amplitude fitting, slowing convergence. To mitigate the problem, I propose two modifications to the tomographic inversion: first, modifying the regularization term to focus more on the phase information, and second, simultaneously updating the source function used for modeling. The adjustments reduce the gradient artifacts and allow explicit control over the amplitudes and phases of the residuals.
Tomographic full waveform inversion (TFWI) addresses the cycle-skipping problem of conventional FWI by extending the velocity model along additional axes.
The modeling operator can match the observed data by extending the velocity model along the proper axis, regardless of the accuracy of the initial model, using kinematic information from the extended axis without regard to cycle skipping. The inversion is set up to extract all the essential information from the virtual axes and smoothly fold it back into the original, nonextended form of the model. The kinematic and dynamic information in the data is inverted with exceptional robustness and precision.
Even though cycle skipping is not an issue with TFWI, the method creates its own challenges, namely its high computational cost and the large number of iterations it requires.
Two adjustments to TFWI are proposed to mitigate the slow convergence and allow more control of the ratio between amplitude and phase. These adjustments are consistent within the TFWI framework and allow an accurate calculation of the gradient in the data space. When tested, the adjustments reduced the kinematic artifacts in the gradient.
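The idea of separating the amplitude and phase contributions of a residual can be illustrated with the analytic signal. This is a toy sketch, not the author's regularization: it splits an observed/synthetic trace residual into an envelope (amplitude) part and an instantaneous-phase part and reweights them to favor the kinematic term. All names and weights are ours.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a numpy-only Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def weighted_residual(obs, syn, w_phase=1.0, w_amp=0.1):
    """Split the residual into envelope and instantaneous-phase parts and
    reweight them, emphasizing the phase (kinematic) information."""
    ao, as_ = analytic_signal(obs), analytic_signal(syn)
    amp_res = np.abs(ao) - np.abs(as_)
    phase_res = np.angle(ao * np.conj(as_))   # wrapped phase difference
    return w_amp * amp_res, w_phase * phase_res

t = np.linspace(0.0, 1.0, 128, endpoint=False)
s = np.sin(2.0 * np.pi * 5.0 * t)
amp_r, phase_r = weighted_residual(s, 2.0 * s)  # pure amplitude mismatch
```

A synthetic that differs from the observation only by a scale factor produces a zero phase residual and a nonzero amplitude residual, so the two terms can be controlled independently, which is the behavior the proposed adjustments aim for.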
3D wide-azimuth seismic data plays a vital role in fault interpretation, which is of significant importance during the exploration and development stages. Interpreting faults in 3D seismic data is one of the most time-consuming and challenging processes, especially when dealing with poor-quality seismic data. This paper provides a complete workflow, with an example of its application, from seismic pre-conditioning to automatic fault detection and extraction, based on concepts published by Dave Hale. With recent advances in computer technology, multi-threaded algorithms, and data-driven methodologies, geoscientists can automatically detect and interpret virtually all discontinuities in seismic data in an efficient manner.
The workflow involves random and coherent noise suppression, generation of fault-likelihood attributes to enhance discontinuities, fault detection, and fault extraction from a thinned fault-likelihood volume. Unlike other fault-tracking methods that use local seismic continuity attributes such as coherency, this automated method incorporates aspects of Hale's fault-oriented semblance algorithm, which highlights fault planes with unprecedented clarity.
This methodology has been successfully applied to complex faulted reservoirs. It contributes to the extraction of detailed discontinuity information (minor and major) from 3D seismic data. The traditional manual interpretation step that follows fault detection is time-consuming and error-prone. Automated fault interpretation improves fault-tracking accuracy and consistency and significantly reduces fault-interpretation time in prospect generation. This workflow will streamline the seismic fault interpretation process and reduce its associated uncertainty.
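The fault-likelihood idea behind the workflow can be sketched in a few lines. This is a simplified illustration, not Hale's full fault-oriented scan over strike and dip: it computes classic semblance over a local window of traces and maps it to a likelihood that is near zero for coherent data and near one where traces disagree (as across a fault). The power of 8 follows Hale's definition; the function names are ours.

```python
import numpy as np

def semblance(window):
    """Classic semblance of a (n_traces, n_samples) window: 1.0 for
    identical traces, near 1/n_traces for incoherent traces."""
    num = np.sum(np.sum(window, axis=0) ** 2)
    den = window.shape[0] * np.sum(window ** 2)
    return num / den if den > 0 else 1.0

def fault_likelihood(window, power=8):
    """Hale-style fault likelihood: 1 - semblance**power, so small for
    coherent reflections and close to 1 at discontinuities."""
    return 1.0 - semblance(window) ** power

coherent = np.ones((5, 10))          # identical traces, no fault
faulted = np.ones((5, 10))
faulted[1::2] *= -1.0                # polarity flips across traces
```

In the full method this likelihood is scanned over candidate fault orientations, thinned to one-sample-thick ridges, and the ridges are grown into extractable fault surfaces.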
This short course will discuss various Industrial Internet of Things (IIoT) technologies and their application to oilfield automation. The purpose of IIoT is to integrate sensing, communications, and distributed analytics capabilities to help the petroleum industry better manage existing assets at a lower total cost of ownership (TCO). The goal is to improve asset visibility and reliability, optimize operations, and create new value. The emergence of IIoT in oilfield operations requires new skill sets, and this course is intended to bring oilfield professionals up to date. The course is aimed at oilfield professionals who would like to expand their knowledge and skills in oilfield automation using IIoT.
This is a three-part tutorial on a workflow for evaluating unconventional resources, including organic mudstones and tight siltstones. Part 1 reviews the unique challenges and provides an overview of the proposed workflow. Part 2 describes in more detail the many components of the workflow and how they come together to determine the storage capacity of the reservoir. Finally, Part 3 links the petrophysical results to production potential in terms of fractional flow and water cut.
One of the most important functions that the petrophysicist provides is the estimation of accurate storage properties. In the oil and gas industry, storage defines the opportunity, and flow pays the bills. Estimation of storage is more than just estimation of porosity and water saturation. It begins with accurate assessment of rock composition which begets accurate porosity and subsequently water saturation. However, storage estimation need not end there. With an understanding of fluid type and properties, and with the application of appropriate equations of state that describe the variation of formation volume factor, bubblepoint or dewpoint pressure, oil viscosity and density as a function of temperature, pressure, GOR, API gravity and gas gravity, very accurate assessments of oil in place (OIP), gas in place (GIP), and water in place (WIP) are possible in profile. These profiles are then integrated into cumulative storage volumes by bench.
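The storage quantities named above follow from the standard volumetric equations. A minimal sketch with the usual oilfield unit constants (7,758 bbl per acre-ft; 43,560 ft³ per acre-ft); the paper's workflow evaluates these in profile per depth increment rather than with single bulk values, and the function names are ours:

```python
def oil_in_place_stb(area_acres, h_ft, phi, sw, bo):
    """Volumetric OIP in stock-tank barrels.

    area_acres : drainage area, h_ft : net pay thickness (ft),
    phi : porosity (frac), sw : water saturation (frac),
    bo : oil formation volume factor (res bbl / STB).
    """
    return 7758.0 * area_acres * h_ft * phi * (1.0 - sw) / bo

def gas_in_place_scf(area_acres, h_ft, phi, sw, bg):
    """Volumetric GIP in standard cubic feet; bg in res ft^3 / scf."""
    return 43560.0 * area_acres * h_ft * phi * (1.0 - sw) / bg
```

The formation volume factors Bo and Bg are where the equations of state enter: they vary with pressure, temperature, GOR, API gravity, and gas gravity, so an accurate fluid description propagates directly into the in-place volumes.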
Shear slowness is commonly computed from well-log dipole flexural mode data using slowness-time-coherence (STC) processing. Flexural dispersion is handled by restricting the signal's frequency content to the low frequencies that travel close to the formation's shear velocity, or by altering the phase relationships within the waveforms prior to STC processing in accordance with observed dispersion characteristics. Restricting the frequency range may not eliminate the need for a residual dispersion correction, however, and in noisy environments the dispersion curves needed for modifying phase relationships may be of poor quality. Formation and borehole properties have a significant influence on observed frequency content, and selecting the bandwidth that optimally balances noise against the size of the residual dispersion correction adds to overall processing time. Inversion addresses these difficulties by computing shear slowness directly from observed dispersion characteristics, but the process needs to be fast and tolerant of noise to be effective in commercial applications. To make the inversion efficient, the iterative steps that compare observed and forward-modeled dispersion curves are replaced with a fast neural net trained on a large number of pre-modeled curves generated with known formation and borehole properties. Automated mode-frequency detection constrains the bandwidth over which dispersion curves are matched, accounting for potentially high levels of noise seen, for example, in horizontal wells. Results from 127,000 modeled and field data points show improved accuracy and precision relative to STC processing.
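The core idea of replacing iterative forward modeling with a pre-trained surrogate can be sketched as follows. The paper uses a neural net; a nearest-neighbor lookup over the same kind of pre-modeled table conveys the principle in a few lines. The toy dispersion model, table granularity, and names here are ours, standing in for a real flexural-mode forward model parameterized by formation and borehole properties.

```python
import numpy as np

def build_table(slownesses, freqs):
    """Pre-model one dispersion curve per candidate shear slowness.

    The curve formula is a stand-in: slowness rising smoothly with
    frequency, as flexural slowness does above the formation value.
    """
    return np.array([s * (1.0 + 0.3 * freqs / (freqs + 2.0))
                     for s in slownesses])

def invert_slowness(observed_curve, table, slownesses):
    """Answer by table match (L2 distance) instead of iterating a
    forward model: the expensive modeling is all done up front."""
    d = np.sum((table - observed_curve) ** 2, axis=1)
    return slownesses[np.argmin(d)]

freqs = np.linspace(1.0, 8.0, 20)            # kHz, illustrative band
slows = np.linspace(100.0, 400.0, 301)        # us/ft candidates
table = build_table(slows, freqs)
answer = invert_slowness(table[150], table, slows)  # curve for 250 us/ft
```

A neural net trained on the table generalizes between grid points and degrades more gracefully under noise than a raw lookup, but the cost structure is the same: modeling happens once, offline, and each depth frame is then answered in microseconds.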
Azevedo, Leonardo (Cerena/Decivil, Instituto Superior Técnico) | Demyanov, Vasily (Institute of Petroleum Engineering, Heriot-Watt University) | Lopes, Diogo (Cerena/Decivil, Instituto Superior Técnico) | Soares, Amílcar (Cerena/Decivil, Instituto Superior Técnico) | Guerreiro, Luis (Partex Oil & Gas)
Geostatistical seismic inversion uses stochastic sequential simulation and co-simulation as the perturbation techniques to generate and perturb elastic models. These inversion methods retrieve high-resolution inverse models and assess the spatial uncertainty of the inverted properties. However, they assume a set of a priori parameters, often considered known and certain, that is exactly reproduced in the final inverted models. This is the case for the top and base of the main seismic units, to which regional variogram models and histograms are assigned. Nevertheless, the amount of existing well-log data (i.e., direct measurements) of the property to be inverted is often not enough to model variograms, and its histograms are biased toward the more sand-prone facies. This work presents a consistent stochastic framework that quantifies the uncertainties in these parameters, which are associated with large-scale geological features. We couple stochastic adaptive sampling (i.e., particle swarm optimization) with global stochastic inversion to infer three-dimensional acoustic impedance from existing seismic reflection data. Key uncertain geological parameters are first identified, and reliable a priori distributions inferred from geological knowledge are assigned to each parameter. The type and shape of each distribution reflect the level of knowledge about that parameter. Then, particle swarm optimization is integrated into an iterative geostatistical seismic inversion methodology, and these parameters are optimized along with the spatial distribution of acoustic impedance. At the end of the iterative procedure, we retrieve the best-fit inverse model of acoustic impedance along with the most probable locations of the top and base of each seismic unit and the most likely histogram and variogram model per zone.
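The particle-swarm component can be sketched in isolation. This is a minimal one-parameter PSO, not the authors' coupled implementation: it perturbs a single uncertain parameter (here, the depth of a seismic-unit boundary) inside its prior range to minimize a misfit; in the real workflow the misfit would be the seismic mismatch of a full geostatistical realization. All names, inertia/acceleration constants, and the toy misfit are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(misfit, lo, hi, n_particles=20, n_iter=60,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for one scalar parameter,
    constrained to the prior range [lo, hi]."""
    x = rng.uniform(lo, hi, n_particles)          # particle positions
    v = np.zeros(n_particles)                     # particle velocities
    pbest = x.copy()                              # per-particle best
    pbest_f = np.array([misfit(xi) for xi in x])
    g = pbest[np.argmin(pbest_f)]                 # swarm best
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                # respect the prior range
        f = np.array([misfit(xi) for xi in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g

# toy misfit with the "true" unit top at 1250 m, prior range 1000-1500 m
best = pso_minimize(lambda z: (z - 1250.0) ** 2, 1000.0, 1500.0)
```

In the coupled methodology the same swarm update is run jointly over several such parameters (unit boundaries, histogram, variogram model) while the impedance field itself is perturbed by stochastic sequential simulation.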
Cui, Dong (Research Institute of Petroleum Exploration & Development, PetroChina) | Hu, Ying (Research Institute of Petroleum Exploration & Development, PetroChina) | Zhang, Yan (Research Institute of Petroleum Exploration & Development, PetroChina) | Zhang, Cai (Research Institute of Petroleum Exploration & Development, PetroChina) | Zhang, Yujie (Institute of Geology and Geophysics, China Academy of Sciences)
The near-surface velocity model is the key issue for seismic imaging and static correction. It has become a broad consensus that it is the velocity model, not the imaging algorithm, that determines the quality of the image. Waveform tomography, or FWI, is a powerful tool for obtaining the subsurface velocity. However, the application of waveform tomography faces some obstacles, one of the biggest being the strongly non-linear waveform misfit function. Correlation-based first-arrival traveltime tomography can obtain a highly accurate velocity model in structurally complex areas using a cross-correlation misfit function, and it does not even require picking first-arrival times or estimating the wavelet. Although this method appears to provide less model resolution than waveform tomography, it works well in places where traditional ray-based tomography may fail because high-velocity layers are exposed at the surface. Theory and a numerical example indicate that this method can accurately perform near-surface velocity modeling and has broad application prospects.
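The cross-correlation misfit idea can be sketched as follows. This is a minimal illustration, not the paper's method: it measures the traveltime residual between an observed and a synthetic first arrival as the lag that maximizes their cross-correlation, which is why no explicit first-break picking or wavelet estimate is needed. The names and the Gaussian test pulse are ours.

```python
import numpy as np

def cc_time_shift(obs, syn, dt):
    """Traveltime residual (s) between observed and synthetic traces as
    the lag maximizing their cross-correlation. Positive means the
    observed arrival is later than the synthetic one."""
    n = len(obs)
    cc = np.correlate(obs, syn, mode="full")   # lags -(n-1) .. (n-1)
    lag = np.argmax(cc) - (n - 1)
    return lag * dt

# toy first arrivals: the observed pulse arrives 40 ms after the synthetic
dt = 0.004
t = np.arange(256) * dt
syn = np.exp(-((t - 0.30) / 0.02) ** 2)
obs = np.exp(-((t - 0.34) / 0.02) ** 2)
shift = cc_time_shift(obs, syn, dt)
```

In the tomography, these correlation-derived residuals replace picked first-break times as the data being fit, which makes the misfit far less sensitive to noise and to errors in the assumed wavelet than a waveform misfit.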
Presentation Date: Wednesday, October 17, 2018
Start Time: 1:50:00 PM
Location: Poster Station 22
Presentation Type: Poster