Al Ramadhan, Abdullah (EXPEC Advanced Research Center, Saudi Aramco) | Hemyari, Emad (EXPEC Advanced Research Center, Saudi Aramco) | Bakulin, Andrey (EXPEC Advanced Research Center, Saudi Aramco) | Erickson, Kevin (EXPEC Advanced Research Center, Saudi Aramco) | Smith, Robert (EXPEC Advanced Research Center, Saudi Aramco) | Jervis, Michael A. (EXPEC Advanced Research Center, Saudi Aramco)
In 2015, Saudi Aramco started a CO2 Water-Alternating-Gas (WAG) EOR pilot project in an onshore carbonate reservoir. To monitor lateral expansion of the CO2 plume, the area was instrumented with a hybrid surface/downhole permanent seismic monitoring system. This system consists of over 1000 buried seismic sensors at a depth of around 70 m, below the expected depth of the weathering layer, to mitigate time-lapse noise. Despite receiver burial, the seismic data still suffers from numerous challenges, including: significant amounts of high-amplitude coherent noise such as guided waves, mode conversions, and scattered energy; amplitude variations over space and time caused by source and receiver coupling; variability of wavelet shape and arrival times due to seasonal near-surface variations; and low signal-to-noise ratio (SNR). A novel processing workflow was designed for 4D processing of such data. The workflow involves five critical processes. First, the high-amplitude coherent noise is eliminated using FK-based techniques that are 4D compliant to preserve the reservoir changes between repeated seismic surveys. Second, a four-term joint surface-consistent amplitude-scaling algorithm resolves the amplitude variations. The algorithm allows both source and receiver terms to have different scalars for the same positions, but it restricts the other two terms to be invariant over different time-lapse surveys, as the window of analysis does not include the reservoir. This guarantees that the source and receiver terms are survey-dependent while the other two terms are survey-independent. Thus, the amplitude variability is linked to source and receiver positions over space and time. It also assures that the reservoir changes are not affected by changes in the overburden. Third, wavelet shape variations are addressed using a four-term joint surface-consistent spiking deconvolution algorithm that applies the same principle as the scaling algorithm.
Fourth, the small variations in reflection times between different surveys (4D statics) caused by seasonal variations are corrected by a specialized surface-consistent residual statics algorithm using a common pilot derived from the base survey. Fifth, the pre-stack data is supergrouped to enhance the signal-to-noise ratio and repeatability.
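The surface-consistent principle behind the scaling and deconvolution steps can be sketched as a least-squares decomposition of log-amplitudes into source, receiver, offset-class, and CDP terms. The sketch below is a toy illustration only, assuming a made-up geometry mapping and noise-free amplitudes; it is not the paper's four-term joint algorithm.

```python
import numpy as np

# Toy surface-consistent decomposition: each trace amplitude is modeled as a
# product of source, receiver, offset-class, and CDP factors. Taking logs turns
# this into a sparse linear system solved in a least-squares sense.
rng = np.random.default_rng(0)
ns, nr, no, nc = 4, 5, 3, 6          # counts of sources, receivers, offset classes, CDPs
true = [rng.normal(0, 0.2, n) for n in (ns, nr, no, nc)]

rows, log_amp = [], []
for i in range(ns):
    for j in range(nr):
        k, m = (i + j) % no, (i * nr + j) % nc   # invented geometry mapping
        rows.append((i, j, k, m))
        log_amp.append(true[0][i] + true[1][j] + true[2][k] + true[3][m])

# Design matrix: one unit entry per term group for each trace.
A = np.zeros((len(rows), ns + nr + no + nc))
for r, (i, j, k, m) in enumerate(rows):
    A[r, i] = 1.0
    A[r, ns + j] = 1.0
    A[r, ns + nr + k] = 1.0
    A[r, ns + nr + no + m] = 1.0

sol, *_ = np.linalg.lstsq(A, np.array(log_amp), rcond=None)
# The decomposition is unique only up to constant shifts between term groups,
# so verify predicted log-amplitudes rather than the raw terms.
print(np.allclose(A @ sol, log_amp))   # -> True
```

In a 4D setting, the survey-dependence described above would be imposed by giving the source and receiver columns separate unknowns per survey while sharing the offset and CDP columns across surveys.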
The processing workflow has been applied to frequently repeated 3D land seismic data acquired over a CO2 WAG EOR pilot project in Saudi Arabia. As a result, we obtained highly repeatable seismic images capable of detecting small CO2-related changes in a stiff carbonate reservoir.
In 2015, Saudi Aramco commenced its first carbon capture and sequestration project, with carbon dioxide (CO2) injected into a small area of a carbonate reservoir located in the Eastern Province of Saudi Arabia. To monitor the expansion of the CO2 plume, continuous 4D seismic data is being recorded at an average rate of one survey per month. The interpretability of the data requires: (1) a high degree of repeatability, which has been achieved through dedicated acquisition and processing; (2) sufficient sensitivity of the seismic data to injected CO2; (3) accurate characterization of reservoir heterogeneity; and (4) a fit-for-purpose workflow to interpret time-lapse seismic images. This paper focuses on the last three points.
First, a rock physics model (RPM) is calibrated from available well data (well logs, fluid analysis), showing that CO2 injection causes a drop of 6% in both acoustic impedance and Poisson's ratio at the well-log scale, leading to moderate seismic data sensitivity and, therefore, an a priori challenging interpretation. Second, the RPM allows estimating the changes in elastic parameters corresponding to the CO2 saturation variations predicted at different calendar times by the history-matched reservoir model. Synthetic seismic data is modeled, to which we add realistic seismic noise directly derived from the seismic monitoring data. A multivariate statistical model between 4D amplitude maps and CO2 concentration is calibrated. Finally, this model is used to produce maps of CO2 concentration as predicted by the seismic data. Uncertainty is propagated throughout this approach, which allows us to use statistical methods to highlight regions where the 4D signal is strong enough to produce a detectable response.
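A minimal sketch of this kind of statistical calibration follows, assuming a simple linear relation between 4D amplitude change and CO2 concentration plus Gaussian seismic noise. The coefficient, noise level, and linear form are illustrative assumptions; the actual multivariate model is not specified here.

```python
import numpy as np

rng = np.random.default_rng(3)
co2 = rng.uniform(0, 1, 200)                    # known CO2 concentrations (model cells)
amp = -0.06 * co2 + rng.normal(0, 0.005, 200)   # synthetic 4D amplitude change + noise

# Calibrate a linear model amp = a * co2 + b by least squares.
A = np.column_stack([co2, np.ones_like(co2)])
(a, b), *_ = np.linalg.lstsq(A, amp, rcond=None)

# Invert the calibrated model to map observed amplitudes to predicted CO2.
predicted = (amp - b) / a
rmse = np.sqrt(np.mean((predicted - co2) ** 2))
print(abs(a + 0.06) < 0.01, rmse < 0.2)   # -> True True
```

The residual scatter in `predicted` gives a direct handle on the uncertainty propagation described above: where the noise-induced spread exceeds the predicted change, no detectable response should be claimed.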
Besides providing an integrated interpretation of 4D seismic, the proposed approach also quantifies the impact of seismic data uncertainty, providing a more realistic, and therefore, more useful interpretation.
We present results from a first-of-its-kind permanent seismic monitoring system in a desert environment. This new system consists of sensors buried at 70 m depth and surface vibroseis sources. We describe the processing challenges associated with this single-sensor and single-source dataset and present initial solutions that allowed us to obtain robust 3D reservoir images. The system has achieved remarkable repeatability, with a mean NRMS of 4-5% across the high-fold area between closely spaced repeat surveys.
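The NRMS metric quoted above can be sketched in a few lines; the formula (normalized RMS difference, expressed in percent) is the standard 4D repeatability measure, while the traces below are synthetic stand-ins.

```python
import numpy as np

def nrms(base, monitor):
    """NRMS repeatability (percent): 200 * RMS(m - b) / (RMS(m) + RMS(b)).

    Ranges from 0 (identical traces) to 200 (anti-correlated traces).
    """
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(monitor - base) / (rms(monitor) + rms(base))

t = np.linspace(0, 1, 500)
base = np.sin(2 * np.pi * 30 * t)
monitor = 1.05 * base                    # a uniform 5% amplitude change
print(nrms(base, base))                  # -> 0.0
print(round(nrms(base, monitor), 1))     # -> 4.9
```

Note that even a modest uniform amplitude change maps to an NRMS of the same order as the 4-5% quoted, which is why land 4D demands such tight amplitude control.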
Presentation Date: Wednesday, October 19, 2016
Start Time: 8:50:00 AM
Presentation Type: ORAL
We perform synthetic tests to evaluate how source and receiver amplitude variations can affect image quality and repeatability of the virtual source gather. We use different workflows in order to either remove the effect of these variations after redatuming or correct them before redatuming. In particular, we consider a multi-dimensional deconvolution-convolution redatuming approach and use it to improve imaging and repeatability in the presence of source amplitude variations. In addition, we demonstrate how surface-consistent scaling can balance the amplitudes of both sources and receivers. We demonstrate that the surface-consistent processing using a deeper reflection time window produces the best result. However, a shallow time window that includes early arrivals can still be used when reflection amplitudes are not reliable.
Presentation Date: Thursday, October 20, 2016
Start Time: 8:55:00 AM
Presentation Type: ORAL
We apply supergrouping to land data after moveout corrections and show that it can address noise issues caused by the extreme near-surface scattering present in a desert environment. We demonstrate how supergrouping enhances pre-stack single-sensor data as well as data previously group-formed in the field. Improved statics, velocity estimation, and deconvolution operators obtained with supergrouping all lead to better images. Imaging of supergrouped data also shows a clear advantage, at least in simple structural settings.
Presentation Date: Tuesday, October 18, 2016
Start Time: 8:00:00 AM
Presentation Type: ORAL
Virtual source redatuming is an effective way of improving the repeatability of onshore seismic data acquired with buried receivers, which can suffer from near-surface variations and acquisition geometry changes. However, redatuming is less effective in correcting for amplitude variations of the downgoing wavefield caused by variable source signatures, coupling, or other factors. We present an improved redatuming workflow that retains the benefits of the virtual source approach and corrects for additional non-repeatability of the downgoing wavefield between surveys. The method involves construction of the virtual source gather for each survey, deconvolution with the corresponding point-spread function (PSF), and convolution with a reference PSF. Here we employ a reference PSF computed for a homogeneous replacement near surface. This reduces imaging artifacts and provides additional control over the dominant frequency of the output data. We demonstrate a significant repeatability improvement using a field 4D multi-survey onshore dataset from Saudi Arabia.
Time-lapse seismic reservoir monitoring on land is a challenging task. The repeatability of seismic data can suffer from various factors such as near-surface changes, variability of source/receiver geometry and coupling, and surface topography variations over time. Recently, an experiment was reported involving 11 repeated land seismic surveys in a desert environment over an onshore reservoir (Bakulin et al., 2012). The data were acquired using shallow buried receivers and surface sources. This acquisition design has great potential to improve repeatability as well as enable virtual source redatuming (Bakulin and Calvert, 2006; Bakulin et al., 2007) in order to address source positioning errors, source coupling changes, and diurnal/seasonal temperature variations. A case study in a realistic synthetic model (Alexandrov et al., 2012) confirmed that virtual source redatuming could reduce non-repeatability caused by the factors listed above. In particular, source-coupling variations, modeled as random phase perturbations of the source wavelet, were completely removed. All these improvements are expected only if the amplitude spectra of the source signatures remain unchanged between 4D surveys. Field data have clearly shown that this assumption is not met in the desert environments of Saudi Arabia over time periods of months to years (Bakulin et al., 2014). We present an improved redatuming technique based on multi-dimensional deconvolution (MDD) that can correct for variable source amplitude spectra between surveys or, more generally, for the variable downgoing wavefield illuminating the reservoir in each 4D survey. The method involves constructing virtual source gathers for all surveys, deconvolving the gathers with the corresponding PSF from the same survey, and re-convolving with a common reference PSF computed assuming a homogeneous replacement layer.
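The deconvolve-then-reconvolve step can be sketched in one dimension in the frequency domain. Ricker wavelets stand in for the survey and reference PSFs, and the stabilization epsilon and all parameters are illustrative assumptions, not the actual multi-dimensional operators.

```python
import numpy as np

def ricker(f_peak, dt, n):
    """Zero-phase Ricker wavelet, centered in an n-sample window."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt, n = 0.002, 256
reflectivity = np.zeros(n)
reflectivity[[60, 120, 180]] = [1.0, -0.5, 0.7]

psf_survey = ricker(35.0, dt, n)   # survey-dependent downgoing signature
psf_ref = ricker(25.0, dt, n)      # common reference PSF (replacement medium)

F_s, F_r = np.fft.fft(psf_survey), np.fft.fft(psf_ref)
trace = np.real(np.fft.ifft(np.fft.fft(reflectivity) * F_s))

# Deconvolve by the survey PSF (stabilized division), re-convolve with the
# reference PSF to equalize the illumination across surveys.
eps = 1e-3 * np.max(np.abs(F_s)) ** 2
redatumed = np.real(np.fft.ifft(
    np.fft.fft(trace) * np.conj(F_s) / (np.abs(F_s) ** 2 + eps) * F_r))

# The output should match the reflectivity seen through the reference PSF.
target = np.real(np.fft.ifft(np.fft.fft(reflectivity) * F_r))
print(np.corrcoef(redatumed, target)[0, 1] > 0.99)   # -> True
```

Repeating this with a different `psf_survey` per vintage but the same `psf_ref` is what makes the output wavelet survey-independent, which is the point of the common reference PSF.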
Golikov, Pavel (EXPEC Advanced Research Center) | Dmitriev, Maxim (EXPEC Advanced Research Center) | Bakulin, Andrey (EXPEC Advanced Research Center) | Neklyudov, Dmitry (Institute of Petroleum Geology and Geophysics SB RAS) | Lakeman, Rienk (Saudi Aramco)
3D land seismic data acquired in arid environments is often challenging for processing and interpretation due to a low signal-to-noise ratio and the presence of various types of noise. Traditionally, large source and receiver arrays have been utilized for noise suppression and signal enhancement. A trend in modern seismic data acquisition is to reduce the size of the source and receiver arrays, aiming to record broadband signals for imaging and inversion purposes. For many processing steps and for velocity model building, however, achieving good pre-stack signal-to-noise ratio may be more important. We propose a simple but effective supergrouping technique that significantly enhances pre-stack data quality. We demonstrate our approach on two 3D onshore datasets from Saudi Arabia.
Land seismic data from a desert environment generally has poor signal-to-noise ratio (SNR) (Robertson and Al-Husseini, 1982). Modern seismic acquisition is trending toward recording a higher number of channels with smaller arrays of sources and receivers. In Saudi Arabia, this means acquiring huge data volumes of significantly lower pre-stack data quality. Naturally, every processing step that relies on pre-stack information becomes more challenging when applied to raw data with low SNR. In the past, large source and receiver arrays were utilized to improve SNR. While their popularity in acquisition has declined, processing approaches that compensate for the decreased data quality are lagging behind. Conventional group forming may help up to a certain limit but often requires overly fine spatial sampling. In this study, we propose a method of enhancing the quality of conventional 3D land pre-stack data using a supergrouping technique that combines elements of grouping and stacking. With increased emphasis on low frequencies and the proliferation of hierarchical techniques (applied from progressively low to high frequencies) from statics to velocity model building, we expect that adaptive supergrouping may fill the gap for different frequency bands. Supergrouping builds on the foundation of group forming but goes beyond it to deal with large source/receiver intervals, using simple assumptions and smart summation techniques that prove to work well for field data of differing complexity. In this paper, we briefly describe the methodology and illustrate it with examples using two 3D field datasets.
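The core idea, summing each moveout-corrected gather with its spatial neighbors so that coherent signal adds while noise averages down, can be sketched in one dimension. The aperture, array shapes, and noise levels are assumptions for illustration only.

```python
import numpy as np

def supergroup(gathers, half_width=1):
    """Toy supergrouping: average each trace with its neighbors inside a
    (2*half_width + 1)-shot aperture.

    gathers: array of shape (n_shots, n_samples); traces are assumed to be
    already moveout-corrected so the signal aligns across neighbors.
    """
    n = len(gathers)
    out = np.empty(gathers.shape, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out[i] = gathers[lo:hi].mean(axis=0)   # coherent signal preserved, noise averaged down
    return out

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = signal + rng.normal(0, 1.0, (9, 200))   # 9 shots, strong random noise
sg = supergroup(noisy, half_width=4)

# The center trace averages all 9 shots: noise std drops by roughly 3x.
print(np.std(sg[4] - signal) < np.std(noisy[4] - signal))   # -> True
```

Field supergrouping additionally requires moveout corrections before summation and careful aperture choice, since too large an aperture smears dips and statics; this sketch omits both.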
For 4D acquisition with buried receivers we propose a simple and robust 4D binning scheme based on direct early arrivals. With buried receivers, the near-field downgoing energy can be recorded. Shots with poorly repeatable early arrivals are rejected to exclude gathers with the most unrepeatable reflections. The method has been applied to a field 4D dataset from Saudi Arabia with 11 repeat vintages. We confirm that both image quality and repeatability can be improved.
For marine acquisition, seismic repeatability is often tied to reproducing the geometry of the shots and/or receivers (Calvert, 2005). On land, there are other significant sources of non-repeatability (in addition to geometry) that are not present in marine environments (Jervis et al., 2012). In this study, we focus on buried receiver acquisition with surface vibroseis sources (Bakulin et al., 2012). While there are some geometry errors associated with repositioning surface vibrators, the tolerances are much smaller than in marine surveys (typically around 1-2 m). Attempts to see whether geometry-based rejection might improve repeatability were not very successful. It turns out that the benefit of data rejection was quickly outweighed by the reduction in fold, leading to deteriorating signal-to-noise ratio (SNR) and thus repeatability. Nevertheless, other factors related to variable source coupling and near-surface variations remain significant sources of non-repeatability in land data despite well-repeated shot geometry. Unlike acquisition geometry, these factors are hard to quantify with simple metrics, as they generally require assessment of the pre-stack traces, which have notoriously poor SNR in the Arabian Peninsula. With buried receiver data, we have the luxury of recording the downgoing arrivals that are used to illuminate the reservoir. The correlation between the repeatability of these early arrivals and that of deep reflection data was reported in a previous study (Bakulin et al., 2014). Here we make use of this relationship and design a rejection scheme based purely on the pre-stack direct-arrival NRMS, and demonstrate that it can improve the repeatability of the imaged reflection data.
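A hedged sketch of the rejection idea: compute NRMS between base and monitor early-arrival windows per shot, then drop shots whose NRMS exceeds a threshold. The window shapes, noise model, and the threshold value are illustrative assumptions, not the paper's actual criteria.

```python
import numpy as np

def nrms(a, b):
    """NRMS repeatability metric in percent."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

def reject_shots(base_early, mon_early, threshold=40.0):
    """Return indices of shots whose direct-arrival NRMS is below threshold.

    base_early, mon_early: (n_shots, n_samples) windows around the direct
    arrival recorded by the buried receivers.
    """
    scores = np.array([nrms(b, m) for b, m in zip(base_early, mon_early)])
    return np.flatnonzero(scores < threshold), scores

rng = np.random.default_rng(2)
wavelet = np.sin(np.linspace(0, 2 * np.pi, 50))
base = np.tile(wavelet, (5, 1))                       # 5 well-repeated shots
monitor = base + rng.normal(0, 0.02, base.shape)      # small repeat noise
monitor[3] += rng.normal(0, 0.5, 50)                  # one badly repeated shot

keep, scores = reject_shots(base, monitor)
print(keep)   # -> [0 1 2 4]  (shot 3 exceeds the threshold and is dropped)
```

In practice the threshold would be tuned against the fold trade-off noted above, since rejecting too many shots degrades SNR faster than it improves repeatability.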
Bakulin, Andrey (Geophysics Technology, EXPEC Advanced Research Center, Saudi Aramco) | Jervis, Mike (Geophysics Technology, EXPEC Advanced Research Center, Saudi Aramco) | Colombo, Daniele (Geophysics Technology, EXPEC Advanced Research Center, Saudi Aramco) | Tsingas, Constantine (Geophysics Technology, EXPEC Advanced Research Center, Saudi Aramco) | Luo, Yi (Geophysics Technology, EXPEC Advanced Research Center, Saudi Aramco)
Surface geophysics has good coverage but is limited in vertical resolution and quality, especially in areas with complex overburden. To realize high-fidelity reservoir characterization and monitoring, we foresee the need to bring geophysics closer to the reservoir, making geophysical measurements more in line with the resolution obtainable from logs. Sensors, and probably sources, need to be deployed tens or hundreds of meters below the surface. We envision this happening for targeted applications in areas from 10 to 100 km2.
Setting the stage
Reservoir engineers need geophysics to fill the gap in information between the wells to help with reservoir management and improve recovery. This translates into a need for ultra-high vertical resolution to deliver reliable reservoir properties and monitoring. The realities of surface geophysics in the Middle East and many other areas with complex overburden are rather unappealing. Surface seismic on the Arabian Peninsula can provide structural information with perhaps 50-100 ft vertical resolution. Mature areas are covered by multiple legacy surveys, each escalating in cost but, from an engineer's perspective, adding little value. Engineers build billion-cell reservoir simulation models with 25x25 m grids that are not routinely populated using seismic data. As geophysicists, we never stop trying to improve data fidelity and quality, and we have had some successes using higher channel counts and wide-azimuth acquisition, but far too often what we extract is only incrementally better than legacy data. A revolution is needed in geophysical data acquisition to achieve our goals. As we discovered in 4D trials with buried receivers (Bakulin et al., 2012), we have the beginnings of a solution. All of us are familiar with the concept of resolution and the associated trade-offs (Figure 1): to see the big picture, one needs to be high above the target; to see details, one needs to be close. We have largely concentrated on these two extremes for a very long time; we perfected our surface geophysics to image large volumes at low resolution, and we use logging to see incredible detail but only very locally. Yet developments in these two areas have not fulfilled the engineers' needs outlined earlier. We think this can be changed by bringing geophysics closer to the reservoir, i.e., by literally burying sensors and perhaps sources below near-surface complexities, in deeper boreholes and in producing boreholes.
We describe specific challenges of onshore seismic monitoring with buried sensors and surface vibrators in a desert environment. In particular, we focus on various 4D metrics applied to pre- and post-stack data. We observe clear trends suggesting that repeatability degrades with time: the best repeatability is achieved between surveys separated by days, and the worst between surveys separated by years. The similarity between the trends observed for stacked data and for early arrivals suggests that most of these changes are likely associated with variations in the very near surface.