Taha, Taha (Emerson Automation Solutions) | Ward, Paul (Emerson Automation Solutions) | Peacock, Gavin (Emerson Automation Solutions) | Heritage, John (Emerson Automation Solutions) | Bordas, Rafel (Emerson Automation Solutions) | Aslam, Usman (Emerson Automation Solutions) | Walsh, Steve (Emerson Automation Solutions) | Hammersley, Richard (Emerson Automation Solutions) | Gringarten, Emmanuel (Emerson Automation Solutions)
This paper presents a case study in 4D seismic history matching using an automated, ensemble-based workflow that tightly integrates the static and dynamic domains. Subsurface uncertainties, captured at every stage of the interpretative and modelling process, are used as inputs within a repeatable workflow. By adjusting these inputs, an ensemble of models is created, and their likelihoods are constrained by observations within an iterative loop. The result is multiple realizations of calibrated models that are consistent with the underlying geology, the observed production data, and the seismic signature of the reservoir and its fluids. The ensemble is effectively a digital twin of the reservoir with improved predictive ability that provides a realistic assessment of the uncertainty associated with production forecasts.
The example used in this study is a synthetic 3D model mimicking a real North Sea field. Data assimilation is conducted using an Ensemble Smoother with Multiple Data Assimilation (ES-MDA). This paper places significant focus on seismic data, with the corresponding result vector generated via a petro-elastic model. 4D seismic proves to be a key additional source of measurement data: its unique volumetric distribution helps create a coherent predictive model, allowing recovery of the underlying geological features and a more accurate assessment of the uncertainty in predicted production than was possible by matching production data alone.
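The ES-MDA analysis step named above can be sketched in a few lines. This is a generic, minimal implementation of the standard ES-MDA update (perturbed observations with inflated noise, Kalman-type gain from ensemble cross-covariances), not the authors' proprietary workflow; array names and the diagonal-covariance assumption are illustrative.

```python
import numpy as np

def es_mda_update(M, D, d_obs, C_d, alpha, rng):
    """One ES-MDA assimilation step (generic sketch).
    M     : (Nm, Ne) ensemble of model parameters (e.g. porosity, permeability)
    D     : (Nd, Ne) simulated data per member (production + 4D seismic attributes)
    d_obs : (Nd,) observed data
    C_d   : (Nd,) diagonal of the measurement-error covariance
    alpha : inflation factor for this iteration; the sum of 1/alpha_i
            over all assimilation iterations must equal 1.
    """
    Ne = M.shape[1]
    # Perturb the observations with noise inflated by alpha
    D_obs = d_obs[:, None] + rng.normal(size=(len(d_obs), Ne)) * np.sqrt(alpha * C_d)[:, None]
    # Ensemble anomalies
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (Ne - 1)   # parameter-data cross-covariance
    C_dd = dD @ dD.T / (Ne - 1)   # data covariance
    # Kalman-type gain with inflated measurement error
    K = C_md @ np.linalg.inv(C_dd + alpha * np.diag(C_d))
    return M + K @ (D_obs - D)
```

Looping this update over the ensemble, with the forward simulator regenerating `D` between iterations, gives the iterative calibration loop described above.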
A significant advantage of this approach is the ability to simultaneously utilize multiple types of measurement data, including production, RFT, PLT, and 4D seismic. Newly acquired observations can be rapidly accommodated, which is often critical, as the value of most interventions is reduced by delay.
The petro-elastic model (PEM) represents an integral component in the closed-loop calibration of integrated four-dimensional (4D) solutions incorporating time-lapse seismic, elastic and petrophysical rock property modeling, and reservoir simulation. Calibration of the reservoir simulation model is needed so that it is consistent not only with production history but also with the contemporaneous subsurface description as characterized by time-lapse seismic. The PEM requires dry rock properties in its description, which are typically derived from mechanical rock tests. In the absence of those mechanical tests, a small-data challenge is posed, whereby not all necessary data are available yet the value of reconciling seismic attributes with simulated production remains. A seismic inversion-constrained n-dimensional metaheuristic optimization technique is employed directly on three-dimensional (3D) geocellular arrays to determine elastic and density properties for the PEM embedded in the commercial reservoir simulator.
Ill-posed dry elastic and density property models are considered in a field case where a seismic inversion, and a petrophysical property model constrained by that inversion, exist. An n-dimensional design optimization technique is implemented to determine the optimal solution of a multidimensional pseudo-objective function composed of multidimensional design variables. This study investigates a modified particle swarm optimization (PSO) method combined with an exterior penalty function (EPF) under varied constraints. The proposed technique uses n-dimensional design optimization to solve the pseudo-objective function, built from the PSO and EPF, given the limited availability of constraints. Heavily penalized and reduced-order penalized metaheuristic optimization processes, in which the design variables and optimal solution are derived from 3D arrays, are examined so that constraint applicability can be quantified. While the process is examined specifically for the PEM, it can be applied to other data-limited modeling techniques.
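The PSO-with-EPF combination described above can be illustrated with a minimal sketch. A pseudo-objective is formed by adding a quadratic exterior penalty for each violated constraint to the true objective, and a standard PSO minimizes it. All names, coefficients (inertia, penalty weight), and the box-bound formulation are illustrative assumptions, not the paper's specific implementation.

```python
import numpy as np

def pso_epf(objective, constraints, bounds, r_penalty=100.0,
            n_particles=30, n_iters=200, seed=0):
    """Particle swarm optimization with an exterior penalty function (sketch).
    Minimizes objective(x) + r * sum(max(0, g(x))**2) over box bounds,
    where `constraints` is a list of g(x) <= 0 functions.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)

    def pseudo(xi):
        # Exterior penalty: only violated constraints contribute
        pen = sum(max(0.0, g(xi)) ** 2 for g in constraints)
        return objective(xi) + r_penalty * pen

    pbest = x.copy()
    pbest_f = np.array([pseudo(xi) for xi in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive + social velocity update
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([pseudo(xi) for xi in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

In the paper's setting, the design variables would be the dry rock bulk modulus, shear modulus, and density arrays, with constraints derived from the seismic inversion.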
Integrating time-lapse seismic data into the dynamic reservoir model is an efficient means of calibrating reservoir parameter updates. The choice of metric used to measure the misfit between observed data and the simulated model has a considerable effect on the history matching process, and hence on the optimal ensemble model acquired. History matching using 4D seismic and production data simultaneously remains a challenge due to the nature of the two different types of data (time series versus map- or volume-based).
Conventionally, the misfit is formulated as a least-squares function, which is widely used for production data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proven reliable. This study explores the history matching process by introducing a merged objective function combining the production and 4D seismic data. The proposed approach is to make these two types of data (well and seismic) comparable within a single objective function, which is then optimised, thereby avoiding the question of weights. An adaptive evolutionary optimisation algorithm has been used for the history matching loop. Local and global reservoir parameters are perturbed in this process, including porosity, permeability, net-to-gross, and fault transmissibility.
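The weight-free merging idea can be sketched as follows: normalize each misfit term so both data types become dimensionless and directly comparable before summing. The specific normalizations below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def merged_misfit(prod_sim, prod_obs, seis_sim, seis_obs):
    """Merged production + 4D-seismic objective (illustrative sketch).
    Each term is normalized to be dimensionless, making well and
    seismic data comparable in a single objective without hand-tuned
    weights. The normalization choices here are assumptions.
    """
    # Time-series term: relative least-squares production misfit
    prod_term = np.sum((prod_sim - prod_obs) ** 2) / np.sum(prod_obs ** 2)
    # Map-based term: mean absolute difference between 4D attribute maps,
    # normalized by the dynamic range of the observed map
    rng_obs = seis_obs.max() - seis_obs.min()
    seis_term = np.mean(np.abs(seis_sim - seis_obs)) / rng_obs
    return prod_term + seis_term
```

A perfect match of both data types gives zero; any mismatch in either term increases the merged objective, which the evolutionary optimiser then minimizes.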
This combined production and seismic history matching has been applied to a UKCS field; it shows that an acceptable production data match is achieved while honouring saturation information obtained from 4D seismic surveys.
Time-lapse seismic monitoring is a powerful technique for reservoir management and the optimization of hydrocarbon recovery. In time-lapse seismic datasets, the difference in seismic properties across different vintages enables the detection of spatio-temporal changes in saturation properties and structure induced by production. The main objectives are (1) to identify bypassed pay zones in time-lapse seismic data for the deepwater Amberjack field, located in the Gulf of Mexico, (2) to confirm the identified bypassed pay zones against the results of reservoir simulation, and (3) to recommend well planning strategies to exploit these bypassed resources.
A high-fidelity seismic-to-simulation 4D workflow that incorporates seismic, petrophysics, petrophysical property modeling, and reservoir simulation was employed, which leveraged cross-discipline interaction, interpretation, and integration to extend asset management capabilities. The workflow addresses geology (well log interpretation and framework development), geophysics (seismic interpretation, velocity modeling, and seismic inversion), and petrophysical property modeling (earth models and co-located co-simulation of petrophysical properties with P-impedance from seismic inversion). An embedded petro-elastic model (PEM) in the reservoir simulator is then used to combine spatial dry rock properties with saturation properties to compute dynamic elastic properties, which can be related to multi-vintage P-impedance from time-lapse seismic inversion. In the absence of the requisite dry rock properties for the PEM, a small-data engine is used to determine these absent properties using metaheuristic optimization techniques. Specifically, two particle swarm optimization (PSO) applications, including an exterior penalty function (EPF), are modified, resulting in the development of nested and average methods, respectively. These methods simultaneously calculate the missing rock parameters (dry rock bulk modulus, shear modulus, and density) necessary for dynamic, embedded P-impedance calculation in the history-constrained reservoir simulation results. Afterward, a graphic-enabled method was devised to determine the threshold that discriminates non-reservoir (including bypassed pay) from reservoir in the P-impedance difference. Its results are compared to unsupervised learning (k-means clustering and hierarchical clustering). From the seismic data, one can identify bypassed pay locations, which are confirmed by reservoir simulation after conducting a seismic-driven history match. Finally, infill wells are planned, and then modeled in the reservoir simulator.
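The k-means comparison mentioned above reduces, in its simplest form, to a two-class 1-D clustering of the P-impedance difference, with the threshold taken midway between the two centroids. This is a generic sketch of that idea, not the paper's specific implementation; names are illustrative.

```python
import numpy as np

def impedance_threshold_kmeans(dip, n_iter=50, seed=0):
    """Two-class 1-D k-means on a P-impedance difference volume (sketch):
    splits cells into two populations (e.g. reservoir vs. non-reservoir,
    including bypassed pay) and returns the midpoint between the two
    centroids as a discrimination threshold.
    """
    rng = np.random.default_rng(seed)
    x = dip.ravel()
    # Initialize centroids from two distinct data samples
    c = rng.choice(x, size=2, replace=False)
    for _ in range(n_iter):
        # Assign each cell to the nearest centroid, then update centroids
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return 0.5 * (c.min() + c.max())
```

The graphic-enabled method and hierarchical clustering described in the paper would each yield their own threshold for comparison against this one.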
Geophysical reservoir monitoring (GRM) systems, such as 4D seismic, are increasingly used in the oil and gas industry because they provide unique and useful information on fluid movement within the reservoir. This information is relevant for many reservoir management decisions, including new well placement, well intervention, and reservoir model updating.
Unfortunately, it has been difficult to estimate the value created by any data acquisition scheme because a multidisciplinary approach is required to model the value that future measurements will have for future decisions. This assessment requires a common decision-making simulation framework that can integrate input from geo-modelers, geophysicists, and reservoir engineers.
This work presents an example of how a Closed-Loop Reservoir Management (CLRM) simplification can be used as a framework for simulating NPV changes due to the assimilation of production and saturation data in a simple toy model. It combines state-of-the-art data assimilation and uncertainty modeling methods with a robust-optimization genetic algorithm to calculate NPV improvements due to the model update and their relationship with the NPV obtained from the synthetic reservoir.
In this context a simple synthetic model is presented. It recreates a greenfield segment under strong aquifer influence with two discovery wells. The reservoir development requires the selection of four well locations at fixed drilling times. The development strategy is selected using a genetic algorithm within the CLRM framework. Subsequently, two cases are presented: one in which only production is assimilated after the first two wells have been drilled, just before deciding the locations of the last two wells; and a second in which production and saturation are assimilated at the same time. The assimilated saturation map is assumed to be the output of a 4D seismic acquisition. The model update imposes the need to optimally relocate the last two wells, which results in an NPV change.
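The genetic-algorithm well-placement step can be sketched with a toy example: each individual encodes candidate well cells, fitness is an NPV proxy, and selection, crossover, and mutation evolve the population. Everything here (encoding, operators, the summed-map fitness) is an illustrative assumption, not the study's actual optimizer.

```python
import numpy as np

def ga_well_placement(npv_map, n_wells=2, pop=40, gens=60, seed=0):
    """Toy genetic algorithm for well placement (sketch). An individual
    is a tuple of grid-cell indices; fitness is the summed value of an
    NPV-proxy map at the chosen (distinct) cells.
    """
    rng = np.random.default_rng(seed)
    n_cells = npv_map.size
    flat = npv_map.ravel()

    def fitness(ind):
        # Duplicate well locations are invalid
        return flat[list(ind)].sum() if len(set(ind)) == n_wells else -np.inf

    P = [tuple(rng.choice(n_cells, n_wells, replace=False)) for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 2]  # selection
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.choice(len(elite), 2, replace=False)
            cut = int(rng.integers(1, n_wells)) if n_wells > 1 else 0
            child = list(elite[a][:cut] + elite[b][cut:])          # crossover
            if rng.random() < 0.2:                                 # mutation
                child[int(rng.integers(n_wells))] = int(rng.integers(n_cells))
            children.append(tuple(child))
        P = elite + children
    best = max(P, key=fitness)
    return best, fitness(best)
```

In the study's full CLRM loop, the fitness call would instead run the reservoir simulator and discount the resulting cash flows.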
The results show how the obtained NPV is increased by the relocation of the last two wells in both cases. A bigger increment is obtained when both production and saturation are assimilated. In addition, the ensemble improves its forecast capability the most when saturation assimilation is included. Nevertheless, the ensemble's expected NPV decreases after assimilation relative to the value obtained from the first development strategy optimization; this indicates an optimistic early NPV valuation due to the spread of the initial ensemble distributions.
The study presents an asset simulation framework that could be used to evaluate data acquisition investments through the systematic modeling of reservoir uncertainties within a decision-oriented focus. Extensions could include additional uncertain model parameters, the insertion of water injectors and well conversions, the assimilation of saturations at different intervals, and changes in the quality of the assimilated saturation maps, in addition to sensitivity studies of other economic constraints.
Wheeler, Mary F. (The University of Texas at Austin, USA) | Srinivasan, Sanjay (Pennsylvania State University, USA) | Lee, Sanghyun (Florida State University, USA) | Singh, Manik (Pennsylvania State University, USA)
Optimal design of hydraulic fractures is controlled by the distribution of natural fractures in the reservoir. Due to sparse information, there is uncertainty associated with the prediction of the natural fracture system. Our objective here is to: i) Quantify uncertainty associated with prediction of natural fractures using micro-seismic data and a Bayesian model selection approach, and ii) Use fracture probability maps to implement a finite element phase-field approach for modeling interactions of propagating fractures with natural fractures.
The proposed approach employs state-of-the-art numerical modeling of natural and hydraulic fractures using a diffusive adaptive finite element phase-field approach. The diffusive phase field is defined using the probability map describing the uncertainty in the spatial distribution of natural fractures. That probability map is computed using a model selection procedure that utilizes a suite of prior models for the natural fracture network and a fast proxy to quickly evaluate the forward seismic response corresponding to slip events along fractures. Employing indicator functions, diffusive fracture networks are generated utilizing an accurate computational adaptive mesh scheme based on a posteriori error estimators.
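For context, phase-field fracture formulations of this kind typically minimize a regularized (Ambrosio-Tortorelli-type) energy functional; a standard form, which may differ in detail from the authors' exact formulation, is

```latex
E_\varepsilon(u,\varphi) \;=\; \int_\Omega \big((1-\kappa)\varphi^2 + \kappa\big)\,\psi\big(e(u)\big)\,dx
\;+\; G_c \int_\Omega \left( \frac{(1-\varphi)^2}{4\varepsilon} + \varepsilon\,|\nabla\varphi|^2 \right) dx
```

where $u$ is the displacement, $\varphi$ the phase field ($\varphi = 1$ intact, $\varphi = 0$ fully fractured), $\psi(e(u))$ the elastic energy density, $G_c$ the critical energy release rate, $\varepsilon$ the regularization length controlling the diffusive crack width, and $\kappa$ a small regularization parameter. In the proposed approach, the fracture probability map informs the initial diffusive phase field representing the natural fracture network.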
The coupled algorithm was validated against existing benchmark problems, which include prototype computations with fracture propagation and reservoir flows in a highly heterogeneous reservoir with natural fractures. Implementation of an algorithm for computing the fracture probability map, based on synthetic micro-seismic data mimicking a Fort Worth basin data set, reveals consistency between the interpreted fracture sets and those observed in the reference. Convergence of iterative solvers and the numerical efficiency of the methods were tested against different examples, including field-scale problems. Results reveal that interpreting the uncertainty pertaining to the presence of fractures, and utilizing that uncertainty within the phase-field approach to simulate the interactions between induced and natural fractures, yields complex structures that include fracture branching, fracture hooking, etc.
The novelty of this work lies in the efficient integration of phase-field fracture propagation models with diffusive natural fracture networks and a stochastic representation of the uncertainty associated with the prediction of natural fractures in a reservoir. The presented method enables practicing engineers to design hydraulic fracturing treatments accounting for the uncertainty associated with the location and spatial variation of natural fractures. Together with an efficient parallel implementation, our approach enables cost-efficient optimization of production processes in the field.
We have developed two simple deterministic methods to extract the parameters of viscoelastic models from seismic data. One is for the Zener model using phase velocity dispersion observations, and the other is for the single fractional Zener model (Cole-Cole model) using attenuation-versus-frequency observations. The observations here represent either the arbitrary frequency-dependent dispersion behaviour from actual measurements or from some physical dissipation mechanism(s). In this contribution, it is also proved that, similar to the Zener model, the attenuation factor curve for the Cole-Cole model is symmetric, on a logarithmic frequency axis, about the frequency corresponding to the peak attenuation value; the peak frequency itself equals the inverse square root of the product of the two (stress and strain) relaxation times. The Cole-Cole model has a broad dispersion response over an appreciable frequency range, but is not very suitable for replicating complicated seismic attenuation dispersion curves which exhibit multiple peaks. In this case, we use the General Zener (GZ) model comprising multiple Zener elements and the General Fractional Zener (GFZ) model comprising multiple Cole-Cole elements to approximate the attenuation observations. Their parameters, including relaxation times and fractional derivative orders, are determined using a simulated annealing inversion method. Instead of searching for the relaxation times directly, we search for the Zener peak attenuation points (attenuation value and corresponding frequency), each of which corresponds to a pair of relaxation times. There are distinct advantages to such an approach.
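The peak-frequency and log-axis symmetry properties can be checked numerically for the classical Zener element (the Cole-Cole model is its fractional generalization, for which the abstract proves the analogous result). The attenuation of a Zener solid with strain and stress relaxation times $\tau_\epsilon > \tau_\sigma$ is the standard expression sketched below.

```python
import numpy as np

def zener_inv_q(omega, tau_eps, tau_sig):
    """Attenuation factor 1/Q for the Zener (standard linear solid) model:
    1/Q(w) = w * (tau_eps - tau_sig) / (1 + w**2 * tau_eps * tau_sig).
    Peaks at w0 = 1 / sqrt(tau_eps * tau_sig) and is symmetric about w0
    on a logarithmic frequency axis.
    """
    return omega * (tau_eps - tau_sig) / (1.0 + omega**2 * tau_eps * tau_sig)
```

Substituting $\omega = \omega_0 r$ and $\omega = \omega_0 / r$ with $\omega_0^2 \tau_\epsilon \tau_\sigma = 1$ gives the same value $\omega_0 (\tau_\epsilon - \tau_\sigma)\, r / (1 + r^2)$ in both cases, confirming the symmetry; this is why searching for the peak point (value and frequency) recovers the relaxation-time pair.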
Wang, Wendong (China University of Petroleum) | Zhang, Kaijie (China University of Petroleum) | Su, Yuliang (China University of Petroleum) | Tang, Meirong (PetroChina Oil & Gas Technology Research Institute of Changqing Oil field) | Zhang, Qi (China University of Geosciences) | Sheng, Guanglong (China University of Petroleum)
In the development of shale oil and gas reservoirs, hydraulic fracture treatments may induce complex network configurations, which are very challenging to characterize. The existing fracture-property interpretation methods mostly rely on simplifying assumptions and are typically empirical in nature. The aim of this work is therefore to introduce an integrated framework involving fractal theory, inverse analysis of micro-seismic events (MSE), and rate-transient analysis to map the heterogeneity and distribution of fracture properties. In this work, a general framework is proposed to characterize both the geometric configuration and the flow properties of the complex fracture network (CFN). The CFN characterization framework naturally divides into two stages: characterizing the fracture geometry from microseismic data, and characterizing the dynamic fracture properties from production data. In the fracture configuration stage, a stochastic fractal fracture model based on an L-system fractal geometry is applied to describe the CFN geometry. Moreover, the genetic algorithm (GA), as a mixed integer programming (MIP) algorithm, is applied to find the most probable fracture configuration based on the microseismic data. In the flow-properties stage, we introduce the embedded discrete fracture model (EDFM) for computational efficiency, and a Bayesian framework is used to quantify the dynamic fracture properties, e.g., conductivity, porosity, and a pressure-dependent multiplier, by assimilating the production data. In addition, rate-transient analysis is applied to calibrate the total fracture length and estimate the effective stimulated-reservoir volume (ESRV). To validate this framework, a synthetic numerical case is developed. The result indicates that our integrated framework is able to characterize both CFN configuration and properties by assimilating microseismic and production data sequentially.
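The L-system stage named above can be illustrated generically: a rewriting rule is applied repeatedly to a seed string, and a turtle interpretation converts the result into branching fracture segments. The specific rule and branching angle below are illustrative, not the paper's calibrated model.

```python
import math

def l_system(axiom, rules, n):
    """Expand an L-system string n times (fractal fracture-geometry sketch)."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s

def interpret(s, step=1.0, angle=30.0):
    """Turtle interpretation of an L-system string: 'F' draws a fracture
    segment, '+'/'-' turn, '[' / ']' push/pop state to create branches.
    Returns segments as ((x0, y0), (x1, y1)) pairs.
    """
    x, y, heading = 0.0, 0.0, 90.0   # start at origin, pointing 'up'
    stack, segments = [], []
    for c in s:
        if c == "F":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif c == "+":
            heading += angle
        elif c == "-":
            heading -= angle
        elif c == "[":
            stack.append((x, y, heading))
        elif c == "]":
            x, y, heading = stack.pop()
    return segments
```

In the proposed framework, the GA would search over rule parameters (angles, lengths, branching probabilities) so that the generated segments best honour the microseismic event cloud.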
The proposed workflow shows that the characterized CFN model yields reasonable probabilistic predictions of unconventional production rates.
Greenhalgh, Stewart (King Fahd University of Petroleum and Minerals) | Al-Lehyani, Ayman (King Fahd University of Petroleum and Minerals) | Schmelzbach, Cedric (ETH Zürich) | Sollberger, David (ETH Zürich)
The bearing and elevation (azimuth and inclination) of a seismic event can be estimated directly from measurements at a single triaxial station. There are instances in which the angular resolution secured by triaxial polarization analysis is better than that obtained by beamforming with an extended scalar array. In these situations, one depends totally on understanding the inter-relationships between the triaxial records that make up a seismic wavetrain. There are many approaches to seismic direction finding (SDF). Monte-Carlo techniques of triaxial seismic direction finding seek to maximise signal power by examining the seismic wavefield in many rotated co-ordinate frames. There are variants on this approach, which entail null seeking in an inverse space. Instead of searching all possible directions for the one which best fits the polarization model of a single arrival, it is possible to carry out an eigen-decomposition of the (complex or real) covariance matrix formed from the three-component data. The eigenvector corresponding to the principal eigenvalue yields the polarization direction automatically, with significant savings in computational effort.
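The eigen-decomposition approach described above can be sketched compactly: form the 3x3 covariance matrix of a three-component window, take the eigenvector of the largest eigenvalue as the polarization direction, and use an eigenvalue-ratio rectilinearity as a confidence measure. The specific rectilinearity formula below is one common choice, assumed here for illustration.

```python
import numpy as np

def polarization_direction(X):
    """Estimate the polarization of a rectilinear arrival from a
    three-component window via eigen-decomposition of the real
    covariance matrix (sketch).
    X : (3, n) array of Z, N, E samples in the analysis window.
    Returns the unit principal eigenvector (sign-ambiguous) and a
    rectilinearity measure 1 - (l2 + l3) / (2 * l1) in [0, 1].
    """
    C = np.cov(X)                 # 3x3 covariance matrix of the window
    w, V = np.linalg.eigh(C)      # eigenvalues in ascending order
    v = V[:, -1]                  # eigenvector of the largest eigenvalue
    l1, l2, l3 = w[::-1]
    rectilinearity = 1.0 - (l2 + l3) / (2.0 * l1)
    return v, rectilinearity
```

No grid search over rotated coordinate frames is required: the principal eigenvector gives the direction directly, and the eigenvalue ratios supply the event-detection confidence discussed below.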
Numerical experiments undertaken for different levels of random noise superimposed on a pure mode signal show that there are no significant advantages in using the Monte-Carlo techniques over eigendecomposition. Confidence measures of event detection may be obtained by examining eigenvalue ratios when using the eigendecomposition method. A time-domain formulation (covariance or coherency matrix) is preferable to a frequency-domain formulation (cross-spectral matrix) when there are multiple transient events present. The analysis window should be as long as possible (at least half the dominant period of the signal) without causing separate events to interfere.
In practice, the direction-of-arrival estimates deteriorate with increasing levels of random noise, and are generally unacceptable for an SNR of less than 1. Special care is needed to avoid direction errors associated with systematic noise, such as sensor gain misalignment between channels, coupling variations between receiver components, velocity inhomogeneity and anisotropy, the free-surface effect, and multiple event interference.