Model validation against available history data demands that the model reflect the relevant behaviour occurring in the subsurface. Reservoir models that fail to capture essential reservoir behaviour are poor representations of reality, and calibrating them to history data usually requires excessive parameter fudging to achieve meaningful match results. To combat this issue, an efficient approach to improving reservoir understanding is presented that investigates hidden reservoir behaviour at field, region and well level.
The model screening approach uses static grid properties to explore unmatchable models for hidden reservoir behaviour. Permeability and/or porosity is defined as an input to the minimization problem, with preset uncertainty limits ranging from zero to values greater than the default maxima. The adjoint-state method is employed to condition simulation models to observation data. Unrealistic match results, which cannot be used for performance forecasting, are obtained at this stage. Logical interpretations are generated for the extreme property updates, and feedback from these interpretations is then used to enhance reservoir understanding. The PUNQ-S3 model is used to test the proposed screening methodology.
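To make the screening idea concrete, here is a minimal sketch of the kind of bound-constrained misfit minimization described above. The toy forward model and the finite-difference gradient are hypothetical stand-ins for a real reservoir simulator and its adjoint-derived gradient; the deliberately wide bounds mirror the preset uncertainty limits that let extreme updates surface.

```python
import numpy as np

# Toy stand-in for a reservoir simulator: maps a permeability field to
# "observed" well responses. A real workflow would call a simulator and
# obtain the gradient from the adjoint-state method instead.
def forward(perm):
    return np.array([perm[:5].mean(), perm[5:].mean()])

def misfit(perm, obs):
    return np.sum((forward(perm) - obs) ** 2)

def adjoint_gradient(perm, obs, eps=1e-6):
    # Finite differences stand in for the adjoint-derived gradient here.
    g = np.zeros_like(perm)
    base = misfit(perm, obs)
    for i in range(perm.size):
        p = perm.copy()
        p[i] += eps
        g[i] = (misfit(p, obs) - base) / eps
    return g

obs = np.array([250.0, 40.0])        # observation data (hypothetical)
perm = np.full(10, 100.0)            # starting model, e.g. permeability in mD
lo, hi = 0.0, 10000.0                # deliberately wide uncertainty limits
for _ in range(200):
    perm -= 0.5 * adjoint_gradient(perm, obs)
    perm = np.clip(perm, lo, hi)     # keep updates within the preset bounds
print(perm)                          # extreme updates flag hidden behaviour
```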
A variant of the adjoint-state method executed in an unrestrained manner proved capable of revealing hidden reservoir behaviour. A handful of patterns that draw attention to hidden reservoir behaviour were observed in the permeability and/or porosity distributions. In the case of a hidden fault, the permeability distribution around the affected wells shows a sharp contrast: the high-permeability region draws in more oil while the low-permeability region constrains oil production. In the case of a hidden aquifer, we observed an extreme vertical porosity and permeability update travelling down through all layers into the oil-water contact zone. Hidden channels appear as high-permeability streaks connecting the affected wells. So far, the model screening approach suffers from one limitation: an inability to differentiate between a reservoir boundary and a fault close to the boundary.
To guarantee that geological consistency is not sacrificed for the sake of history matching, local or global updates made to the starting model are constrained by the true model variogram. The most crucial finding is that the production history must account for the physical mechanisms occurring in the reservoir being represented. Conversely, noisy production data or errors in field measurements limit the ability of the adjoint-state method to provide insights into hidden reservoir behaviour.
This paper presents a history matching scheme that has been applied to production data and time-lapse seismic data. The production data objective function is calculated using the conventional least-squares method between the historical production data and simulation predictions, while the seismic objective function uses the Hamming distance between two binary images of the gas distribution (presence of gas (1) or absence of gas (0)) sequenced over the different acquisition times. The technique is applied to a UKCS (United Kingdom Continental Shelf) field that has deep-water Tertiary turbidite sands and multiple stacked reservoirs defining some degree of compartmentalisation. Thirty-five parameters are perturbed in this history match; they can be classified as volumetric parameters (net-to-gross, pore volume), transmissibility parameters (permeability, transmissibility), and end points of the relative permeability curves (critical saturation points). An initial ensemble of fluid-flow simulation models is created in which the full range of uncertain parameters is acknowledged using experimental design methods, and an evolutionary algorithm is used for optimization in the history matching process. It is found that permeability and critical gas saturation are key parameters for achieving a good history match, and that the volumetric parameters are not significant for this match in this particular reservoir. We also observe that matching only to production data marginally improves the seismic match, whilst matching only to seismic data improves the fit to production data. Combining both sets of data delivers an improvement in both the production and seismic matches, as well as an overall reduction in the uncertainties. A unique feature of this technique is the use of the Hamming distance metric for seismic history matching analysis, as this circumvents the use of the uncertain petro-elastic model. The approach is easy to implement and helps achieve an effective global history match.
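A minimal sketch of the two-part objective described above, on toy data: a least-squares production misfit plus a Hamming-distance seismic misfit over binary gas images. The function names and grids are illustrative, not the authors' implementation.

```python
import numpy as np

def production_misfit(sim_rates, obs_rates):
    """Conventional least-squares misfit between simulated and historical rates."""
    return np.sum((np.asarray(sim_rates) - np.asarray(obs_rates)) ** 2)

def seismic_misfit(sim_gas_maps, obs_gas_maps):
    """Hamming distance between binary gas images (1 = gas, 0 = no gas),
    summed over the different acquisition times."""
    total = 0
    for sim, obs in zip(sim_gas_maps, obs_gas_maps):
        total += np.count_nonzero(sim.astype(bool) ^ obs.astype(bool))
    return total

# Binary gas-presence images for two repeat surveys (toy 3x3 grids).
obs = [np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]]),
       np.array([[1, 1, 1], [0, 1, 1], [0, 0, 0]])]
sim = [np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]]),
       np.array([[1, 1, 1], [0, 1, 0], [0, 0, 0]])]
print(seismic_misfit(sim, obs))  # 2 cells disagree across the two surveys
```

Because the seismic term operates on binary images rather than amplitudes, it sidesteps the petro-elastic model entirely; a weighted sum of the two misfits can then drive the evolutionary optimizer.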
In petroleum engineering, simulation models are used for reservoir performance prediction and in the decision-making process. These models are complex systems, typically characterized by a vast number of input parameters. The physical state of the reservoir is highly uncertain and, thus, the appropriate values of those input parameters are also highly uncertain. 4D seismic data can significantly reduce reservoir uncertainty because of its high areal resolution, as opposed to observed well rates and pressures. However, two main challenges arise when calibrating a simulation model with 4D seismic data. The process can be time consuming, because most models go through a series of iterations before being considered accurate enough to adequately represent the physical system, and treating 4D seismic data as an observed quantity in the form of maps would lead to an unfeasibly large number of variables to be matched. To overcome these issues, an emulator that represents the simulation model can be constructed and the canonical correlation technique used to incorporate the 4D seismic data. The present study constructed a stochastic representation of the computer model, called an emulator, to quantify the reduction of the input parameter space. 4D seismic data was incorporated into the procedure through the canonical correlation technique: the water saturation map derived from seismic data was converted into seven canonical functions, which represent the observable characteristics to be matched in the uncertainty reduction process. A high number of evaluations was necessary to identify the range of input parameters whose outputs matched the historical data (4D seismic data). This large number of evaluations justifies the use of an emulator, and the reduction of uncertainties with areal characteristics shows that the 4D seismic data was successfully incorporated. The emulator methodology is a powerful tool for the analysis of complex physical problems such as history matching. Incorporating 4D seismic data as an observable output to be matched leads to a difficult problem, but canonical correlation permitted a successful incorporation of such data.
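The following sketch illustrates the canonical correlation step under stated assumptions: a synthetic ensemble stands in for the emulator training runs, and scikit-learn's CCA compresses flattened saturation maps into seven canonical functions, as in the study. All data, sizes, and the linear response are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_runs, n_params, n_cells = 50, 8, 200   # toy ensemble of simulator runs

X = rng.uniform(0.0, 1.0, (n_runs, n_params))   # uncertain input parameters
# Flattened water-saturation maps from each run; a synthetic linear
# response plus noise stands in for real simulator output here.
A = rng.normal(size=(n_params, n_cells))
Y = X @ A + 0.1 * rng.normal(size=(n_runs, n_cells))

cca = CCA(n_components=7)        # seven canonical functions, as in the study
cca.fit(X, Y)

# Project the observed (4D-seismic-derived) saturation map onto the same
# seven canonical functions; these scores become the match targets.
obs_map = rng.uniform(0.0, 1.0, (1, n_params)) @ A
_, obs_scores = cca.transform(np.zeros((1, n_params)), obs_map)
print(obs_scores.shape)          # (1, 7): seven values replace the whole map
```

The key point is dimensionality: matching seven canonical scores is tractable, whereas matching every map cell as a separate observation would not be.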
An approach is explored for estimating critical and maximum gas saturation using 4D seismic data from multiple surveys shot during gas exsolution and dissolution in a producing hydrocarbon reservoir. To guide this process, hydrocarbon gas properties and behaviour are studied, and their relation to the fluid-flow physics is understood using numerical simulation and seismic modelling. This understanding is then used to interpret observed seismic data, with surveys repeated every 12 to 24 months, from a turbidite field in the United Kingdom Continental Shelf (UKCS). The field reservoir simulation model is then history matched to the production data and to the gas saturation effects observed on the 4D seismic data. The 4D seismic response is a function of pressure changes, fluid (oil/water/gas) changes and noise. The effects of the gas mechanism are extracted from the seismic data based on their unique relationship to the seismic amplitudes. It is found that these changes can be represented by a binary model (presence or absence of gas), which enables the use of a logical objective function to compute the misfit between the observed and simulated data and thus guide the parameterisation of the history matching exercise. This approach circumvents full-physics modelling in a joint history matching workflow that conditions to both production data and multiple time-lapse seismic surveys. It is concluded that, for seismic surveys repeated at intervals of six months or more, the gas saturation distribution during either liberation or dissolution exists at two fixed saturations defined by the critical and the maximum gas saturation. From analysing only the 4D seismic data, we find a low critical gas saturation and a maximum gas saturation that is relatively unconstrained. The history matching exercise also gives similarly low values for the critical gas saturation, and highlights the importance of the vertical permeability in obtaining a well-corroborated model. This paper explores a direct link between 4D seismic data and the fluid-flow parameters, a link between the gas saturation distribution and the seismic response, and a quantitative analysis using multiple 4D seismic surveys for history matching.
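A small sketch of the binary gas model and the logical objective function it enables, on a toy saturation map: simulated gas saturation is thresholded at a candidate critical gas saturation, and the misfit counts disagreeing cells. Names, grids, and candidate values are illustrative.

```python
import numpy as np

def binarize(gas_sat, s_crit):
    """Binary gas model: gas is seismically 'present' above the critical saturation."""
    return gas_sat > s_crit

def logical_misfit(sim_sat, obs_binary, s_crit):
    """Count of cells where the binarized simulation disagrees with observation."""
    return np.count_nonzero(binarize(sim_sat, s_crit) ^ obs_binary)

# Toy simulated gas-saturation map and an 'observed' binary gas image.
sim_sat = np.array([[0.00, 0.03, 0.08],
                    [0.01, 0.12, 0.30],
                    [0.00, 0.02, 0.25]])
obs = np.array([[0, 0, 1],
                [0, 1, 1],
                [0, 0, 1]], dtype=bool)

# Scan candidate critical gas saturations; the best match here is the low one.
for s_crit in (0.02, 0.05, 0.10):
    print(s_crit, logical_misfit(sim_sat, obs, s_crit))  # 1, 0, 1 disagreements
```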
Reservoir surveillance using 4D seismic data has become a valuable resource for managing decisions under uncertainty. This paper highlights an integrated workflow that preserves geological consistency while calibrating a reservoir model to 4D seismic and production data. We demonstrate a successful application of this approach on our North Sea chalk reservoir undergoing waterflood, where a number of repeat seismic surveys have been acquired over time and leveraged as a quantitative source of information for describing the spatial distribution of reservoir properties and compaction. This seismic monitoring data has made it possible to better manage the waterflood by providing fluid movement insights and subsequently improving infill well placement. To capture geologic variability and ensure model predictability, geostatistical parameterization techniques using multiple-point statistics are used to represent the uncertainty in the reservoir model. Additionally, the workflow employs a rock physics model to generate a synthetic 4D seismic response from flow simulation. Inconsistencies between the predicted and observed 4D differences are used to identify the reservoir model shortcomings. The uncertain geological parameters are updated in an optimization loop through the minimization of a misfit function comprising both production and 4D seismic terms. The closed-loop workflow is managed by an in-house computer-assisted history matching tool using a stochastic optimization algorithm. The integrated approach yields improved reservoir management by encouraging collaboration between the geological, geomechanical, geophysical and reservoir engineering disciplines.
The use of 4D seismic data has proven to be a valuable resource for managing decisions under uncertainty. Reservoir history matching is generally an ill-posed problem because a number of potential parameter combinations generate similar simulated reservoir pressure and production responses. The fundamental reason is that history matching spatially distributes uncertain geological or dynamic parameters (e.g. porosity, permeability) using localized well information from production data, so the geological parameters between the wells remain largely uncertain or unknown. To reduce this inter-well uncertainty, the spatially denser 4D seismic data can be used to mitigate the non-uniqueness by reducing the number of acceptable models.
A fully integrated workflow has been developed to ensure the predictability of our model forecasts by preserving geologic and seismic consistency. This workflow dynamically couples all elements of the seismic-to-simulation workflow through a computer-assisted history matching procedure.
History matching and optimization problems often involve several, potentially conflicting, objectives. For example, we might seek to minimize a misfit function involving differences in reservoir pressure, multiphase production history and 4D time-lapse seismic data, and these differences do not always change in tandem. It is common practice to treat these differences as a single-objective optimization problem by aggregating all objectives into a scalar function (weighted sum), resulting in incomplete exploration of the solution space. The problem is particularly severe if the objectives are conflicting. In this paper we propose to use a Pareto-based multi-objective evolutionary algorithm (MOEA) focused on finding a set of optimal solutions called the Pareto optima. The MOEA makes direct use of the dominance relation for fitness assignment instead of a fitness score in a one-dimensional objective space. The dominance concept can define levels of optimality without reducing the objective dimensionality, and a given population typically sorts into several ranks (fronts) of classification for individuals. Because the method uses a population of solutions in the search process and optimizes so that the ranks are minimized, the Pareto optima can be found in a single run. We show how the MOEA identifies optimal solutions by examining the trade-off between multiple objectives over a set of plausible solutions, and demonstrate that it outperforms the commonly used weighted-sum approach. For practical applications, we provide a novel history matching workflow with Grid Connectivity-based Transformation (GCT) basis coefficients as calibration parameters, using gradient-free evolutionary optimization algorithms. The basis functions are obtained from a spectral decomposition of the grid-connectivity Laplacian and avoid ad hoc redefinitions of regions while preserving the geologic heterogeneity. We demonstrate the power and utility of the proposed workflow using multiple examples, including 2D synthetic examples for validation and a 3D field application matching production and seismic data with uncertainty and conflicting information.
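To illustrate the dominance-based ranking the MOEA relies on, here is a minimal sketch of Pareto-rank (front) assignment for a small population of candidate models, each scored on two misfit objectives. This is generic non-dominated sorting, not the specific MOEA used in the paper.

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective and
    strictly better in at least one (minimization)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_ranks(objectives):
    """Assign each solution a rank (front): rank 0 is the Pareto-optimal set,
    rank 1 is optimal once rank 0 is removed, and so on."""
    objectives = np.asarray(objectives, dtype=float)
    ranks = np.full(len(objectives), -1)
    rank, remaining = 0, set(range(len(objectives)))
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

# Misfit pairs (production misfit, seismic misfit) for five candidate models.
pop = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
print(pareto_ranks(pop))   # [0 0 0 1 2]: the first three trade off, none dominates
```

Note that the three rank-0 models are incomparable under dominance; a weighted sum would collapse them to a single "best" model and hide the trade-off.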
Time-lapse seismic data provide information on the dynamics of multiphase reservoir fluid flow in places where no production data from wells are available. This information, in principle, could be used to estimate unknown reservoir properties. However, the amount, resolution, and character of the data have long posed significant challenges for quantitative use in assisted-history-matching workflows. Previous studies, therefore, have generally investigated methods for updating single models with a reduced parameter-uncertainty space. Recent developments in ensemble-based history-matching methods have shown the feasibility of multimodel history matching of production data while maintaining a full uncertainty description. Here, we introduce a robust and flexible reparameterization of interpreted fluid fronts or seismic-attribute isolines that extends these developments to seismic history matching. The seismic data set is reparameterized in terms of arrival times at observed front positions, thereby significantly reducing the number of data points while retaining essential information. A simple 1D example is used to introduce the concepts of the approach. A synthetic 3D example, with spatial complexity typical of many waterfloods, is examined in detail. History-matching cases based on both separate and combined use of production and seismic data are examined. It is shown that consistent multimodel history matches can be obtained without the need to reduce the parameter space or to localize the impact of observations. The quality of forecasts based on the history-matched models is evaluated by simulating both expected production and saturation changes throughout the field for a fixed operating strategy. It is shown that bias and uncertainty in the production forecasts, both at existing wells and in the flooded area, are reduced considerably when both production and seismic data are incorporated. The proposed workflow, therefore, enables better decisions on field developments that require optimal placement of infill wells.
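A minimal sketch of the arrival-time reparameterization idea, under the assumption that fronts are defined by a saturation threshold: a stack of survey maps is reduced to one arrival time per cell. The threshold and toy maps are hypothetical.

```python
import numpy as np

def arrival_times(sat_maps, survey_times, threshold=0.5):
    """Reparameterize a stack of saturation maps as the first survey time at
    which each cell exceeds a front-defining threshold (NaN if never reached).
    This replaces many map values per survey with one arrival time per cell."""
    sat = np.asarray(sat_maps)                  # shape: (n_surveys, ny, nx)
    reached = sat > threshold
    first = reached.argmax(axis=0)              # index of first True (0 if none)
    times = np.asarray(survey_times, dtype=float)[first]
    times[~reached.any(axis=0)] = np.nan        # cells the front never reaches
    return times

# Three repeat surveys of a toy 1x4 waterflood front moving left to right.
maps = [np.array([[0.8, 0.2, 0.1, 0.0]]),
        np.array([[0.9, 0.7, 0.2, 0.1]]),
        np.array([[0.9, 0.8, 0.6, 0.2]])]
print(arrival_times(maps, survey_times=[1.0, 3.0, 5.0]))
# [[ 1.  3.  5. nan]]: the full image stack collapses to front-arrival times
```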
Jin, Long (Shell International E&P) | van den Hoek, Paul J. (Shell International E&P) | Alpak, Faruk O. (Shell Nigeria Exploration and Production Company) | Pirmez, Carlos (Shell Nigeria Exploration and Production Company) | Fehintola, Tope (Shell Nigeria Exploration and Production Company) | Tendo, Fidelis (Shell Nigeria Exploration and Production Company) | Olaniyan, Elozino E. (Shell Nigeria Exploration and Production Company)
Quantitative integration of spatial and temporal information provided by time-lapse (4D) seismic surveys into dynamic reservoir models calls for efficient and effective data-integration algorithms. We carry out a comprehensive comparison of stochastic optimization methods using both a synthetic and a field case.
Our first case is a challenging synthetic test problem known as the Imperial College Fault Model (ICFM). The methods of very-fast simulated annealing (VFSA), particle-swarm optimization (PSO), and the neighborhood algorithm (NA) are compared in terms of convergence characteristics, data-match quality, and posterior model-parameter distributions. On the basis of the knowledge developed from the ICFM problem, we isolate VFSA and PSO and evaluate their performance further on a field case involving an offshore west African deepwater turbidite reservoir undergoing waterflooding. The field case has a reasonably long production history and good-quality 3D- and 4D-seismic data, allowing the construction of a geologically consistent model by means of dynamic calibration. As such, it constitutes a relevant field test for joint seismic/production history matching. We assess the data-match characteristics and the quality of dynamic forecasts delivered by VFSA and PSO in the field case.
Practical guidelines are developed over the course of these studies for selecting a "fit-for-purpose" optimal method for joint history-matching workflows. Our results show that PSO, a population-based method, incurs relatively more computational expense at a given iteration but exhibits good convergence characteristics and provides multiple history-matched models. The PSO method has emerged as more effective compared with the NA and VFSA methods in the ICFM problem. It was also quite effective on the field application. On the other hand, the VFSA method requires comparatively more iterations to converge because of its sequential nature, but it has advantageous features when moderate computing resources are available.
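For readers unfamiliar with why PSO is described as population-based, the following minimal sketch shows a generic PSO update loop on a toy misfit; each iteration evaluates the whole swarm, which is the source of the higher per-iteration cost, though those evaluations can run in parallel. This is textbook PSO with assumed coefficients, not the implementation used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):
    # Toy misfit standing in for a joint seismic/production objective.
    return np.sum((x - 0.7) ** 2, axis=-1)

n_particles, n_dims = 20, 5
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
pos = rng.uniform(0, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), misfit(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n_particles, n_dims))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = misfit(pos)                           # one simulation per particle
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest.round(3))                           # converges near [0.7, ...]
```

A sequential method such as VFSA evaluates one candidate per iteration instead, which explains the trade-off reported above: fewer resources per iteration, but more iterations to converge.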
This presentation outlines an integrated workflow that incorporates 4D seismic data into the Ekofisk field reservoir model history matching process. A successful application of the workflow and its associated benefits are also presented. A seismic monitoring programme has been established at Ekofisk, with 4D seismic surveys acquired over the field in 1989, 1999, 2003, 2006 and 2008. Ekofisk 4D seismic data is becoming a quantitative tool for describing the spatial distribution of reservoir properties and compaction. The seismic monitoring data is used to optimize the Ekofisk waterflood by providing water movement insights and subsequently improving infill well placement.
Reservoir depletion and water injection at Ekofisk lead to reservoir rock compaction and fluid substitution. These changes are revealed in space and time through 4D seismic differences. Inconsistencies between predicted 4D differences (calculated from reservoir model output) and actual 4D differences are therefore used to identify reservoir model shortcomings. This process is captured using the following workflow: (1) prepare and upscale a geologic model, (2) simulate fluid flow and the associated rock physics using a reservoir model, (3) generate a synthetic 4D seismic response from the fluid and rock-physics forecasts, and (4) update the reservoir model to better match actual production/injection data and/or the 4D seismic response.
The above-mentioned Seismic History Matching (SHM) workflow employs rock-physics modeling to quantitatively constrain the reservoir model and generate a simulated 4D seismic response. Parameterization techniques are then used to constrain and update the reservoir model. The workflow updates geological parameters in an optimization loop through minimization of a misfit function. It is an automated closed-loop system, and optimization is performed with an in-house computer-assisted history matching tool based on an evolutionary algorithm.
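A schematic sketch of such a closed loop, with hypothetical stand-ins for each stage: a toy simulator, a linear rock-physics proxy for the synthetic 4D response, a weighted-sum misfit, and a simple stochastic search in place of the in-house evolutionary algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for the workflow stages; a real loop would call the
# flow simulator, the rock-physics model, and the history matching tool.
def simulate(params):            # steps 1-2: flow simulation on the model
    return {"rates": params * 2.0, "sat_change": np.outer(params, params)}

def synthetic_4d(state):         # step 3: rock physics -> 4D seismic response
    return 0.1 * state["sat_change"]

def misfit(state, obs_rates, obs_4d):
    prod = np.sum((state["rates"] - obs_rates) ** 2)
    seis = np.sum((synthetic_4d(state) - obs_4d) ** 2)
    return prod + seis           # weighted-sum misfit (unit weights here)

truth = rng.uniform(0.5, 1.5, 4)
obs_rates, obs_4d = truth * 2.0, 0.1 * np.outer(truth, truth)

# Step 4: simple stochastic search standing in for the evolutionary algorithm.
best, best_val = rng.uniform(0.5, 1.5, 4), np.inf
for _ in range(2000):
    trial = np.clip(best + rng.normal(scale=0.05, size=4), 0.5, 1.5)
    val = misfit(simulate(trial), obs_rates, obs_4d)
    if val < best_val:
        best, best_val = trial, val
print(best_val)                  # misfit shrinks as the loop closes
```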
In summary, the Ekofisk 4D SHM workflow is a multi-disciplinary process that requires collaboration between geological, geomechanical, geophysical and reservoir engineering disciplines to optimize well placement and reservoir management.
The Ekofisk Field is located in the Norwegian sector of the North Sea. It was discovered in 1969 and began production in 1971. The field is one of the largest on the Norwegian Continental Shelf, with an initial oil-in-place estimate of 7.1 billion STB. The produced volumes are extracted from two fractured chalk formations characterized by very high porosities and low matrix permeabilities. Formation productivity is enhanced by natural fracture systems that allow commercial production from the field.
The first field development phase was natural depletion production. The first pilot water injection was initiated in 1981, and large scale water injection was initiated in 1987.
The expected recovery factor has increased from an initial estimate of 17% OHIP (Original Hydrocarbon In Place) to a current estimate of more than 50% OHIP through continuous improvements in field development plans, implementation of IOR, application of new technology and investment in new and existing facilities. A significant upside is also believed to exist in further development optimization.
Future development plans at Ekofisk include an active drilling program. The program includes replacement of mechanically failed wells coupled with new infill wells to optimize recovery. Conducting a successful drilling programme in a mature chalk field is challenging. A single wellbore can experience large reservoir pressure and water saturation differences. Furthermore, compaction can alter the target interval depth, thickness, and reservoir properties as a function of time.