Detomo, Rocco (Shell Nigeria Exploration and Production Company) | Quadt, Edwin (Shell Nigeria Exploration and Production Company) | Pirmez, Carlos (Shell Nigeria Exploration and Production Company) | Mbah, Reginald (Shell Nigeria Exploration and Production Company) | Olotu, Samuel (Shell Nigeria Exploration and Production Company)
Quantitative integration of spatial and temporal information provided by time-lapse (4D) seismic surveys into dynamic reservoir models calls for an efficient and effective workflow. To address this need, we propose a novel workflow that uses a Bayesian/MCMC approach and experimental-design-based proxies for selected 4D seismic observables to update dynamic reservoir models. The methodology comprises four steps: (1) create probability maps to select locations where 4D seismic data are assimilated; (2) run a sensitivity analysis; (3) create high-order proxy models; and (4) run an MCMC inversion to determine a set of models that best fit the 4D seismic data and quantify uncertainty. The workflow has been applied in three cases: two synthetic models and one field case. The first synthetic example is the Imperial College Fault Model (ICFM). The second synthetic model is a fluvial reservoir model with 10 uncertain parameters. The field example is a deepwater turbidite reservoir undergoing a waterflood, with a reasonably long production history and high-quality 4D seismic data. Following the four steps of this workflow, all the models are successfully history matched by conditioning to the 4D seismic data. Uncertainty quantification is also provided as part of the MCMC inversion. We also compare scenarios that use production data and/or 4D seismic data in the model-updating process to demonstrate the value of the 4D seismic data. For the field case, the updated models can be used for production forecasting, reserves booking, and identification of further development opportunities.
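For illustration, the MCMC step (step 4) can be reduced to a minimal random-walk Metropolis sampler over a single uncertain parameter. The polynomial `proxy_misfit` below is a hypothetical stand-in for an experimental-design proxy of a 4D seismic observable, not the model used in the paper:

```python
import math
import random

def proxy_misfit(x):
    # Hypothetical one-parameter polynomial proxy, standing in for an
    # experimental-design response surface of a 4D seismic observable.
    return (x - 1.5) ** 2 + 0.1 * (x - 1.5) ** 4

def metropolis_hastings(n_steps, x0=0.0, step=0.5, seed=42):
    """Random-walk Metropolis sampler of the posterior p(x) ~ exp(-misfit(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        # Accept with probability min(1, exp(-(J_new - J_old))).
        if math.log(rng.random() + 1e-300) < proxy_misfit(x) - proxy_misfit(x_new):
            x = x_new
        samples.append(x)
    return samples

samples = metropolis_hastings(5000)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The retained chain approximates the posterior probability distribution of the parameter, which is exactly what enables the uncertainty quantification mentioned in the abstract.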
Jin, Long (Shell International E&P) | van den Hoek, Paul J. (Shell International E&P) | Alpak, Faruk O. (Shell Nigeria Exploration and Production Company) | Pirmez, Carlos (Shell Nigeria Exploration and Production Company) | Fehintola, Tope (Shell Nigeria Exploration and Production Company) | Tendo, Fidelis (Shell Nigeria Exploration and Production Company) | Olaniyan, Elozino E. (Shell Nigeria Exploration and Production Company)
Quantitative integration of spatial and temporal information provided by time-lapse (4D) seismic surveys into dynamic reservoir models calls for efficient and effective data-integration algorithms. We carry out a comprehensive comparison of stochastic optimization methods using both a synthetic and a field case.
Our first case is a challenging synthetic test problem known as the Imperial College Fault Model (ICFM). The methods of very-fast simulated annealing (VFSA), particle-swarm optimization (PSO), and the neighborhood algorithm (NA) are compared in terms of convergence characteristics, data-match quality, and posterior model-parameter distributions. On the basis of the knowledge developed from the ICFM problem, we isolate VFSA and PSO and evaluate their performance further on a field case involving an offshore west African deepwater turbidite reservoir undergoing waterflooding. The field case has a reasonably long production history and good-quality 3D- and 4D-seismic data, allowing the construction of a geologically consistent model by means of dynamic calibration. As such, it constitutes a relevant field test for joint seismic/production history matching. We assess the data-match characteristics and the quality of dynamic forecasts delivered by VFSA and PSO in the field case.
Practical guidelines are developed over the course of these studies for selecting a "fit-for-purpose" optimal method for joint history-matching workflows. Our results show that PSO, a population-based method, incurs relatively more computational expense per iteration but exhibits good convergence characteristics and provides multiple history-matched models. PSO proved more effective than the NA and VFSA methods in the ICFM problem, and it was also quite effective in the field application. The VFSA method, on the other hand, requires comparatively more iterations to converge because of its sequential nature, but it has advantageous features when only moderate computing resources are available.
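As a sketch of the population-based character of PSO discussed above, a minimal one-parameter particle-swarm optimizer looks like the following. This uses the textbook velocity-update rule with illustrative coefficients `w`, `c1`, `c2`; it is not the implementation used in the study:

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iters=60, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle-swarm optimizer (illustrative parameters only)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # each particle's personal best position
    pval = [f(x) for x in xs]
    gbest = min(zip(pval, pbest))[1]    # swarm-wide best position
    for _ in range(n_iters):
        for i in range(n_particles):
            # Inertia + cognitive pull (pbest) + social pull (gbest).
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            fx = f(xs[i])
            if fx < pval[i]:
                pval[i], pbest[i] = fx, xs[i]
                if fx < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso_minimize(lambda x: (x - 2.0) ** 2, bounds=(-10.0, 10.0))
```

Because every particle carries its own personal-best model, the final swarm naturally yields multiple candidate history-matched models, one of PSO's advantages noted above.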
Jin, Long (Shell) | Gao, Guohua (Shell) | Vink, Jeroen C. (Shell Intl. E&P Co.) | Chen, Chaohui (Shell International EP) | Weber, Daniel (Shell Intl. E&P Co.) | Alpak, Faruk Omer (Shell Intl. E&P Co.) | van den Hoek, Paul (Shell) | Pirmez, Carlos (Shell Intl. E&P Co.)
Quantitative integration of 4D seismic data with production data into reservoir models is a challenging task. One important issue is how to properly quantify the uncertainty, i.e., the posterior probability distribution (PPD). Very Fast Simulated Annealing (VFSA) is a stochastic search method, whereas Simultaneous Perturbation and Multivariate Interpolation (SPMI) is a model-based local search method. The stochastic nature of VFSA makes it feasible to identify multiple peaks of the PPD, but it converges very slowly. The model-based SPMI method, on the other hand, effectively exploits the smooth features of an objective function and can therefore converge to a local optimum very quickly. More importantly, the Hessian of the objective function, or the covariance matrix of the PPD, can be estimated by SPMI with satisfactory accuracy. However, it is very difficult to identify multiple optima by applying the SPMI method alone. In this paper, we propose an efficient joint-inversion workflow that appropriately integrates the two derivative-free optimization (DFO) methods. Their complementary features further improve both the applicability and the efficiency of the workflow. We tested the workflow with a 3D synthetic model and a real field case. Our results show that the integrated method is efficient and delivers good results for jointly assimilating 4D seismic and production data.
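The sequential, temperature-driven character of VFSA can be sketched in one dimension. The move generator follows Ingber's VFSA scheme, but the cooling constants, bounds, and test objective here are purely illustrative, not the paper's implementation:

```python
import math
import random

def vfsa_step(x, lo, hi, T, rng):
    # Temperature-dependent, Cauchy-like move (Ingber's generating function):
    # at high T a step can span the whole interval; at low T it shrinks.
    u = rng.random()
    y = math.copysign(T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0), u - 0.5)
    return min(max(x + y * (hi - lo), lo), hi)

def vfsa_minimize(f, lo, hi, T0=1.0, c=2.0, n_iters=2000, seed=7):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best, fbest = x, f(x)
    for k in range(n_iters):
        T = T0 * math.exp(-c * math.sqrt(k))   # fast (VFSA-style) cooling
        x_new = vfsa_step(x, lo, hi, T, rng)
        df = f(x_new) - f(x)
        # Metropolis acceptance at the current temperature.
        if df < 0 or rng.random() < math.exp(-df / max(T, 1e-12)):
            x = x_new
        if f(x) < fbest:
            best, fbest = x, f(x)
    return best

best = vfsa_minimize(lambda x: (x - 0.3) ** 2, lo=-5.0, hi=5.0)
```

Because each move is generated from the single current model, the chain is inherently sequential, which is why VFSA converges slowly relative to batched, model-based methods such as SPMI.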
Chen, Chaohui (Shell International EP) | Jin, Long (Shell) | Gao, Guohua (Shell) | Weber, Daniel (Shell Intl. E&P Co.) | Vink, Jeroen C. (Shell Intl. E&P Co.) | Hohl, Detlef (Shell Intl. E&P BV) | Alpak, Faruk Omer (Shell Intl. E&P Co.) | Pirmez, Carlos (Shell Exploration and Production Company)
Gradient-based optimization algorithms can be very efficient in history-matching problems. However, because many commercial reservoir simulators do not have a built-in adjoint formulation, exploring the capability and applicability of derivative-free optimization (DFO) algorithms is crucial. DFO algorithms treat the simulator as a black box and generate new search points using objective-function values only. They usually require more function evaluations, but this obstacle can be overcome by exploiting parallel computing.
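The parallel evaluation that offsets those extra function evaluations can be sketched as a batched black-box driver; `misfit` here is a toy placeholder for a full reservoir-simulation run:

```python
from concurrent.futures import ThreadPoolExecutor

def misfit(x):
    # Toy stand-in for a black-box reservoir-simulation run: the DFO driver
    # sees only the returned objective value, never any gradients.
    return (x - 3.0) ** 2

def evaluate_population(points, max_workers=4):
    """Evaluate a batch of trial points concurrently, as a parallel DFO driver would."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(misfit, points))

trial_points = [0.0, 1.0, 2.0, 3.0, 4.0]
values = evaluate_population(trial_points)
best_value, best_point = min(zip(values, trial_points))
```

In practice each `misfit` call would launch a simulator job on a cluster node, so a population of trial models costs roughly one simulation of wall-clock time rather than many.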
This paper tests three DFO algorithms: Very Fast Simulated Annealing (VFSA), Simultaneous Perturbation and Multivariate Interpolation (SPMI), and the Quadratic Interpolation Model-based (QIM) algorithm. Both SPMI and QIM are model-based methods: the objective function is approximated by a quadratic model that interpolates points evaluated in previous iterations, and new search points are obtained by minimizing the quadratic model within a trust region. VFSA is a stochastic search method. These algorithms were tested on two synthetic cases (the IC fault model and the Brugge model) and one deepwater field case. Principal component analysis is applied in the Brugge case to reduce the reservoir model vector to fewer than 40 parameters.
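The model-based step shared by SPMI and QIM can be illustrated in one dimension: fit a quadratic through previously evaluated points, then minimize it inside a trust region. The sketch below is schematic of the idea only, not either method's actual implementation:

```python
def fit_quadratic(pts):
    """Fit a + b*x + c*x**2 exactly through three (x, f) samples (Newton form)."""
    (x0, f0), (x1, f1), (x2, f2) = pts
    c = ((f2 - f0) / (x2 - x0) - (f1 - f0) / (x1 - x0)) / (x2 - x1)
    b = (f1 - f0) / (x1 - x0) - c * (x0 + x1)
    a = f0 - b * x0 - c * x0 ** 2
    return a, b, c

def trust_region_step(center, radius, a, b, c):
    """Minimize the quadratic model within [center - radius, center + radius]."""
    lo, hi = center - radius, center + radius
    if c > 0:
        x_star = -b / (2.0 * c)            # interior stationary point
        return min(max(x_star, lo), hi)
    # Non-convex model: the minimum lies on the trust-region boundary.
    return lo if a + b * lo + c * lo ** 2 <= a + b * hi + c * hi ** 2 else hi

f = lambda x: (x - 1.0) ** 2 + 0.05 * (x - 1.0) ** 3   # toy "true" objective
pts = [(x, f(x)) for x in (-1.0, 0.0, 2.0)]            # previously evaluated points
a, b, c = fit_quadratic(pts)
x_next = trust_region_step(center=0.0, radius=3.0, a=a, b=b, c=c)
```

Each iteration then evaluates the true objective at `x_next`, adds the point to the interpolation set, and adjusts the trust-region radius depending on how well the model predicted the actual decrease.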
We obtained good matches with all three derivative-free methods. In terms of the number of iterations needed to converge and the final converged value of the objective function, SPMI outperforms the others. Because SPMI generates a large number of perturbation and search points simultaneously in each iteration, it requires more computing resources. QIM does not generate as many interpolation points as SPMI and converges more slowly in terms of wall-clock time. VFSA is a sequential method and usually requires hundreds of iterations to converge.