The examples below are motivated by a set of frequently asked questions (FAQs), which in turn highlight common errors seen in forecasting, and are summarized by learning points that demonstrate why a consistent forecast definition is a prerequisite for a lean forecasting process, applicable to resource estimation, business planning, and decision making. Using these definitions or the proposed forecasting principles is not a requirement, but it is considered best practice; the examples will show that the closer a company adheres to these definitions and principles, the leaner the overall forecasting, resource estimation, and business planning process will be. Lean in this context means "getting it right the first time" and avoiding waste and unnecessary rework. In many situations the model objectives dictate an objective function other than ultimate recovery; however, the forecaster should always plan to make a P10/P50/P90 forecast that is consistent with the resource estimates, in addition to the primary objectives of the study, whether the customer asks for it or not.

A reservoir engineer was asked to provide forecasting support for an exploration lease sale. A number of offshore blocks were on offer, and he built a Monte Carlo simulation based on seismically derived volumes, reservoir property trends, a range of well counts, development and operating costs, and infrastructure requirements to the point of sale. The objective functions were NPV and EMV for a significant number of prospects in these offshore blocks. This was exactly the information the exploration department had requested to determine the optimal bid value for these blocks.
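To make the mechanics concrete, the following is a minimal sketch of a prospect-level Monte Carlo evaluation of the kind described above. All distributions, cost figures, the discount treatment, and the chance-of-success value are illustrative assumptions for the sketch, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Illustrative prior distributions; every figure here is an assumption for
# the sketch, not a value from the study.
stoiip = rng.lognormal(mean=np.log(200e6), sigma=0.4, size=n_trials)  # bbl
recovery = rng.triangular(0.15, 0.30, 0.45, size=n_trials)            # fraction
well_count = rng.integers(4, 13, size=n_trials)                       # wells
oil_price = 60.0           # $/bbl, flat price assumption
capex_per_well = 80e6      # $ development cost per well
opex_fraction = 0.25       # opex as a fraction of gross revenue
discount_factor = 0.6      # crude lump-sum discounting to point of sale
p_success = 0.3            # assumed geological chance of success
dry_hole_cost = 30e6       # $ assumed exploration-well cost if unsuccessful

revenue = stoiip * recovery * oil_price
npv = discount_factor * revenue * (1.0 - opex_fraction) - well_count * capex_per_well

# Industry convention: P90/P50/P10 are the values exceeded with 90/50/10%
# probability, i.e. the 10th/50th/90th percentiles of the success-case NPV.
p90, p50, p10 = np.percentile(npv, [10, 50, 90])
emv = p_success * npv.mean() - (1.0 - p_success) * dry_hole_cost

print(f"P90/P50/P10 NPV: {p90 / 1e6:,.0f} / {p50 / 1e6:,.0f} / {p10 / 1e6:,.0f} $MM")
print(f"EMV: {emv / 1e6:,.0f} $MM")
```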
Integration of time-lapse seismic data into dynamic reservoir models is an efficient way to calibrate reservoir parameter updates. The choice of the metric that measures the misfit between observed data and the simulated model has a considerable effect on the history matching process, and hence on the optimal ensemble of models obtained. History matching using 4D seismic and production data simultaneously remains a challenge due to the different nature of the two data types (time series versus map- or volume-based data).
Conventionally, the misfit is formulated as a least-squares function, which is widely used for production data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proven reliable. This study explores the history matching process by introducing a merged objective function combining the production and 4D seismic data. The approach proposed in this paper makes these two types of data (well and seismic) comparable within a single objective function to be optimised, thereby avoiding the question of weights. An adaptive evolutionary optimisation algorithm is used for the history matching loop. Local and global reservoir parameters are perturbed in this process, including porosity, permeability, net-to-gross, and fault transmissibility.
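As an illustration of merging the two data types without explicit weights, the sketch below normalizes each least-squares misfit by its assumed measurement noise so that the production and seismic terms become dimensionless and directly comparable. This normalization is one simple choice made for the sketch; it is not necessarily the exact formulation used in the paper.

```python
import numpy as np

def normalized_misfit(observed, simulated, noise_std):
    """Dimensionless least-squares misfit: each residual is scaled by its
    assumed measurement noise, so well and seismic terms become comparable."""
    r = (np.asarray(observed) - np.asarray(simulated)) / noise_std
    return float(np.mean(r ** 2))

def merged_objective(prod_obs, prod_sim, prod_std, seis_obs, seis_sim, seis_std):
    # Both terms are of order one for a well-matched model, so no explicit
    # weights are needed; a sketch, not the paper's exact formulation.
    return (normalized_misfit(prod_obs, prod_sim, prod_std)
            + normalized_misfit(seis_obs, seis_sim, seis_std))

# Toy usage: a water-cut time series (fraction) and a flattened 4D amplitude map.
rng = np.random.default_rng(0)
wct_obs, wct_sim = rng.random(50), rng.random(50)
amp_obs, amp_sim = rng.random(100 * 100), rng.random(100 * 100)
print(merged_objective(wct_obs, wct_sim, 0.05, amp_obs, amp_sim, 0.1))
```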
This production and seismic history matching has been applied to a UKCS field; it shows that an acceptable production data match is achieved while honouring the saturation information obtained from 4D seismic surveys.
You, Junyu (Petroleum Recovery Research Center) | Ampomah, William (Petroleum Recovery Research Center) | Kutsienyo, Eusebius Junior (Petroleum Recovery Research Center) | Sun, Qian (Petroleum Recovery Research Center) | Balch, Robert Scott (Petroleum Recovery Research Center) | Aggrey, Wilberforce Nkrumah (KNUST) | Cather, Martha (Petroleum Recovery Research Center)
This paper presents an optimization methodology based on field-scale numerical compositional simulations of CO2 storage and production performance in the Pennsylvanian Upper Morrow sandstone reservoir in the Farnsworth Unit (FWU), Ochiltree County, Texas. This work develops an improved framework that combines hybridized machine learning algorithms for reduced-order modeling with optimization techniques to co-optimize field performance and CO2 storage.
The model's framework incorporates geological, geophysical, and engineering data. We calibrated the model with performance history data from an active CO2 flood to attain a successfully history-matched model. Uncertain parameters such as reservoir rock properties and relative permeability exponents were adjusted to incorporate potential changes in wettability in our history-matched model.
To optimize the objective function, which incorporates parameters such as oil recovery factor, CO2 storage, and net present value, a proxy model was generated with hybridized multi-layer and radial basis function (RBF) neural network methods. To obtain a reliable and robust proxy, the proxy model underwent an iterative series of training and calibration runs until it reached the specified validation criteria. Once an acceptable proxy was realized, hybrid evolutionary and machine learning optimization algorithms were utilized to attain an optimum solution for the pre-defined objective function. The uncertain and/or control variables used for the optimization study included gas-oil ratio, water-alternating-gas (WAG) cycle, production rates, and bottomhole pressures of producers and injectors. Purchased CO2 volume and recycled gas volume, in addition to the placement of new infill wells, were also considered in the modelling process.
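The train-validate-iterate loop for such a proxy might look like the following sketch, which uses scikit-learn's MLPRegressor and an RBF-kernel KernelRidge as stand-ins for the hybridized multi-layer and RBF neural networks; the synthetic data, architecture, and validation threshold are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Stand-in for simulator results: X = control variables (WAG cycle, rates,
# BHPs, ...), y = objective (e.g. NPV). All synthetic here.
X = rng.random((300, 6))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 - X[:, 2] + 0.05 * rng.standard_normal(300)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

r2, n_rounds, target_r2 = -np.inf, 0, 0.95
while r2 < target_r2 and n_rounds < 10:
    # Hybrid proxy: average an MLP with an RBF-kernel regressor (a simple
    # stand-in for the RBF neural network used in the paper).
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                       random_state=n_rounds).fit(X_tr, y_tr)
    rbf = KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0).fit(X_tr, y_tr)
    y_hat = 0.5 * (mlp.predict(X_val) + rbf.predict(X_val))
    r2 = r2_score(y_val, y_hat)
    n_rounds += 1
    # In practice new simulation runs would be added here (adaptive sampling)
    # until the validation criterion is met.

print(f"validation R^2 = {r2:.3f} after {n_rounds} round(s)")
```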
The results from the sensitivity analysis reflect the impact of the control variables on the optimal results. The predictive study suggests that it is possible to develop a robust machine learning optimization algorithm that can reliably optimize a development strategy to maximize both oil production and storage of CO2 in aqueous, gaseous, and mineral phases within the FWU.
Gao, Guohua (Shell Global Solutions (US)) | Vink, Jeroen C. (Shell Global Solutions International) | Chen, Chaohui (Shell International Exploration and Production) | Araujo, Mariela (Shell Global Solutions (US)) | Ramirez, Benjamin A. (Shell International Exploration and Production) | Jennings, James W. (Shell International Exploration and Production) | El Khamra, Yaakoub (Shell Global Solutions (US)) | Ita, Joel (Shell Global Solutions (US))
Uncertainty quantification of production forecasts is crucially important for business planning of hydrocarbon-field developments. It remains a very challenging task, especially when subsurface uncertainties must be conditioned to production data. Many different approaches have been proposed, each with strengths and weaknesses. In this work, we develop a robust uncertainty-quantification work flow by seamless integration of a distributed Gauss-Newton (DGN) optimization method with a Gaussian mixture model (GMM) and parallelized sampling algorithms. Results are compared with those obtained from other approaches.
Multiple local maximum-a-posteriori (MAP) estimates are determined with the local-search DGN optimization method. A GMM is constructed to approximate the posterior probability-density function (PDF) by reusing simulation results generated during the DGN minimization process. The traditional acceptance/rejection (AR) algorithm is parallelized and applied to improve the quality of GMM samples by rejecting unqualified samples. AR-GMM samples are independent, identically distributed samples that can be directly used for uncertainty quantification of model parameters and production forecasts.
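The acceptance/rejection step itself is standard, and a minimal sketch is given below: candidates are drawn from the GMM proposal q and accepted with probability p(x)/(M q(x)), where p is the (unnormalized) posterior. The toy bimodal posterior, the way the GMM is fitted, and the empirical envelope constant M are all assumptions for the sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

def log_posterior(x):
    """Toy unnormalized bimodal posterior (stand-in for the data-conditioned
    PDF that the paper builds from DGN simulation results)."""
    return np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2, axis=1),
                        -0.5 * np.sum((x + 2.0) ** 2, axis=1))

# GMM proposal: here simply fitted to points near the two (toy) MAP modes,
# mimicking the reuse of DGN results around local MAP estimates.
train = np.vstack([rng.normal(2.0, 1.0, size=(500, 2)),
                   rng.normal(-2.0, 1.0, size=(500, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(train)

# Acceptance/rejection: accept x ~ q with probability p(x) / (M q(x)).
X, _ = gmm.sample(1000)
log_q = gmm.score_samples(X)
log_p = log_posterior(X)
log_M = np.max(log_p - log_q)            # empirical envelope constant (assumption)
accept = np.log(rng.random(len(X))) < (log_p - log_q - log_M)
samples = X[accept]                       # i.i.d. AR-GMM samples
print(f"accepted {samples.shape[0]} of {len(X)} proposals")
```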
The proposed method is first validated with 1D nonlinear synthetic problems with multiple MAP points. The AR-GMM samples are better than the original GMM samples. The method is then tested with a synthetic history-matching problem using the SPE01 reservoir model (Odeh 1981; Islam and Sepehrnoori 2013) with eight uncertain parameters. The proposed method generates conditional samples that are better than or equivalent to those generated by other methods, such as Markov-chain Monte Carlo (MCMC) and global-search DGN combined with the randomized-maximum-likelihood (RML) approach, but have a much lower computational cost (by a factor of five to 100). Finally, it is applied to a real-field reservoir model with synthetic data, with 235 uncertain parameters. A GMM with 27 Gaussian components is constructed to approximate the actual posterior PDF. There are 105 AR-GMM samples accepted from the 1,000 original GMM samples, and they are used to quantify the uncertainty of production forecasts. The proposed method is further validated by the fact that production forecasts for all AR-GMM samples are quite consistent with the production data observed after the history-matching period.
The newly proposed approach for history matching and uncertainty quantification is quite efficient and robust. The DGN optimization method can efficiently identify multiple local MAP points in parallel. The GMM yields proposal candidates with sufficiently high acceptance ratios for the AR algorithm. Parallelization makes the AR algorithm much more efficient, which further enhances the efficiency of the integrated work flow.
A well-designed pilot is instrumental in reducing uncertainty for the full-field implementation of improved oil recovery (IOR) operations. Traditional model-based approaches for brown-field pilot analysis can be computationally expensive, as they involve probabilistic history matching first to historical field data and then to probabilistic pilot data. This paper proposes a practical approach that combines reservoir simulations and data analytics to quantify the effectiveness of brown-field pilot projects.
In our approach, an ensemble of simulations is first performed on models based on prior distributions of subsurface uncertainties, and the results for simulated historical data, simulated pilot data, and objective functions are assembled into a database. The distributions of simulated pilot data and objective functions are then conditioned to actual field data using the Data-Space Inversion (DSI) technique, which circumvents the difficulties of traditional history matching. The samples from DSI, conditioned to the observed historical data, are next processed using the Ensemble Variance Analysis (EVA) method to quantify the expected uncertainty reduction of objective functions given the pilot data, which provides a metric to objectively measure the effectiveness of the pilot and to compare the effectiveness of different pilot measurements and designs. Finally, the conditioned samples from DSI can also be used with the classification and regression tree (CART) method to construct signpost trees, which provide an intuitive interpretation of pilot data in terms of implications for objective functions.
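Under a joint-Gaussian assumption, the EVA estimate of the expected uncertainty reduction of an objective J given pilot data D reduces to Var(J) - E[Var(J | D)] = C_JD C_DD^{-1} C_DJ, computed from ensemble covariances. The sketch below implements this identity on a toy 200-member ensemble; the linear data model is invented for illustration.

```python
import numpy as np

def eva_uncertainty_reduction(J, D):
    """Ensemble Variance Analysis sketch under a joint-Gaussian assumption:
    expected variance reduction of objective J given pilot data D is
    C_JD @ inv(C_DD) @ C_DJ, estimated from ensemble covariances."""
    J = np.asarray(J, dtype=float)            # shape (n_ens,)
    D = np.asarray(D, dtype=float)
    if D.ndim == 1:
        D = D[:, None]                        # shape (n_ens, n_data)
    C = np.cov(np.column_stack([J[:, None], D]), rowvar=False)
    c_jd = C[0, 1:]                           # cross-covariance of J and D
    C_dd = C[1:, 1:]                          # pilot-data covariance
    return float(c_jd @ np.linalg.solve(C_dd, c_jd))

# Toy usage with a 200-member prior ensemble (mirroring the ~200 runs above).
rng = np.random.default_rng(3)
m = rng.standard_normal(200)                  # hidden subsurface factor
J = 2.0 * m + 0.5 * rng.standard_normal(200)  # objective, e.g. incremental oil
D = m + 0.3 * rng.standard_normal(200)        # simulated pilot measurement
dv = eva_uncertainty_reduction(J, D)
print(f"expected variance reduction: {dv:.2f} of prior Var(J) = {J.var():.2f}")
```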
We demonstrate the practical usefulness of the proposed approach through an application to a brown-field naturally fractured reservoir (NFR) to quantify the expected uncertainty reduction and Value of Information (VOI) of a waterflood pilot following more than 10 years of primary depletion. NFRs are notoriously hard to history match due to their extreme heterogeneity and difficult parameterization; the additional need for pilot analysis in this case further compounds the problem. Using the proposed approach, the effectiveness of a pilot can be evaluated, and signposts can be constructed, without explicitly history matching the simulation model. This allows objective and efficient comparison of different pilot design alternatives and intuitive interpretation of pilot outcomes. We stress that the only input to the workflow is a reasonably sized ensemble of prior simulation runs (about 200 in this case); i.e., the difficult and tedious task of creating history-matched models is avoided. Once the simulation database is assembled, the data analytics workflow, which entails DSI, EVA, and CART, can be completed within minutes.
To the best of our knowledge, this is the first time the DSI-EVA-CART workflow has been proposed and applied to a field case. It is one of the few pilot-evaluation methods that is computationally efficient for practical cases. We expect it to be useful for engineers designing IOR pilots for brown fields with complex reservoir models.
In this paper we present our results, challenges, and learnings over a two-year period in which robust multiobjective optimization was applied at the Mariner asset, which is currently being developed. Many different problems were solved with different objectives, formulated according to the phases of planning and development at the asset. The optimization problems include drilling-order and well-trajectory optimization as the main objectives, with reduction in water cut and reduction of gas production (to minimize flaring) as secondary objectives. We use the efficient stochastic-gradient technique StoSAG to perform optimization incorporating geological and petrophysical uncertainty. For some problems computational limitations introduced challenges, while for others operational constraints did. Depending on the problem, significant increases of between 5% and 20% in the expected value of the objective function were achieved. For the multi-objective optimization cases we show that nontrivial optimal strategies are obtained which significantly reduce gas production (by 40%) with minimal loss (less than 1%) in the economic objective. Our results illustrate the importance of flexible optimization workflows in achieving results of significant practical value at different stages of the planning and development cycle of an operational asset.
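For reference, a minimal sketch of a StoSAG-style robust optimization loop is shown below: each geological realization is paired with one Gaussian control perturbation, and the averaged perturbation-weighted objective changes form the stochastic gradient. The toy objective, ensemble, step size, and perturbation scale are assumptions, and this is the simplest variant of the technique rather than the exact StoSAG implementation used for Mariner.

```python
import numpy as np

rng = np.random.default_rng(11)

def npv(u, m):
    """Toy stand-in for a reservoir simulation returning an economic objective
    for controls u under geological realization m."""
    return -float(np.sum((u - m) ** 2))

def stosag_gradient(u, realizations, sigma=0.1):
    """Basic StoSAG-style stochastic gradient of the expected objective:
    pair each realization with one control perturbation and average the
    delta-J-weighted perturbations (simplest variant, for illustration)."""
    g = np.zeros_like(u)
    for m in realizations:
        d = sigma * rng.standard_normal(u.shape)
        g += d * (npv(u + d, m) - npv(u, m)) / sigma ** 2
    return g / len(realizations)

# Robust optimization over an ensemble of geological realizations.
realizations = [rng.normal(0.5, 0.1, size=8) for _ in range(20)]
u = np.full(8, 0.2)                       # initial control vector
for it in range(200):
    u += 0.01 * stosag_gradient(u, realizations)
print("mean objective:", np.mean([npv(u, m) for m in realizations]))
```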
Chemical enhanced oil recovery (EOR) methods have received increased attention in recent years because of their ability to recover capillary-trapped oil. Successful chemical flooding application requires accurate numerical models and reliable forecasts across multiple scales: core scale, pilot scale, and field scale. History matching and optimization are two key steps to achieving this goal.
For history matching chemical floods, we propose a general workflow for multi-stage model calibration using an evolutionary algorithm. A comprehensive chemical flooding simulator is used to model important physical mechanisms including phase behavior, cation exchange, chemical and polymer adsorption, and capillary desaturation. First, we identify dominant reservoir and process parameters based on a sensitivity analysis. The history matching is then carried out in a stage-wise manner, whereby the most dominant parameters are calibrated first and additional parameters are incorporated sequentially until a satisfactory data misfit is achieved. Next, a diverse subset of history-matched models is selected for optimization using a Pareto-based multi-objective optimization approach. Based on the concept of dominance, Pareto-optimal solutions are generated that represent the trade-off between increasing oil recovery and improving the efficiency of chemical usage. These solutions are searched using a Non-dominated Sorting Genetic Algorithm (NSGA-II). Finally, we implement a History Matching Quality Index (HMQI) with moving linear regression analysis to evaluate simulation results from the history matching process. The HMQI provides normalized values for objective functions of different magnitudes and leads to a more consistent and robust approach to evaluating the updated models through model calibration.
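The dominance concept underlying the NSGA-II search can be illustrated with a short sketch that extracts the Pareto front from a set of candidate solutions; the two objectives and their values below are invented for illustration (oil recovery is maximized by minimizing its negative).

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated solutions (both objectives minimized),
    i.e. the first front of NSGA-II-style non-dominated sorting. Solution i is
    dominated if some j is no worse in every objective and better in one."""
    F = np.asarray(objectives, dtype=float)
    keep = []
    for i, f in enumerate(F):
        dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Toy usage: objective 1 = negative incremental oil (so maximizing oil becomes
# minimization), objective 2 = chemical usage per barrel. Illustrative values.
rng = np.random.default_rng(5)
neg_oil = -rng.uniform(0.5, 1.5, 50)
chem_per_bbl = rng.uniform(0.1, 1.0, 50) - 0.3 * neg_oil  # correlated trade-off
front = pareto_front(np.column_stack([neg_oil, chem_per_bbl]))
print(f"{len(front)} Pareto-optimal designs out of 50")
```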
Development and management of oilfields involve several sources of uncertainty that complicate an already challenging decision-making process. Two main sources of uncertainty relate to the geologic description of reservoirs and to future development scenarios. While geologic uncertainty has been widely studied and robust optimization methods have been developed to account for it, the uncertainty in future development plans has not been considered in optimization problems. Future development strategies have been included as decision variables in field development optimization problems; in practice, however, future field development plans tend to deviate from the solutions obtained in past optimization problems. Therefore, a more prudent and realistic approach to oilfield optimization is to consider the uncertainty in both geology and future development plans to obtain robust solutions.

We develop a closed-loop stochastic field development optimization formulation to account for the uncertainty in geologic description and future infill drilling scenarios. The proposed approach optimizes the decision variables for the current stage of planning (e.g., well locations and operational settings) while accounting for geologic and future development uncertainties, where the former is represented by several reservoir model realizations and the latter through drilling scenario trees and probabilistic descriptions of future drilling events and parameters. In the developed method, prior to each decision-making stage the reservoir is operated based on the current optimal strategy until dynamic data become available to calibrate the geological models. After each data assimilation step, a new optimization is performed to adjust controllable decision variables for the current well configuration (e.g., well rates or BHPs) using the updated models and potentially revised future development scenarios. Using a multi-stage stochastic optimization workflow, this process is repeated after each decision stage.

Several numerical experiments are presented to discuss various aspects of the proposed closed-loop stochastic optimization formulation and to compare the solutions from different methods adopted for the treatment of future development plans. The results indicate that stochastic treatment of future development events (1) can hedge against uncertain future development activities by yielding optimization solutions that are robust against changes in future decisions, and (2) considerably reduces the performance losses that can result from field development when uncertainty is disregarded.
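At the core of such a formulation is a nested expectation of the objective over geologic realizations and over a probability-weighted set of future development scenarios. The sketch below illustrates that structure only; the toy simulator, the scenario tree, and its probabilities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate(u, m, scenario):
    """Toy stand-in for a reservoir simulation of current controls u under
    geological realization m and a future infill-drilling scenario."""
    return -float(np.sum((u - m) ** 2)) + 0.5 * scenario["n_infill"]

def expected_objective(u, realizations, scenarios):
    # Nested expectation: inner average over equiprobable geologic
    # realizations, outer sum over the probability-weighted scenario tree.
    return sum(prob * np.mean([simulate(u, m, s) for m in realizations])
               for s, prob in scenarios)

# Hypothetical inputs: 10 realizations and a three-branch scenario tree.
realizations = [rng.normal(0.5, 0.1, size=5) for _ in range(10)]
scenarios = [({"n_infill": 0}, 0.5), ({"n_infill": 2}, 0.3), ({"n_infill": 4}, 0.2)]
u = np.full(5, 0.3)
print(f"E[J] = {expected_objective(u, realizations, scenarios):.3f}")
```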
de Zeeuw, W. (Department of Applied Mathematics, Delft University of Technology) | de Moraes, R. J. (Department of Geoscience and Engineering, Delft University of Technology) | Heemink, A. W. (Petrobras Research and Development Center - CENPES) | Jansen, J. D. (Department of Applied Mathematics, Delft University of Technology)
We propose a novel adaptive, adjoint-based, iterative multiscale finite volume (i-MSFV) method. The method aims to reduce the computational cost of the smoothing stage of the original i-MSFV method by selectively choosing fine-scale sub-domains (or subsets of primary variables) to solve for. The selection of fine-scale primary variables is obtained from a goal-oriented adjoint model. An adjoint-based indicator is utilized as a criterion to select the primary variables having the largest errors. The Lagrange multipliers from the adjoint model can be interpreted as sensitivities of the objective function value with respect to deviations from the constraints. When the porous-media flow equations are adjoined with Lagrange multipliers, this implies that the multipliers are the sensitivities of the objective function with respect to the residuals of the flow equations, i.e., to the residual error that remains after approximately solving the linear equations with an iterative solver. This allows us to recognize at which locations the solution contains the largest errors. More specifically, we propose a modification to the i-MSFV method that adaptively reduces the size of the fine-scale system that must be smoothed, making the fine-scale smoothing stage less computationally demanding. To that end, we introduce a goal-oriented, adjoint-based fine-scale system reduction criterion. We demonstrate the performance of our method on single-phase, incompressible flow simulation models with challenging geological settings, using a history-matching-like misfit objective function as the goal. The performance of the newly introduced method is compared to that of the original i-MSFV method. We investigate the adaptivity versus accuracy of the method and demonstrate how the solution accuracy varies with the number of unknowns selected to be smoothed. It is shown that the method can provide accurate solutions at reduced computational cost. The proof-of-concept applications indicate that the method deserves further investigation.
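A minimal sketch of the selection criterion is given below: since the Lagrange multipliers are the sensitivities of the goal functional to the flow-equation residuals, the product |lambda_i r_i| indicates which fine-scale unknowns contribute most to the goal error, and only the top fraction is passed to the smoother. The synthetic adjoint and residual fields, and the retained fraction, are assumptions for the sketch.

```python
import numpy as np

def select_fine_scale_cells(lagrange_multipliers, residuals, fraction=0.1):
    """Goal-oriented selection sketch: |lambda_i * r_i| estimates each cell's
    contribution to the goal-functional error, so only the top `fraction` of
    cells is kept for the fine-scale smoothing stage."""
    indicator = np.abs(lagrange_multipliers * residuals)
    n_keep = max(1, int(fraction * indicator.size))
    return np.argsort(indicator)[::-1][:n_keep]

# Toy usage on a 10,000-cell model with synthetic adjoint and residual fields.
rng = np.random.default_rng(4)
lam = rng.standard_normal(10_000)                                  # goal sensitivities
res = rng.standard_normal(10_000) * np.exp(-5 * rng.random(10_000))  # iterative-solver residuals
cells = select_fine_scale_cells(lam, res, fraction=0.05)
print(f"smoothing {cells.size} of {lam.size} cells")
```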
Maximizing the yield of existing assets becomes more important than ever as petroleum companies need to win in any business environment. In this context, model-based optimization technology plays an important role in managing subsurface flow efficiently and can add significant value by maximizing the potential of reservoirs without a large capital investment. Yet conventional optimization methods have not sufficiently respected expert knowledge and engineering requirements, which severely undermines their business impact in practice. This paper presents a novel interactive workflow that permits the injection of expert knowledge into the optimization process and ensures that the final optimal solution is executable. This workflow is unique because it allows the engineer to (i) interact with stakeholders, e.g., production engineers and operators, to capture engineering and economic requirements and constraints, (ii) interact with software to identify, screen, and maximize the opportunity, (iii) interact with the reservoir to understand the physics behind meaningful solutions, and (iv) interact with candidate solutions to select the most rigorous one. Data analytics is used in this interactive workflow, boosting the optimization progress toward the most trustworthy result. An offshore waterflooding example is used to illustrate the proposed workflow. Results show that, compared to the existing strategy, the optimal solution significantly improves the estimated short-term and long-term oil recovery (by more than 2% and 6%, respectively). Moreover, the water production volume is largely reduced. The proposed solution is feasible in engineering terms (meeting engineering requirements and engineers' judgements and expectations), meaningful in physics, optimal (convergence is guaranteed), robust (multiple uncertainties are considered), stable (immune to potential implementation errors), trustworthy (backed by data analytics), and thus executable in practice.