The growing popularity of model-based optimization workflows has resulted in an increase in their application to field cases. This paper presents an unbiased stochastic data-driven workflow in which surface and subsurface uncertainties are accounted for and their effects on facilities design and operational decisions are quantified. Three-dimensional reservoir models are best created with a combination of well logs and 3D-seismic data. However, the effective integration of these data sources is not easy because of limited seismic resolution.
Azari, Vahid (Heriot-Watt University) | Vazquez, Oscar (Heriot-Watt University) | Mackay, Eric (Heriot-Watt University) | Sorbie, Ken (Heriot-Watt University) | Jordan, Myles (Champion X) | Sutherland, Louise (Champion X)
The application of chemical scale inhibitors (SI) in a squeeze treatment is one of the most commonly used techniques to prevent downhole scale formation. This paper presents a sensitivity analysis of the treatment design parameters, to assist with the automated optimization of squeeze treatments in single wells in an offshore field.
Two wells were studied with different constraints on the total neat SI volume (VSI) and the total injected volume (VT), including main pill and overflush volumes, followed by a field-case squeeze optimization to demonstrate the sensitivity of squeeze lifetime and of the cost per treated volume of water. A purpose-designed squeeze software model was used to simulate the squeeze treatments and perform the sensitivity analysis. In the course of this optimization procedure, a "Pareto Front" is calculated, which represents the set of cases for which no objective can be improved without worsening another.
It was demonstrated that, at fixed values of VSI and VT (resulting in an almost fixed total cost for the squeeze), the squeeze lifetime can be improved by increasing the scale inhibitor concentration in the main treatment slug; however, the gain in squeeze lifetime diminishes greatly at very high concentrations. Four generic scale inhibitors with different adsorption isotherms were used to validate these calculations. In cases where either VSI or VT is fixed, it is shown that the squeeze life does not increase monotonically with the other parameter, and the cost function can be used to determine the optimum design.
Well squeeze optimization was performed and these recommendations were applied in the field. It was shown that a well-executed sensitivity study can prevent misleading results that miss the global optimum. A lesson learned was that the optimal designs entail injecting as much of the inhibitor as possible, as early in the squeeze design as possible, provided formation damage effects are avoided. Also, our semi-analytical construction of the Pareto front greatly helps to simplify and streamline the overall squeeze optimization process.
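As a simple illustration of the Pareto-front concept used in this kind of optimization, the following minimal sketch (Python; the candidate designs, lifetimes and costs are illustrative values, not from the paper) filters a set of simulated squeeze designs down to the non-dominated ones, i.e. those whose lifetime cannot be increased without increasing cost:

```python
import numpy as np

def pareto_front(lifetimes, costs):
    """Return indices of non-dominated designs: a design is dominated if some other
    design has a lifetime at least as long AND a cost no higher, with a strict
    improvement in at least one of the two."""
    keep = []
    n = len(lifetimes)
    for i in range(n):
        dominated = any(
            j != i
            and lifetimes[j] >= lifetimes[i] and costs[j] <= costs[i]
            and (lifetimes[j] > lifetimes[i] or costs[j] < costs[i])
            for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return keep

# Illustrative candidate designs: simulated squeeze lifetime (days) and treatment cost (USD)
lifetimes = np.array([120.0, 180.0, 180.0, 250.0, 300.0])
costs = np.array([40e3, 55e3, 70e3, 90e3, 160e3])
print(pareto_front(lifetimes, costs))  # -> [0, 1, 3, 4]: design 2 is dominated by design 1
```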
Santos, Letícia Siqueira dos (UNICAMP) | Santos, Susana Margarida da Graça (UNICAMP) | Santos, Antonio Alberto de Souza dos (UNICAMP) | Schiozer, Denis José (UNICAMP) | Silva, Luis Otávio Mendes da (UNICAMP)
The Expected Value of Information (EVoI) is a criterion to analyze the feasibility of acquiring new information to deal with uncertainties and improve decisions at any stage of an oil field. Here, we evaluate the influence of the use of representative models (RM) on the EVoI estimation and on the decision to develop the petroleum field. These RM are used to represent a large set of models that honor production data (FM), considering uncertainties in reservoir, fluid and economic parameters, enabling the following processes: (1) optimizing production strategies (specialized for each RM and robust to all RM), (2) performing risk analysis, (3) selecting the strategy to develop the field based on the risk analysis, and (4) estimating the EVoI. We evaluated the influence of the number of RM on these processes, observing the impact of the reduced computational cost on the results. For the EVoI, we applied a Complete (EVoI_FM) and a Simplified (EVoI_RM) methodology, where EVoI_FM was evaluated with all models (FM) while EVoI_RM used different groups with different numbers of RM (GR1, GR2 and GR3, ranging from 9 to 150 models in each group). To assess the quality of the results, we used the complete estimate (EVoI_FM) as a reference. The study was conducted on UNISIM-I-D, a benchmark oil reservoir in the development phase, taking an appraisal well as a source of information to clarify a structural uncertainty. Using RM to optimize specialized production strategies proved useful, since optimizing strategies for all FM would require high computational costs. Moreover, the RM could be used to represent risk curves and select production strategies under uncertainty, but less precisely, directly affecting the results of the EVoI (which is the difference between the expected values of the two curves). The precision of the EVoI_RM results varied according to the number and group of RM employed, as did the best strategies selected for field development. The choice of whether to use simplifications will depend on the accuracy required and the resources available. Variations in EVoI_RM may be tolerable when compared to the time saved, leaving the decision maker free to choose the best estimation method.
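To make the EVoI concept concrete, here is a minimal numerical sketch (not from the paper; the payoff table and probabilities are hypothetical, and the appraisal well is assumed to perfectly resolve the structural uncertainty). It expresses EVoI as the difference between the expected value when the strategy can be chosen after the uncertainty is resolved and the expected value of the best single strategy chosen beforehand:

```python
import numpy as np

# Hypothetical payoff table (NPV, arbitrary units): rows are candidate production
# strategies, columns are the possible outcomes of the structural uncertainty
# that the appraisal well would resolve.
npv = np.array([[500.0, 300.0],   # strategy A
                [420.0, 380.0]])  # strategy B
p = np.array([0.6, 0.4])          # prior probabilities of the two outcomes

ev_without_info = (npv @ p).max()        # commit to one strategy now
ev_with_info = npv.max(axis=0) @ p       # observe the well result, then pick the best strategy
evoi = ev_with_info - ev_without_info    # value of (perfect) information
print(ev_without_info, ev_with_info, evoi)   # 420.0 452.0 32.0
```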
Optimal field development and control aim to maximize the economic profit of oil and gas production. This, however, results in a complex optimization problem with a large number of correlated control variables at different levels (e.g. well locations, completions and controls) and a computationally expensive objective function (i.e. a simulated reservoir model). The typical limitations of existing optimization frameworks are: (1) optimizing a single level at a time (i.e. ignoring correlations among control variables at different levels); and (2) providing only a single solution, whereas operational problems often add unexpected constraints that are likely to reduce the 'optimal', inflexible solution to a sub-optimal scenario.
The framework developed in this paper is based on sequential iterative optimization of the control variables at different levels. An ensemble of close-to-optimum solutions is selected from each level (e.g. well locations) and transferred to the next level of optimization (e.g. control settings), and this loop continues until no significant improvement is observed in the objective value. Fit-for-purpose clustering techniques are developed to systematically select, at each level of optimization, an ensemble of solutions with maximum differences in control variables but close-to-optimum objective values. The framework also considers pre-defined constraints such as minimum well spacing, irregular reservoir boundaries, and production/injection rate limits.
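One simple way such an ensemble selection could be implemented is sketched below (the paper's fit-for-purpose clustering techniques are not specified here; the function, data and parameter values are illustrative assumptions): keep the best candidates from one optimization level, cluster them on the control variables, and take one representative per cluster so the ensemble members differ as much as possible while remaining near-optimal.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_ensemble(controls, objectives, n_keep=40, n_members=5, seed=0):
    """Select a diverse ensemble of close-to-optimum solutions.

    controls   : (n_solutions, n_controls) array, e.g. candidate well coordinates
    objectives : (n_solutions,) array, higher is better (e.g. NPV)
    """
    best_idx = np.argsort(objectives)[-n_keep:]          # close-to-optimum subset
    candidates = controls[best_idx]
    km = KMeans(n_clusters=n_members, n_init=10, random_state=seed).fit(candidates)
    # take the candidate closest to each cluster centre as the ensemble member
    reps = [candidates[np.argmin(np.linalg.norm(candidates - c, axis=1))]
            for c in km.cluster_centers_]
    return np.array(reps)

# Illustrative use: random data standing in for one optimization level's output
rng = np.random.default_rng(1)
ctrl = rng.uniform(0.0, 100.0, size=(200, 4))            # e.g. (i, j) locations of two wells
obj = 1e8 - 1e4 * ((ctrl - 50.0) ** 2).sum(axis=1)       # synthetic NPV-like objective
ensemble = select_ensemble(ctrl, obj)
print(ensemble.shape)                                    # (5, 4): five diverse near-optimal solutions
```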
The proposed framework has been tested on a benchmark case study, known as the Brugge field, to find the optimal well placement and control in two development scenarios: with conventional wells (surface control only) and with intelligent wells (additional zonal control using Interval Control Valves). Multiple solutions are obtained in both development scenarios, with different well locations and control settings but close-to-optimum objective values. We also show that suboptimal solutions from an early optimization level can approach and even outperform the optimal one at the higher-level optimization, highlighting the value of the multi-solution framework developed here in exploring the search space, compared to traditional single-solution approaches. The development scenario with intelligent completions installed at the optimal well locations and optimally controlled during the production period achieved the maximum added value. Our results demonstrate the advantage of the developed multi-solution optimization framework in providing much-needed operational flexibility to field operators.
Relative permeability and capillary pressure are the key parameters of multiphase flow in a reservoir. To ensure an accurate determination of these functions in the areas of interest, the core-flooding and centrifuge experiments on the relevant core samples need to be interpreted meticulously. In this work, relative permeability and capillary pressure functions are determined simultaneously by history matching multiple experiments at once, in order to increase the precision of the results through the additional constraints coming from the extra measurements. To take into account the underlying physics without making crude assumptions, the Special Core Analysis (SCAL) experiments are simulated directly instead of using well-known simplified analytical or semi-analytical solutions. The corresponding numerical models are implemented with the MRST library (Lie, 2019). The history matching approach is based on the adjoint gradient method for the constrained optimization problem. The relative permeability and capillary pressure curves, which are the objectives of the history matching, can be represented in the current implementation in a variety of forms, such as Corey, LET, B-splines and NURBS. To analyze the influence of the chosen correlations on the history matching results, the interpretation process with assumed analytical correlations is compared to history matching based on a generic NURBS representation of the relevant functions.
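As an illustration of the simplest of the parameterizations listed above, the following sketch (parameter values are illustrative, not from the paper) evaluates Corey-type relative permeability curves whose end points and exponents would be the history-matching variables:

```python
import numpy as np

def corey_relperm(sw, swc, sor, krw_max, kro_max, nw, no):
    """Corey-type water/oil relative permeability curves.

    sw      : water saturation array
    swc     : connate water saturation
    sor     : residual oil saturation
    krw_max : water end-point relative permeability
    kro_max : oil end-point relative permeability
    nw, no  : Corey exponents (typical history-matching parameters)
    """
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)   # normalized saturation
    krw = krw_max * swn ** nw
    kro = kro_max * (1.0 - swn) ** no
    return krw, kro

sw = np.linspace(0.1, 0.8, 8)
krw, kro = corey_relperm(sw, swc=0.1, sor=0.2, krw_max=0.4, kro_max=0.9, nw=2.5, no=2.0)
print(np.round(krw, 3), np.round(kro, 3))
```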
Various physico-chemical processes affect Alkali-Polymer (AP) flooding. Core floods can be performed to determine ranges for the parameters used in the numerical models describing these processes. Because the parameters are uncertain, prior parameter ranges are introduced and then conditioned to observed data. It is challenging to determine posterior distributions of the various parameters, as they need to be consistent with the different sets of data that are observed (e.g. pressures, oil and water production, chemical concentration at the outlet).
Here, we are applying Machine Learning in a Bayesian Framework to condition parameter ranges to a multitude of observed data.
To generate the response of the parameters, we used a numerical model and applied Latin Hypercube Sampling (2000 simulation runs) from the prior parameter ranges.
To ensure that sufficient parameter combinations of the model comply with various observed data, Machine Learning can be applied. After defining multiple Objective Functions (OF) covering the different observed data (here six different Objective Functions), we used the Random Forest algorithm to generate statistical models for each of the Objective Functions.
Next, parameter combinations which lead to results that are outside of the acceptance limit of the first Objective Function are rejected. Then, resampling is performed and the next Objective Function is applied until the last Objective Function is reached. To account for parameter interactions, the resulting parameter distributions are tested for the limits of all the Objective Functions.
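A minimal sketch of this proxy-based sequential rejection workflow is given below (Python; the synthetic simulator responses, the two placeholder objective functions and the acceptance limits are illustrative assumptions standing in for the six objective functions and the numerical model described above):

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_params, n_runs = 5, 2000

# Stage 1: Latin Hypercube sample of the (normalized) prior parameter ranges;
# in practice each column would be rescaled to its physical prior range.
X = qmc.LatinHypercube(d=n_params, seed=0).random(n=n_runs)

# Placeholder simulator response: two synthetic misfits standing in for the six
# objective functions computed from pressures, production and effluent data.
OF = np.column_stack([
    (X[:, 0] - 0.3) ** 2 + 0.01 * rng.normal(size=n_runs) ** 2,
    (X[:, 1] - 0.7) ** 2 + (X[:, 2] - 0.5) ** 2,
])

# Stage 2: one Random Forest proxy per objective function.
proxies = [RandomForestRegressor(n_estimators=200, random_state=0).fit(X, OF[:, k])
           for k in range(OF.shape[1])]

# Stage 3: sequential rejection -- discard samples outside the acceptance limit of
# the first objective function, test the survivors against the next one, and so on.
acceptance = [0.05, 0.05]                                  # illustrative misfit limits
samples = rng.uniform(0.0, 1.0, size=(50_000, n_params))   # dense resampling of the prior
for proxy, limit in zip(proxies, acceptance):
    samples = samples[proxy.predict(samples) <= limit]

# Final check: posterior samples must satisfy all acceptance limits simultaneously.
ok = np.all([p.predict(samples) <= lim for p, lim in zip(proxies, acceptance)], axis=0)
posterior = samples[ok]
print(posterior.shape)
```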
The results show that the posterior parameter distributions can be efficiently conditioned to the various sets of observed data. Insensitive parameter ranges are not modified, as they are not constrained by the information in the observed data. This is crucial, because parameters that are insensitive during the history period could become sensitive in the forecast if the production mechanism is changed.
The workflow introduced here can be applied for conditioning parameter ranges of field (re-)development projects to various observed data as well.
Polymer flooding offers the potential to recover more oil from reservoirs but requires significant investments which necessitate a robust analysis of economic upsides and downsides. Key uncertainties in designing a polymer flood are often reservoir geology and polymer degradation. The objective of this study is to understand the impact of geological uncertainties and history matching techniques on designing the optimal strategy and quantifying the economic risks of polymer flooding in a heterogeneous clastic reservoir.
We applied two different history matching techniques (adjoint-based and a stochastic algorithm) to match data from a prolonged waterflood in the Watt Field, a semi-synthetic reservoir that contains a wide range of geological and interpretational uncertainties. An ensemble of reservoir models is available for the Watt Field, and history matching was carried out for the entire ensemble using both techniques. Next, sensitivity studies were carried out to identify the first-order parameters that impact the Net Present Value (NPV). These parameters were then deployed in an experimental design study using a Latin Hypercube to generate training runs from which a proxy model was created. The proxy model was constructed using polynomial regression and validated using further full-physics simulations. A particle swarm optimization algorithm was then used to optimize the NPV of the polymer flood. The same approach was used to optimize a standard water flood for comparison. Optimizations of the polymer flood and water flood were performed for both the history-matched model ensemble and the original ensemble.
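The proxy-plus-optimizer step described above can be sketched as follows (a simplified illustration, not the study's implementation: the control variables, their ranges and the synthetic NPV surface are hypothetical stand-ins for the Latin Hypercube training runs and the full-physics simulator):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical controls: [polymer concentration (ppm), injector index, start time (days)].
lb, ub = np.array([500.0, 0.0, 365.0]), np.array([2000.0, 30.0, 3650.0])
X_train = rng.uniform(lb, ub, size=(100, 3))          # stand-in for Latin Hypercube runs
npv_train = (5e5 - (X_train[:, 0] - 1200.0) ** 2 / 1e3
                 - (X_train[:, 2] - 1500.0) ** 2 / 1e4)   # synthetic simulator NPV

# Proxy: second-order polynomial regression, as in the abstract above.
proxy = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
proxy.fit(X_train, npv_train)

# Particle swarm optimization of the proxy NPV.
n_particles, n_iter, w, c1, c2 = 40, 200, 0.7, 1.5, 1.5
x = rng.uniform(lb, ub, size=(n_particles, 3))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), proxy.predict(x)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = np.clip(x + v, lb, ub)                                  # respect control bounds
    val = proxy.predict(x)
    better = val > pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmax(pbest_val)]

print("proxy-optimal design:", np.round(gbest, 1))  # candidate for full-physics validation
```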
The sensitivity studies showed that polymer concentration, the location of polymer injection wells and the time to commence polymer injection are key to optimizing the polymer flood. The optimal strategy to deploy the polymer flood and maximize NPV varies based on the history matching technique. The average NPV is predicted to be higher with the stochastic history matching than with the adjoint technique. The variance in NPV is also higher for the stochastic history matching technique. This is due to the ability of the stochastic algorithm to explore the parameter space more broadly, which created situations where the oil in place was shifted upwards, resulting in a higher NPV. Optimizing a history-matched ensemble leads to a narrower variance in absolute NPV compared to optimizing the original ensemble. This is because the uncertainties associated with polymer degradation are not captured during history matching. The result of the cross comparison, where the optimal polymer design strategy for one ensemble member is deployed to the other ensemble members, showed a decline in NPV but, surprisingly, the overall NPV was still higher than for an optimized water flood. This indicates that a polymer flood could be beneficial compared to a water flood, even if geological uncertainties are not captured properly.
Akchiche, Meziane (Total SA) | Beauquin, Jean-Louis (Total SA) | Sochard, Sabine (Universite de Pau et des Pays de l’Adour) | Serra, Sylvain (Universite de Pau et des Pays de l’Adour) | Reneaume, Jean-Michel (Universite de Pau et des Pays de l’Adour) | Stouffs, Pascal (Universite de Pau et des Pays de l’Adour)
In the present paper, we first developed and implemented an exergy analysis within an Integrated Asset Modeling (IAM) platform to evaluate the overall energy efficiency and Greenhouse Gas (GHG) emissions of a future offshore oil production system. The use of the same state function enables a direct comparison of the different irreversibilities that occur throughout the production system (drawdown losses, lift losses, artificial-lift and boosting system losses, etc.) and thus provides the necessary guidance for engineers to select the best solution. Then, we modeled and evaluated the performance of the oil production system with three artificial-lift and boosting techniques, including multiphase boosting, gas lift, and ESP. In all cases, liquid production was maximized while respecting the same constraints. The exergy analysis enabled us to quantify, for the first time, the physical exergy supplied naturally by the reservoir in the form of pressure (mechanical exergy) and temperature (thermal exergy). After that, we proposed a new methodology to improve the design of petroleum production systems over their whole life. This methodology permits the simultaneous evaluation of a multitude of development schemes by using generic models. Finally, the multiperiod exergoeconomic optimization results highlight an expected performance gap between the most cost-effective scheme identified and the most energy-efficient one. There are hence excellent opportunities for progress by developing the exergy approach and digital tools for the selection and development of greener and more profitable production concepts, processes, and technologies.
Khlebnikova, Elena (Los Alamos National Laboratory) | Zlotnik, Anatoly (Los Alamos National Laboratory) | Sundar, Kaarthik (Los Alamos National Laboratory) | Ewers, Mary (Los Alamos National Laboratory) | Tasseff, Byron (Los Alamos National Laboratory) | Bent, Russell (Los Alamos National Laboratory)
Pipelines perform a critical function in the petroleum business by transporting hydrocarbon commodities between oil fields, refineries, and consumer markets. We formulate an optimization problem for determining the optimal pumping modes and flow configuration for liquid pipeline operations. Based on the requested flow rates and commodity prices submitted by users of the pipeline system, the system manager can solve an optimization problem to determine the flow rates allocated to all customers, the flows on each pipeline section, and the pump station operating settings, in order to maximize the economic value provided by the pipeline while adhering to the limitations of the pump machinery and the physics of fluid flow. In particular, the optimal solution maximizes the economic benefit for the pipeline transport network users and the utilization of system capacity, while also minimizing the energy expended in operation. This problem is nonconvex, and solutions are only guaranteed to be locally optimal. In this study, we pose a mathematical optimization formulation with minimal modeling that captures the basic phenomena involved, and propose an algorithm based on general-purpose interior-point optimization for efficient solution. We demonstrate our approach on a realistic case study based on a pipeline system in the United States.
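A toy version of such a formulation is sketched below (this is not the paper's model: the two-shipper pipeline, the price and energy-cost coefficients and the quadratic friction relation are hypothetical). It maximizes transport value minus pumping cost subject to flow and head limits, using a general-purpose interior-point-style solver (SciPy's trust-constr method):

```python
import numpy as np
from scipy.optimize import minimize, Bounds, LinearConstraint, NonlinearConstraint

# Hypothetical single-line system: decide flow rates q1, q2 (kbbl/d) allocated to
# two shippers and the pump head h (m) of the station serving the line.
price = np.array([3.0, 2.5])   # value per bbl transported for each shipper (illustrative)
energy_cost = 0.05             # cost per unit of hydraulic power (illustrative)

def neg_value(z):
    q, h = z[:2], z[2]
    revenue = 1e3 * price @ q            # transport revenue
    pumping = energy_cost * h * q.sum()  # simplified pumping-energy cost ~ head * total flow
    return -(revenue - pumping)          # maximize value -> minimize its negative

# Pump head must overcome a friction loss that grows with the square of total flow.
friction = NonlinearConstraint(lambda z: z[2] - 2.0e-3 * (z[0] + z[1]) ** 2, 0.0, np.inf)
capacity = LinearConstraint([[1.0, 1.0, 0.0]], -np.inf, 500.0)   # line capacity limit
bounds = Bounds([0.0, 0.0, 0.0], [400.0, 400.0, 600.0])          # rate and head limits

res = minimize(neg_value, x0=[100.0, 100.0, 100.0], method="trust-constr",
               bounds=bounds, constraints=[capacity, friction])
print(np.round(res.x, 1), round(-res.fun, 1))   # allocated flows, pump head, objective value
```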
One of the final goals of any reservoir characterization study is to deliver reliable production forecasts. This is a challenging task, as the fluid-flow dynamics are governed by non-linear equations: a small perturbation in the reservoir model inputs might have a large impact on the modelling outputs and thus on the forecasts. Also, depending on the maturity of a project, engineers have various amounts and types of data to deal with and to history match through an optimization process.
Considering the case of a mature asset, for which massive datasets of various types are available, the standard history matching process is based on the minimization of a single objective function (the history matching criterion), computed through a weighted least-squares formulation. The difficulty is then for the user to properly define the weight of each data set before the summation into the single objective function is performed.
To avoid this difficulty, two optimization techniques that are currently available but still prospective in geoscience applications are considered. The first is based on the definition of multiple objective functions (based on data types and/or location in the field) coupled to an optimization process. If all the objective functions show decreasing trends together, the user still has the flexibility to assess the minimization of each objective function individually. Conversely, if the objective functions exhibit competing minimizing/maximizing trends (e.g. because of noisy data), then the derived Pareto front (in the 2D case) identifies the location of the optimal compromises. The second is a sequential optimization approach based on single-objective constrained optimizations: each objective is optimized in turn, with constraints on the other objectives. The thresholds defined for the constraints on the objectives are derived from the results of the previous optimizations. This pragmatic approach allows the objectives to be prioritized and the expected accuracy on each data type to be tuned.
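A minimal sketch of the second, sequential constrained approach follows (the two quadratic misfit functions and the 0.05 threshold allowance are purely illustrative assumptions): the highest-priority objective is minimized first, and the next objective is then minimized subject to keeping the first within a threshold derived from the first result.

```python
import numpy as np
from scipy.optimize import minimize

# Two illustrative misfit functions sharing the same model parameters x
# (standing in for, e.g., a pressure misfit and a saturation misfit).
of_pressure = lambda x: (x[0] - 1.0) ** 2 + 0.5 * (x[1] - 2.0) ** 2
of_saturation = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# Step 1: minimize the highest-priority objective on its own.
res1 = minimize(of_pressure, x0=np.zeros(2))
threshold = res1.fun + 0.05          # allowed degradation, derived from the step-1 result

# Step 2: minimize the next objective, constrained to keep the first within its threshold.
res2 = minimize(of_saturation, res1.x, method="SLSQP",
                constraints={"type": "ineq", "fun": lambda x: threshold - of_pressure(x)})
print(res1.x, res1.fun)
print(res2.x, of_pressure(res2.x), of_saturation(res2.x))
```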
Both approaches are applied to a real gas-storage field asset with more than 40 years of exploitation history and various data types (e.g. pressures, saturations, and gas breakthrough at control wells), leading to the definition of multiple (possibly more than two) objective functions. Both show promising results in terms of history matching quality and in terms of flexibility, as the user is able to consider, define and dynamically update alternative history matching strategies. These approaches might be considered as alternatives to the standard one for the history matching process, preliminary to the production forecast computation, even if the associated challenges and complexity grow with the number of objective functions.