Scale inhibitor squeeze treatments are among the most common techniques to prevent oilfield mineral scale deposition in oil producers. The effectiveness and lifespan of a squeeze treatment design are determined by the scale inhibitor (SI) retention, which can be described using an adsorption pseudo-isotherm, commonly derived from coreflooding experiments. In certain circumstances, however, a new isotherm must be re-derived to match the field return concentration profile once the treatment is deployed and samples are collected to measure the SI return concentration; this new isotherm is then used to design the next treatment. The objective of this manuscript is to quantify the associated uncertainty, which depends on the number of samples analyzed. As in any inverse problem, there may not be a unique solution, which in our context is a pseudo-isotherm matching the return concentration profile. As a consequence, there is a certain level of uncertainty in predicting the next squeeze treatment lifetime. Solving this inverse problem in a Bayesian formulation, incorporating the prior information and a likelihood involving the return concentration profile, makes it possible to quantify the posterior distribution and therefore to calculate the uncertainty range, commonly known as P90/P50/P10, based on the Randomized Maximum Likelihood (RML) approach. The P90/P50/P10 was calculated as a function of the number of samples available, differentiating between early and late production.
The results suggest a correlation between the P90/P50/P10 interval and the number of samples, i.e., the difference between the P10 and P90 forecast squeeze lifetimes widened as the number of samples decreased. The proposed methodology may be used to determine the number of samples required to reduce the level of uncertainty in predicting the lifetime of the next squeeze treatment. Although taking more samples may increase the cost per barrel of a treatment, the ability to predict treatment lifetime accurately will be more cost-effective in the long term, as production is less likely to be affected.
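As a rough sketch of the RML workflow summarized above: draw a prior realization of the isotherm parameters, perturb the observed return concentrations, minimize the combined data/prior misfit, repeat, and read the forecast squeeze lifetime off the resulting samples as P90/P50/P10. The exponential-decay return curve, the 5 ppm minimum inhibitor concentration, the prior statistics and the sampling times below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder forward model: SI return concentration (ppm) vs. time (days) for
# log-parameters (log initial concentration, log decay rate). This stands in
# for the isotherm-based near-wellbore transport model used in practice.
def return_concentration(params, t):
    log_c0, log_decay = params
    return np.exp(log_c0) * np.exp(-np.exp(log_decay) * t)

def rml_sample(t_obs, c_obs, sigma, prior_mean, prior_cov, rng):
    """One Randomized Maximum Likelihood sample: perturb the prior realization
    and the observations, then minimize the combined data/prior misfit."""
    m_prior = rng.multivariate_normal(prior_mean, prior_cov)
    c_pert = c_obs + rng.normal(0.0, sigma, size=c_obs.shape)
    prior_icov = np.linalg.inv(prior_cov)

    def objective(m):
        r_d = (return_concentration(m, t_obs) - c_pert) / sigma
        r_m = m - m_prior
        return 0.5 * (r_d @ r_d + r_m @ prior_icov @ r_m)

    return minimize(objective, m_prior, method="Nelder-Mead").x

def squeeze_lifetime(params, mic=5.0, t_max=720.0):
    """Days until the predicted return concentration falls below the MIC."""
    t = np.linspace(1.0, t_max, 2000)
    below = return_concentration(params, t) < mic
    return t[below][0] if below.any() else t_max

rng = np.random.default_rng(0)
t_obs = np.array([5.0, 15.0, 30.0, 60.0, 120.0])      # sampling times, days
c_obs = np.array([850.0, 620.0, 400.0, 170.0, 30.0])  # measured SI returns, ppm
prior_mean, prior_cov = np.array([6.9, -3.5]), np.diag([0.25, 0.25])
samples = [rml_sample(t_obs, c_obs, 25.0, prior_mean, prior_cov, rng)
           for _ in range(200)]
lifetimes = np.array([squeeze_lifetime(m) for m in samples])
p90, p50, p10 = np.percentile(lifetimes, [10, 50, 90])   # industry convention
print(f"P90={p90:.0f} d  P50={p50:.0f} d  P10={p10:.0f} d")
```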
Santos, Letícia Siqueira dos (UNICAMP) | Santos, Susana Margarida da Graça (UNICAMP) | Santos, Antonio Alberto de Souza dos (UNICAMP) | Schiozer, Denis José (UNICAMP) | Silva, Luis Otávio Mendes da (UNICAMP)
The Expected Value of Information (EVoI) is a criterion for analyzing the feasibility of acquiring new information to deal with uncertainties and improve decisions at any stage of an oil field. Here, we evaluate the influence of using representative models (RM) on the EVoI estimation and on the decision to develop the petroleum field. These RM are used to represent a large set of models that honor production data (FM), considering uncertainties in reservoir, fluid and economic parameters, and enable the following processes: (1) optimizing production strategies (specialized for each RM and robust to all RM), (2) performing risk analysis, (3) selecting the strategy to develop the field based on risk analysis, and (4) estimating the EVoI. We evaluated the influence of the number of RM on these processes, observing how reducing computational cost affects the results. For the EVoI, we applied a Complete (EVoI_FM) and a Simplified (EVoI_RM) methodology, where EVoI_FM was evaluated with all models (FM) while EVoI_RM used different groups with different numbers of RM (GR1, GR2 and GR3, ranging from 9 to 150 models per group). To assess the quality of the results, we used the complete estimate (EVoI_FM) as a reference. The study was conducted on UNISIM-I-D, a benchmark oil reservoir in the development phase, taking an appraisal well as a source of information to clarify a structural uncertainty. Using RM to optimize specialized production strategies proved useful, since optimizing strategies for all FM would require high computational costs. Moreover, the RM could be used to represent risk curves and select production strategies under uncertainty, but less precisely, which directly affects the results of the EVoI (the difference between the expected values of the two curves). The precision of the EVoI_RM results varied according to the number and group of RM employed, and the best strategies selected for field development varied accordingly. The choice of whether to use simplifications will depend on the accuracy required and the resources available; variations in EVoI_RM may be tolerable when compared to the time saved, leaving the decision maker free to choose the best estimation method.
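The bookkeeping behind the EVoI estimate (the difference between the expected values of the risk curves with and without the new information) can be sketched in a few lines. The NPV table, the scenario probabilities, and the simplification that the appraisal well perfectly resolves the structural scenario are assumptions for illustration only, not results from UNISIM-I-D.

```python
import numpy as np

# Hypothetical NPVs (USD million) of three candidate strategies evaluated on
# models grouped by the structural scenario the appraisal well would reveal.
# Rows: strategies; columns: scenarios. All values are illustrative.
npv = np.array([[420.0, 310.0, 150.0],
                [380.0, 360.0, 200.0],
                [300.0, 290.0, 260.0]])
p_scenario = np.array([0.5, 0.3, 0.2])    # prior probabilities of each scenario

# Without information: commit to the single strategy with the best expected NPV.
emv_without = (npv @ p_scenario).max()

# With information (assumed perfect here): pick the best strategy for each
# revealed scenario, then take the expectation over scenarios.
emv_with = (npv.max(axis=0) * p_scenario).sum()

evoi = emv_with - emv_without
print(f"EMV without info: {emv_without:.1f}, with info: {emv_with:.1f}, EVoI: {evoi:.1f}")
```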
Various physico-chemical processes affect Alkali Polymer (AP) Flooding. Core floods can be performed to determine ranges for the parameters used in the numerical models describing these processes. Because the parameters are uncertain, prior parameter ranges are introduced and then conditioned to observed data. It is challenging to determine posterior distributions of the various parameters, as they need to be consistent with the different sets of data that are observed (e.g. pressures, oil and water production, chemical concentration at the outlet).
Here, we apply Machine Learning in a Bayesian framework to condition parameter ranges to a multitude of observed data.
To generate the response of the parameters, we used a numerical model and applied Latin Hypercube Sampling (2000 simulation runs) from the prior parameter ranges.
To ensure that sufficient parameter combinations of the model comply with various observed data, Machine Learning can be applied. After defining multiple Objective Functions (OF) covering the different observed data (here six different Objective Functions), we used the Random Forest algorithm to generate statistical models for each of the Objective Functions.
Next, parameter combinations that lead to results outside the acceptance limit of the first Objective Function are rejected. Then resampling is performed and the next Objective Function is applied, until the last Objective Function is reached. To account for parameter interactions, the resulting parameter distributions are finally tested against the limits of all the Objective Functions.
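A compact sketch of this proxy-based sequential conditioning, under stated assumptions, is given below: the parameter dimension, the placeholder objective-function values (a synthetic function of the inputs rather than AP-flood simulation output), and the single acceptance limit shared by all six OFs are chosen only to make the example self-contained.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_params, n_runs, n_of = 8, 2000, 6

# Latin Hypercube design over normalised prior parameter ranges; the OF values
# below are a synthetic placeholder for the simulated AP-flood misfits.
X = qmc.LatinHypercube(d=n_params, seed=1).random(n_runs)
OF = np.abs(X - 0.5).mean(axis=1, keepdims=True) + 0.05 * rng.random((n_runs, n_of))

# One Random Forest proxy per Objective Function.
proxies = [RandomForestRegressor(n_estimators=100, random_state=0).fit(X, OF[:, j])
           for j in range(n_of)]

accept_limit = 0.2                              # illustrative acceptance threshold
candidates = rng.random((50_000, n_params))     # dense sample of the prior cube
for proxy in proxies:
    # Reject combinations outside the acceptance limit of this OF ...
    accepted = candidates[proxy.predict(candidates) <= accept_limit]
    # ... then resample within the conditioned ranges before the next OF.
    lo, hi = accepted.min(axis=0), accepted.max(axis=0)
    candidates = rng.uniform(lo, hi, size=(50_000, n_params))

# Final check against all OFs jointly, to account for parameter interactions.
keep = np.ones(len(candidates), dtype=bool)
for proxy in proxies:
    keep &= proxy.predict(candidates) <= accept_limit
posterior = candidates[keep]
print(len(posterior), posterior.min(axis=0).round(2), posterior.max(axis=0).round(2))
```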
The results show that posterior parameter distributions can be efficiently conditioned to the various sets of observed data. Insensitive parameter ranges are not modified, as they are not informed by the observed data. This is crucial because parameters that are insensitive during the history period could become sensitive in the forecast if the production mechanism is changed.
The workflow introduced here can be applied for conditioning parameter ranges of field (re-)development projects to various observed data as well.
Objectives/Scope: A stable, single-well deconvolution algorithm was introduced for well test analysis in the early 2000s; it allows information to be obtained about the reservoir system that is not always available from individual flow periods, for example the presence of heterogeneities and boundaries. One issue, recognised but largely ignored, is that of uncertainty in well test analysis results and non-uniqueness of the interpretation model. In a previous paper (SPE 164870), we assessed these with a Monte Carlo approach, in which multiple deconvolutions were performed over the ranges of expected uncertainties affecting the data (Monte Carlo deconvolution). Methods, Procedures, Process: In this paper, we use a nonlinear Bayesian regression model based on models of reservoir behaviour in order to make inferences about the interpretation model. This allows us to include uncertainty for the measurements, which are usually contaminated with large observational errors. We combine the likelihood with flexible probability distributions for the inputs (priors), and we use Markov Chain Monte Carlo algorithms to approximate the probability distribution of the result (posterior). Results, Observations, Conclusions: We validate and illustrate the use of the algorithm by applying it to the same synthetic and field data sets as in SPE 164870, using a variety of tools to summarise and visualise the posterior distribution and to carry out model selection. Novel/Additive Information: The approach used in this paper has several advantages over Monte Carlo deconvolution: (1) it gives access to meaningful system parameters associated with the flow behaviour in the reservoir; (2) it makes it possible to incorporate prior knowledge in order to exclude nonphysical results; and (3) it allows parameter uncertainty to be quantified in a principled way by exploiting the advantages of the Bayesian approach.
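As a deliberately simple stand-in for the nonlinear Bayesian regression described above, the sketch below fits a synthetic semilog (infinite-acting radial flow) drawdown with a random-walk Metropolis sampler and a prior that rejects nonphysical negative slopes. The model, priors, noise level and data are illustrative assumptions; the paper's reservoir-behaviour models and field data are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified infinite-acting radial-flow drawdown: dp = m * (log10(t) + c).
# This stands in for the reservoir-behaviour models used in the paper.
def drawdown(theta, t):
    m, c = theta
    return m * (np.log10(t) + c)

t = np.logspace(-2, 1, 40)                                        # hours
dp_obs = drawdown([12.0, 3.0], t) + rng.normal(0.0, 1.0, t.size)  # synthetic data, psi
sigma = 1.0

def log_post(theta):
    m, c = theta
    if m <= 0:                               # prior excludes nonphysical slopes
        return -np.inf
    resid = dp_obs - drawdown(theta, t)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * ((m - 10.0) / 5.0) ** 2 - 0.5 * ((c - 3.0) / 2.0) ** 2
    return log_lik + log_prior

# Random-walk Metropolis sampling of the posterior.
theta = np.array([10.0, 3.0])
lp = log_post(theta)
chain = []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, [0.2, 0.05])
    lp_prop = log_post(proposal)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = proposal, lp_prop
    chain.append(theta)
chain = np.array(chain)[5_000:]              # discard burn-in
print("posterior mean:", chain.mean(axis=0), "posterior std:", chain.std(axis=0))
```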
History matching field performance is a time-consuming, complex and non-unique inverse problem that yields multiple plausible solutions, owing to the inherent uncertainty associated with geological and flow modeling. History matching must be performed diligently, with the ultimate objective of providing reliable prediction tools for managing oil and gas assets. Our work capitalizes on the latest developments in ensemble Kalman techniques, namely the Ensemble Kalman Filter and Smoother (EnKF/S), to properly quantify and manage reservoir model uncertainty throughout the process of model calibration and history matching.
Sequential and iterative EnKF/S algorithms have been developed to overcome the shortcomings of existing methods, such as the lack of data assimilation capabilities and of means to quantify and manage uncertainties, in addition to the huge number of simulation runs required to complete a study. An initial ensemble of 40 to 50 equally probable reservoir models was generated with variable areal and vertical permeability and porosity. The initial ensemble captured the most influential reservoir properties, which are propagated and honored by the subsequent ensemble iterations. Data misfits between the field historical data and the simulated data are calculated for each realization of the reservoir model to quantify the impact of reservoir uncertainty and to make the necessary changes to the horizontal permeability, vertical permeability and porosity values for the next iteration. Each iteration of the optimization process reduces the data misfit compared to the previous one. The process continues until a satisfactory field-level and well-level history match is reached or no further improvement is achieved.
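For reference, a minimal sketch of the EnKF analysis step that performs this kind of ensemble update is shown below. The perturbed-observation form and the covariance algebra are standard; the array names and dimensions are illustrative and not taken from the study.

```python
import numpy as np

def enkf_update(M, d_obs, d_pred, C_d, rng):
    """One EnKF analysis step with perturbed observations.
    M:      (n_params, n_ens) ensemble of model parameters (e.g. log-perm, porosity)
    d_obs:  (n_data,) observed production data for this assimilation step
    d_pred: (n_data, n_ens) simulated data for each ensemble member
    C_d:    (n_data, n_data) measurement-error covariance
    """
    n_ens = M.shape[1]
    # Perturb the observations so the updated ensemble keeps the right spread.
    D = d_obs[:, None] + rng.multivariate_normal(np.zeros(len(d_obs)), C_d, n_ens).T
    dM = M - M.mean(axis=1, keepdims=True)
    dD = d_pred - d_pred.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)            # parameter-data cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)            # predicted-data covariance
    K = C_md @ np.linalg.inv(C_dd + C_d)      # Kalman gain
    return M + K @ (D - d_pred)               # updated ensemble

# Shapes-only example: 500 grid parameters, 40 ensemble members, 12 data points.
rng = np.random.default_rng(6)
M = rng.normal(size=(500, 40))
d_pred = rng.normal(size=(12, 40))
d_obs = rng.normal(size=12)
M_updated = enkf_update(M, d_obs, d_pred, 0.1 * np.eye(12), rng)
print(M_updated.shape)
```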
In this study, an application of EnKF/S is demonstrated for history matching of a faulted reservoir model under waterflooding conditions. The different implementations of EnKF/S were compared. EnKF/S preserved key geological features of the reservoir model throughout the history matching process. During this study, EnKF/S served as a bridge between classical control theory solutions and Bayesian probabilistic solutions of sequential inverse problems. EnKF/S methods demonstrated good tracking qualities while giving some estimate of uncertainty as well.
The updated reservoir properties (horizontal permeability, vertical permeability and porosity values) are conditioned throughout the EnKF/S processes (cycles), maintaining consistency with the initial geological understanding. The workflow resulted in enhanced history-match quality in a shorter turnaround time, with far fewer simulation runs than traditional genetic or evolutionary algorithms. The geological realism of the model is retained for robust prediction and development planning.
Oil and gas production applications demand high reliability, which implies low failure probability over the lifetime of the components. The service environments encompass high-pressure/high-temperature (HPHT) conditions on the inside and seawater with cathodic protection (CP) systems on the outside. New material combinations are being employed, including thermo-mechanically processed low alloy steels and corrosion resistant alloys (CRA). Additively manufactured (AM) alloys are also entering the market, creating further variability and directionality in properties. Material qualification involves two types of accelerated laboratory tests: standard and fit-for-purpose. However, the acceleration vectors used in these tests are usually not clearly identified. Relating materials qualification tests to service performance requires a systematic approach that considers the uncertainties in the acceleration vectors, such as loading conditions, environmental compositions, operating parameters, and material microstructures. In this paper, we describe the acceleration vectors typically used in qualification tests and a risk framework that may be used to relate test results to long-term service performance. The benefit of a risk-informed approach is not only the avoidance of unnecessary testing, but also the ability to invest testing resources where they attain the highest reliability.
Summary We propose a new three-step methodology to perform an automated mineralogical inversion from wellbore logs. The approach is derived from a Bayesian linear-regression model with no prior knowledge of the mineral composition of the rock. The first step makes use of approximate Bayesian computation (ABC) for each depth sample to evaluate all the possible mineral proportions that are consistent with the measured log responses. The second step gathers these candidates for a given stratum and computes, through a density-based clustering algorithm, the most probable mineralogical compositions. Finally, for each stratum and for the most probable combinations, a mineralogical inversion is performed with an associated confidence estimate. The advantage of this approach is that it explores all possible mineralogy hypotheses that match the wellbore data. This pipeline is tested on both synthetic and real data sets. Introduction One of the main goals of reservoir evaluation is the determination of petrophysical parameters such as porosity, permeability, or water saturation. To obtain an accurate estimate of these parameters, a complete characterization of the lithology, or the nature of the rocks, is necessary. The petrophysicist proceeds to the analysis of wellbore logs, which often requires input from an expert. Indeed, petrophysical inversion of wellbore logs yields a selection of minerals or fluids belonging to the formation, usually with more unknowns (the mineralogy) than measurements (the logs). In a bulk-density/neutron-porosity crossplot, an expert can identify the presence of gas, limestone, or an exotic mineral.
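A toy version of the first two steps (ABC rejection for one depth sample, followed by density-based clustering of the accepted compositions) might look like the sketch below. The end-member log responses, the two-log measurement, the flat Dirichlet prior and the DBSCAN settings are assumptions chosen only to keep the example self-contained; fluids and porosity are ignored.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Illustrative end-member log responses (rows: quartz, calcite, clay;
# columns: bulk density g/cm3, neutron porosity v/v). Values are assumptions.
endmembers = np.array([[2.65, -0.02],
                       [2.71,  0.00],
                       [2.50,  0.35]])
logs_measured = np.array([2.62, 0.08])     # one depth sample
log_scale = np.array([0.05, 0.05])         # per-log scaling for the ABC distance
tolerance = 0.3                            # ABC acceptance threshold (scaled units)

# ABC rejection: draw mineral fractions from a flat Dirichlet prior and keep
# those whose predicted logs (linear volumetric mixing) are close to the data.
candidates = rng.dirichlet(np.ones(3), size=50_000)
logs_pred = candidates @ endmembers
dist = np.sqrt((((logs_pred - logs_measured) / log_scale) ** 2).mean(axis=1))
accepted = candidates[dist < tolerance]

# Group the accepted proportions into the most probable mineralogical modes.
labels = DBSCAN(eps=0.05, min_samples=25).fit_predict(accepted)
for label in sorted(set(labels) - {-1}):
    print(label, accepted[labels == label].mean(axis=0).round(3))
```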
In this work, we evaluate different algorithms to account for model errors while estimating the model parameters, especially when the model discrepancy (used interchangeably with “model error”) is large. In addition, we introduce two new algorithms that are closely related to some of the published approaches under consideration. Considering all these algorithms, the first calibration approach (base case scenario) relies on Bayesian inversion using iterative ensemble smoothing with annealing schedules without any special treatment for the model error. In the second approach, the residual obtained after calibration is used to iteratively update the total error covariance combining the effects of both model errors and measurement errors. In the third approach, the principal component analysis (PCA)-based error model is used to represent the model discrepancy during history matching. This leads to a joint inverse problem in which both the model parameters and the parameters of a PCA-based error model are estimated. For the joint inversion within the Bayesian framework, prior distributions have to be defined for all the estimated parameters, and the prior distributions for the PCA-based error model parameters are generally hard to define. In this study, the prior statistics of the model discrepancy parameters are estimated using the outputs from pairs of high-fidelity and low-fidelity models generated from the prior realizations. The fourth approach is similar to the third approach; however, an additional covariance matrix of the difference between the PCA-based error model and the corresponding actual realizations of the prior error is added to the covariance matrix of the measurement error.
The first newly introduced algorithm (fifth approach) relies on building an orthonormal basis for the misfit component of the error model, which is obtained from the difference between the PCA-based error model and the corresponding actual realizations of the prior error. The misfit component of the error model is subtracted from the data residual (the difference between observations and model outputs) to eliminate the incorrect relative contribution to the prediction from the physical model and the error model. In the second newly introduced algorithm (sixth approach), we use the PCA-based error model as a physically motivated bias-correction term, together with an iterative update of the covariance matrix of the total error during history matching. All the algorithms are evaluated using three forecasting measures, and the results show that a good parameterization of the error model is needed to obtain a good estimate of the physical model parameters and to provide better predictions. In this study, the last three approaches (i.e., the fourth, fifth, and sixth) outperform the other methods in terms of the quality of the estimated model parameters and the prediction capability of the calibrated imperfect models.
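As a minimal illustration of the PCA-based error-model construction shared by the third to sixth approaches, the sketch below builds discrepancy realizations from paired synthetic high- and low-fidelity outputs, extracts a truncated PCA basis, and exposes the error model whose coefficients would be estimated jointly with the physical parameters. The data sizes and the 95% energy cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for paired outputs of high- and low-fidelity models run on the same
# prior realizations (n_realizations x n_data). All values are synthetic.
n_real, n_data = 100, 60
d_high = rng.normal(0.0, 1.0, (n_real, n_data)).cumsum(axis=1)
d_low = 0.8 * d_high + rng.normal(0.0, 0.3, (n_real, n_data))

# Prior realizations of the model discrepancy and their truncated PCA basis.
delta = d_high - d_low
delta_mean = delta.mean(axis=0)
U, s, Vt = np.linalg.svd(delta - delta_mean, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_pc = int(np.searchsorted(energy, 0.95)) + 1    # keep 95% of the variance
basis = Vt[:n_pc]                                # principal components
coeff_std = s[:n_pc] / np.sqrt(n_real - 1)       # prior std of the PCA coefficients

def error_model(coeffs):
    """PCA-based model-error term: mean discrepancy plus a linear combination of
    principal components; the coefficients would be estimated jointly with the
    physical parameters during history matching."""
    return delta_mean + coeffs @ basis

print(n_pc, error_model(rng.normal(0.0, coeff_std)).shape)
```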
A two-part theorem relating conditional probability to unconditional (prior) probability, used in value of information problems but also important to acknowledge when estimating probabilities for geologically dependent prospects.
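The theorem referred to here is Bayes' rule; in its usual two-part form (inverting the conditional probability, with the total-probability expansion in the denominator) it reads

$$P(A_i \mid B) \;=\; \frac{P(B \mid A_i)\,P(A_i)}{P(B)} \;=\; \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)},$$

where the A_j are mutually exclusive, exhaustive events (for example, geological scenarios for dependent prospects) and B is the new information.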
We propose a Bayesian estimator for real-time autonomous geosteering. The Bayesian geosteering tool is capable of simultaneously estimating the stratigraphic variables and the tool location. We use gamma-ray well-log measurements to perform the estimation. Given the prior information and measurements, the Bayesian estimator can rigorously compute the joint posterior probability density function of the stratigraphic and tool-location variables. Due to the inherent nonlinearity of the measurements and the non-Gaussianity of the random variables involved, we propose a sequential Monte Carlo filter for performing the inference. Unlike the widely used Kalman filter and its variants, the estimation performance of the sequential Monte Carlo estimator is not constrained by the nature of the dynamics, the measurement functions, or the type of uncertainties. The computational cost of the estimation is kept manageable by making a few simplifying assumptions. The estimation performance of the proposed sequential Monte Carlo based geosteering tool is demonstrated with a simulated example involving six formation tops. The performance is evaluated in terms of the ability of the estimator to accurately track the stratigraphic boundaries and predict the correct formations. The results show that the proposed Bayesian geosteering tool can predict the stratigraphic boundaries and the type of formation in which the tool is located in a probabilistically rigorous fashion.
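A bare-bones sketch of a sequential Monte Carlo (bootstrap particle) filter of the kind described above is shown below, reduced to tracking only the tool's depth within a fixed, assumed layer column from noisy gamma-ray readings. The layer tops, gamma-ray values, noise level and step length are illustrative assumptions; the full estimator in the paper also infers the stratigraphic variables themselves.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed layered gamma-ray column: layer tops (m TVD) and API values.
tops = np.array([0.0, 5.0, 9.0, 14.0])
gr_layers = np.array([40.0, 110.0, 60.0, 130.0])

def gamma_ray(tvd):
    """Piecewise-constant gamma-ray response of the assumed stratigraphy."""
    tvd = np.clip(tvd, 0.0, None)
    return gr_layers[np.searchsorted(tops, tvd, side="right") - 1]

sigma_gr = 8.0                                    # measurement noise, API
particles = rng.uniform(0.0, 14.0, 5_000)         # prior on tool TVD

def smc_step(particles, gr_measured, dz=0.2, process_noise=0.1):
    """One bootstrap particle-filter step: propagate, weight, resample."""
    particles = particles + dz + rng.normal(0.0, process_noise, particles.size)
    w = np.exp(-0.5 * ((gamma_ray(particles) - gr_measured) / sigma_gr) ** 2)
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]

# Assimilate synthetic measurements taken while drilling ahead.
true_tvd = 3.5
for _ in range(15):
    true_tvd += 0.2
    measurement = gamma_ray(true_tvd) + rng.normal(0.0, sigma_gr)
    particles = smc_step(particles, measurement)
print(f"posterior mean TVD: {particles.mean():.2f} m (true {true_tvd:.2f} m)")
```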