Eltahan, Esmail (The University of Texas) | Ganjdanesh, Reza (The University of Texas) | Yu, Wei (The University of Texas / SimTech LLC) | Sepehrnoori, Kamy (The University of Texas) | Drozd, Hunter (EP Energy) | Ambrose, Raymond (EP Energy)
The dynamic nature of unconventional-reservoir developments calls for fast and reliable history-matching methods for simulation models. Here, we apply an assisted-history-matching (AHM) approach to a pair of wells in the Wolfcamp B and C formations of the Midland Basin, for which production history is recorded for two periods: primary production and gas injection (Huff-n-Puff, or HNP). The recorded history of gas injection reveals severe inter-well interactions, underscoring the importance of fracture-interference modeling.
Fracture segments are modeled with an embedded discrete fracture model (EDFM). Inter-well communication is modeled using long fractures that only become active during gas injection. We apply a Bayesian AHM algorithm with a neural-network-proxy sampler to quantify uncertainty and find the best model matches. For each well, we use primary production observations to invert for 13 uncertain parameters that describe fracture properties, initial conditions, and relative permeability. Subsequently, by minimizing pressure- and rate-misfit errors during the HNP period, we evaluate the size and conductivity of inter-well fractures. For each AHM study, the objective is to minimize a cost function that is a linear combination of misfit errors between simulation results and observation data for well pressure and production rates of oil, water, and gas. We then use the selected solution samples to perform probabilistic forecasts and assess the potential of HNP enhanced oil recovery (EOR) in the area of interest.
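The cost function described above can be sketched in Python. The weights and quantity names below are illustrative assumptions; the abstract does not publish the exact weighting.

```python
import numpy as np

# Illustrative weights and quantity names (assumptions, not the paper's values)
WEIGHTS = {"pressure": 1.0, "oil_rate": 1.0, "water_rate": 0.5, "gas_rate": 0.5}

def misfit(sim, obs):
    """Normalized root-mean-square error between simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)) / (obs.max() - obs.min() + 1e-12))

def cost(sim_data, obs_data):
    """Linear combination of per-quantity misfit errors, as in the AHM objective."""
    return sum(w * misfit(sim_data[k], obs_data[k]) for k, w in WEIGHTS.items())
```

The AHM sampler would evaluate `cost` for each candidate parameter set and seek its minimum.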
From 1,400 total simulation runs, the AHM algorithm generated 100 cases (solutions) that satisfy the predefined selection criteria. Even though the parameter prior distributions were the same for the two wells, the marginal posteriors were dissimilar, and the relative permeability curves of the solution candidates can vary significantly from one another. The prospects of EOR proved promising for the wells of interest: we report 30% and 81% incremental recovery for the P50 predictions of wells BH and CH, respectively.
Exploitation of tight oil resources (formation permeability less than 0.1 mD) has been increasing as horizontal drilling and hydraulic fracturing technologies continue to improve. In 2018, 61% of total US crude oil production came from tight formations (EIA 2019). A typical tight oil well is completed over multiple stages, creating hundreds of fracture clusters along a horizontal wellbore that extends for thousands of feet. This completion forms a large network of fractures that connects the wellbore to a large surface area of the shale formation. Although the initial well productivity can be quite high, it typically declines very rapidly and remains low during long-term production. Pressure depletion occurs quickly because of the small permeability of tight pores. As a result, recovery factors during primary production are only in the range of 1 to 10% of the original oil in place (EIA 2013), leaving significant amounts of unrecovered hydrocarbon in the subsurface.
ABSTRACT: The aim of this article is to understand and improve the predictive power of our previously developed empirical model (Hettema et al., 2017) in order to better predict the seismicity of the Groningen gas reservoir. A specific characteristic of gas production from the Groningen field has been the large seasonal variation in production. Four issues are addressed: the effect of delay time, the effect of production fluctuations, the application of the model to regions of the field, and the causality between production and seismicity. The model predicts the inter-event volume, the product of inter-event time and production rate, which decreases with increasing production. The analysis of four classes of minimum magnitudes with different data densities requires a proper choice of the analysis window, sometimes requiring single-event analysis. The causality of the model is understood and allows determination of inter-event times. Furthermore, we conclude that the delay time is currently limited to days. Applying the model to different regions of the field gave reasonable descriptive results, but the applicability is still not proven.
With a GIIP of 2900 billion normal cubic meters, the Groningen field in The Netherlands is the largest onshore gas field in Europe. Continuous production since 1963 has led to induced seismicity starting in the early 1990s. Production measures aimed at lowering the level of seismicity have been implemented since 2014. Based on an empirical relationship between the cumulative number of seismic events and cumulative gas production, a basic empirical predictive model has been developed (Hettema et al. 2017). The two empirical parameters and the confidence interval of the prediction have been determined by analyzing the yearly ratio of seismic activity over production versus the cumulative production. Predictions have only been made one year ahead. The present article aims to further develop the model, improving its predictive power and control possibilities by increasing the confidence of the model. This requires four refinements/extensions: the effects of delay time and production fluctuations, application to regions of the field, and understanding the causality between production and seismicity. The effect of delay times between production and seismicity has been included by analyzing the pressure transient effects based on semi-steady-state diffusion times. Hettema et al. (2017) have shown that this delay time increases with increasing reservoir pressure depletion. A specific characteristic of gas production from the Groningen field has been the large seasonal variation in production. With demand being high in winter and low in summer, the field played the role of ‘swing producer’ for the Dutch gas market. Since the previous model relates the seismic activity rate to the volume produced, the production rates are averaged within the analyzed window. Consequently, the rate fluctuations and time delays are suppressed.
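The inter-event volume described above (inter-event time multiplied by production rate) can be computed directly as the volume produced between consecutive seismic events. A minimal sketch, assuming event times and a sampled rate history are given:

```python
import numpy as np

def inter_event_volumes(event_times, rate_times, rates):
    """Inter-event volume = volume produced between consecutive seismic events,
    i.e. the integral of the production rate over each inter-event interval.
    For a constant rate this reduces to inter-event time x production rate."""
    # cumulative produced volume via trapezoidal integration of the rate history
    cum = np.concatenate(
        ([0.0], np.cumsum(np.diff(rate_times) * 0.5 * (rates[:-1] + rates[1:])))
    )
    v_at_events = np.interp(event_times, rate_times, cum)
    return np.diff(v_at_events)
```

Because the integral averages the rate between events, short-term rate fluctuations and delays are suppressed exactly as noted in the text.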
The present model performs better than the basic empirical model in that the goodness-of-fit shows a better confidence interval. It is desirable to be able to apply the model to different parts of the field so as to optimize production control based on localized risk. Finally, we aim to understand the physical causality of the model by considering the material balance and the well production profiles of the depleting gas reservoir. With this new model we gain confidence that better seismicity-rate and inter-event-time predictions can be made, with the goal of enabling decision makers to make science-based decisions that optimize the safety of the people who live in the province of Groningen.
In many oil and gas fields and hydraulic-fracturing operations, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events. We conduct tests with synthetic events to validate the method, and then apply our Bayesian inversion approach to real induced seismicity events.
Presentation Date: Tuesday, September 26, 2017
Start Time: 2:15 PM
Presentation Type: ORAL
The interpretation of microseismic event distributions largely relies on the point distribution of the events in space and time. Although it is often attempted to minimize the individual location uncertainty of each event, the remaining uncertainty is ignored during the interpretation. The remaining location uncertainty leads to a random scatter that increases the apparent dimensions of the overall event distribution. Although in general it is not possible to determine the true location of an individual event, in cases where the uncertainty is truly random, the effect of the random scatter can be accounted for in the interpretation. One attempt would be to use event densities instead of individual events in the interpretation. But that does not account for different location uncertainties of the events and already assumes models that generate constant event densities. By replacing the event locations with their probability density functions, even non-linear uncertainty functions can be displayed. Once all information is represented by PDFs, Bayes theorem can be effectively used to integrate this information consistently. It also allows for the incorporation of subjective and objective information. The methodology can be applied to the estimation of realistic uncertainties for individual event locations as well as the interpretation of event distributions. In addition, the application of this methodology is not limited to the integration and interpretation of microseismic data but applicable to a wide range of information.
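The idea of replacing point locations with probability density functions and fusing them with Bayes' theorem can be illustrated in one dimension. The sketch below combines two hypothetical Gaussian information sources on a depth grid; the numbers are purely illustrative.

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# 1-D illustration: each information source is a PDF on a depth grid, and
# Bayes' theorem fuses them by pointwise multiplication plus renormalization.
z = np.linspace(0.0, 1000.0, 2001)            # depth grid in m (illustrative)
prior = gaussian(z, 500.0, 100.0)             # e.g. a geologic prior
likelihood = gaussian(z, 560.0, 50.0)         # e.g. an arrival-time constraint
posterior = prior * likelihood
posterior /= posterior.sum() * (z[1] - z[0])  # normalize so the PDF integrates to 1
z_map = z[np.argmax(posterior)]               # maximum a posteriori depth
```

Because the fusion is done on the gridded PDFs themselves, the same machinery handles non-Gaussian and non-linear uncertainty functions, which is the point made in the abstract.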
Microseismic monitoring has been used extensively to estimate the geometry and properties of hydraulically induced fracture networks. Almost always these estimates use the location of the microseismic events in space and time. Sometimes additional event attributes are used, e.g. magnitude, fault plane solution, moment tensor attributes, but the point location of the event almost always is a critical part in the interpretation (e.g. Cipolla et al., 2011).
It is well known that the accuracy of event locations and their interpretation greatly depends on the quality and accuracy of input parameters such as the signal-to-noise ratio of the microseismic signal and the applied velocity model (e.g. Maxwell, 2014, Zimmer, 2011a, Zimmer, 2011b, Poliannikov, 2011, Poliannikov, 2015). In addition, other parameters such as the quality of the calibration data, the amount of information on the velocity model, and the accuracy of the wellbore deviation surveys impact the achievable location accuracy. In most attempts at quantifying the location uncertainty, only a small number of the sources of uncertainty are accounted for. And even if the uncertainty is quantified, the impact is usually linearized, either explicitly or implicitly, through tools like principal component analysis. This results in the uncertainty being quantified as an ellipsoid with the event location at its center.
We study the problem of the moment tensor inversion of a double-couple microseismic source from observed S/P amplitude ratios. The emphasis of this work is on uncertainty quantification that includes the effect of the uncertain event location. We use a Bayesian approach to quantify the uncertainty of the fault plane solution. The posterior distribution is effectively calculated by sampling from the posterior distribution of the event location, and performing a moment-tensor inversion using individual samples. The uncertainty in the reconstructed moment tensor depends on the receiver geometry, signal noise, and the true moment tensor. After a suitable transformation of the input data, the problem can be reduced to a classical least-squares estimation problem.
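The least-squares inversion and its sampling-based uncertainty propagation can be sketched as follows. The `greens_for` callable is a hypothetical placeholder for the user's own forward operator (Green's-function derivatives for a candidate source location); the abstract does not specify its form.

```python
import numpy as np

rng = np.random.default_rng(0)

def invert_moment_tensor(G, d):
    """Least-squares estimate of the moment-tensor vector m from amplitude
    data d = G m + noise, where G (n_receivers x 6) holds the Green's-function
    derivatives for one candidate source location."""
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

def posterior_moment_tensors(location_samples, d, greens_for):
    """Propagate location uncertainty: repeat the inversion for samples drawn
    from the event-location posterior. `greens_for` is a hypothetical forward
    operator built from the user's velocity model."""
    return np.array([invert_moment_tensor(greens_for(x), d) for x in location_samples])
```

The spread of the resulting moment-tensor samples then reflects both the signal noise and the uncertain event location.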
Presentation Date: Tuesday, October 18, 2016
Start Time: 8:50:00 AM
Presentation Type: ORAL
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 175121, “Bayesian-Style History Matching: Another Way To Underestimate Forecast Uncertainty?” by Jeroen C. Vink, Guohua Gao, and Chaohui Chen, Shell, prepared for the 2015 SPE Annual Technical Conference and Exhibition, Houston, 28–30 September. The paper has not been peer reviewed.
Current theoretical formulations of assisted-history-matching (AHM) problems within the Bayesian framework [e.g., ensemble-Kalman-filter (EnKF) and randomized-maximum-likelihood (RML) problems] are typically based on the assumption that simulation models can reproduce field data accurately within the measurement error. However, this assumption does not hold for AHM problems of real assets. This paper critically investigates the impact of using realistic, inaccurate simulation models. In particular, it demonstrates the risk of underestimating uncertainty when conditioning real-life models to large numbers of field data.
Improved simulation and history-matching techniques have still not cured the chronic ailment of systematically underestimating uncertainty in forecast results. This is because it is very easy to forget or ignore uncertain factors or model assumptions; also, the method used to constrain a model to historical data may excessively reduce uncertainties.
In this paper, the authors highlight the fact that stochastic, Bayesian-style history-matching methods can very easily lead to unrealistically reduced uncertainty ranges in forecast results. The complete paper discusses Bayesian-style history matching and uncertainty quantification in detail.
Large-Data Danger
Stochastic history matching has two contrasting stages. First, one identifies a large number of uncertain model parameters in order to be confident that future field-production results will be contained within the range of forecast results associated with the prior model uncertainty. Second, when actual field data have become available, one uses large quantities of data to reduce this prior uncertainty. The reservoir engineer must decide not only which model parameters and field data to use, but also how many. The effect of increasing the number of uncertain model parameters is intuitively clear: Every parameter that has an impact on a specific forecast result will add to the uncertainty of this result.
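The mechanism behind the over-confidence warned about here can be illustrated with a textbook conjugate-Gaussian update (not the paper's own model): the posterior standard deviation of an estimated mean collapses as the data count grows, regardless of any unmodeled systematic error in the simulator.

```python
import numpy as np

def posterior_std(prior_std, noise_std, n_data):
    """Posterior std of a Gaussian mean after conditioning on n_data
    independent measurements, under the (often false) assumption that the
    model reproduces the data within the measurement error."""
    return 1.0 / np.sqrt(1.0 / prior_std**2 + n_data / noise_std**2)

# With thousands of data points the posterior collapses, whether or not the
# underlying model is actually capable of matching the data.
for n in (10, 1000, 100000):
    print(n, posterior_std(prior_std=1.0, noise_std=0.5, n_data=n))
```

If the simulation model is biased, this shrinking interval concentrates around the wrong value, which is precisely the risk of conditioning inaccurate models to large data volumes.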
We present a value of information analysis for MT data for locating high steam flow regions of a geothermal resource. The high electrical conductivity feature in volcanic geothermal settings, known as the clay cap, can be indicative of geothermal alteration occurring just above the resource. We demonstrate how two alternative interpretations of the clay cap from one 3D electrical conductivity model can be used to estimate the value of geophysical information. Our results indicate that the final VOI estimate depends on the different interpretations of the clay cap and the assigned prior probability of steam flow magnitude. Additionally, we demonstrate how these VOI evaluations can be used to guide future drilling locations.
How well does geophysical data improve our geothermal prospecting decisions? How much is this information worth? These types of questions can be addressed using the value of information (VOI) method. VOI quantifies how relevant any particular information source is, given a decision with an uncertain outcome; thus, the estimated VOI can be used to justify the purchase of additional data when exploring for geothermal resources. The contributions presented in this paper are twofold. First, our work illustrates the implementation of a VOI analysis that utilizes an existing dataset of steam flow measurements to deduce trends between steam flow and electrical conductivity. The second set of results demonstrates how the VOI evaluations can serve as a guide for deciding where to drill new production wells in undeveloped areas.
The Darajat Geothermal Field
Darajat is a vapor geothermal field located in West Java, Indonesia. First production from the field started in 1994, and additional capacity was added in 2000 and 2007, bringing the total production capacity to 271 MW from three power plants. Please refer to Rejeki et al. (2010) for geologic and modeling background. Specifically, we utilize two datasets from Darajat: steam flow rates and a 3D electrical conductivity model that has been constructed from MT (magnetotellurics) data. The steam flow measurements are the average production over one year for 27 different wells. Four of these wells were drilled near or outside the geothermal field and are characterized by production rates of <5 kg/s. The distribution of the data is plotted in Figure 1.
Yang, Chaodong (Computer Modelling Group Ltd) | Nghiem, Long (Computer Modelling Group Ltd) | Erdle, Jim (Computer Modelling Group Ltd) | Moinfar, Ali (Computer Modelling Group Ltd) | Fedutenko, Eugene (Computer Modelling Group Ltd) | Li, Heng (Computer Modelling Group Ltd) | Mirzabozorg, Arash (Computer Modelling Group Ltd) | Card, Colin (Computer Modelling Group Ltd)
Brown fields are fields with significant production history. Probabilistic forecasting for brown fields requires multiple history-matched models that are conditioned to available field production data. This paper presents a systematic and practical workflow to generate an ensemble of simulation models that is able to capture uncertainties in forecasts, while honoring the observed production data.
The proposed workflow employs the Bayes theorem to define a posterior Probability Density Function (PDF) that represents model forecast uncertainty by incorporating the misfit between simulation results and measured production data. Previous workflows use the Markov Chain Monte Carlo (MCMC) sampling method, which requires an extremely large number (thousands) of simulation runs. To alleviate this drawback, a Proxy-based Acceptance-Rejection (PAR) sampling method is developed in this study to generate representative simulation models that characterize the posterior PDF using hundreds of simulation runs. The proposed workflow can be summarized in five key steps:
1. Run an initial set of reservoir simulations by simultaneously varying multiple uncertain parameters using an experimental design method.
2. Construct a proxy function using a Radial-Basis-Function (RBF) neural network to approximate the posterior PDF calculated from the initial set of simulation results.
3. Sample the posterior PDF using the PAR sampling method and run an ensemble of new simulation models using the sampled values.
4. Add the new simulation results obtained in step 3 to the training dataset generated in step 1 to improve the accuracy of the proxy function. Repeat steps 3 and 4 until a predefined stopping criterion is satisfied.
5. Filter all simulation runs generated in step 3 using appropriate tolerance criteria for various history-match quality indicators. The selected cases constitute the final ensemble of simulation models, which can be further used for uncertainty quantification in forecasting.
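The acceptance-rejection step (step 3) can be sketched as follows. This is a simplified illustration, not the paper's implementation: it assumes a proxy that returns an unnormalized posterior-PDF estimate, draws candidates uniformly within the parameter bounds, and normalizes acceptance probabilities by the proxy maximum.

```python
import numpy as np

rng = np.random.default_rng(1)

def par_sample(proxy_pdf, bounds, n_candidates, n_keep):
    """Proxy-based Acceptance-Rejection (PAR) sketch: draw candidates
    uniformly within the parameter bounds, then accept each one with
    probability proportional to the proxy's posterior-PDF estimate."""
    lo, hi = bounds
    cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    p = np.array([proxy_pdf(c) for c in cand])
    accept = rng.uniform(size=n_candidates) < p / p.max()
    kept = cand[accept]
    return kept[:n_keep]  # these parameter sets are then run in the simulator
```

Because only the accepted candidates are passed to the full simulator, far fewer runs are needed than with MCMC, which evaluates the simulator at every chain step.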
The proposed workflow is demonstrated on two reservoir simulation models:
Synthetic case. The forecast uncertainty of the ninth SPE Comparative Solution Project (SPE9) simulation model is investigated with a synthetic production history and 32 uncertain parameters.
Unconventional oil field case. The workflow is applied to an Eagle Ford shale oil well to determine P90 (conservative), P50 (most likely), and P10 (optimistic) cases of estimated ultimate recovery (EUR).
Results of the workflow are compared to those obtained using the Metropolis-Hastings MCMC sampling method. The comparison shows that the proposed workflow only requires 800 simulation runs to obtain results as accurate as the MCMC method with 8000 simulation runs. This translates into a tenfold speedup, which makes the proposed workflow practical for many real reservoir simulation studies.
One of the most useful features of decision analysis is its ability to distinguish between constructive and wasteful information gathering. Value-of-Information (VOI) analysis evaluates the benefits of collecting additional information before making a decision.
VOI models describe the relationship between the uncertain quantities of interest, the reliability of the information, and the decision criteria. Many uncertain quantities are continuous in nature, and the probability distributions that describe their uncertainty are discretized and presented in decision trees to simplify analysis.
A three-point discretization of a continuous distribution is standard, and often preserves the main characteristics (central tendency, spread) of distributions that are close to symmetric. However, VOI studies rarely, if ever, include an analysis of the sensitivity of the VOI to the quality of the discretization. In this work we investigate a variety of discretization techniques for a range of typical information-gathering situations. The investigation utilizes a robust model that accurately calculates the VOI for any combination of continuous distributions. The key criterion for assessing a discretization technique is whether it has a significant impact on the decision to collect the information. The goal of the work is to provide practical guidance on the level and technique of discretization required.
Most of what petroleum engineers or geoscientists do involves “acquiring” information, with the aim of improving decision-making. “Information” is used here in a broad sense to cover such activities as acquiring data, performing technical studies, hiring consultants, or performing diagnostic tests. In fact, other than to meet applicable regulatory requirements, the main reason for collecting any information, or doing any technical analysis, should be to make better decisions. The fundamental question for any information-gathering process is then whether the likely improvement in decision-making is worth the cost of obtaining the information. This is the question that the VOI technique is designed to answer.
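The VOI calculation itself is a comparison of expected values with and without the information. A small worked example with hypothetical numbers (a two-state drilling decision and an imperfect test; none of these figures come from the paper):

```python
# Hypothetical two-state drilling decision: payoff 100 if the prospect is
# "good" (prior 0.3), -40 if "poor"; a test reports g/p with 80% reliability.
prior = {"good": 0.3, "poor": 0.7}
payoff = {"good": 100.0, "poor": -40.0}
rel = {("g", "good"): 0.8, ("g", "poor"): 0.2,   # P(signal | state)
       ("p", "good"): 0.2, ("p", "poor"): 0.8}

# Without the test: drill only if the prior expected value beats walking away.
ev_without = max(sum(prior[s] * payoff[s] for s in prior), 0.0)

# With the test: for each signal, update by Bayes' rule and take the best act.
ev_with = 0.0
for signal in ("g", "p"):
    p_signal = sum(rel[(signal, s)] * prior[s] for s in prior)
    ev_drill = sum(rel[(signal, s)] * prior[s] * payoff[s] for s in prior) / p_signal
    ev_with += p_signal * max(ev_drill, 0.0)

voi = ev_with - ev_without  # maximum justifiable price for the test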
The oil and gas literature includes a number of papers (e.g. Begg et al., 2002; Bratvold et al., 2008) and books (e.g. Bratvold and Begg, 2010; Newendorp and Schuyler, 2003) where VOI analysis is introduced and described in detail. It is common to use decision trees to structure and evaluate the VOI decision. If the underlying uncertainty is continuous in nature, a common discretization methodology such as Extended Swanson-Megill, Extended Pearson-Tukey, or McNamee-Celona is often used to discretize the underlying uncertainty into a few (usually two or three) degrees (Bickel et al. 2011). In general, the discrete probabilities and values are selected with the aim of matching the moments (mainly the mean and variance) of the continuous representation. In this paper we refer to this approach as the Low Resolution Decision Tree (LRDT) approach.
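The moment-matching idea behind these schemes can be checked directly. The sketch below compares the Extended Swanson-Megill and Extended Pearson-Tukey three-point discretizations of a lognormal against its exact mean; the weight/percentile pairs are the standard published ones, while the lognormal parameters are arbitrary choices for illustration.

```python
import numpy as np

# Standard-normal quantiles for the percentiles used by the two schemes
Z = {0.05: -1.6449, 0.10: -1.2816, 0.50: 0.0, 0.90: 1.2816, 0.95: 1.6449}

def lognormal_percentile(mu, sigma, p):
    return np.exp(mu + sigma * Z[p])

def discretized_mean(mu, sigma, percentiles, weights):
    return sum(w * lognormal_percentile(mu, sigma, p)
               for p, w in zip(percentiles, weights))

mu, sigma = 0.0, 0.6                      # arbitrary lognormal parameters
true_mean = np.exp(mu + sigma**2 / 2.0)   # exact lognormal mean
# Extended Swanson-Megill: weights 0.3/0.4/0.3 at P10/P50/P90
esm = discretized_mean(mu, sigma, (0.10, 0.50, 0.90), (0.3, 0.4, 0.3))
# Extended Pearson-Tukey: weights 0.185/0.63/0.185 at P5/P50/P95
ept = discretized_mean(mu, sigma, (0.05, 0.50, 0.95), (0.185, 0.63, 0.185))
```

For mildly skewed distributions both schemes approximate the mean well; the sensitivity of the VOI itself to this choice is exactly what the paper investigates.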
A few papers discuss the calculation of VOI when the uncertain event of interest is continuous (Chavez and Henrion, 2004; Arild, Lohne, and Bratvold, 2008; Bickel, 2012). However, the VOI calculation approach in these papers is presented using different terminology than what is used in the LRDT approach. Furthermore, the papers are either limited to relatively simple problems or have very different representations of the reliability or quality of the information gathered. Despite the fact that the LRDT approach can have significant errors, VOI calculations are rarely conducted using the full continuous representation of the uncertain event of interest. We suspect that the reason lies in the issues discussed above.
Precise prediction of rock facies leads to adequate reservoir characterization by improving the porosity-permeability relationships used to estimate properties in non-cored intervals. It also helps to accurately identify the spatial facies distribution, supporting an accurate reservoir model for optimal future reservoir performance.
In this paper, facies estimation has been performed using multinomial logistic regression (MLR) with respect to the well logs and core data in a well in the upper sandstone formation of the South Rumaila oil field. The independent variables are gamma ray, formation density, water saturation, shale volume, log porosity, core porosity, and core permeability.
Firstly, the Robust Sequential Imputation Algorithm has been used to impute the missing data. This algorithm starts from a complete subset of the dataset and sequentially estimates the missing values in an incomplete observation by minimizing the determinant of the covariance of the augmented data matrix. The completed observation is then added to the complete data matrix, and the algorithm continues with the next observation with missing values.
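One imputation step can be sketched using the standard result that the conditional (regression) mean of the missing entries, given the observed ones, minimizes the determinant of the augmented covariance matrix. This is a simplified, non-robust illustration of that step; the full algorithm would loop it over all incomplete observations, appending each completed row to the complete set.

```python
import numpy as np

def impute_one(complete, row):
    """Fill the NaNs in `row` with the conditional mean of the missing
    entries given the observed ones, computed from the mean and covariance
    of the currently complete data matrix. This choice minimizes the
    determinant of the covariance of the augmented data matrix."""
    miss = np.isnan(row)
    mu = complete.mean(axis=0)
    S = np.cov(complete, rowvar=False)
    Soo = S[np.ix_(~miss, ~miss)]          # observed-observed block
    Smo = S[np.ix_(miss, ~miss)]           # missing-observed block
    filled = row.copy()
    filled[miss] = mu[miss] + Smo @ np.linalg.solve(Soo, row[~miss] - mu[~miss])
    return filled
```

The robust variant in the literature replaces the classical mean and covariance with outlier-resistant estimates, which this sketch omits for brevity.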
The MLR has been chosen to maximize the likelihood and minimize the standard error for the nonlinear relationships between facies and the core and log data. The MLR predicts the probabilities of the different possible facies given the independent variables by constructing a linear predictor function with a set of weights that are linearly combined with the independent variables via a dot product.
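The linear predictor and dot product described above amount to a softmax over per-class scores. A minimal sketch (the variable shapes and names are illustrative, not taken from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def facies_probabilities(X, W, b):
    """Multinomial logistic regression: the linear predictor is the dot
    product of the log/core variables X (n x p) with class weights W (p x k)
    plus bias b; softmax maps scores to probabilities over the k facies."""
    return softmax(X @ W + b)
```

The weights `W` and `b` would be fitted by maximum likelihood on the cored intervals, and the fitted model then assigns facies probabilities in non-cored intervals.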
A beta distribution of facies has been considered as prior knowledge, and the resulting predicted probability (posterior) has been estimated from the MLR based on Bayes' theorem, which relates the predicted probability (posterior) to the conditional probability and the prior knowledge. To assess the statistical accuracy of the model, the bootstrap is carried out to estimate the extra-sample prediction error by randomly drawing datasets with replacement from the training data. Each sample has the same size as the original training set, and the procedure can be repeated N times to produce N bootstrap datasets on which the model is re-fitted, decreasing the squared difference between the estimated and observed categorical variables (facies) and thereby reducing the degree of uncertainty.
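The bootstrap procedure described above can be sketched generically; `fit` and `score` below are user-supplied placeholders standing in for the MLR fitting routine and a prediction-error metric.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_error(X, y, fit, score, n_boot=200):
    """Draw n_boot resamples of the training set (with replacement, same
    size), refit the model on each resample, and score it on the original
    data to estimate the extra-sample prediction error."""
    n = len(y)
    errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # bootstrap indices
        model = fit(X[idx], y[idx])
        errors.append(score(model, X, y))
    return float(np.mean(errors))
```

The spread of the per-resample scores, not just their mean, is what quantifies the uncertainty of the facies model.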