ABSTRACT The industry is facing significant challenges due to the recent downturn in oil prices, particularly in the development of tight reservoirs. It is more critical than ever to 1) identify sweet spots with less uncertainty and 2) optimize completion-design parameters. The overall objective of this study is to quantify and compare the effects of reservoir quality and completion intensity on well productivity. We developed a supervised fuzzy clustering (SFC) algorithm to rank reservoir quality and completion intensity and to analyze their relative impacts on well productivity. We collected reservoir properties and completion-design parameters for 1,784 horizontal oil and gas wells completed in the Western Canadian Sedimentary Basin. Then, we used SFC to classify 1) reservoir quality, represented by porosity, hydrocarbon saturation, net pay thickness, and initial reservoir pressure; and 2) completion intensity, represented by proppant concentration, number of stages, and injected water volume per stage. Finally, we investigated the relative impacts of reservoir quality and completion intensity on well productivity in terms of first-year cumulative barrels of oil equivalent (BOE). The results show that in low-quality reservoirs, well productivity follows reservoir quality. In high-quality reservoirs, however, the role of completion design becomes significant, and productivity can be hindered by an inefficient completion design. The results suggest that in low-quality reservoirs productivity can be enhanced with a less intense completion design, while in high-quality reservoirs a more intense completion significantly enhances productivity. Keywords Reservoir quality; completion intensity; supervised fuzzy clustering; approximate reasoning; tight reservoir development
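The SFC algorithm itself is not specified in the abstract; as a loose illustration of the fuzzy clustering idea behind it, the sketch below ranks wells by reservoir quality using scikit-fuzzy's unsupervised fuzzy c-means, a stand-in rather than the authors' supervised method. The feature matrix, two-cluster setup, and scoring rule are all assumptions.

```python
# Illustrative only: unsupervised fuzzy c-means as a stand-in for the
# paper's supervised fuzzy clustering (SFC); setup and names are assumed.
import numpy as np
import skfuzzy as fuzz

rng = np.random.default_rng(0)
# Hypothetical feature matrix: porosity, HC saturation, net pay, pressure
X = rng.random((1784, 4))                   # rows = wells, cols = properties
Xn = (X - X.mean(axis=0)) / X.std(axis=0)   # normalize each property

# skfuzzy expects features x samples; cluster into "low"/"high" quality
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    Xn.T, c=2, m=2.0, error=1e-5, maxiter=1000, seed=0)

# All four properties are "more is better", so the cluster whose centroid
# sums highest is treated as the high-quality cluster; membership in it
# serves as a soft quality score in [0, 1] for each well.
high = int(np.argmax(cntr.sum(axis=1)))
quality_score = u[high]
print(quality_score[:5], "partition coefficient:", fpc)
```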
Aslam, Usman (Emerson Automation Solutions) | Burgos, Jorge (Occidental Oil and Gas) | Williams, Craig (Occidental Oil and Gas) | McCloskey, Shawn (Occidental Oil and Gas) | Cooper, James (Occidental Oil and Gas) | Mirzaei, Mohammad (Occidental Oil and Gas) | Briz, Eduardo (Occidental Oil and Gas)
Abstract Reservoir production forecasts are inherently uncertain due to the lack of quality data available to build predictive reservoir models. Multiple data types, including historical production, well tests (RFT/PLT), and time-lapse seismic data, are assimilated into reservoir models during the history matching process to improve the predictability of the model. Traditionally, a ‘best estimate’ for relative permeability data is assumed during the history matching process, despite there being significant uncertainty in the relative permeability. Relative permeability governs multiphase flow in the reservoir; it is therefore of significant importance for understanding reservoir behavior as well as for model calibration and hence for reliable production forecasts. Performing sensitivities around the ‘best estimate’ relative permeability case will cover only part of the uncertainty space, with no indication of the confidence that may be placed on these forecasts. In this paper, we present an application of a Bayesian framework for uncertainty assessment and efficient history matching of a Permian CO2 EOR field for reliable production forecasting. The study field has complex geology with over 65 years of historical data from primary recovery, waterflood, and CO2 injection. Relative permeability data from the field showed significant uncertainty, so we used uncertainties in the saturation endpoints as well as in the curvature of the relative permeability in multiple zones, employing generalized Corey functions for relative permeability parameterization. Uncertainty in the relative permeability is propagated through a common platform integrator. An automated workflow generates the first set of relative permeability curves sampled from the prior distribution of saturation endpoints and Corey exponents, called ‘scoping runs’. These relative permeability curves are then passed to the reservoir simulator. The assumptions of uncertainties in the relative permeability data and other dynamic parameters are quickly validated by comparing the scoping runs against historical observations. By creating a mismatch or likelihood function, the Bayesian framework generates an ensemble of history matched models calibrated to the production data, which can then be used for reliable probabilistic forecasting. Several iterations during the manual history match did not yield an acceptable solution because uncertainty in the relative permeability was ignored. An application of Bayesian inference accelerated by a proxy model found the relative permeability data to be one of the most influential parameters during the assisted history matching exercise. Incorporating the uncertainty in relative permeability data along with other dynamic parameters not only helped speed up the model calibration process but also led to the identification of multiple history matched models. In addition, results show that the use of the Bayesian framework significantly reduced uncertainty in the most important dynamic parameters. The proposed approach allows incorporating previously ignored uncertainty in the relative permeability data in a systematic manner. The user-defined mismatch function increases the likelihood of obtaining an acceptable match, and the weights in the mismatch function account for both the measurement uncertainty and the effect of simulation-model inaccuracies. The Bayesian framework considers the whole uncertainty space and not just the history match region, leading to the identification of multiple history matched models.
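The generalized Corey parameterization named above is a standard form; the following sketch shows how 'scoping run' relative permeability curves might be sampled from priors on saturation endpoints and Corey exponents. The prior ranges and function names are illustrative assumptions, not the study's actual values.

```python
# Sketch: sampling generalized Corey relative permeability curves from
# assumed priors (ranges are illustrative, not the study's actual values).
import numpy as np

rng = np.random.default_rng(42)

def corey_curves(swc, sorw, krw_max, kro_max, nw, no, n=50):
    """Water/oil relative permeability from a generalized Corey model."""
    sw = np.linspace(swc, 1.0 - sorw, n)
    swn = (sw - swc) / (1.0 - swc - sorw)    # normalized water saturation
    krw = krw_max * swn ** nw
    kro = kro_max * (1.0 - swn) ** no
    return sw, krw, kro

# 'Scoping runs': draw an ensemble of curves from the assumed prior
ensemble = []
for _ in range(100):
    params = dict(
        swc=rng.uniform(0.10, 0.25),     # connate water saturation
        sorw=rng.uniform(0.15, 0.35),    # residual oil saturation
        krw_max=rng.uniform(0.2, 0.6),   # water endpoint
        kro_max=rng.uniform(0.6, 1.0),   # oil endpoint
        nw=rng.uniform(1.5, 4.0),        # water Corey exponent
        no=rng.uniform(1.5, 4.0),        # oil Corey exponent
    )
    ensemble.append(corey_curves(**params))
```

Each sampled curve set would then be passed to the reservoir simulator, and the mismatch against historical observations used to weight the ensemble.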
Abstract Leveraging publicly available data is a crucial step for decision making around investing in the development of any new unconventional asset. Published reports of production performance, along with accurate petrophysical and geological characterization of the areas, help operators to evaluate the economics and risk profiles of new opportunities. A data-driven workflow can facilitate this process and make it less biased by enabling agnostic analysis of the data as the first step. In this work, several machine learning algorithms are briefly explained and compared in terms of their application in the development of a production evaluation tool for a target reservoir. Random forest, selected after evaluating several models, is deployed as a predictive model that incorporates geological characterization and petrophysical data along with production metrics into the production performance assessment workflow. Considering the influence of completion design parameters on well production performance, this workflow also facilitates evaluation of several completion strategies to improve decision making around the best-performing completion size. Data used in this study include petrophysical parameters collected from publicly available core data, completion and production metrics, and the geological characteristics of the Niobrara formation in the Powder River Basin. Historical periodic production data are used as indicators of the productivity of a certain area in the data-driven model. This model, after training and evaluation, is deployed to predict the productivity of non-producing regions within the area of interest to help with selecting the most prolific sections for drilling future wells. Tornado plots are provided to demonstrate the key performance drivers in each focused area. A supervised fuzzy clustering model is also utilized to automate the rock quality analyses for identifying the "sweet spots" in a reservoir. The output of this model is a sweet-spot map that is generated by evaluating multiple reservoir rock properties spatially. This map assists with combining all the different reservoir rock properties into a single exhibition that indicates the average "reservoir quality" of the formation in different areas. The Niobrara shale is used as a case study in this work to demonstrate how the proposed workflow is applied to a selected reservoir formation with enough historical production data available.
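As a rough illustration of the kind of random forest productivity model described, the sketch below trains scikit-learn's RandomForestRegressor on synthetic placeholder features and reads off feature importances, the quantity a tornado plot would rank. The feature names and data are assumptions, not the study's actual Niobrara dataset.

```python
# Sketch of a random forest productivity model; feature names and data
# are placeholders, not the study's actual Niobrara dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
# Hypothetical inputs: petrophysics + completion metrics per well section
X = rng.random((500, 5))   # e.g. porosity, TOC, thickness, proppant, stages
y = X @ np.array([3.0, 2.0, 1.5, 1.0, 0.5]) + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, model.predict(X_te)))

# Feature importances drive tornado-plot style ranking of performance drivers
print(dict(zip(["phi", "toc", "h", "proppant", "stages"],
               model.feature_importances_.round(3))))
```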
Soares, Ricardo Vasconcellos (NORCE Norwegian Research Centre and University of Bergen; corresponding author) | Luo, Xiaodong (NORCE Norwegian Research Centre) | Evensen, Geir (NORCE Norwegian Research Centre) | Bhakta, Tuhin (NORCE Norwegian Research Centre and Nansen Environmental and Remote Sensing Center (NERSC))
Summary In applications of ensemble-based history matching, it is common to conduct Kalman gain or covariance localization to mitigate spurious correlations and excessive variability reduction resulting from the use of relatively small ensembles. An alternative strategy, not very well explored in reservoir applications, is to apply a local analysis scheme, which consists of defining smaller groups of local model variables and observed data (observations) and performing history matching within each group individually. This work aims to demonstrate the practical advantages of a new local analysis scheme over Kalman gain localization in a 4D seismic history-matching problem that involves big seismic data sets. In the proposed local analysis scheme, we use a correlation-based adaptive data-selection strategy to choose observations for the update of each group of local model variables. Compared to the Kalman gain localization scheme, the proposed local analysis scheme has improved capacity for handling big models and big data sets, especially in terms of the computer memory required to store the relevant matrices involved in ensemble-based history-matching algorithms. In addition, we show that despite the higher computational cost of the model update per iteration step, the proposed local analysis scheme makes the ensemble-based history-matching algorithm converge faster, reaching the same level of data mismatch at a faster pace. Meanwhile, with the same number of iteration steps, the ensemble-based history-matching algorithm equipped with the proposed local analysis scheme tends to yield better-quality estimated reservoir models than it does with a Kalman gain localization scheme. As such, the proposed adaptive local analysis scheme has the potential to facilitate wider applications of ensemble-based algorithms to practical large-scale history-matching problems.
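A minimal numpy sketch of the correlation-based adaptive data-selection idea, under assumed shapes and thresholds: for each group of local model variables, only observations whose ensemble correlation with the group exceeds a sampling-noise level are retained for the local update. The 3/sqrt(Ne) threshold rule below is an illustrative choice, not necessarily the authors' criterion.

```python
# Sketch: correlation-based selection of observations for local analysis.
# Threshold rule and array shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
ne, nm, nd = 100, 50, 2000          # ensemble size, local params, observations
M = rng.normal(size=(ne, nm))       # ensemble of one local parameter group
D = rng.normal(size=(ne, nd))       # ensemble of predicted observations

# Sample correlation between each local variable and each observation
Mc = (M - M.mean(0)) / M.std(0)
Dc = (D - D.mean(0)) / D.std(0)
corr = Mc.T @ Dc / ne               # (nm, nd) correlation matrix

# Keep observations whose max |correlation| with the group exceeds the
# sampling-noise level ~ 1/sqrt(ne) (scaled by an assumed factor of 3)
strength = np.abs(corr).max(axis=0)
selected = np.where(strength > 3.0 / np.sqrt(ne))[0]
print(f"{selected.size} of {nd} observations retained for this group")
```

Because each local update only ever touches the retained observations, the full cross-covariance matrix never has to be stored, which is the memory advantage the summary describes.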
Abstract In the event of an offshore oilfield blow-out, real-time quantification of the spilled volume, recovered oil, and environmental damage is essential because the recovery and restoration process is costly. To develop a robust and accurate quantification, we need to consider numerous parameters, which are sometimes tricky to identify and capture. In this work, we present a new modeling technique under uncertainty that accommodates numerous parameters and the interactions among them. We begin the model by identifying the parameters that contribute to the process, grouped into (1) subsurface, e.g., blow-out rate, oil characteristics, and reservoir characteristics; (2) surface, e.g., ocean current speed, meteorological aspects, and soil properties; and (3) operations, e.g., oil-spill treatment (oil booms and skimmers). We assign a prior distribution to each parameter based on available data to capture the uncertainties. Before progressing to uncertainty propagation, we construct the objective response (amount of recovered oil) through a mass conservation equation in a data-driven and non-intrusive way, using design of experiments and a regression-based method. We propagate uncertainties using a Monte Carlo simulation approach, where the result is presented in distribution form, summarized by P10, P50, and P90 values. This work shows how to robustly calculate the amount of recovered oil under uncertainty in the event of an offshore blow-out. There are several notable challenges within the approach: 1) determining the uncertainty range of the blow-out rate in case a rupture occurs in the well, 2) obtaining data for wind and ocean current speed, since there is an interplay between local and global climate, and 3) accurately capturing the shoreline geometry. Despite the challenges, the results are in line with the physics and several recorded blow-out cases. Through sensitivity analysis with Sobol decomposition, we identify the heavy hitters; the blow-out rate shows the highest sensitivity. These heavy hitters tell us which parameters we should be most aware of. In real-time quantification, this analysis can provide insight into which treatment method should be performed to recover the spill efficiently. We also highlight the sufficiency of the model for obtaining several parameters' ranges, for example the blow-out rate. The model should at least capture the physics in high detail and incorporate multiple scenarios. In the case of the blow-out rate, we extensively model the well completion and consider leaking due to unprecedented fractures or crater formation around the wellbore. We introduce a new framework of modeling to perform real-time quantification of offshore oil spills. This framework allows inferring the causality of the process and illustrating the risk level.
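As a worked illustration of the Monte Carlo propagation step, the sketch below pushes assumed parameter priors through a placeholder regression surrogate and summarizes recovered oil by P10/P50/P90. All distributions and the surrogate form are hypothetical, not the paper's fitted model.

```python
# Sketch: Monte Carlo propagation of input uncertainty to recovered-oil
# volume via a regression-based surrogate; the surrogate form and the
# parameter priors below are placeholders, not the paper's fitted model.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
blowout_rate = rng.lognormal(mean=8.0, sigma=0.5, size=n)   # bbl/day, assumed
current_speed = rng.uniform(0.1, 1.5, size=n)               # m/s, assumed
skimmer_eff = rng.beta(4, 2, size=n)                        # fraction, assumed

# Placeholder surrogate from a design of experiments (illustrative form)
recovered = skimmer_eff * blowout_rate * (1.0 - 0.3 * current_speed)

p10, p50, p90 = np.percentile(recovered, [10, 50, 90])
print(f"P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f} bbl/day recovered")
```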
Sajjad, Farasdaq (Pertamina Hulu Energi) | Jaenudin, Jemi (Pertamina Hulu Energi) | Chandra, Steven (Institut Teknologi Bandung) | Wirawan, Alvin (Pertamina Hulu Energi) | Prawesti, Annisa (Pertamina Hulu Energi) | Muksin, M. Gemareksha (Pertamina Hulu Energi) | Nugroho, Wisnu Agus (Pertamina Hulu Energi) | Mujib, Ecep Muhammad (Pertamina Hulu Energi) | Naja, Savinatun (Pertamina Hulu Energi)
Abstract Optimizing multiple assets under uncertain techno-economic conditions and tight government policies is challenging. The operator needs to establish flexible Plans of Development (PODs) and prioritize the development of multiple fields. The complexity of production and the profit margin should be evaluated simultaneously. In this work, we present a new workflow to perform such a rigorous optimization under uncertainty using the case study of PHE ONWJ, Indonesia. We begin the workflow by identifying the uncertain parameters and their prior distributions. We classify the parameters into three main groups: operations-related (geological complexity, reserves, current recovery, surface facilities, and technologies), company-policy-related (future exploration plan, margin of profit, and oil/gas price), and government-related (taxes, incentives, and fiscal policies). A unique indexing technique is developed to allow numerical quantification and adapt to dynamic input. We then start the optimization process by constructing a time-dependent surrogate model through training with Monte Carlo sampling. We then perform optimization under uncertainty with multiple scenarios. The objective function is the overall Net Present Value (NPV) obtained by developing multiple fields. This work emphasizes the importance of the time-dependent surrogate approach to account for risk in the optimization process. The approach revises the prior distribution into a narrow-variance posterior distribution to support reliable decisions. Global Sensitivity Analysis (GSA) with Sobol decomposition on the posterior distribution and surrogate provides the parameters' ranking and a list of heavy hitters. The first output from this workflow is the narrow-variance posterior distribution. This result helps to locate the sweet spots. By analyzing them, the operator can address specific sectors that are critical to the NPV. PHE ONWJ, as the biggest operator in Indonesia, has geologically scattered assets; therefore, this first output is essential. The second output is the list of heavy hitters from the GSA. This list is a tool to cluster promising fields for future development and prioritize their development based on their impact on NPV. Since all risks are carried by the operator under the current Gross Split Contract, this result is advantageous for the decision-making process. We introduce a new approach to perform time-dependent, multi-asset optimization under uncertainty. This new workflow helps operators make robust decisions after considering the associated risks.
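The GSA step could look something like the following sketch, which uses the SALib library (an assumption; the paper does not name its tooling) to compute Sobol indices of a placeholder NPV surrogate. The parameter names, bounds, and surrogate form are illustrative.

```python
# Sketch: Sobol decomposition for ranking NPV "heavy hitters" with SALib;
# the NPV surrogate and parameter bounds are illustrative placeholders.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["oil_price", "recovery", "tax_rate"],   # assumed parameters
    "bounds": [[40, 90], [0.2, 0.5], [0.3, 0.6]],
}

X = saltelli.sample(problem, 1024)

def npv_surrogate(x):
    """Placeholder time-dependent surrogate collapsed to a scalar NPV."""
    price, rec, tax = x
    return price * rec * (1.0 - tax) * 100.0   # illustrative only

Y = np.apply_along_axis(npv_surrogate, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order={s1:.2f}, total={st:.2f}")
```

Parameters with the largest total-order indices are the heavy hitters that drive the field-prioritization step described above.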
Naufal, Ahmad Naufal (PETRONAS CARIGALI SDN BHD) | Samy, Samy Abdelhamid (PETRONAS CARIGALI SDN BHD) | Nenisurya, Nenisurya Hashim (PETRONAS CARIGALI SDN BHD) | Zaharuddin, Zaharuddin Muhammad (PETRONAS CARIGALI SDN BHD) | Eddy, Eddy Damsuri (PETRONAS CARIGALI SDN BHD) | Amir, Amir Ali (PETRONAS CARIGALI SDN BHD) | Hilmi, Mohd Hilmi (Universiti Teknologi PETRONAS) | Izzatdin, Izzatdin A (Universiti Teknologi PETRONAS) | Jafreezal, Jafreezal Jaafar (Universiti Teknologi PETRONAS) | Norshakirah, Norshakirah A (Universiti Teknologi PETRONAS) | Emelia, Emelia Akashah (Universiti Teknologi PETRONAS) | Amirul, Ku Amirul (Universiti Teknologi PETRONAS) | Hajar, Hajar M (Universiti Teknologi PETRONAS) | Wahyu, Ade Wahyu (Universiti Teknologi PETRONAS) | Syakirah, Nur Syakirah (Universiti Teknologi PETRONAS)
Abstract Equipment failure, unplanned downtime, and environmental damage costs represent critical challenges across the oil and gas business, from well and reservoir identification and drilling strategy to production and processing. Identifying and managing the risks around assets that could fail and cause redundant and expensive downtime is at the core of plant reliability in the oil and gas industry. In the current digital era, there is an essential need for innovative data-driven solutions to address these challenges: monitoring and diagnosing plant equipment operations, recognizing impending equipment failures, avoiding unplanned downtime, repair costs, and potential environmental damage, and maintaining reliable production. Machine learning and artificial intelligence applications are being studied to develop predictive maintenance (PdM) models as an innovative analytics solution based on real-time data streaming, reaching an elevated level of situational intelligence that guides actions and provides early warnings of impending asset failures that previously remained undetected. This paper proposes novel machine learning predictive models based on extreme learning machines and support vector machines (ELM-SVM) to predict the time to failure (TTF) and when a piece of plant equipment will fail, so that maintenance can be planned well ahead of time to minimize disruption. Proper visualization with deep insights (training and validation processes) of the available mountains of historian and real-time data is carried out. Comparative studies of the ELM-SVM techniques versus the most common physical-statistical regression techniques are performed using available rotating-equipment (compressor) and time-to-failure-mode data. The results are promising, showing that the new machine learning (ELM-SVM) techniques outperform the physical-statistical techniques with reliable and highly accurate predictions, which could have a high impact on the future ROI of the oil and gas industry.
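A minimal sketch of the SVM half of the ELM-SVM approach, assuming scikit-learn's SVR on synthetic compressor features; the feature set, data, and hyperparameters are placeholders, and ELM is omitted since it has no standard scikit-learn implementation.

```python
# Sketch: support vector regression for time-to-failure (TTF) prediction;
# sensor features and data are synthetic placeholders, not field data.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
# Hypothetical compressor features: vibration, temperature, discharge pressure
X = rng.random((800, 3))
ttf_days = 200.0 * np.exp(-2.0 * X[:, 0]) + 30.0 * (1 - X[:, 1]) \
           + rng.normal(0, 5, 800)                # synthetic TTF target

X_tr, X_te, y_tr, y_te = train_test_split(X, ttf_days, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("Test R^2:", model.score(X_te, y_te))
```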
Abstract Hydrogen Sulphide (H2S) is a colourless, flammable and highly toxic gas with a strong odour of rotten eggs that is found in many reservoir fluids and aquifers around the world. This gas is commonly a result of "reservoir souring" – a process which increases the H2S concentration. Increasing amounts of this gas pose serious health, safety and environmental concerns. This can result in significant costs associated with the replacement of downhole and surface equipment and increased processing costs, but more lethally a potential loss of life. Many reservoirs, particularly those undergoing waterflooding, face increasing levels of H2S production with time. H2S can be fatal even at low concentrations, so being able to predict the risk potential of a particular reservoir for increasing H2S production with time would be highly valuable. The objective is to determine a priori whether a reservoir is likely to see dangerously high levels of H2S being produced during its lifetime, and if so, to be a catalyst in supporting further investigation and mitigation of H2S early in the reservoir development. There is very little published field data with regard to reservoir souring, hence a purely data-driven model would not be possible to create. However, we do have a good understanding of the reaction kinetics behind the biological process that generates H2S. To this end, the best modelling paradigm that can assimilate sparse data with first-principles dynamics is fuzzy logic. A fuzzy logic model has been built around the reaction kinetics and then conditioned to the published field data. The model matches the published field data fairly well. It is now a ready tool that engineers can use to make a quick assessment of their reservoirs before going into full-blown, expensive sampling and laboratory analysis. The novel aspect of this paper is the use of fuzzy logic to combine first-principles chemistry with sparse data to produce a model that can be used practically. Fuzzy logic has been out of the news of late, as machine learning and neural networks are the current hot topics; however, it is often overlooked that fuzzy logic can still be used in low-dimensional cases where only sparse data is available.
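As an illustration of the modelling paradigm, the sketch below builds a toy two-input fuzzy inference system for souring risk with the scikit-fuzzy control API; the variables, universes, and rules are assumptions standing in for the paper's kinetics-based, field-conditioned model.

```python
# Sketch: a two-input fuzzy inference system for souring risk, in the
# spirit of the paper's approach; variables, ranges, and rules are
# illustrative assumptions, not the paper's conditioned model.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

sulfate = ctrl.Antecedent(np.arange(0, 3001, 50), "sulfate_mg_per_l")
temp = ctrl.Antecedent(np.arange(20, 151, 1), "reservoir_temp_c")
risk = ctrl.Consequent(np.arange(0, 101, 1), "souring_risk")

# automf(3) generates default terms 'poor'/'average'/'good' (low/mid/high)
sulfate.automf(3)
temp.automf(3)
risk.automf(3)

rules = [
    # High sulfate + moderate temperature favors sulfate-reducing bacteria
    ctrl.Rule(sulfate["good"] & temp["average"], risk["good"]),
    # Little sulfate, or temperatures too hot for the bacteria: low risk
    ctrl.Rule(sulfate["poor"] | temp["good"], risk["poor"]),
    ctrl.Rule(sulfate["average"] & temp["average"], risk["average"]),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["sulfate_mg_per_l"] = 2000
sim.input["reservoir_temp_c"] = 60
sim.compute()
print("Souring risk score:", sim.output["souring_risk"])
```

Conditioning to field data, as the paper describes, would amount to tuning the membership functions and rule set until the model reproduces the published souring histories.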
Yi, Michael (Intellicess Inc.) | Ramos, Dawson (Intellicess Inc.) | Ashok, Pradeepkumar (Intellicess Inc.) | Thetford, Taylor (Apache Corporation) | Bohlander, Spencer (Apache Corporation) | Behounek, Michael (Apache Corporation)
Abstract Detecting drilling dysfunctions from surface data is not always easy, as downhole vibrations tend to get damped before they reach surface sensors. Building machine learning models to recognize patterns in the surface data requires vibration signals captured by downhole sensors for training purposes. Such datasets are not widely available, and therefore a methodology to expand these datasets is highly desirable. This work explores ways to utilize data augmentation to artificially diversify and increase datasets to build better models. Stick-slip (including full-stick), bit bounce, whirl, and bit balling are the primary dysfunctions considered in this work. Bayesian networks are used as classifiers to keep the model intuitive and to address situations where some input data is missing or unavailable. Once the dysfunction events in the downhole dataset were labeled, data augmentation techniques were used to generate synthetic data for scenarios where data was sparse. The dataset used in the project consisted of nine wells (with 19 bit runs). Most of the bit runs had a downhole vibration sensor at the bit, while some had sensors along the string as well. Of these 19 bit runs, 15 were used for training and four were used to test the models. Various data augmentation techniques were applied and manually validated as appropriate synthetic data. In the case of full-stick event detection, the saw-tooth pattern in the surface torque signal was captured and provided as an input to the classifier. The classifiers thus trained were able to detect the dysfunctions using data from surface sensors with a high level of accuracy and low false alarm rates. This paper presents models to predict downhole dysfunctions from surface data alone. It also provides guidance on data augmentation techniques that use sparse downhole datasets to improve machine learning drilling advisory models. For identifying drilling dysfunctions from surface data, the tortuosity of the well is also taken into account.
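A brief sketch of the kind of time-series augmentation that could expand sparse labeled dysfunction windows (jitter, amplitude scaling, time shift); the specific transforms and parameters are assumptions, since the abstract does not enumerate the techniques used.

```python
# Sketch: simple time-series augmentation (jitter, scaling, time shift)
# to expand sparse labeled dysfunction data; parameters are assumptions.
import numpy as np

rng = np.random.default_rng(9)

def augment(signal, n_copies=5, noise_sd=0.02, scale_sd=0.05, max_shift=10):
    """Return synthetic variants of a labeled surface-torque window."""
    out = []
    for _ in range(n_copies):
        s = signal * rng.normal(1.0, scale_sd)          # amplitude scaling
        s = s + rng.normal(0.0, noise_sd, s.shape)      # additive jitter
        s = np.roll(s, rng.integers(-max_shift, max_shift + 1))  # time shift
        out.append(s)
    return np.stack(out)

# Example: saw-tooth torque pattern typical of full-stick events
t = np.linspace(0, 1, 200)
sawtooth = (t * 5) % 1.0
synthetic = augment(sawtooth)
print(synthetic.shape)   # (5, 200) augmented training examples
```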
Abstract It has been demonstrated that creeping shales can form effective hydraulic well barriers. Shale barriers have been used for many years in the P&A of wells in Norway. More recently, shale barriers for zonal isolation have also been used in new wells where shale creep was found to occur within days. In some cases, shale creep is activated by a reduction in annulus pressure; in other cases, shale creep sets in without any active activation, possibly driven by time-dependent formation-pressure changes. However, the presence of thixotropic fluids (drilling muds) in the annulus may prevent full closure of the annulus, as large pressure differentials are required to squeeze the fluid out of a microannulus. Furthermore, elastic rebound of an actively activated shale barrier could result in a microannulus and hence a possible leakage pathway. Improved logging technology is needed for identifying shale barriers and the presence of microannuli in shale-barrier zones. We use cement bond log data and standard bond logging criteria to evaluate the quality of the shale well barriers (Williams et al., 2009). In addition, in order to detect microannuli on the outside of the casing, a new inversion algorithm for the bond logging data was developed and tested on field data. Later, we had the chance to apply the inversion algorithm to bond-log data obtained in the laboratory with a miniature bond-logging tool inside a cased hollow-cylinder shale-core sample. It turned out that both the microannulus widths and the shale velocities determined by the inversion technique were too high. By constraining the shale velocities to more realistic values, the updated microannulus widths were smaller and more consistent with the experimental results. Small microannuli may not cause any measurable leakage along the well, especially if filled with a thixotropic fluid. However, more studies are needed to quantify the impact of microannuli on the sealing capacity of shale barriers.