History matching is a critical step in dynamic reservoir modeling to establish a reliable, predictive model. Numerous approaches have emerged over the decades to produce a robust history-matched reservoir model. As the geological and completion complexity of oil and gas fields increases, building a fully representative predictive reservoir model can range from arduous to almost impossible. The complete paper outlines an approach to history matching that uses artificial intelligence (AI) with an artificial neural network (ANN) and data-driven analytics.
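A common way to use an ANN in history matching is as a fast proxy for the simulator: train the network on prior simulation runs, then search the parameter space for the smallest mismatch against observed production. The sketch below illustrates that generic pattern only; the data, network size, and parameter names are invented placeholders, not the paper's actual implementation.

```python
# Minimal sketch of ANN-assisted history matching: train a neural-network
# proxy on prior simulation runs, then rank candidate parameter sets by
# mismatch against observed history. All shapes and values are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: each row of X is a set of uncertain reservoir
# parameters (e.g., permeability multiplier, aquifer strength); each row of
# Y is the simulated production response at a few report times.
X = rng.uniform(0.0, 1.0, size=(500, 4))          # 500 prior simulation runs
Y = np.column_stack([X @ w + 0.05 * rng.normal(size=500)
                     for w in rng.uniform(0.5, 1.5, size=(6, 4))])

proxy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, Y)

# Observed history (synthetic here) and proxy-based mismatch ranking.
y_obs = Y[0] + 0.02 * rng.normal(size=Y.shape[1])
candidates = rng.uniform(0.0, 1.0, size=(10_000, 4))
mismatch = np.sum((proxy.predict(candidates) - y_obs) ** 2, axis=1)
best = candidates[np.argmin(mismatch)]
print("best-matching parameter set:", np.round(best, 3))
```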
Summary Hydrocarbon (re-)development projects need to be evaluated under uncertainty. Forecasting oil and gas production needs to capture the ranges of the multitude of uncertain parameters and their impact on the forecast to maximize the value of the project for the company. Several authors showed, however, that the oil and gas industry has challenges in adequately assessing the distributions of hydrocarbon production forecasts. With digitalization, the methods for forecasting hydrocarbon production have developed from analytical solutions to numerical models with an increasing number of gridblocks ("digital twins") and toward ensembles of models covering the uncertainty of the various parameters. Analytical solutions and single numerical models allow calculation of incremental production for a single case. However, neither the uncertainty of the forecasts nor where in the distribution of possible outcomes the single model is located can be determined. Ensemble-based forecasts are able to address these questions, but they need to cover a large number of uncertain parameters and to handle the amount of data generated accordingly. Theory-guided data science (TGDS) approaches have recently been used to overcome these challenges. Such approaches make use of the scientific knowledge captured in numerical models to generate a sufficiently large data set for applying data science approaches. These approaches can be combined with economics to determine the desirability of a project for a company (expected utility). Quantitative decision analysis, including a value of information (VoI) calculation, can then be performed that addresses not only the uncertainty range but also the risk hurdles required by the decision-maker (DM). The next step is the development of learning agent systems (agent: an autonomous, goal-directed entity that observes and acts upon an environment) that are able to cope with the large amount of data generated by sensors, to use those data for conditioning models, and to use them in decision analysis. Companies need to address the challenges of data democratization to integrate and use the available data, organizational agility, and the development of data science skills, while making sure that the technical skills required for the TGDS approach are retained.
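For readers unfamiliar with VoI, the calculation compares the best expected outcome achievable with and without an additional (imperfect) measurement. The toy example below shows the mechanics under a deliberately simple discrete prior; all probabilities, payoffs, and the signal reliability are made-up illustrations, not values from the paper.

```python
# Minimal sketch of a value-of-information (VoI) calculation with a discrete
# prior: two subsurface states, two actions, and one imperfect test.
import numpy as np

prior = np.array([0.4, 0.6])                 # P(high case), P(low case)
payoff = np.array([[300.0, -150.0],          # develop: high / low case
                   [0.0, 0.0]])              # walk away: no exposure

# Value without further information: act on the best prior expected value.
v_without = (payoff @ prior).max()

# An imperfect test (e.g., a pilot) reports "high" or "low" with 80% accuracy.
likelihood = np.array([[0.8, 0.2],           # P(signal = high | state)
                       [0.2, 0.8]])          # P(signal = low  | state)

v_with = 0.0
for s in range(2):                           # marginalize over the signals
    p_signal = likelihood[s] @ prior
    posterior = likelihood[s] * prior / p_signal
    v_with += p_signal * (payoff @ posterior).max()

print(f"VoI = {v_with - v_without:.1f}")     # worth paying up to this much
```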
The digital transformation journey provides new opportunities for running simulations through cloud computing, with flexibility in hardware resources and availability of a wide array of software tools that can enhance decision-making. This work reviews the benefits of running reservoir simulations in a cloud environment and demonstrates the resulting efficiency and cost savings. Additionally, a workflow for uncertainty analysis and history matching that integrates data analysis and machine-learning tools is presented. First, the hardware architecture must be designed to meet parallel reservoir simulation needs: significant message passing occurs between compute nodes, and for satisfactory performance, these nodes must be connected by a low-latency network rather than be randomly located. Second, to ensure portability and easy replication across multiple cloud sites and platforms, the software performing the simulations must be containerized. Third, to reduce the time required to start a new simulation run, the Kubernetes platform is used to optimize resource allocation. Finally, reservoir simulation in the cloud is no longer merely the running of the simulation model; it is integrated with data management and data analysis tools for decision-making. The cloud-based simulation services discussed herein exhibit good results during scale-up, when a simulation operation requires a larger number of central processing units and/or greater memory, and also during scale-out, when thousands of operation scenarios are necessary for history matching. The "pay as you go" pricing model reduces the time and capital costs of acquiring new computing infrastructure to nearly zero, and the effectively unlimited scale-out capability can reduce the elapsed time for history matching by 80%. The availability of data centers in different regions benefits team collaboration. The data management tool tracks historical data, performs data mining, extracts more information, and supports decision-making. Compared to traditional reservoir simulation, the cloud-based reservoir simulation software-as-a-service model simplifies the process and reduces hardware acquisition and maintenance costs. Integrating intelligent data analysis with simulation helps quantify the uncertainty in the model and enables improved decisions.
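To make the containerized scale-out step concrete, the sketch below dispatches one history-matching realization per Kubernetes Job using the official Python client. The image name, namespace, and resource figures are placeholders, and this is an assumed pattern, not the vendor's actual service code.

```python
# Minimal sketch of scale-out: each simulation case runs as a Kubernetes Job
# built from a containerized simulator image.
from kubernetes import client, config

config.load_kube_config()                      # or load_incluster_config()
batch = client.BatchV1Api()

def submit_run(case_id: int) -> None:
    """Launch one history-matching realization as a Kubernetes Job."""
    container = client.V1Container(
        name="simulator",
        image="registry.example.com/reservoir-sim:latest",  # hypothetical
        args=[f"--case={case_id}"],
        resources=client.V1ResourceRequirements(
            requests={"cpu": "8", "memory": "32Gi"}),       # illustrative
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"hm-case-{case_id}"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container],
                                      restart_policy="Never")),
            backoff_limit=1),
    )
    batch.create_namespaced_job(namespace="simulation", body=job)

for case in range(1000):                       # fan out across the cluster
    submit_run(case)
```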
Abstract When evaluating reservoirs with very high hydraulic communication, as in the case of several Brazilian pre-salt fields, identifying the effects of a well, be it source or sink, on other observer wells becomes very complex. It becomes even more difficult when we do not have control of the volumes injected in each zone of interest, when there is uncertainty in the reported flow rates (mainly of the producers), and when it is difficult to define a perfect observation point. This work proposes to use the large volume of available pressure and flow data to identify, through a linear optimization process, the hydraulic communication index of each well (producer or injector) at each observation point. To achieve this objective, the author resorts to physics-based data-driven methods and, through linear optimization, obtains hydraulic interference coefficients between wells. These coefficients may deliver relevant, and even unexpected, information on how wells communicate and whether there are fractures or vugs unseen by geological methods, and they allow the reservoir management team to anticipate water and/or gas breakthrough, determine to which other wells a given well is most responsive, etc. Furthermore, the methodology may give important information to support the history matching process. The paper shows that the methodology is widely applicable in reservoirs where either the hydraulic communication or the well density is high enough to prevent conclusive assessments from the usual methods, and its greatest advantage is the strong physical background behind it, unlike several machine-learning data-driven methods. The methodology is presented through several examples, applied both to controlled systems (synthetically generated data from reservoir flow models) and to uncontrolled systems (hard data obtained from Brazilian pre-salt reservoirs).
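The core linear-optimization idea can be sketched as follows: explain the pressure response at an observer point as a linear combination of the surrounding wells' rates, with non-negative coefficients read as communication indices. The data, well count, and noise level below are synthetic placeholders, not the paper's field data or exact formulation.

```python
# Minimal sketch: recover per-well hydraulic communication indices at one
# observer via non-negative least squares on rate/pressure histories.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

n_steps, n_wells = 400, 5
rates = rng.uniform(0.0, 1000.0, size=(n_steps, n_wells))   # m3/d per well

# Synthetic "true" communication indices and the resulting observer
# pressure response, with measurement noise.
true_coeff = np.array([0.8, 0.0, 0.3, 0.05, 0.0])
dp_observed = rates @ true_coeff + 5.0 * rng.normal(size=n_steps)

# Non-negative least squares yields one coefficient per source well; zeros
# flag wells with no resolvable communication to this observer.
coeff, residual = nnls(rates, dp_observed)
print("estimated communication indices:", np.round(coeff, 3))
```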
Abstract Data-driven subsurface modeling technology has been proven, over the past few years, to yield technical and commercial success in several oil fields worldwide. A data-driven model is constructed for the first time for an oil field onshore Abu Dhabi and used to evaluate a reservoir with substantial reserves and a comprehensive development plan, for the purpose of predicting production rates, dynamic reservoir pressure, and water saturation; improving reservoir understanding; supporting field development optimization; and identifying optimum infill well locations. The objective is to provide the asset with a decision-support tool for better field development planning and management. The subject reservoir is a low-permeability carbonate reservoir characterized by lateral and vertical variations in its reservoir rocks and fluid properties. More than 8 years of Phase-I development and production/injection data and an extensive amount of well-test and log data (SCAL, PVT, MDT) from more than 37 wells were used to construct the data-driven model for this asset. This new modeling technology (TDM) integrates reservoir engineering analytical techniques with artificial intelligence, machine learning, and data mining to formulate an empirical and spatiotemporally calibrated full-field model. In this work, it is leveraged with other conventional reservoir modeling and management tools such as streamline modeling, isobaric maps, and flooding conformance. Several analyses were performed using the full-field data-driven model, complementing the existing conventional numerical model. The accomplishments of the data-driven reservoir model for this project included, but were not limited to, comprehensive history matching (including blind validation) followed by forecasts of oil rate, GOR, WC, reservoir pressure, and water saturation; injection optimization; and choke size optimization. The results generated by the data-driven model proved to be quite eye-opening for the asset management, as the model was able to identify potential areas for improving field efficiency and reducing cost. When combined with numerical techniques, the calibrated data-driven model assists in obtaining a reliable short-term forecast in a shorter time and helps make quick decisions on day-to-day operational optimization. The use of facts (all field measurements) instead of human biases, preconceived notions, and gross approximations distinguishes data-driven modeling from other existing modeling technologies. Its innovative combination of artificial intelligence and machine learning (the technologies that are transforming all industries in the 21st century) with reservoir engineering, reservoir modeling, and reservoir management clearly demonstrates the potential that these pattern-recognition technologies offer to the upstream oil and gas industry for its realistic digital transformation.
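The abstract does not disclose TDM's internals, but the general shape of a spatiotemporally calibrated data-driven model can be illustrated: a regressor trained on per-well, per-period features, then blind-validated on held-out samples. Everything below (features, target, data) is a synthetic stand-in, not the TDM technology itself.

```python
# Minimal sketch of a spatiotemporal data-driven well model with blind
# validation; features and values are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
# Hypothetical per-well, per-month features.
X = np.column_stack([
    rng.uniform(0, 10, n),        # x-coordinate (km)
    rng.uniform(0, 10, n),        # y-coordinate (km)
    rng.uniform(0, 96, n),        # months on production
    rng.uniform(150, 300, n),     # reservoir pressure (bar)
    rng.uniform(0, 0.8, n),       # water cut (fraction)
])
# Synthetic oil rate (STB/D) with decline, pressure, and water-cut effects.
y = 500 - 3 * X[:, 2] + 1.5 * (X[:, 3] - 150) - 400 * X[:, 4] \
    + 20 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"blind-validation R^2: {model.score(X_te, y_te):.3f}")
```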
Abstract A well-designed pilot is instrumental in reducing uncertainty for the full-field implementation of improved oil recovery (IOR) operations. Traditional model-based approaches for brown-field pilot analysis can be computationally expensive, as they involve probabilistic history matching first to historical field data and then to probabilistic pilot data. This paper proposes a practical approach that combines reservoir simulations and data analytics to quantify the effectiveness of brown-field pilot projects. In our approach, an ensemble of simulations is first performed on models based on prior distributions of subsurface uncertainties, and then the results for simulated historical data, simulated pilot data, and objective functions are assembled into a database. The distributions of simulated pilot data and objective functions are then conditioned to actual field data using the Data-Space Inversion (DSI) technique, which circumvents the difficulties of traditional history matching. The samples from DSI, conditioned to the observed historical data, are next processed using the Ensemble Variance Analysis (EVA) method to quantify the expected uncertainty reduction of objective functions given the pilot data, which provides a metric to objectively measure the effectiveness of the pilot and compare the effectiveness of different pilot measurements and designs. Finally, the conditioned samples from DSI can also be used with the classification and regression tree (CART) method to construct signpost trees, which provide an intuitive interpretation of pilot data in terms of implications for objective functions. We demonstrate the practical usefulness of the proposed approach through an application to a brown-field naturally fractured reservoir (NFR) to quantify the expected uncertainty reduction and value of information (VOI) of a waterflood pilot following more than 10 years of primary depletion. NFRs are notoriously hard to history match due to their extreme heterogeneity and difficult parameterization; the additional need for pilot analysis in this case further compounds the problem. Using the proposed approach, the effectiveness of a pilot can be evaluated, and signposts can be constructed, without explicitly history matching the simulation model. This allows objective and efficient comparison of different pilot design alternatives and intuitive interpretation of pilot outcomes. We stress that the only input to the workflow is a reasonably sized ensemble of prior simulation runs (about 200 in this case); i.e., the difficult and tedious task of creating history-matched models is avoided. Once the simulation database is assembled, the data analytics workflow, which entails DSI, EVA, and CART, can be completed within minutes. To the best of our knowledge, this is the first time the DSI-EVA-CART workflow has been proposed and applied to a field case. It is one of the few pilot-evaluation methods that is computationally efficient for practical cases. We expect it to be useful for engineers designing IOR pilots for brown fields with complex reservoir models.
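The EVA and CART steps can be sketched on an already-conditioned ensemble (the DSI step is abstracted away here as the sample set itself). The linear-proxy estimate of expected posterior variance and the signpost tree below are schematic illustrations with synthetic numbers, not the authors' exact formulation.

```python
# Minimal sketch of EVA (expected uncertainty reduction of an objective given
# pilot data) and a CART signpost tree over a conditioned ensemble.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(3)
n = 200                                   # ensemble size, as in the paper
pilot = rng.normal(size=(n, 2))           # simulated pilot measurements
objective = 2.0 * pilot[:, 0] - 1.0 * pilot[:, 1] + rng.normal(size=n)

# EVA-style estimate: expected posterior variance of the objective given the
# pilot data, approximated here with a linear proxy for E[J | d].
reg = LinearRegression().fit(pilot, objective)
resid_var = np.var(objective - reg.predict(pilot))
print(f"prior var {np.var(objective):.2f} -> expected posterior var "
      f"{resid_var:.2f} given the pilot data")

# CART signpost tree: which pilot readings imply a good or poor objective?
tree = DecisionTreeRegressor(max_depth=2).fit(pilot, objective)
print(export_text(tree, feature_names=["pilot_rate", "pilot_pressure"]))
```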
Abstract The field of data-driven analytics and machine learning is rapidly evolving today and slowly beginning to reshape the petroleum sector with transformative initiatives. This work describes a heuristic approach combining mathematical modeling and associated data-driven workflows for estimating reservoir pressure surfaces through space and time using measured data. This procedure has been implemented successfully in a giant offshore field with a complex history of active pressure management by water and gas injection. This practical workflow generates present-day pressure maps that can be used directly in reservoir management by locating poorly supported areas and planning mitigation activities. It assists and guides the history matching process by offering a benchmark against which simulated pressures can be compared. Combined with data-based streamline computation, this workflow improves the understanding of fluid flow movements, helps to identify baffles, and assists in field sectorization. The distinctive feature of this data-driven approach is its unbiased reliance on field-observed data, which complements the complex modeling and compute-intensive schemes typically found in reservoir simulation. Conventional dynamic simulation and the tracing of streamlines require adequate static (e.g., permeability tensor) and dynamic models (e.g., pressures for each active cell in the system). Alternatively, data-driven streamlines are readily available and calibrated. This paper adds innovative algorithms and workflows to the relatively limited existing body of literature on data-driven methods for pressure mapping. In this case study, new insights such as inter-reservoir communication are effectively revealed, enabling a better understanding of the gas movement and supporting the change in production strategy. The paper is organized as follows. After a general overview of the field studied, the paper describes in detail the workflows used to interpolate pressures in space and time, along with cross-validation results. Various applications of the pressure predictions are presented in the sections thereafter.
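A simple way to picture space-time pressure interpolation of the kind described above is a radial-basis-function fit to scattered measurements, evaluated on a grid at the present day. The coordinates, dates, kernel, and smoothing choice below are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch: interpolate scattered (x, y, t) pressure measurements with
# a radial basis function and evaluate a present-day pressure map.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(4)

# Measured data (synthetic here): (x km, y km, t years) -> pressure (bar).
pts = np.column_stack([rng.uniform(0, 10, 300),
                       rng.uniform(0, 10, 300),
                       rng.uniform(0, 20, 300)])
pressure = 250 - 2.0 * pts[:, 2] + 3.0 * np.sin(pts[:, 0]) \
    + rng.normal(scale=1.0, size=300)

interp = RBFInterpolator(pts, pressure, kernel="thin_plate_spline",
                         smoothing=1.0)       # smoothing absorbs noise

# Present-day (t = 20) pressure map on a regular areal grid.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, 20.0)])
p_map = interp(grid).reshape(gx.shape)
print("present-day mean pressure:", round(float(p_map.mean()), 1), "bar")
```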
Abstract Due to computational advances in reservoir simulation utilizing high-performance computing, it is now possible to simulate thousands of reservoir simulation cases in a practical time frame. This opens a new avenue for reservoir simulation studies, enabling exhaustive exploration of subsurface uncertainty and development/depletion options. However, analyzing the results of a large number of simulation cases remains a challenging and overwhelming task. We propose a new method that enables the efficient analysis of massive reservoir simulation results, often consisting of thousands of cases, by discovering interesting patterns of relationships among variables in large datasets. The method uses a well-known data mining method, called association rule mining, together with a high-dimensional visualization technique. We demonstrate the capability of the proposed method by using it to analyze the reservoir simulation results from the SAIGUP (Sensitivity Analysis of the Impact of Geological Uncertainty on Production) project, an interdisciplinary reservoir modeling project carried out earlier by Manzocchi et al. To investigate the influence of geological features on oil recovery in shallow-marine reservoirs, numerous reservoir models were built and flow-simulated in the SAIGUP project. In this paper, we analyze the simulation results from an ensemble of 9072 models, which cover all possible combinations of several structural and sedimentological parameters individually varied to describe geological uncertainty. To analyze the simulation results from such exhaustive sampling of a high-dimensional model parameter space, it is crucial to efficiently decompose complex interactions between model parameters and discover hidden impacts on flow response. By coupling the association rule mining algorithm with high-dimensional visualization, such interactions and impacts are rapidly extracted and visualized in such a way that engineers and geoscientists can interpret meaningful sensitivities "at a glance". This methodology provides a novel way to rapidly interpret flow response from a large ensemble of reservoir models without being overwhelmed by massive data. It is also applicable to the analysis of production data from fields with thousands of unconventional wells.
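Association rule mining treats each simulation run as a "transaction" of discrete parameter levels and outcome classes, then searches for rules such as "sealing faults imply low recovery". The sketch below shows this pattern on invented parameter levels and a synthetic response; the real study used the SAIGUP ensemble.

```python
# Minimal sketch of association rule mining over an ensemble of simulation
# outcomes, using the apriori algorithm from mlxtend.
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

rng = np.random.default_rng(5)
n = 2000                                      # ensemble size (illustrative)

fault_perm = rng.choice(["fault=open", "fault=sealing"], n)
aggradation = rng.choice(["aggr=low", "aggr=high"], n)
# Synthetic flow response: sealing faults tend to lower recovery.
recovery = np.where((fault_perm == "fault=sealing")
                    & (rng.random(n) < 0.7),
                    "recovery=low", "recovery=high")

# One-hot encode each run's (parameter level, outcome class) "transaction".
onehot = pd.get_dummies(
    pd.DataFrame({"f": fault_perm, "a": aggradation, "r": recovery}),
    prefix="", prefix_sep="").astype(bool)

# Frequent itemsets, then rules ranked by lift above 1 (real association).
rules = association_rules(
    apriori(onehot, min_support=0.1, use_colnames=True),
    metric="lift", min_threshold=1.1)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```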
Abstract Several industry technologies support integrated operations, such as intelligent completions, real-time systems, surface-subsurface models, workflow automation systems, etc. Each of these technologies provides relevant data pertaining to one specific part of the asset. The integration, correlation, and analysis of these data (current and historical) help the operator understand the current state of the asset, as well as make inferences about future behavior. Such capabilities are provided by a set of tools and techniques known within the industry as analytics. Operating and service companies are using new and improved analytics to support oil and gas operations and management processes. Additionally, several analytics commonly used in other industries are being applied to industry operations and management workflows. This has led to more robust and effective solutions for oil and gas production operations. However, the value added by analytics is limited if they are implemented in an isolated fashion; the real value is obtained when analytics are embedded within comprehensive production workflows, which aid in the analysis, processing, and modeling of the production process. Workflows enhanced using analytics can transform integrated operations into intelligent operations. This paper presents an analysis of the primary analytics techniques and how they have been applied to support intelligent operations. To support this analysis, several application examples of analytics in oil and gas intelligent operations are described, and several case studies of real applications of analytics are referenced.
Abstract The technology to process and analyze simulation model outcomes has improved exponentially in the past few years, giving engineers the ability to analyze the results of simulation runs efficiently and effectively. Reservoir simulation engineers need to quickly analyze simulation runs based on the differences between model-calculated data and measured data to determine the quality of the simulation models. With the help of business intelligence tools, engineers are able to perform quality checks of the model that enhance understanding of reservoir fluid flow. A history match quality check dashboard provides the required means to perform qualitative and quantitative analysis of simulation runs. The developed reservoir engineering business intelligence tool helps engineers extract statistical information from the simulation runs to quality-check how closely the model mimics historical performance. The tool provides the means to quantitatively and qualitatively check critical well performance properties, including water cut, pressure, GOR, and oil rate, against the measured data. Using this tool, engineers can identify wells (or clusters of wells) with issues in those parameters, allowing them to rank simulation runs according to their history match quality. This paper discusses the algorithms behind the history match quality check dashboard, which utilizes advanced data mining and visual analytics. A case study exemplifies the identification of problematic wells in the history-matched pressure, water cut, and oil production rate for one of the fields in Saudi Arabia.
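The kind of mismatch statistics that drive such a dashboard can be sketched with a simple aggregation: per-well, per-property errors between simulated and measured values, rolled up into a weighted score for ranking runs. The column names, weights, and data below are illustrative assumptions, not the dashboard's actual algorithm.

```python
# Minimal sketch of history-match quality scoring: normalized RMSE per
# (run, well, property), a weighted per-run score, and problem-well flagging.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)

# Long-format results: one row per (run, well, property, time step).
df = pd.DataFrame({
    "run": rng.integers(0, 20, 5000),
    "well": rng.choice(["W-1", "W-2", "W-3"], 5000),
    "property": rng.choice(["oil_rate", "water_cut", "pressure"], 5000),
    "simulated": rng.normal(100, 10, 5000),
    "measured": rng.normal(100, 10, 5000),
})
df["sq_error"] = (df["simulated"] - df["measured"]) ** 2

# RMSE per run/well/property, then a weighted score per run.
rmse = (df.groupby(["run", "well", "property"])["sq_error"]
          .mean().pow(0.5).rename("rmse").reset_index())
weights = {"oil_rate": 0.4, "water_cut": 0.3, "pressure": 0.3}  # illustrative
rmse["weighted"] = rmse["rmse"] * rmse["property"].map(weights)
ranking = rmse.groupby("run")["weighted"].sum().sort_values()
print(ranking.head())                      # best-matched runs first

# Flag the most problematic well/property pairs within the best run.
best = ranking.index[0]
print(rmse[rmse["run"] == best].nlargest(3, "rmse"))
```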