A form of mathematical programming in which the objective function is a linear combination of the independent variables, subject to linear constraints. The solution technique is called the simplex method because it can be viewed as a search along the edges of the polytope defined by the constraints.
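As a minimal, hedged illustration of how such a problem is posed and solved in practice, the sketch below uses SciPy's linprog on a hypothetical two-well rate-allocation problem; the rates, capacities, and coefficients are invented for the example and are not taken from PetroWiki.

```python
# Minimal linear-programming sketch using SciPy. The well rates, processing
# capacity, and water-handling limit below are hypothetical illustration values.
from scipy.optimize import linprog

# Maximize total production 40*x1 + 30*x2 (linprog minimizes, so negate).
c = [-40.0, -30.0]

# Linear constraints: shared processing capacity and water-handling limit.
A_ub = [[1.0, 1.0],    # x1 + x2 <= 100  (processing capacity)
        [0.2, 0.5]]    # 0.2*x1 + 0.5*x2 <= 30  (water handling)
b_ub = [100.0, 30.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal rates and the maximized objective value
```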
The oil and gas industry is becoming more technologically advanced every day. As automation, artificial intelligence (AI), and robotics improve, it becomes increasingly tempting to employ automated means to accomplish industry goals. The most comprehensive list of automation levels was developed by Thomas B. Sheridan and W. L. Verplank; the levels range from complete human control to complete computer control. Parasuraman, Sheridan, and Wickens later extended this idea by associating levels of automation with specific functions.
The generic term "intelligent well" signifies that some degree of direct monitoring and/or remote-control equipment is installed within the well completion. The first computer-assisted operations optimized gas-lifted production by remote control near the tree and assisted with pumping-well monitoring and control. Permanent downhole pressure and temperature gauges are commonly run as part of the completion system and combined with data-transmission infrastructure. With the development, successful implementation, and improving reliability of a variety of permanently installed sensors, it became apparent that direct control of inflow to the wellbore could provide significant additional economic benefit.
The last few decades have seen an evolution of robotics in drilling systems, and more recently fully automatic and fully robotized systems have been introduced into the industry. Robotics was traditionally used to duplicate manual labor, but more companies now focus on fully automatic or robotized systems. Robotic concepts date back to as early as the fourth century BC, and the first programmable mechanism was created by the Muslim inventor Al-Jazari in 1206. The earliest industrial robot appeared in 1937, but robotics did not truly take off until the invention of the computer.
Drilling automation differs from rig automation. Today, drilling automation involves linking surface and downhole measurements with near-real-time predictive models to improve the safety and efficiency of the drilling process. SPE volunteers formed the Drilling Systems Automation Technical Section (DSATS) in 2008. The purpose of DSATS is to accelerate the development and implementation of drilling systems automation in well construction by supporting initiatives that communicate the technology, recommend best practices, standardize nomenclature, and help define the value of drilling systems automation. Appropriate initiatives include workshops, forums, lectures, and technical and white papers, among others, and DSATS actively encourages participation of automation experts from outside the drilling industry.
Copyright 2019 held jointly by the Society of Petrophysicists and Well Log Analysts (SPWLA) and the submitting authors.
ABSTRACT
Today, many machine-learning techniques are regularly employed in petrophysical modelling, such as cluster analysis, neural networks, fuzzy logic, self-organising maps, genetic algorithms, and principal component analysis. While each of these methods has its strengths and weaknesses, a common challenge for most existing techniques is how best to handle the variety of dynamic ranges present in petrophysical input data. Mixing inputs with logarithmic variation (such as resistivity) and linear variation (such as gamma ray) while effectively balancing the weight of each variable can be particularly difficult to manage. DTA was conceived based on extensive research conducted in the field of computational fluid dynamics (CFD). This paper focuses on the application of DTA to petrophysics and its fundamental distinction from the various other statistical methods adopted in the industry. Case studies are shown, predicting porosity and permeability for a variety of scenarios using the DTA method and other techniques. The results from the various methods are compared, and the robustness of DTA is illustrated. The example datasets are drawn from public databases within the Norwegian and Dutch sectors of the North Sea and Western Australia. Some have a rich set of input data, including logs, core, and reservoir characterisation, from which to build a model, while others have relatively sparse data, allowing the effectiveness of the method to be analysed when both rich and poor training data are available. The paper concludes with recommendations on the best way to use DTA in real time to predict porosity and permeability.
INTRODUCTION
The seismic shift in the data-analytics landscape after the Macondo disaster has produced an intensive focus on the accuracy and precision of prediction of pore pressure and petrophysical parameters.
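As a generic illustration of the dynamic-range issue raised in the abstract above (and not of the DTA method itself), the sketch below shows one common way to balance logarithmically varying and linearly varying inputs before training a porosity model: log-transform the resistivity and then standardize every input. The column names and values are hypothetical.

```python
# Generic preprocessing sketch (not the DTA method): balance logarithmic
# (resistivity) and linear (gamma ray) inputs before training a porosity
# regressor. Column names and values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor

logs = pd.DataFrame({
    "GR":   [45.0, 80.0, 120.0, 60.0],   # gamma ray, API units (linear variation)
    "RT":   [2.0, 20.0, 200.0, 0.5],     # deep resistivity, ohm.m (logarithmic variation)
    "PHIT": [0.28, 0.18, 0.05, 0.30],    # interpreted porosity used as the target
})

X = logs[["GR", "RT"]].copy()
X["RT"] = np.log10(X["RT"])              # compress resistivity's dynamic range
X = StandardScaler().fit_transform(X)    # give each input comparable weight

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, logs["PHIT"])               # train on the balanced inputs
```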
The petro-elastic model (PEM) is an integral component in the closed-loop calibration of integrated four-dimensional (4D) solutions incorporating time-lapse seismic, elastic and petrophysical rock-property modeling, and reservoir simulation. Calibration of the reservoir simulation model is needed so that it is consistent not only with production history but also with the contemporaneous subsurface description as characterized by time-lapse seismic. The PEM requires dry-rock properties in its description, which are typically derived from mechanical rock tests. In the absence of those mechanical tests, a small-data challenge is posed: not all necessary data are available, yet the value of reconciling seismic attributes with simulated production remains. A seismic-inversion-constrained n-dimensional metaheuristic optimization technique is employed directly on three-dimensional (3D) geocellular arrays to determine the elastic and density properties for the PEM embedded in the commercial reservoir simulator.
Ill-posed dry elastic and density property models are considered in a field case where both the seismic inversion and a petrophysical property model constrained by that inversion exist. An n-dimensional design-optimization technique is implemented to determine the optimal solution of a multidimensional pseudo-objective function composed of multidimensional design variables. This study investigates a modified particle swarm optimization (PSO) method combined with an exterior penalty function (EPF) under varied constraints. The proposed technique uses n-dimensional design optimization to solve the pseudo-objective function, which combines the PSO and EPF, given the limited availability of constraints. This work examines heavily penalized and reduced-order-penalized metaheuristic optimization processes, in which the design variables and optimal solution are derived from 3D arrays, so that constraint applicability can be quantified. While the process is examined specifically for the PEM, it can be applied to other data-limited modeling techniques.
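To make the optimization machinery concrete, the sketch below implements a plain particle swarm optimizer with an exterior penalty function in which only constraint violations are penalized. It is a generic, textbook-style illustration under assumed settings (placeholder objective, constraint, and coefficients), not the authors' modified PSO or their 3D geocellular-array formulation.

```python
# Minimal particle swarm optimization with an exterior penalty function.
# Objective, constraint, and tuning constants are placeholders for illustration.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                        # misfit to be minimized (placeholder)
    return np.sum((x - 0.3) ** 2, axis=1)

def constraint(x):                       # g(x) <= 0 is required (placeholder)
    return np.sum(x, axis=1) - 1.0

def penalized(x, r_p):                   # exterior penalty: only violations add cost
    violation = np.maximum(0.0, constraint(x))
    return objective(x) + r_p * violation ** 2

n_particles, n_dims, r_p = 30, 4, 100.0
x = rng.uniform(0.0, 1.0, (n_particles, n_dims))
v = np.zeros_like(x)
p_best, p_val = x.copy(), penalized(x, r_p)
g_best = p_best[np.argmin(p_val)]

for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x = x + v
    f = penalized(x, r_p)
    improved = f < p_val
    p_best[improved], p_val[improved] = x[improved], f[improved]
    g_best = p_best[np.argmin(p_val)]

print(g_best)  # best design variables found under the penalty
```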
Integrating time-lapse seismic data into the dynamic reservoir model is an efficient way to calibrate reservoir-parameter updates. The choice of the metric that measures the misfit between observed data and the simulated model has a considerable effect on the history-matching process, and therefore on the optimal ensemble of models obtained. History matching using 4D seismic and production data simultaneously remains a challenge because the two data types differ in nature (time series versus map- or volume-based).
Conventionally, the misfit is formulated as a least-squares measure, which is widely used for production-data matching. Distance-based objective functions designed for 4D image comparison have been explored in recent years and have proven reliable. This study explores the history-matching process by introducing a merged objective function combining the production and the 4D seismic data. The approach proposed in this paper makes these two types of data (well and seismic) comparable within a single objective function to be optimised, thereby avoiding the question of weights. An adaptive evolutionary optimisation algorithm is used for the history-matching loop. Local and global reservoir parameters are perturbed in this process, including porosity, permeability, net-to-gross, and fault transmissibility.
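A minimal sketch of a merged objective function is given below, assuming a normalized least-squares term for the production data and a normalized map-distance term for the 4D seismic data so that the two contributions are of comparable magnitude; this is an illustrative construction, not necessarily the exact formulation used in the paper.

```python
# Illustrative merged objective: a normalized least-squares production misfit
# plus a normalized image-distance 4D seismic misfit. Normalization keeps the
# two terms comparable, so no explicit weighting is introduced here.
import numpy as np

def production_misfit(sim_rates, obs_rates, obs_std):
    # Normalized least squares over the production time series.
    return np.mean(((sim_rates - obs_rates) / obs_std) ** 2)

def seismic_misfit(sim_map, obs_map):
    # Simple distance between simulated and observed 4D attribute maps,
    # scaled by the observed map so its magnitude matches the production term.
    num = np.linalg.norm(sim_map - obs_map)
    den = np.linalg.norm(obs_map) + 1e-12
    return (num / den) ** 2

def merged_objective(sim_rates, obs_rates, obs_std, sim_map, obs_map):
    # Single scalar passed to the history-matching optimizer.
    return production_misfit(sim_rates, obs_rates, obs_std) + \
           seismic_misfit(sim_map, obs_map)
```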
This production and seismic history matching was applied to a UKCS field; it shows that acceptable production-data matching is achieved while honouring saturation information obtained from 4D seismic surveys.
Nandi Formentin, Helena (Durham University and University of Campinas) | Vernon, Ian (Durham University) | Avansi, Guilherme Daniel (University of Campinas) | Caiado, Camila (Durham University) | Maschio, Célio (University of Campinas) | Goldstein, Michael (Durham University) | Schiozer, Denis José (University of Campinas)
Reservoir simulation models incorporate physical laws and reservoir characteristics; they represent our understanding of subsurface structures based on the available information. Emulators are statistical representations of simulation models, offering fast evaluations of a sufficiently large number of reservoir scenarios to enable a full uncertainty analysis. Bayesian History Matching (BHM) aims to find the range of reservoir scenarios that are consistent with the historical data, in order to provide a comprehensive evaluation of reservoir performance and consistent, unbiased predictions that incorporate realistic levels of uncertainty, as required for full asset management. We describe a systematic approach that combines reservoir simulation and emulation techniques within a coherent Bayesian framework for uncertainty quantification.
Our systematic procedure is an alternative and more rigorous tool for reservoir studies dealing with probabilistic uncertainty reduction. It comprises the design of sets of simulation scenarios to facilitate the construction of emulators capable of accurately mimicking the simulator with known levels of uncertainty. Emulators accelerate the steps that require large numbers of evaluations across the input space in order to be statistically valid. Via implausibility measures, we compare emulated outputs with historical data while incorporating major process uncertainties. We then iteratively identify regions of input parameter space unlikely to provide acceptable matches, performing more runs and reconstructing more accurate emulators at each wave, an approach that benefits from several efficiency improvements. We provide a workflow covering each stage of this procedure.
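The sketch below illustrates one wave of this idea under simplified assumptions: a Gaussian-process emulator is trained on a small set of simulator runs, and candidate inputs are ruled out when their implausibility, the standardized distance between the emulator prediction and the observation, exceeds a cutoff. The toy simulator, variance terms, and cutoff of 3 are illustrative choices, not the authors' implementation.

```python
# One illustrative wave of Bayesian History Matching: emulate a simulator
# output with a Gaussian process, then rule out implausible inputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def simulator(x):                         # placeholder for a reservoir simulator
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1]

X_train = rng.uniform(0.0, 1.0, (40, 2))  # design of simulation runs
y_train = simulator(X_train)

gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)                  # the emulator of the simulator output

z = 0.9                                   # historical observation (illustrative)
var_obs, var_disc = 0.01, 0.02            # observation and model-discrepancy variances

X_cand = rng.uniform(0.0, 1.0, (10000, 2))        # candidate input space
mean, std = gp.predict(X_cand, return_std=True)
implausibility = np.abs(z - mean) / np.sqrt(std ** 2 + var_obs + var_disc)

not_ruled_out = X_cand[implausibility < 3.0]      # scenarios kept for the next wave
print(len(not_ruled_out), "of", len(X_cand), "candidates retained")
```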
The procedure was applied to reduce uncertainty in a complex reservoir case study with 25 injection and production wells. The case study contains 26 uncertain attributes representing petrophysical, rock-fluid, and fluid properties. We selected phases of evaluation considering specific events during reservoir management, improving the efficiency of simulation resource use. We identified and addressed data patterns untracked in previous studies: simulator targets,
We advance the applicability of Bayesian History Matching for reservoir studies with four deliverables: (a) a general workflow for systematic BHM; (b) the use of phases to progressively evaluate the historical data; (c) the integration of two-class emulators in the BHM formulation; and (d) a demonstration of internal discrepancy as a source of error in the reservoir model.
The current cycle for reservoir management requires several months to years to update static and dynamic models as additional data from the field [logs, production, pressures, core, four-dimensional (4D), etc.] are obtained. These delays in updating the models increase risk and contribute to a significant loss of economic value. The ultimate goal for next-generation reservoir management is to reduce the cycle from several months to a few days.

The current challenges for developing a proactive, real-time reservoir management solution include, but are not limited to, the time and manual intervention involved in conditioning and interpreting the logging-while-drilling (LWD) and well log data acquired during and after drilling a well; updating three-dimensional (3D) petrophysical/static models; and the computational cost and time involved in generating reservoir models from static and production data (history matching). However, the current widespread availability of machine-learning and cloud-computing capabilities enables faster and more accurate models, supporting real-time or near-real-time decision making. Using machine learning, one of these challenges, updating the 3D static models, was successfully addressed: porosity prediction in a 3D model is updated as new information becomes available, such as logs from a newly drilled well.

The conventional geostatistical approach does not always honor geological variations in the subsurface formations, because only one or two seismic attributes can be used for co-simulation, and only with first-order interactions. Additionally, and most important, generating hundreds of realizations on a 3D grid is computationally intense and time consuming; typically, several weeks are necessary to generate these static models before feeding them into the reservoir model. The proposed solution is a machine-learning-based approach that integrates the 3D spatial availability of seismic data with petrophysical properties. One important goal of reservoir management is to understand reservoir uncertainties before they adversely affect field development. This machine-learning solution proved to be computationally less costly, more accurate, and much faster than the conventional geostatistical approach.
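As a hedged sketch of the kind of workflow described (not the authors' exact method), the example below trains a regressor on several seismic attributes sampled at well locations and then predicts porosity onto every cell of a geocellular grid; the attribute set, model choice, and synthetic data are assumptions made purely for illustration.

```python
# Generic sketch: learn porosity from seismic attributes at well locations,
# then predict onto the full 3D grid. Attributes, model, and data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Training data: seismic attributes extracted along existing wells, with
# porosity from petrophysical interpretation as the target.
attrs_at_wells = rng.normal(size=(500, 4))      # e.g. impedance, amplitude, ...
porosity_at_wells = 0.25 - 0.05 * attrs_at_wells[:, 0] + 0.01 * rng.normal(size=500)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(attrs_at_wells, porosity_at_wells)

# Prediction: the same attributes sampled on every cell of the geocellular grid.
nx, ny, nz = 50, 50, 20
attrs_on_grid = rng.normal(size=(nx * ny * nz, 4))
porosity_grid = model.predict(attrs_on_grid).reshape(nx, ny, nz)

# When a new well is drilled, its samples can be appended and the model refit,
# which is far cheaper than regenerating hundreds of geostatistical realizations.
```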