Rosenhagen, Nicolas M. (Colorado School of Mines) | Nash, Steven D. (Anadarko Petroleum Corporation) | Dobbs, Walter C. (Anadarko Petroleum Corporation) | Tanner, Kevin V. (Anadarko Petroleum Corporation)
Abstract The volume of stimulation fluid injected during hydraulic fracturing is a key performance driver in the horizontal development of the Niobrara formation in the Denver-Julesburg (DJ) Basin, Colorado. Oil production per well generally increases with stimulation fluid volume. Operators often normalize both production and fluid volume by stimulated lateral length and investigate relationships using "per-ft" variables. However, data from well-based approaches commonly display such wide distributions that no useful relationships can be inferred. To improve data correlations, multivariate analysis normalizes for parameters such as thermal maturity, depth, depletion, proppant intensity, drawdown, geology, and completion design. Although advancements in computing power have decreased cycle times for multivariate analysis, preparing a clean dataset for thousands of wells remains challenging. A proposed analytical method using publicly available data allows interpreters to see through the noise and find informative correlations. Using a dataset of over 5,000 wells, we aggregate cumulative oil production and stimulation fluid volumes to a per-section basis and then normalize by hydrocarbon pore volume (HCPV) per section. Dimensionless section-level Cumulative Oil versus Stimulation Fluid plots ("Normalization Plots" or "N-Plots") present data distributions sufficiently well-defined to provide an interpretation and design basis for well spacing and stimulation fluid volumes in multi-well development. When coupled with geologic characterization, the trends guide further refinement of development optimization and well performance predictions. Two example applications of the N-Plot are introduced. The first involves construction of predictive production models and the associated evaluation of alternative development scenarios with different combinations of well spacing and completion fluid intensity.
The second involves "just-in-time" modification of fluid intensity for drilled but uncompleted wells (DUCs) to optimize cost-forward project economics in an evolving commodity price environment.
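The per-section aggregation and HCPV normalization behind the N-Plot can be sketched as follows. The table, column names, and HCPV values here are illustrative assumptions, not data from the study:

```python
import pandas as pd

# Hypothetical well-level table; section IDs, volumes, and column
# names are invented for illustration.
wells = pd.DataFrame({
    "section_id": ["S1", "S1", "S2", "S2"],
    "cum_oil_bbl": [180_000, 150_000, 90_000, 110_000],
    "stim_fluid_bbl": [250_000, 220_000, 130_000, 160_000],
})
# Assumed section-level hydrocarbon pore volume (HCPV), in barrels.
hcpv = pd.Series({"S1": 4_000_000, "S2": 2_500_000}, name="hcpv_bbl")

# Aggregate to a per-section basis, then divide by HCPV so that both
# axes of the N-Plot are dimensionless.
section = wells.groupby("section_id")[["cum_oil_bbl", "stim_fluid_bbl"]].sum()
nplot = section.div(hcpv, axis=0)
nplot.columns = ["oil_per_hcpv", "fluid_per_hcpv"]
print(nplot)
```

Each row of `nplot` is one point on the dimensionless cumulative-oil versus stimulation-fluid crossplot.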
Chesapeake Energy is partnering with RS Energy Group to improve operational efficiency and capital discipline by employing advanced analytics and machine learning. RS Energy is a Calgary-based energy research firm founded in 1998 covering more than 150 operators in the major North American and international oil and gas regions, including the US shale plays. It provides technical analysis of basins, including completions and production, as well as asset evaluations for operators considering acreage additions. All of this is done within the context of shifting capital markets. Chesapeake announced the pact fresh off its $4-billion merger with WildHorse Resource Development, which bolstered its position in the Eagle Ford Shale of South Texas.
Abstract A new approach in fractured reservoir characterization and simulation that integrates geomechanics, geology, and reservoir engineering is proposed and illustrated with actual oil reservoirs. This approach uses a neural network to find the relationship between reservoir structure, bed thickness, and well performance, the latter used as an indicator of fracture intensity. Once the relationship is established, the neural network can be used to forecast primary production or to map reservoir fracture intensity. The resulting fracture intensity distribution can be used to represent the subsurface fracture network. Using the fracture intensity map and fracture network, directional fracture permeabilities and fracture pore volume can be estimated via a history matching process in which only two parameters are adjusted. Introduction Conventional reservoir simulation has benefited from important research during the last few years. The use of geostatistics is slowly moving from the production of "grayscale maps" with dubious value and multi-million-cell realizations with little practical value toward useful input data for reservoir simulators. Although there is still much to be done before these geostatistical realizations can reproduce the past performance of a reservoir, the recent trend shows clearly that major advances have been made in conventional reservoir description. On the other hand, naturally fractured reservoir (NFR) characterization has not enjoyed a similar benefit from any major research effort. Until this work, there has been no quantitative methodology to "fill the NFR simulator gridblocks". Most current fractured reservoir characterizations rely on a qualitative description of the fractures. This is achieved mainly by using structural properties, seismic velocity anisotropy observed with shear or S-waves, and, more recently, compression or P-waves.
However, a reservoir engineer struggling to numerically simulate a fractured reservoir needs more than just the location of "sweet spots." The objective of this paper is to provide a reservoir description methodology that leads to a computer input file for a fractured reservoir simulator which can be used for performance forecasting. This methodology relies on the use of geomechanical concepts derived from reservoir structure and artificial intelligence (AI) tools. Neural networks During the past few years, the petroleum industry enthusiastically supported the concept of "integrated systems." Integration of everything is everywhere. From a reservoir engineering point of view, the concept of integration is a necessity, not a fashion. The necessity exists because of the scarcity of reservoir information and the wide range of scales over which this information is measured. Therefore, a reliable reservoir description must somehow integrate all the existing information at all scales. The application of stochastic global optimization methods, e.g. simulated annealing, in reservoir description provided new tools for achieving a certain level of integration. However, stochastic global optimization methods were developed in an artificial intelligence context and are more than just simple mathematical optimization methods, as some users believe. Within the artificial intelligence framework, other tools exist and can be used to integrate diverse information into a complex reservoir model. The most practical of these integration tools can be found in neurocomputing. There are various ways of looking at a neural network. The most common application is as a pattern recognition tool, where, given a sufficient amount of known information, a neural network can be trained to recognize patterns. In this case, the output of the neural network is very often a binary variable, where a value of 0 means NO and a value of 1 means YES.
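A minimal sketch of such a pattern-recognition network with the binary 0/1 output the text describes. The input features (a structure attribute and bed thickness), the labeling rule, and the architecture are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: each row is (structure curvature, bed thickness),
# with label 1 where fracture intensity is "high". The decision rule below
# is an assumption standing in for real well-performance indicators.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X[:, 0] - 0.5 * X[:, 1] > 0.2).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 neurons, trained by plain batch gradient descent
# on the cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(3000):
    h = sigmoid(X @ W1 + b1)           # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()   # predicted probability of class 1
    g = (p - y)[:, None] / len(y)      # dLoss/dlogit for sigmoid + cross-entropy
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * h * (1 - h)      # backpropagate through the hidden layer
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

# Thresholding p at 0.5 gives the binary 0/1 (NO/YES) output.
accuracy = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Trained this way, the network maps structure and thickness attributes at any grid location to a fracture-intensity class, which is the kind of output used to populate a fracture intensity map.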
Hirschmiller, John (GLJ Petroleum Consultants Ltd.) | Biryukov, Anton (Verdazo Analytics) | Groulx, Bertrand (Verdazo Analytics) | Emmerson, Brian (Verdazo Analytics) | Quinell, Scott (GLJ Petroleum Consultants Ltd.)
Abstract This machine learning study incorporates geoscience and engineering data to characterize which geological, reservoir and completion data contribute most significantly to well production performance. A better understanding of the key factors that predict well performance is essential in assessing the commercial viability of exploration and development, in the optimization of capital spending to increase rates of return, and in reserve and resource evaluations. Machine learning models provide an objective, analytical means to interpret large, complex datasets. Generally, such models demand large databases of consistently evaluated data. As geological data is interpretive, often varying from one geologist to another, or from one pool to another, it can be difficult to incorporate geological data into regional machine learning models. Consequently, efforts to use machine learning in the oil and gas industry to predict well performance are often focused exclusively on engineering completion technology. However, this case study has utilized a regional geological Spirit River database with consistent petrophysical evaluation methodology across the entire play. This geological database is complemented with public completion and fracture data and production data to build predictive models using inputs from all subsurface disciplines. Redundancies in the data were identified and removed. Features explaining a significant proportion of the variance in production were also removed if their effect was captured by more fundamental, correlated features that were more straightforward to interpret. The dataset was distilled to 13 key features providing predictions with a similar precision to those obtained using the full-featured dataset. 
The thirteen features in this case study are a combination of geological, reservoir and completion data, underlining that an approach integrating both geoscience and engineering data is vital to predicting and optimizing well performance accurately for future wells.
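The redundancy-removal step described above can be sketched as a simple correlation filter: a feature is dropped when a more fundamental, earlier-kept feature already explains it. The dataset and all feature names below are hypothetical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Illustrative dataset: "porosity" and "phi_h" are deliberately near-collinear,
# mimicking redundant geological features. Names and values are assumptions.
n = 300
porosity = rng.normal(0.08, 0.02, n)
df = pd.DataFrame({
    "porosity": porosity,
    "phi_h": porosity * rng.normal(10.0, 0.1, n),  # redundant with porosity
    "net_pay_m": rng.normal(25.0, 5.0, n),
    "proppant_t_per_m": rng.normal(1.2, 0.3, n),
})

def prune_redundant(df, threshold=0.9):
    """Keep a feature only if its absolute correlation with every
    already-kept feature is below the threshold. Column order encodes
    the interpretability preference: earlier columns are kept first."""
    corr = df.corr().abs()
    kept = []
    for col in df.columns:
        if all(corr.loc[col, k] < threshold for k in kept):
            kept.append(col)
    return kept

print(prune_redundant(df))  # "phi_h" is dropped as redundant with "porosity"
```

Iterating this kind of filter, together with removing features whose effect is captured by more interpretable correlated ones, is how a full-featured dataset can be distilled to a small key set like the thirteen features of this study.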
Abstract The topic of digital rock physics, including experimental imaging techniques such as computed tomography (CT), is on the rise in oil and gas research. The aim is to map the pore architecture and subsequently to model multiphase flow with the prospect of improving and developing enhanced oil recovery methods. The initial computational step to construct the pore network architecture is the so-called segmentation of CT tomograms into the constituent phases. We have probed two different packages, Avizo and MANGO, to address the pros and cons of the packages and of the segmentation process itself. Segmentation involves a substantial amount of manual assignment of grayscale intensity thresholds to the given phases. This is a significant challenge, in particular because the process is based on qualitative judgment and hence subject to substantial human bias. Importantly, in carbonates, where the very small pores are unresolvable under the experimental conditions, segmentation becomes particularly challenging. In order to make consistent, unbiased, and quantitative segmentation of a full high-resolution tomogram, we have developed a fully deterministic autosegmentation algorithm. The segmentation results in quantification of the macroporosity, the microporosity, and specific minerals such as pyrite. The autosegmentation algorithm is based on an advanced statistical analysis of the global and local intensity distributions of the tomogram. The method involves local normal distribution fitting, differential curve intersection analysis, density function estimation, and subsequent derivative analysis. The statistics-based definition of microporosity represents in itself a novel approach to characterizing microporosity, where microporosity identification and quantification are based on properties of the well-defined regions of the intensity distributions of the present grains and macroporosity.
Identification of special minerals such as pyrite is based on statistical outlier analysis. In the paper we present (1) an algorithm to perform automated CT-image segmentation, (2) a method to quantitatively distinguish microporosity from macroporosity, (3) a method to identify special minerals and (4) a method to estimate the relative presence of grain, macroporosity, microporosity and selected special minerals.
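A toy version of the histogram-thresholding idea can illustrate steps (1)–(3): two weighted normal peaks stand in for the locally fitted intensity distributions, their analytic intersection serves as the pore/grain threshold, and a simple outlier rule flags a bright mineral such as pyrite. All peak parameters and the synthetic histogram are assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic grayscale intensities mimicking a CT histogram: a low-intensity
# pore peak, a grain peak, and a few very bright "pyrite" voxels.
pores = rng.normal(60, 8, 30_000)
grains = rng.normal(140, 10, 65_000)
pyrite = rng.normal(230, 5, 500)
voxels = np.concatenate([pores, grains, pyrite])

def gaussian_intersection(m1, s1, w1, m2, s2, w2):
    """Intensity where two weighted normal densities are equal, found by
    solving the quadratic obtained from equating their log-densities."""
    a = 1 / (2 * s1**2) - 1 / (2 * s2**2)
    b = m2 / s2**2 - m1 / s1**2
    c = m1**2 / (2 * s1**2) - m2**2 / (2 * s2**2) - np.log((w1 * s2) / (w2 * s1))
    roots = np.roots([a, b, c]).real
    # Keep the root that lies between the two peak means.
    return roots[(roots > min(m1, m2)) & (roots < max(m1, m2))][0]

# Assumed peak parameters; a real pipeline would estimate these by local
# normal-distribution fitting on the tomogram histogram.
t = gaussian_intersection(60, 8, 0.3, 140, 10, 0.7)
macro_porosity = (voxels < t).mean()
# Outlier rule for a bright mineral: far above the grain peak.
pyrite_frac = (voxels > 140 + 6 * 10).mean()
print(f"threshold={t:.1f}, macroporosity={macro_porosity:.3f}, pyrite={pyrite_frac:.4f}")
```

The same intersection logic extends to additional locally fitted peaks, which is where a statistics-based microporosity region between the pore and grain peaks would be defined.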