Summary We report here a study of lithology-controlled stress variations observed in the Woodford shale (WDFD) in north-central Oklahoma. In a previous study, we showed that the magnitude of the minimum horizontal stress Shmin varies systematically with the abundance of clay plus kerogen in three distinct WDFD lithofacies. We believe that the application of the workflow described here, in the context of viscoplastic stress relaxation, can facilitate the understanding of layer-to-layer stress variations with lithology and thus contribute to improved HF effectiveness. Introduction Development of extremely low-permeability unconventional oil and gas reservoirs requires multistage HF in horizontal wells. It is well-established that the magnitudes of the three principal stresses and their relative differences significantly influence the initiation, propagation, and containment of hydraulic fractures (cf. Economides and Nolte 2000; Fisher and Warpinski 2012; Desroches et al. 2014; Xu et al. 2017; Zoback and Kohli 2019). More specifically, layer-to-layer stress variations influence optimal landing-zone selection, vertical hydraulic fracture growth, and proppant placement (cf. Fu et al. 2019; Singh et al. 2019). Thus, to stimulate a formation effectively, one must understand its physical properties as well as the state of stress within, above, and below the producing units. Ma and Zoback (2017) reported variations of stress magnitudes obtained from HF stages in two horizontal wells that encountered three distinct lithofacies of the WDFD in central Oklahoma. They hypothesized that the abundance of compliant components (principally clay and organic matter, or kerogen) brought about the observed stress variations between the three WDFD lithofacies.
Summary Recent experience in applying recurrent neural networks (RNNs) to interpreting permanent downhole gauge records has highlighted the potential utility of machine learning algorithms to learn reservoir behavior from data. The power of the RNN resides in its ability to retain information as a form of memory of the patterns contained in the previous behavior of the phenomena being modeled; this memory informs the decision at the present time with what happened in the past. This property suggests the RNN as a suitable choice for modeling sequences of reservoir information, even when the reservoir modeler is faced with incomplete knowledge of the underlying physical system. Convolutional neural networks (CNNs) are another variant of machine learning algorithm that has shown promise in sequence-modeling domains, such as audio synthesis and machine translation. In this study, RNNs and CNNs were applied to tasks that traditionally would be modeled by a reservoir simulator. This was achieved by formulating the relationship between physical quantities of interest from subsurface reservoirs as a sequence-mapping problem. In addition, the performance of a CNN layer was compared systematically with that of an RNN to investigate their capabilities in a variety of tasks of interest to the reservoir engineer. Preliminary results suggest that CNNs, with specific design modifications, are as capable as RNNs in modeling sequences of information, and as reliable when making inferences for cases that the algorithm has not seen during training. Design details and the reasons why these two seemingly different architectures process information and handle memory the way they do are also discussed.
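The causal memory that sequence-modeling CNNs rely on can be illustrated with a minimal sketch (not from the study; all names here are our own): a 1D convolution whose input is left-padded so that the output at time t depends only on the current and past k-1 samples, giving the layer a finite receptive field loosely analogous to an RNN's memory.

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Causal 1D convolution: the output at time t depends only on
    x[t - k + 1], ..., x[t] (the sequence is left-padded with zeros)."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    # kernel[0] weights the oldest sample in each window
    return np.array([padded[t:t + k] @ kernel for t in range(len(x))])

# The response to a step input shows the finite k-step "memory" of the filter.
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
y = causal_conv1d(x, np.array([0.25, 0.25, 0.5]))
```

Real sequence-modeling CNNs stack such layers with dilation to grow the receptive field, but the causality constraint is the same.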
Introduction According to research conducted by McKinsey & Company, the effective use of digital technologies and automation in the oil and gas industry is expected to reduce the capital expenditure by up to 20% and will better position the companies that implement these technologies for growth opportunities (Choudhry et al. 2016).
In general, a probabilistic framework for a modeling process involves two uncertainty spaces: model parameters and state variables (or predictions). The two uncertainty spaces in reservoir simulation are connected by the governing equations of flow and transport in porous media in the form of a reservoir simulator. In a forward problem (or a predictive run), the reservoir simulator directly maps the uncertainty space of the model parameters to the uncertainty space of the state variables. Conversely, an inverse problem (or history matching) aims to improve the descriptions of the model parameters by using the measurements of state variables. However, we cannot solve the inverse problem directly in practice. Numerous algorithms, including Kriging-based inversion and the ensemble Kalman filter (EnKF) and its many variants, simplify the system by using a linear assumption.
The purpose of this paper is to improve the integration of measurement errors in history-matching algorithms that rely on the linear assumption. The statistical moment equation (SME) approach with the Kriging-based inversion algorithm is used to illustrate several practical examples. In the Motivation section, an example of pressure conditioning involves a measurement that contains no additional information because of its significant measurement error. This example highlights the inadequacy of the current method, which underestimates the conditional uncertainty for both model parameters and predictions. Accordingly, we derive a new formula that recognizes the absence of additional information and preserves the unconditional uncertainty. We believe this to be the consistent behavior when integrating measurement errors.
Other examples are used to validate the new formula with both linear and nonlinear (i.e., the saturation equation) problems, with single and multiple measurements, and with different configurations of measurement errors. For broader applications, we also develop an equivalent formula for algorithms in the Monte Carlo simulation (MCS) approach, such as EnKF and ensemble smoother (ES).
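The limiting behavior the new formula is meant to preserve can be illustrated with a hedged sketch (not the authors' SME derivation; it assumes a scalar linear-Gaussian model with an identity forward map): in a Kalman-type update, a measurement with a very large error should leave the unconditional uncertainty essentially intact.

```python
def linear_update(m_prior, var_prior, d_obs, var_obs):
    """Scalar Kalman-type update of one model parameter from one noisy
    measurement of it (linear-Gaussian assumption, identity forward map)."""
    gain = var_prior / (var_prior + var_obs)   # weight given to the data
    m_post = m_prior + gain * (d_obs - m_prior)
    var_post = (1.0 - gain) * var_prior
    return m_post, var_post

# A measurement with a huge error carries no information: the posterior
# mean and variance stay (essentially) at their unconditional values.
m1, v1 = linear_update(10.0, 4.0, 14.0, 1e6)
# An accurate measurement dominates the update instead.
m2, v2 = linear_update(10.0, 4.0, 14.0, 0.01)
```

In this simple setting the limit is exact as the measurement variance goes to infinity; the paper's contribution concerns algorithms in which this consistency is otherwise lost.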
Intelligent multilateral well completions provide downhole flow rate, pressure, and temperature measurements at multiple well segments, which allows for a continuous spatiotemporal data stream. Such an extensive data input poses a challenging task to decide on the optimal strategy of manipulating the inflow control valve (ICV) settings over time for best performance. In this study, we investigate using machine learning to analyze and predict well performance given different ICV settings to ultimately maximize the well output.
A commercial reservoir simulator was used to generate two synthetic reservoir models: homogeneous (Case A) and heterogeneous (Case B). These synthetic data were used to train, validate, and test machine learning models. The reservoir cases were generated on the basis of a segmented, trilateral producer completed with three ICV devices installed at tie-in segments. The data used were measurements of wellhead and downhole flow rates across ICV segments over a period of 4,000 days. A total of 1,330 experiments were conducted with an 8-day timestep, generating a total of 667,660 sample data points for each of Case A and Case B. Fully connected neural networks were used to fit the data, while model generalizability was enhanced using regularization techniques, namely L2 regularization and early stopping.
Both random sampling and Latin hypercube sampling (LHS) methods were evaluated in constructing the training, validation, and testing splits. Trained with different sample sizes drawn from the 1,330 simulated data histories for the two reservoir models, the proposed neural network showed excellent results. Given only 10 simulated choices of ICV settings for training, the network proved capable of predicting oil/water production profiles at surface for both homogeneous and heterogeneous reservoir models with a coefficient of determination (R2) greater than 0.95 when evaluated at unseen, test ICV settings. Extending the problem to downhole flow-performance prediction, approximately 40 training settings were necessary to achieve 0.95 R2. We observed that LHS was superior to random sampling in both average R2 and confidence interval. We also found that increasing the training and validation sample sizes increased the test R2 when testing against unseen cases. Study results suggest the applicability of machine learning to estimate the well output at different ICV settings, where the neural network model depends fully on real-time well feedback and production measurements.
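The LHS scheme compared above can be sketched as follows (an illustrative stand-alone implementation, not the study's code): each dimension is divided into n equal strata and exactly one sample is drawn per stratum, which guarantees the one-dimensional coverage that plain random sampling does not.

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """n samples in [0, 1)^dim; each of the n equal-width strata of
    every dimension receives exactly one sample."""
    strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (strata + rng.random((n, dim))) / n

rng = np.random.default_rng(0)
lhs = latin_hypercube(10, 3, rng)
# Every decile of every dimension is hit exactly once.
counts = [np.bincount((lhs[:, j] * 10).astype(int), minlength=10)
          for j in range(3)]
```

With 10 points in 3 dimensions, plain random sampling would typically leave several deciles empty; the stratification above rules that out by construction.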
By using a machine learning approach during the operation of a well with multiple ICV settings, it would be feasible to estimate the lateral-by-lateral output at unseen scenarios. Hence, it becomes possible to maximize the well output by using an optimization algorithm to determine the optimal ICV settings.
Every year, millions of dollars' worth of man-hours are spent on reporting by wellsite personnel. Meanwhile, input peripherals are evolving rapidly as the latest technologies expand what is possible. One of these technologies is the voice interface.
Consider the case where a system of smart microphones records what is said at the wellsite. This system would also be connected to the rig’s data and control system. At the end of the day, a report could be generated by a system that is smart enough to understand the context of conversations and merge it with sensor data. In addition, as automation becomes more prevalent for drilling operations and as head count on the rig decreases, drillers will be able to spend less time documenting daily activities or hunting down data. The system could also be used to control a drilling system via voice. This is the future that is being built today.
Working with third-party partners and using their technology as a base, a proof-of-concept voice-input interface for the rig was built. Testing of multiple scenarios, including context simulation, data transfer, data logging, specific log extraction, data feedback, and control of a test rig, was executed. There have been some challenges unique to the drilling environment, such as noise, distinguishing the origin of an audio source, and new context generation. For these, work was done with local startups as well as some major players from outside the industry to define what solutions already exist or could exist in the future, which can then be implemented as an additional layer. The solution also applies to operations beyond drilling in other domains of the oil and gas industry. Testing of the voice interface has also been done for production monitoring on demo pumps.
This paper describes a future that previously could only be imagined. Technologies that exist outside our industry have been merged from various sources to create a unique solution to these problems. The paper discusses what has been done so far and shows the potential of this system, for which a real-time demo was created. Challenges are discussed along with the strategies that can be employed to solve them; some of these strategies have already been tested and are also showcased.
Kim, Tae Wook (Stanford University) | Ross, Cynthia M. (Stanford University) | Guan, Kelly M. (Stanford University) | Burnham, Alan K. (Stanford University) | Kovscek, Anthony R. (Stanford University)
Source rocks (oil shale) were matured artificially via pyrolysis under geologically realistic triaxial stresses using a unique coreholder that is compatible with X-ray computed tomography (CT) scanning. This study focuses on characterization of porosity and permeability as well as the evolution of shale fabric during pyrolysis. Experiments were conducted using 1-in. diameter vertical and horizontal core samples from the Green River Formation. Prior to pyrolysis, the properties of the source rock were characterized [e.g., porosity, bulk density, mineralogy, and total organic carbon (TOC)]. Samples were then heated from room temperature to 350°C under a nitrogen environment to obtain conversion of organic matter (OM) to oil and gas via pyrolysis. Porosity and permeability after maturation were measured. Micro-CT visualization was applied to investigate the fracture network developed throughout the core. Scanning electron microscopy (SEM) images were also used to compare the shale fabric and porosity evolution (pre- to post-maturation) at greater resolution.
In-situ observations reveal a decrease in the average CT number (i.e., density) within some volumetric regions of the cores after maturation. In these regions, OM (kerogen and bitumen) was converted into hydrocarbons. Changes in the source rock depend on the original TOC fraction, hydrogen index (HI), and temperature. The permeability prior to pyrolysis for vertical samples is in the undetectable to nanodarcy range. The permeability of all cores increased to the microdarcy range post-maturation. In particular, the permeability of the horizontal sample increased from 0.14 to 50 µd. This improvement in permeability resulted from the generation of open porosity and fractures (dilation, generation, and/or drainage). Additionally, the porosity after Soxhlet extraction increased proportionally by 20% from pre- to post-pyrolysis, depending on pyrolysis time and TOC fraction. Longitudinal deformation depends on the orientation of the sample with respect to the triaxial stress during pyrolysis. The deformation of vertically oriented samples under isostress conditions is larger than that of horizontally oriented samples under isostrain. The measured 3D in-situ porosity distribution indicates that OM was transformed into hydrocarbons by pyrolysis. The development of a fracture and matrix porosity system under stress provides an explanation for the transport of hydrocarbons away from their point of origin.
Zhao, Xiaoxi (University of Southern California) | Popa, Andrei S. (Chevron Corporation) | Ershaghi, Iraj (University of Southern California) | Aminzadeh, Fred (University of Southern California) | Li, Yuanjun (Stanford University) | Cassidy, Steve D. (Chevron Corporation)
This paper presents a methodology for the geostatistical estimation of reservoir properties that handles uncertainties in both observation and modeling. Given known well-log data in a geological region, the Kriging methodology is used to estimate or predict spatial phenomena at nonsampled locations from an estimated random function. The approach assumes that the data are accurate and precise and that the random function is generated from a thorough descriptive analysis of the known data set. Contrary to the assumptions of classic Kriging, however, it is realistic to assume that spatial data contain a certain amount of imprecision, mostly because of measurement errors, and that information is often lacking to properly assess a unique random-function model. A combination of regular, or classic, Kriging and the fuzzy-logic method is therefore proposed. Imprecise input data and variogram parameters are modeled on the basis of fuzzy-logic theory, while the predictions and variances are computed from a Kriging analysis characterized by membership functions. Last, an optimization method is included to solve the constrained fuzzy-nonlinear-equation system. The proposed methodology was implemented, and a user-friendly integrated tool was developed that enables the user to create a grid structure on the basis of the input data, conduct statistical analysis, and run fuzzy Kriging for various problems. We used the tool to run a test case with the SPE 10 (SPE Comparative Solution Project, Model-II 2000) porosity data. With the fuzzy-Kriging methodology, two maps are generated, one with upper-bound values and one with lower-bound values. Compared with the true data, the upper-bound map tends to capture the higher values better, while the lower-bound map tends to capture the lower-value regions better.
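For reference, the classic (non-fuzzy) ordinary-Kriging estimate that the fuzzy extension builds on can be sketched as follows (a 1D toy with an assumed exponential covariance; the parameter values are illustrative only):

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x0, sill=1.0, corr_len=10.0):
    """Ordinary-Kriging estimate at x0 from 1D samples, assuming an
    exponential covariance C(h) = sill * exp(-|h| / corr_len)."""
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    n = len(x_obs)
    # Kriging system, bordered by the Lagrange multiplier that enforces
    # the unbiasedness constraint (weights sum to one).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(x_obs - x0)
    weights = np.linalg.solve(A, b)[:n]
    return weights @ z_obs

x_obs = np.array([0.0, 5.0, 20.0])
z_obs = np.array([0.10, 0.18, 0.25])   # e.g., porosity samples
z_hat = ordinary_kriging(x_obs, z_obs, x0=6.0)
```

The fuzzy variant replaces the crisp data and variogram parameters with membership functions, so the single estimate above becomes the upper- and lower-bound maps described in the abstract.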
In addition, a case study has been conducted using measured core-permeability data in a heterogeneous reservoir to demonstrate the viability of the technology.
In this study, we explore using multilevel derivative-free optimization (DFO) for history matching, with model properties described using principal-component-analysis (PCA) -based parameterization techniques. The parameterizations applied in this work are optimization-based PCA (O-PCA) and convolutional-neural-network (CNN) -based PCA (CNN-PCA). The latter, which derives from recent developments in deep learning, is able to accurately represent models characterized by multipoint spatial statistics. Mesh adaptive direct search (MADS), a pattern-search method that parallelizes naturally, is applied for the optimizations required to generate posterior (history-matched) models. The use of PCA-based parameterization considerably reduces the number of variables that must be determined during history matching (because the dimension of the parameterization is much smaller than the number of gridblocks in the model), but the optimization problem can still be computationally demanding. The multilevel strategy introduced here addresses this issue by reducing the number of simulations that must be performed at each MADS iteration. Specifically, the PCA coefficients (which are the optimization variables after parameterization) are determined in groups, at multiple levels, rather than all at once. Numerical results are presented for 2D cases, involving channelized systems (with binary and bimodal permeability distributions) and a deltaic-fan system using O-PCA and CNN-PCA parameterizations. O-PCA is effective when sufficient conditioning (hard) data are available, but it can lead to geomodels that are inconsistent with the training image when these data are scarce or nonexistent. CNN-PCA, by contrast, can provide accurate geomodels that contain realistic features even in the absence of hard data. 
History-matching results demonstrate that substantial uncertainty reduction is achieved in all cases considered, and that the multilevel strategy is effective in reducing the number of simulations required. It is important to note that the parameterizations discussed here can be used with a wide range of history-matching procedures (including ensemble methods), and that other derivative-free optimization methods can be readily applied within the multilevel framework.
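The dimension reduction that makes such history matching tractable can be sketched with plain PCA on a toy ensemble (illustrative only; the paper's O-PCA and CNN-PCA are more elaborate constructions): the optimization variables become a few latent coefficients instead of per-cell property values.

```python
import numpy as np

# Toy "prior ensemble": 1D fields standing in for permeability realizations.
rng = np.random.default_rng(1)
n_cells, n_real = 200, 50
xg = np.linspace(0.0, 1.0, n_cells)
fields = np.array([np.sin(2.0 * np.pi * (xg + rng.random()))
                   + 0.1 * rng.standard_normal(n_cells)
                   for _ in range(n_real)]).T    # shape (cells, realizations)

# PCA basis from the SVD of the centered ensemble.
mean = fields.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(fields - mean, full_matrices=False)
n_pc = 5                                         # latent dimension << n_cells
basis = U[:, :n_pc] * s[:n_pc] / np.sqrt(n_real - 1)

def from_latent(xi):
    """Map a low-dimensional latent vector to a full grid field."""
    return (mean + basis @ xi.reshape(-1, 1)).ravel()

# History matching would now search over 5 coefficients, not 200 cell values.
new_field = from_latent(rng.standard_normal(n_pc))
```

O-PCA adds a post-processing optimization to sharpen facies boundaries, and CNN-PCA replaces the linear map with a network, but both expose the same kind of low-dimensional latent vector to the optimizer.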
Rognmo, Arthur U. (University of Bergen) | Al-Khayyat, Noor (University of Bergen) | Heldal, Sandra (University of Bergen) | Vikingstad, Ida (University of Bergen) | Eide, Øyvind (University of Bergen) | Fredriksen, Sunniva B. (University of Bergen) | Alcorn, Zachary P. (University of Bergen) | Graue, Arne (University of Bergen) | Bryant, Steven L. (University of Calgary) | Kovscek, Anthony R. (Stanford University) | Fernø, Martin A. (University of Bergen)
Summary The use of nanoparticles for CO2-foam mobility control is an upcoming technology for carbon capture, utilization, and storage (CCUS) in mature fields. Silane-modified hydrophilic silica nanoparticles enhance the thermodynamic stability of CO2 foam at elevated temperatures and salinities and in the presence of oil. The aqueous nanofluid mixes with CO2 in the porous media to generate CO2 foam for enhanced oil recovery (EOR) by improving sweep efficiency, resulting in a reduced carbon footprint from oil production through the geological storage of anthropogenic CO2. Our objective was to investigate the stability of commercially available silica nanoparticles for a range of temperatures and brine salinities to determine whether nanoparticles can be used in CO2-foam injections for EOR and underground CO2 storage in high-temperature reservoirs with high brine salinities. The experimental results demonstrated that surface-modified nanoparticles are stable and able to generate CO2 foam at elevated temperatures (60 to 120°C) and extreme brine salinities (20 wt% NaCl). We find that (1) nanofluids remain stable at extreme salinities (up to 25 wt% total dissolved solids) in the presence of both monovalent (NaCl) and divalent (CaCl2) ions; (2) both the pressure gradient and the incremental oil recovery during tertiary CO2-foam injections were 2 to 4 times higher with nanoparticles than without a foaming agent; and (3) CO2 stored during CCUS with nanoparticle-stabilized CO2 foam increased by more than 300% compared with coinjections without nanoparticles. Introduction The energy trilemma faced by the global community comprises energy security (plentiful and reliable supply), energy affordability, and environmental sustainability. In this regard, the Intergovernmental Panel on Climate Change points to carbon capture and storage as one contributing technology to mitigate the CO2-emission challenge (IPCC 2014).
For profit-maximizing corporations, the economic incentives for storing CO 2 in a pure carbon capture and storage case are limited and new technologies to increase profitability are desirable.
A reduced-order-modeling (ROM) framework is developed and applied to simulate coupled flow/quasistatic-geomechanics problems. The reduced-order model is constructed using POD-TPWL, in which proper orthogonal decomposition (POD), which enables representation of the solution unknowns in a low-dimensional subspace, is combined with trajectory-piecewise linearization (TPWL), where solutions with new sets of well controls are represented by means of linearization around previously simulated (training) solutions. The overdetermined system of equations is projected into the low-dimensional subspace using a least-squares Petrov-Galerkin (LSPG) procedure, which has been shown to maintain numerical stability in POD-TPWL models. The states and derivative matrices required by POD-TPWL, generated by an extended version of the Stanford University Automatic Differentiation General Purpose Research Simulator (AD-GPRS), are provided in an offline (preprocessing or training) step. Offline computational requirements correspond to the equivalent of five to eight full-order simulations, depending on the number of training runs used. Run-time (online) speedups of O(100) or more are typically achieved for new POD-TPWL test-case simulations. The POD-TPWL model is tested extensively for a 2D coupled problem involving oil/water flow and geomechanics. It is shown that POD-TPWL provides predictions of reasonable accuracy, relative to full-order simulations, for well-rate quantities, global pressure and saturation fields, global maximum- and minimum-principal-stress fields, and the Mohr-Coulomb rock-failure criterion, for the cases considered. A systematic study of POD-TPWL error is conducted using various training procedures for different levels of perturbation between test and training cases. The use of randomness in the well-bottomhole-pressure (BHP) profiles used in training is shown to be beneficial in terms of POD-TPWL solution accuracy. 
The procedure is also successfully applied to a prototype 3D example case.
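The POD step of such a framework can be sketched on a toy linear system (illustrative only; this omits the TPWL linearization and the LSPG projection the paper uses): snapshots from a training run yield a basis capturing a prescribed fraction of the snapshot energy, and the dynamics are then projected into that low-dimensional subspace.

```python
import numpy as np

# Snapshot matrix: columns are states saved from a "training" run of a
# stand-in linear dynamical system x_{k+1} = A x_k.
rng = np.random.default_rng(2)
n, steps = 300, 40
A = np.diag(np.linspace(0.5, 0.99, n))    # toy stable dynamics
state = rng.standard_normal(n)
snaps = np.empty((n, steps))
for t in range(steps):
    state = A @ state
    snaps[:, t] = state

# POD basis: left singular vectors capturing 99.9% of snapshot energy.
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
Phi = U[:, :r]

# Reduced-order model: Galerkin projection of the dynamics onto the basis.
A_r = Phi.T @ A @ Phi
z = Phi.T @ snaps[:, 0]
for _ in range(10):                        # advance 10 steps in r dimensions
    z = A_r @ z
x_rom = Phi @ z                            # lift back to the full state
```

The online speedup comes from time stepping the r-dimensional reduced system instead of the n-dimensional one; TPWL extends the idea to nonlinear residuals by linearizing around stored training states.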