Abstract In this case study, we apply a novel fracture imaging and interpretation workflow to take a systematic look at hydraulic fractures captured during through-fracture coring at the Hydraulic Fracturing Test Site (HFTS) in the Midland Basin. Digital fracture maps rendered from high-resolution 3D laser scans are analyzed for fracture morphology and roughness. Analysis of hydraulic fracture faces shows that roughness varies systematically in clusters, with an average cluster separation of approximately 20' along the core. While isolated smooth hydraulic fractures are observed in the dataset, very rough fractures are found to be accompanied by proximal smoother fractures. The roughness distribution also helps explain the effect of stresses on fracture distribution. Locally, fracture roughness seems to vary with fracture orientation, indicating possible inter-fracture stress effects. At the scale of stage lengths, however, we see evidence of inter-stage stress effects. We also observe that fracture morphology is strongly driven by rock properties and changes in lithology. Proppant distribution identified along the cored interval is also correlated with roughness variations, and we observe a strong positive correlation between proppant concentration and fracture roughness at the local scale. Finally, based on the observed distribution of hydraulic fracture properties, we propose a conceptual spatio-temporal model of fracture propagation which can help explain the hydraulic fracture roughness distribution and ties in other observations as well.
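Fracture-face roughness from 3D laser scans can be quantified in several ways; below is a minimal sketch of one generic metric, the RMS roughness of a scanned height profile. This is not necessarily the metric used in the paper's workflow, and the profile values are invented for illustration.

```python
import math

def rms_roughness(heights):
    """RMS roughness of a scanned fracture-face height profile.

    `heights` are surface heights (e.g., in mm) relative to an arbitrary
    datum; the metric is computed about the mean plane.
    """
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

# A perfectly alternating profile has an RMS roughness equal to its amplitude.
print(rms_roughness([1.0, -1.0, 1.0, -1.0]))  # -> 1.0
```

In practice the scanned face would be detrended (the mean plane fitted and removed) before computing the statistic; the sketch simply subtracts the mean height.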
Guo, Yifei (The University of Texas at Austin) | Ashok, Pradeepkumar (The University of Texas at Austin) | van Oort, Eric (The University of Texas at Austin) | Patterson, Ross (Hess Corporation) | Zheng, Dandan (Hess Corporation) | Isbell, Matthew (Hess Corporation) | Riopelle, Austin (Marathon Oil Corporation)
Abstract Well interference, commonly referred to as frac hits, has become a significant factor affecting production in fractured horizontal shale wells with the increase in infill drilling in recent years. Today, there is still no clear understanding of how frac hits affect production. This paper aims to develop a process to automatically identify the different types of frac hits and to determine the effect of stage-to-well distance and frac hit intensity on long-term parent well production. First, child well completions data and parent well pressure data are processed by a frac hit detection algorithm to automatically identify different frac hit intensities and durations within each stage. This algorithm classifies frac hits based on the magnitude of the differential pressure spikes. The frac stage to parent well distance is also calculated. Then, we compare the daily production trend before and after the frac hits to determine the severity of their influence on production. Finally, any evident correlations between stage-to-well distance, frac hit intensity, and production change are identified and investigated. This work utilizes 3 datasets covering 22 horizontal wells in the Bakken Formation and 37 horizontal wells in the Eagle Ford Shale Formation. These datasets included well trajectories, child well completions data, parent well pressure data, and parent well production data. The frac hit detection algorithm developed can accurately detect frac hits in the available dataset with minimal false alerts. The data analysis results show that frac hit severity (production response) and intensity (pressure response) are affected not only by the distance between parent and child wells, but also by the directionality of the wells. Parent wells tend to experience more frac hits from child frac stages with smaller direction angles and shorter stage-to-parent distances. Formation stress change with time is another factor that affects frac hit intensity.
Depleted wells are more susceptible to frac hits even if they are farther from the child wells. We also observe frac hits in parent wells due to the stimulation of a child well in a different shale formation. This paper presents a novel automated frac hit detection algorithm to quickly identify different types of frac hits. It also presents a novel way of carrying out production analysis to determine whether frac hits in a well have a positive or negative influence on long-term production. Additionally, the paper introduces the concept of stage-to-well distance as a more accurate metric for analyzing the influence of frac hits on production.
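The detection step described above, classifying frac hits by the magnitude of differential pressure spikes in the parent-well gauge, can be sketched as a simple pass over the pressure series. The function name, thresholds, and intensity labels below are assumptions for illustration, not the authors' implementation.

```python
def detect_frac_hits(pressures, minor=50.0, major=200.0):
    """Flag sudden parent-well pressure rises and grade their intensity.

    `pressures` is a gauge time series (psi, uniform sampling). Returns a
    list of (sample index, delta-p, intensity label). The `minor`/`major`
    thresholds are illustrative, not calibrated values.
    """
    hits = []
    for i in range(1, len(pressures)):
        dp = pressures[i] - pressures[i - 1]
        if dp >= major:
            hits.append((i, dp, "major"))
        elif dp >= minor:
            hits.append((i, dp, "minor"))
    return hits

# A quiet gauge with one minor spike (+80 psi) and one major spike (+305 psi).
print(detect_frac_hits([3000, 3005, 3010, 3090, 3095, 3400, 3405]))
```

A production version would also merge consecutive flagged samples into a single event and record its duration, which is how the paper reports frac hit duration per stage.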
Abstract In multi-stage plug-and-perf horizontal well completions, there are a multitude of moving parts and variables to consider when evaluating performance drivers. Properly identifying performance drivers allows an operator to focus their efforts to maximize the rate of return of resource development. Typically, well-to-well comparisons are made to help identify performance drivers, but in many cases the differences are not clear. Identifying these drivers may require a better understanding of performance variability along a single lateral. Data analytics can help to identify performance drivers using existing data from development activities. In the case study below, multiple diagnostics are utilized to identify performance drivers. A combination of completion diagnostics including oil and water tracers, stimulation data, reservoir data, 3D seismic, and borehole image logs were collected on a set of wells in the early appraisal phase of a field. Using oil tracers as the best indication of stage-level performance along the laterals, data analytics is applied to uncover the relationships between the tracers and the numerous diagnostics. After smoothing was applied to the dataset, trends between oil tracer recovery, several independent variables, and features seen in image logs and 3D seismic were identified. All the analyses pointed to decreasing tracer recovery, and likely decreased oil production, near faulted areas along each lateral. A random forest model showed moderate prediction power, where the model's predicted tracer recovery on blind stages was able to explain 54% of the variance seen in the tracer response (R² = 0.54). This analysis suggests the identification of certain faulted areas along the wellbore could lead to ways of improving individual well economics by adjusting completion design in these areas.
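The "variance explained" figure reported for the blind stages corresponds to the coefficient of determination. A minimal sketch of that scoring step, using synthetic numbers rather than the paper's tracer data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: fraction of variance in `observed`
    explained by `predicted` (1.0 = perfect, 0.0 = no better than the mean)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Synthetic blind-stage tracer recoveries vs. model predictions.
obs = [0.10, 0.25, 0.05, 0.40]
pred = [0.12, 0.20, 0.08, 0.35]
print(r_squared(obs, pred))
```

Predicting the mean for every stage would score 0.0 on this metric, so a value around 0.54 indicates the model recovers roughly half the stage-to-stage variability.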
Abstract As operators shift their focus toward operating within cashflow, understanding the true potential of these unconventional resources is becoming increasingly important. Simultaneously, accurate modeling of EURs in shale wells is becoming increasingly complicated. Multiple factors are at play in this increase in complexity; key amongst them are well interactions. Well interactions, or interference, have increased with the concentration of field development in core areas of various basins and have completely changed production behavior in shale wells. The present paper handles this multi-variable problem by incorporating well design, completion, and petrophysical variables in a prediction model. Furthermore, the analysis is presented from the viewpoint of parent, child, parent/child, and co-completed wells to accurately understand the variability in the driving factors. The terminal decline rate in shale wells is the decline rate wells settle at once the pressure transient reaches the boundary of the well. At this point, the well transitions to a boundary-dominated flow regime and continues to drain from a fixed area. Estimating the terminal decline rate is critical for accurate EUR modeling because changes in the transition point can have a significant impact on the production behavior of the well and, in turn, EUR. The present paper attempts to predict the transition point using an ACE non-linear regression model trained on a large multivariate dataset. Variables incorporated in this analysis include terminal decline month, gas-oil ratio based on the first three months of production, horizontal length, oil EUR, proppant per foot, average distance from the base of the producing zone, nearest-neighbor mean spacing, and hydrocarbon in place. In order to determine spacing status and nearest wellbore distances, a segment-wise analytical distance approach was taken.
These distances and spacing status flags were incorporated into a multivariate model in order to model terminal decline rates. The transformations observed from the model showed high dependence on terminal decline month and oil EUR. However, this was less pronounced in parent/child and child wells, where completion metrics and HCIP more significantly influenced production behavior. Specifically, child wells saw a higher dependence on first three-month GOR and lateral length, compared to parent/child wells, which had a higher dependence on proppant per foot and average distance from the base of the producing formation. Additionally, spacing showed a moderate impact on the transition point and associated terminal decline rates; overall, increased spacing caused a delayed transition point and consequently a lower terminal decline rate. Understanding how cause-and-effect relationships between parent and child wells differ offers a unique perspective on production behavior and consequently provides better insights into infill well placement and production prediction. The present paper offers a unique perspective on a key decline variable, the transition point, for shale reservoirs. By using multivariate analysis, it incorporates the incremental complexity of the modeling effort and attempts to provide best practices for understanding the impact on production behavior. Furthermore, by incorporating a segment-wise analytical distance approach to determine spacing, the paper adds to the existing body of literature by providing a new perspective from a well interaction standpoint and defining the cause-and-effect relationships within.
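A segment-wise analytical distance approach can be sketched by discretizing each lateral into survey points and taking the minimum pairwise distance. The 2D (map-view) simplification and coordinates below are invented for illustration; the paper's actual geometry handling is not specified in the abstract.

```python
import math

def segmentwise_min_spacing(lateral_a, lateral_b):
    """Minimum distance between two laterals, each discretized into
    survey points. Brute-force and 2D for clarity; real directional
    surveys are 3D and much denser."""
    return min(math.dist(p, q) for p in lateral_a for q in lateral_b)

# Two laterals (ft coordinates) roughly 50 ft apart at their closest points.
parent = [(0.0, 0.0), (50.0, 0.0), (100.0, 0.0)]
child = [(0.0, 50.0), (50.0, 55.0), (100.0, 60.0)]
print(segmentwise_min_spacing(parent, child))  # -> 50.0
```

Repeating this per segment, rather than taking a single well-level number, is what allows spacing status to vary along the lateral.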
Abstract Determining the closure pressure is crucial for optimal hydraulic fracturing design and successful execution of the fracturing treatment. Historically, the use of diagnostic tests before the main fracturing treatment has advanced significantly, providing more information about the pattern of fracture propagation and fluid performance to optimize designs. The goal is to inject a small volume of fracturing fluid to break down the formation and create a small fracture geometry; once pumping is stopped, the pressure decline is analyzed to observe the fracture closure. Many analytical methods, such as the G-function and square root of time, have been developed to determine the fracture closure pressure. In some cases, determining the fracture closure pressure is difficult, and personal bias and field experience make it challenging to interpret the changes in the pressure derivative slope and identify fracture closure. These conditions include: high-permeability reservoirs, where fracture closure occurs very quickly due to rapid fluid leakoff; extremely low-permeability reservoirs, which require a long shut-in time for the fluid to leak off before the fracture closure pressure can be determined; and non-ideal fluid leak-off behavior under complex conditions. The objective of this study is to apply machine learning methods, implementing a predesigned algorithm to execute the required tasks and predict the fracture closure pressure, while minimizing the shortcomings of determining the closure pressure under non-ideal or subjective conditions. This paper demonstrates training different supervised machine learning algorithms to help predict fracture closure pressure. The workflow involves using the datasets to train and optimize the models, which are subsequently used to predict the closure pressure of testing data. The output results are then compared with actual results from more than 120 DFIT data points.
We further propose an integrated approach to feature selection and dataset processing and study the effects of data processing on the success of the model prediction. The results from this study limit the subjectivity, and the need for the experience, of the person interpreting the data. We find that linear regression and MLP neural network algorithms can yield high scores in the prediction of fracture closure pressure.
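Of the supervised algorithms mentioned, linear regression is the simplest. A minimal one-feature least-squares sketch follows; the choice of feature (instantaneous shut-in pressure) and all numbers are invented for illustration, not taken from the paper's DFIT dataset.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (single feature).

    Returns the intercept a and slope b.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical feature: ISIP (psi) vs. analyst-picked closure pressure (psi).
isip = [5200.0, 5500.0, 5800.0, 6100.0]
closure = [4700.0, 4950.0, 5200.0, 5450.0]
a, b = fit_line(isip, closure)
print(a, b)
```

The paper's actual models use multiple features and an MLP alongside the linear model; this sketch only shows the fitting step in its simplest form.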
Abstract Leveraging publicly available data is a crucial step for decision making around investing in the development of any new unconventional asset. Published reports of production performance, along with accurate petrophysical and geological characterization of the areas, help operators to evaluate the economics and risk profiles of new opportunities. A data-driven workflow can facilitate this process and make it less biased by enabling agnostic analysis of the data as the first step. In this work, several machine learning algorithms are briefly explained and compared in terms of their application in the development of a production evaluation tool for a target reservoir. Random forest, selected after evaluating several models, is deployed as a predictive model that incorporates geological characterization and petrophysical data along with production metrics into the production performance assessment workflow. Considering the influence of completion design parameters on well production performance, this workflow also facilitates evaluation of several completion strategies to improve decision making around the best-performing completion size. Data used in this study include petrophysical parameters collected from publicly available core data, completion and production metrics, and the geological characteristics of the Niobrara formation in the Powder River Basin. Historical periodic production data are used as indicators of productivity in a certain area in the data-driven model. This model, after training and evaluation, is deployed to predict the productivity of non-producing regions within the area of interest to help with selecting the most prolific sections for drilling future wells. Tornado plots are provided to demonstrate the key performance drivers in each focused area. A supervised fuzzy clustering model is also utilized to automate the rock quality analyses for identifying the "sweet spots" in a reservoir.
The output of this model is a sweet-spot map that is generated by spatially evaluating multiple reservoir rock properties. This map assists with combining all the different reservoir rock properties into a single display that indicates the average "reservoir quality" of the formation in different areas. The Niobrara shale is used as a case study in this work to demonstrate how the proposed workflow is applied to a selected reservoir formation with enough historical production data available.
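The idea of collapsing several rock properties into a single "reservoir quality" number per map cell can be sketched as a weighted average of normalized properties. This is a deliberately simplistic stand-in for the supervised fuzzy clustering model in the paper, and the property names, weights, and values are all invented.

```python
def quality_score(cells, weights):
    """Weighted average of pre-normalized (0-1) rock properties per map
    cell; higher scores suggest better 'reservoir quality'."""
    total = sum(weights.values())
    return {cell: sum(weights[k] * props[k] for k in weights) / total
            for cell, props in cells.items()}

# Invented normalized properties for two map cells.
cells = {
    "A1": {"porosity": 0.8, "toc": 0.6},
    "A2": {"porosity": 0.2, "toc": 0.4},
}
scores = quality_score(cells, weights={"porosity": 1.0, "toc": 1.0})
print(scores)  # A1 scores higher than A2
```

A fuzzy clustering approach replaces the fixed weights with learned cluster memberships, but the end product, one quality value per cell that can be mapped, is the same.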
Abstract Missing values and incomplete observations can exist in just about every type of recorded data. With analytical modeling, and machine learning in particular, the quantity and quality of available data are paramount to acquiring reliable results. Within the oil industry alone, priorities regarding which data are important can vary from company to company, leading available knowledge of a single field to vary from place to place. With machine learning requiring very complete sets of data, this issue can force whole portions of data to be discarded in order to create an appropriate dataset. Value imputation has emerged as a valuable solution for cleaning up datasets, and as technology has advanced, new generative machine learning methods have been used to generate images and data that are all but indistinguishable from reality. Using an adaptation of the standard Generative Adversarial Networks (GAN) approach known as a Generative Adversarial Imputation Network (GAIN), this paper evaluates this method and other imputation methods for filling in missing values. Using a gathered, fully observed set of data, smaller datasets with randomly masked missing values were generated to validate the effectiveness of the various imputation methods, allowing comparisons to be made against the original dataset. The study found that, with various percentages of missing data within the sets, the "filled in" data could be used with surprising accuracy for further analytics. This paper compares GAIN, along with several commonly used imputation methods, against more standard practices for filling in missing data, such as data cropping or filling in with average values. GAIN, as well as the various imputation methods described, are quantified for their ability to fill in data. The study discusses how the GAIN model can quickly provide the data necessary for analytical studies and prediction of results for future projects.
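The validation scheme described, masking cells in a fully observed dataset and scoring imputations against the originals, can be sketched with the simplest baseline in the comparison: mean-value imputation. (A full GAIN model needs an adversarial training loop and is beyond a short sketch.) The data and mask below are illustrative.

```python
import math

def mean_impute(rows):
    """Replace None entries with the column mean of the observed values."""
    cols = list(zip(*rows))
    means = []
    for col in cols:
        observed = [v for v in col if v is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

def rmse_on_masked(filled, truth, masked_cells):
    """Score the imputation only on the cells that were masked out."""
    errs = [(filled[i][j] - truth[i][j]) ** 2 for i, j in masked_cells]
    return math.sqrt(sum(errs) / len(errs))

truth = [[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]]    # fully observed reference
masked = [[1.0, None], [None, 30.0], [5.0, 50.0]]  # two cells hidden
filled = mean_impute(masked)
print(filled)
print(rmse_on_masked(filled, truth, [(0, 1), (1, 0)]))
```

Any imputer, GAIN included, can be dropped in place of `mean_impute` and scored with the same `rmse_on_masked` call, which is what makes this masking scheme a fair comparison.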
The paper examines data-driven techniques for the modeling of ship propulsion that could support a strategy for the reduction of emissions and be utilized for the optimization of a fleet's operations. A large, high-frequency, automatically collected dataset is exploited to produce models that estimate the required shaft power or the main engine's fuel consumption of a container ship sailing under arbitrary conditions. A variety of statistical calculations and algorithms for data processing are implemented, and state-of-the-art techniques for training and optimizing Feed-Forward Neural Networks (FNNs) are applied. Emphasis is given to the pre-processing of the data, and the results indicate that with a proper filtering and preparation stage it is possible to significantly increase the model's accuracy, and thereby our prediction ability and our awareness of the actual condition of the ship's hull and propeller.
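The filtering and preparation stage the paper emphasizes typically discards transient operating samples before any model is trained. One common pre-processing step is a steady-state filter over a sliding window; the window size, tolerance, and field names below are assumptions for illustration, not the paper's values.

```python
def steady_state_filter(samples, window=5, tol=0.2):
    """Keep the centre sample of every sliding window in which ship speed
    is near-constant, discarding transients (manoeuvring, accelerating).

    `samples` are dicts of sensor readings; `tol` is the allowed speed
    spread (knots) within a window.
    """
    kept = []
    for i in range(len(samples) - window + 1):
        win = samples[i:i + window]
        speeds = [s["speed"] for s in win]
        if max(speeds) - min(speeds) <= tol:
            kept.append(win[window // 2])
    return kept

# Five steady samples followed by a speed change: only the steady window survives.
samples = [{"speed": 12.0, "power": 9000.0}] * 5 + [{"speed": 15.0, "power": 14000.0}]
steady = steady_state_filter(samples)
print(len(steady), steady[0]["speed"])
```

Training the FNN only on such steady samples removes dynamics the static speed-to-power mapping cannot represent, which is one way filtering raises model accuracy.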
Choudhary, Manish Kumar (Brunei Shell Petroleum Company Sendirian Berhad) | Mahanti, Gaurav (Brunei Shell Petroleum Company Sendirian Berhad) | Rana, Yogesh (Brunei Shell Petroleum Company Sendirian Berhad) | Garimella, Sai Venkata (Brunei Shell Petroleum Company Sendirian Berhad) | Ali, Arfan (Brunei Shell Petroleum Company Sendirian Berhad) | Li, Lin (Brunei Shell Petroleum Company Sendirian Berhad)
Abstract Field X is one of the largest oil fields in Brunei, producing since the 1970s. The field consists of a large faulted anticlinal structure of shallow marine Miocene sediments. The field has over 500 compartments and has been produced under waterflood since the 1980s through 400+ conduits over 50 platforms. A comprehensive review of water injection performance was undertaken in 2019 to assess remaining oil and identify infill opportunities. Large uncertainties in reservoir properties, connectivity, and fluid contacts required that data across multiple disciplines be integrated to identify new opportunities. It was recognized early on that integrated analysis of surveillance data and production history over 40 years would be critical for understanding field performance. Hence, reviews were first initiated using sand maps and analytical techniques. Tracer surveys, reservoir pressures, salinity measurements, and Production Logging Tool (PLT) data were all analyzed to understand waterflood progression and to define connectivity scenarios. A complete review of well logs, core data from over 30 wells, and outcrop studies was carried out as part of the modelling workflow. This understanding was used to construct a new facies-based static model. In parallel, key dynamic inputs such as PVT analysis reports and special core analysis studies were analyzed to update dynamic modelling components. Prior to initiating the full field model history matching, a comprehensive impact analysis of the key dynamic uncertainties, i.e., production allocation, connectivity, and varying aquifer strength, was conducted. An Assisted History Matching (AHM) workflow was attempted, which helped in identifying high-impact inputs that could be varied for history matching. Adjoint techniques were also used to identify other plausible geological scenarios. The integrated review helped in identifying over 50 new opportunities which can potentially increase recovery by over 10%.
The new static model identified upsides in Stock Tank Oil Initially In Place (STOIIP) which, if realized, could further increase the ultimate recoverable volume. The use of AHM assisted in reducing iterations and achieving multiple history-matched models, which can be used to quantify forecast uncertainty. The new opportunities have helped to revitalize the mature field and have the potential to increase production by over 50%. A dedicated team is now maturing these opportunities. The robust methodology of integrating surveillance data with simulation modelling as described in this paper is generic and could be useful in current-day brownfield development practices as an effective and economic means of sustaining oil production and maximizing ultimate recovery. It is essential that all surveillance and production history data are analyzed together prior to attempting any detailed modelling exercise. New models should then be constructed which conform to the surveillance information and capture reservoir uncertainties. In large oil fields with long production histories and allocation uncertainties, quantitative assessment of history match quality and infill well Ultimate Recovery (UR) estimation is always a challenge. Hence, a composite History Match Quality Indicator (HMQI) was designed with appropriate weighting of rate, cumulative, and reservoir pressure mismatches and water breakthrough timing delays. HMQI spatial variation maps were then made for different zones over the entire field for understanding and appropriately discounting each infill well's oil recovery. It is also critical that facies variation is properly captured in models to better understand waterfront movements and locate remaining oil. Dynamic modelling of a mature field with a long production history can be quite challenging on its own, and it is imperative that new numerical techniques be used to increase efficiency.
Abstract Conventionally, a bit is selected from offset well bit run summaries. This method of selection is not always accurate, since each bit is run under different conditions which might not be reflected in an offset study analysis. The large quantities of data generated from real-time measurements in offset wells make machine learning the ideal tool for analysis and comparison. An Artificial Neural Network (ANN) is a relatively simple machine learning tool that combines inputs and calculation layers to compute a specified output layer. The ANN is fed thousands of data points from 17-1/2 in hole sections across multiple wells. A specific model is then trained for every bit, with weight on bit (WOB), rotary speed (RPM), bit hydraulics, and lithological properties as inputs and rate of penetration (ROP) as output. The model is finalized when a satisfactory statistical set of KPIs is achieved. Using a combination of Monte Carlo analysis and sensitivity analysis, different bits are compared by varying parameters for the same bit and varying the bit under the same parameters. A bit and its optimized parameters are proposed, resulting in an average instantaneous ROP improvement of 32%. Performance benchmarked against individual drilling parameters shows improved ROP response to WOB, RPM, and bit hydraulics in the optimized run. This project solidifies machine learning as a powerful tool for bit selection and parameter optimization to improve drilling performance. Machine learning will become a significant part of well planning, design, and operations in the future. This study demonstrates how ANNs can be used to learn from previous operations and influence planning decisions to improve bit performance.
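The sensitivity-analysis step, varying parameters for a fixed bit and reading the model's ROP response, can be sketched with a simple surrogate standing in for the trained ANN. The surrogate's functional form, coefficients, and parameter grids below are invented for illustration only.

```python
def rop_surrogate(wob, rpm):
    """Stand-in for the trained ANN: ROP (ft/hr) with diminishing returns
    in WOB (klbs) and a linear RPM response. Coefficients are invented."""
    return 2.0 * wob - 0.05 * wob ** 2 + 0.1 * rpm

def best_parameters(wobs, rpms):
    """Grid sweep: evaluate the model over all parameter combinations and
    return the highest-ROP setting, mirroring a sensitivity analysis."""
    return max((rop_surrogate(w, r), w, r) for w in wobs for r in rpms)

rop, wob, rpm = best_parameters(wobs=range(5, 31, 5), rpms=range(80, 181, 20))
print(rop, wob, rpm)  # this surrogate peaks at WOB=20, RPM=180
```

With the actual ANN in place of the surrogate, the same sweep run per bit model gives the bit-versus-bit comparison described, and a Monte Carlo layer would sample the inputs instead of using a fixed grid.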