Abstract Horizontal well technology is one of the major improvements in reservoir stimulation. Planning and execution are the key elements in drilling horizontal wells successfully, especially through depleted formations. As a reservoir produces over a long period, pore pressure declines, weakening the hydrocarbon-bearing rocks. Drilling issues such as wellbore instability, lost circulation, differential sticking, and formation damage are strongly influenced by the pore pressure decline, increasing the risk of losing part or even all of the horizontal interval. This paper presents an extensive review of the potential issues and solutions associated with drilling horizontal wells in depleted reservoirs. After giving an overview of depleted reservoir characteristics, the paper systematically addresses the major challenges that influence drilling operations in depleted reservoirs and suggests solutions to avoid uncontrolled risks. The paper then evaluates several real infill drilling operations through depleted reservoirs in different oilfields. The economic aspects associated with the potential risks of drilling a horizontal well in depleted reservoirs are also discussed, and the most recent research and development findings for infill drilling are summarized. It is recommended to use wellbore strengthening techniques while drilling a horizontal well through highly depleted formations; this allows using a higher mud weight to control unstable shales while drilling through the production zone. Managed Pressure Drilling should be considered the last option for highly depleted formations because it requires a greater level of investment that is unlikely to yield a superior rate of return given the limited deliverability of the reservoir. Using rotary steerable systems is favored to reduce risks related to drilling through depleted formations. Careful analysis of different drilling programs allows the drilling team to introduce new technology to reduce cost, improve drilling efficiency, and maximize profit. It is the responsibility of the drilling engineer to evaluate different scenarios, with all the precautions needed, during the planning stage to avoid unexpected issues. Present market conditions and the advancement of technologies for drilling horizontal wells increase the feasibility of producing depleted reservoirs economically. This paper highlights the challenges in drilling horizontal wells in highly depleted reservoirs and provides means for successfully drilling those wells while reducing drilling risks.
Abstract Accurate real-time downhole data collection provides a better understanding of downhole dynamics and formation characteristics, which can improve wellbore placement and increase drilling efficiency by improving the rate of penetration (ROP) and reducing downtime caused by tool failure. High-speed telemetry through a wired drill string has enabled real-time data acquisition, but there are significant additional costs associated with the technology. Data-driven techniques using recurrent neural networks (RNN) have proven very efficient and accurate in time-series forecasting problems. In this study, we propose deep learning as a cost-effective method to predict downhole data using surface data. Downhole drilling data is a function of surface drilling parameters and downhole conditions. Downhole data acquired using relatively inexpensive methods usually have a considerable lag time depending on the signal travel length, so the first step in the proposed method is syncing the downhole data and surface data. After the data are synced, they are fed into an RNN-based long short-term memory (LSTM) network, which learns the relationship between the surface parameters and downhole data. LSTM networks can learn long-term relationships in the data, making them ideal for time-series forecasting applications. The trained model is then used to make predictions of downhole data from the given surface data. The median error for the prediction of downhole data using surface data was as low as 3% in this study. The study suggests that the developed model can accurately predict downhole data in real time. The model is also robust to the amount of noise or outliers present in the data and can predict downhole conditions 50–60 ft ahead with reasonable accuracy. It was observed that the prediction accuracy varied from well to well and with drilling depth. The results demonstrate how deep learning can be cost-effectively employed for downhole data prediction. This paper presents a novel method for using surface data to predict downhole data by employing deep learning. The method can be deployed in real time to aid in wellbore placement and improve drilling performance.
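The core workflow described above (lag-correct the surface and downhole channels, then train an LSTM to map a window of surface parameters to the downhole measurement) can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the authors' model: the channel count, window length, layer sizes, and training settings are all placeholders.

```python
# Minimal LSTM sketch for surface-to-downhole prediction (illustrative only).
# Assumes the surface and downhole series have already been synced (lag-corrected).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 30  # past time steps fed to the network (assumed)

def make_windows(surface, downhole, window=WINDOW):
    """Slice synced surface/downhole series into supervised (X, y) samples."""
    X, y = [], []
    for i in range(len(surface) - window):
        X.append(surface[i:i + window])
        y.append(downhole[i + window])
    return np.array(X), np.array(y)

surface = np.random.rand(5000, 4)   # placeholder: 4 surface channels (e.g., WOB, RPM, SPP, torque)
downhole = np.random.rand(5000)     # placeholder: one downhole target channel

X, y = make_windows(surface, downhole)

model = Sequential([
    LSTM(64, input_shape=(WINDOW, surface.shape[1])),
    Dense(32, activation="relu"),
    Dense(1),                        # predicted downhole value at the next step
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=20, batch_size=128, validation_split=0.2, verbose=0)

latest_prediction = model.predict(X[-1:])  # real-time style prediction for the newest window
```

In a deployment along these lines, the same windowing would run on streaming surface data, and the error on a held-out interval would be monitored before trusting predictions for a new well or depth range.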
Chang, Hongli (University of Alaska Fairbanks) | Saravanan, Naresh (University of Alaska Fairbanks) | Cheng, Yaoze (University of Alaska Fairbanks) | Zhang, Yin (University of Alaska Fairbanks) | Dandekar, Abhijit (University of Alaska Fairbanks) | Patil, Shirish (King Fahd University of Petroleum and Minerals)
Abstract The formation of stable heavy oil emulsions, which may upset separation facilities and eventually lead to production impairment, is one of the most common issues encountered in the development of heavy oil reservoirs. This paper investigates the influence of various physicochemical parameters, including water cut, polymer status (sheared/unsheared), polymer concentration, demulsifier type and concentration, and the coexistence of polymer and demulsifiers, on the stability of heavy oil emulsions. The viscosity of the heavy oil emulsion is also studied at various water cuts and polymer concentrations. In this study, a water-in-heavy-oil emulsion was prepared at a water cut of 30% as the blank sample using heavy oil with an API gravity of 14.5° and a synthetic brine. The effect of the water cut was investigated by both the bottle test method and the multiple light scattering (MLS) method to validate the effectiveness and reliability of the MLS method. The other parameters were studied only through the MLS method. The results showed that increasing the water cut decreased heavy oil emulsion stability and could potentially invert the stable w/o emulsion to a loose o/w emulsion at the phase inversion point, where the emulsion viscosity peak occurred. Adding polymer, regardless of the polymer status, tended to reduce the stability of the heavy oil emulsion, and the unsheared polymer contributed to lower emulsion stability. However, the influence of polymer concentration was rather complicated: the emulsion stability initially decreased as polymer concentration increased, but further increasing the polymer concentration enhanced the emulsion stability. A similar trend was also evidenced by the emulsion viscosity with increasing polymer concentration. The addition of three oil-soluble emulsion breakers was able to break the heavy oil emulsion efficiently, whereas the water-soluble demulsifier had little demulsification effect. Furthermore, there existed an optimal concentration for the selected oil-soluble demulsifiers to achieve the maximum separation. Although the polymer itself could intensify the destabilization of the heavy oil emulsion, it hindered the destabilization process when the oil-soluble demulsifiers were added. This study provides a comprehensive understanding of the factors affecting heavy oil emulsion stability.
Abstract Our previous research, honoring interfacial properties, revealed that the wettability state is predominantly caused by phase change—transforming the liquid phase to the steam phase—with the potential to affect the recovery performance of heavy-oil. Mainly, the system was able to maintain its water-wetness in the liquid (hot-water) phase but attained a completely and irrevocably oil-wet state after the steam injection process. Although a more favorable water-wetness was presented at the hot-water condition, the heavy-oil recovery process was challenging due to the mobility contrast between heavy-oil and water. Correspondingly, we substantiated that the use of thermally stable chemicals, including alkalis, ionic liquids, solvents, and nanofluids, could propitiously restore the irreversible wettability. Studying phase distribution/residual oil behavior in porous media through micromodels is essential to validate the effect of wettability on heavy-oil recovery. Two types of heavy-oils (450 cP and 111,600 cP at 25°C) were used in glass bead micromodels at steam temperatures up to 200°C. Initially, the glass bead micromodels were saturated with synthesized formation water and then displaced by heavy-oils. This process was done to exemplify the original fluid saturation in the reservoirs. To investigate the phase change effect on residual oil saturation in porous media, hot water was injected continuously into the micromodel (3 pore volumes injected, or PVI). The process was then followed by steam injection, generated by escalating the temperature to steam temperature and maintaining a pressure lower than the saturation pressure. Subsequently, the previously selected chemical additives were injected into the micromodel as a tertiary recovery application to further evaluate their performance in improving the wettability, residual oil, and heavy-oil recovery at both hot-water and steam conditions. We observed that phase change—in addition to the capillary forces—was substantial in affecting both the phase distribution/residual oil in the porous media and the wettability state. A more oil-wet state was evidenced in the steam case than in the liquid (hot-water) case. Despite the conditions, auspicious wettability alteration was achievable with thermally stable surfactants, nanofluids, a water-soluble solvent (DME), and switchable-hydrophilicity tertiary amines (SHTA)—improving the capillary number. The residual oil left in the porous media after the injections could be favorably reduced by the subsequent chemical injection, for example in the case of DME. This favorable improvement was also confirmed by contact angle and surface tension measurements in the heavy-oil/quartz/steam system. Additionally, more than 80% of the remaining oil was recovered after adding this chemical to steam. Analyses of wettability alteration and phase distribution/residual oil in porous media through micromodel visualization of thermal applications present valuable perspectives on the phase entrapment mechanism and the performance of heavy-oil recovery. This research also provides evidence and validation of tertiary recovery benefits for mature fields under steam applications.
Abstract This paper is a contribution to failure prediction of unconsolidated intervals that could have a negative impact on injection efficiency because of their susceptibility to structural changes under fluid injection processes. In unconsolidated formations, formation fines may be subjected to drag forces by the injected water because of poor cementation. This results in small grain movements, and their continuation can result in a gradual increase in permeability and the eventual development of washed-out or thief zones. This paper presents a new modeling approach using information from profile surveys and grain and pore size distributions to model the process of injection and the induced particle movement. The motivation came from field observations and profile surveys indicating substantial fines movement and a resulting increase in rock permeability. A series of case studies based on realistic published data on pore and grain size distributions are included to demonstrate the estimated increases in formation permeability. In our modeling approach, once we establish the range of grain sizes that fits the criterion for particle movement, a probabilistic algorithm developed for the study is applied to track changes in porosity and the associated variations in permeability. This algorithm, presented for the first time, uses a stochastic approach to monitor reservoir particle movements, pore size exclusion by particle accumulation, and the resultant changes in rock properties. For this methodology, we ignored potential effects of wettability and clay swelling, and considered perfect spheres to represent the various grain sizes. Predictions made using various realizations of channel formation and petrophysical alteration show the significance of having access to three sources of information: pore size distribution, grain size distribution, and profile surveys. Through inverse modeling using these pieces of information for a particular formation, we demonstrate how realistic changes can be predicted and rock transport properties mapped.
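As a rough illustration of the kind of stochastic bookkeeping the abstract describes (not the authors' algorithm), the sketch below mobilizes a random subset of the fine-grain population at each injection step, converts the removed grain volume into added porosity, and maps the porosity change to permeability with a Kozeny-Carman ratio. The grain-size distribution, mobilization criterion, sweep probability, and step count are all assumed values.

```python
# Stochastic fines-mobilization sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(seed=1)

grain_d = rng.lognormal(mean=np.log(100e-6), sigma=0.6, size=20000)  # grain diameters, m (assumed)
phi0, k0 = 0.28, 500e-15   # initial porosity and permeability in m^2 (assumed)
d_mobile = 30e-6           # grains finer than this satisfy the movement criterion (assumed)
p_move = 0.15              # probability a mobile grain is swept per injection step (assumed)

def kozeny_carman_ratio(phi_new, phi_old):
    """Permeability multiplier implied by a porosity change (Kozeny-Carman form)."""
    f = lambda p: p**3 / (1.0 - p)**2
    return f(phi_new) / f(phi_old)

in_place = np.ones(grain_d.size, dtype=bool)
phi, k = phi0, k0
for step in range(10):  # injection steps
    mobile = in_place & (grain_d < d_mobile)
    swept = mobile & (rng.random(grain_d.size) < p_move)
    in_place[swept] = False
    # removed solid (sphere) volume becomes additional pore volume
    frac_removed = np.sum(grain_d[swept]**3) / np.sum(grain_d**3)
    phi_new = phi + (1.0 - phi0) * frac_removed
    k *= kozeny_carman_ratio(phi_new, phi)
    phi = phi_new

print(f"porosity {phi0:.3f} -> {phi:.3f}, permeability x{k / k0:.2f}")
```

A full implementation in the paper's spirit would also account for pore-size exclusion and particle re-deposition, which this sketch omits.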
Demayo, Trevor N. (Chevron Corporation) | Herbert, Nevil K. (Chevron Corporation) | Hernandez, Dulce M. (Chevron Corporation) | Hendricks, Jana J. (Chevron Corporation) | Velasquez, Beberly (SunPower Corporation) | Cappello, David (SunPower Corporation) | Creelman, Ian (SunPower Corporation)
Abstract This paper outlines one of the first efforts by a major oil and gas company to build a net-exporting, behind-the-meter solar photovoltaic (PV) plant to lower the operating costs and carbon intensity of a large, mature oil and gas field in Lost Hills, California. The 29 MWAC (35 MWDC) Lost Hills solar plant, commissioned in April 2020, covers approximately 220 acres of land adjacent to the oil field and is designed to provide more than 1.4 billion kilowatt-hours of solar energy over 20 years to the field's oil and gas production and processing facilities. The upgrades to the electrical infrastructure in the field also include new technology to reduce the risk of sulfur hexafluoride (SF6) emissions, another potent greenhouse gas (GHG). Prior to solar, the Lost Hills field was importing all its electricity from the grid. With the introduction of the Innovative Crude Program as part of California's Low Carbon Fuel Standard (LCFS) and the revisions to the California Public Utilities Commission Net Energy Metering program, Lost Hills was presented with a unique opportunity to reduce its imported electricity expenses and its carbon intensity while also generating LCFS credits. The plant was designed to power the field during the day and export excess power to the grid to help offset night-time electricity purchases. The solar plant operates under a Power Purchase Agreement (PPA) with the solar PV provider and, initially, will meet approximately 80% of the oil field's energy needs. Future plans include the incorporation of lithium-ion batteries, DC-coupled with the solar inverters, with a total capacity of 20 MWh. This energy storage system will increase the amount of solar electricity fed directly into the field and reduce costs by controlling when the site uses stored solar electricity rather than electricity from the grid. The battery system will also increase the number of LCFS credits by 15% over credits generated by solar alone. Together, solar power plus energy storage provides a robust renewable energy solution. This project will generate multiple co-benefits for the Lost Hills oil field by lowering the cost of power, reducing GHG emissions, generating state LCFS credits and federal Renewable Energy Certificates, and demonstrating a commitment to the energy transition by investing in renewable technology. Lost Hills solar can hopefully serve as a model for similar future projects in other oil fields, not only in California but across the globe.
Abstract Leveraging publicly available data is a crucial step for decision making around investing in the development of any new unconventional asset. Published reports of production performance, along with accurate petrophysical and geological characterization of the area, help operators evaluate the economics and risk profiles of new opportunities. A data-driven workflow can facilitate this process and make it less biased by enabling agnostic analysis of the data as the first step. In this work, several machine learning algorithms are briefly explained and compared in terms of their application in the development of a production evaluation tool for a target reservoir. Random forest, selected after evaluating several models, is deployed as a predictive model that incorporates geological characterization and petrophysical data along with production metrics into the production performance assessment workflow. Considering the influence of the completion design parameters on well production performance, this workflow also facilitates evaluation of several completion strategies to improve decision making around the best-performing completion size. Data used in this study include petrophysical parameters collected from publicly available core data, completion and production metrics, and the geological characteristics of the Niobrara formation in the Powder River Basin. Historical periodic production data are used as indicators of productivity in a certain area in the data-driven model. This model, after training and evaluation, is deployed to predict the productivity of non-producing regions within the area of interest to help select the most prolific sections for drilling future wells. Tornado plots are provided to demonstrate the key performance drivers in each focused area. A supervised fuzzy clustering model is also utilized to automate the rock quality analysis for identifying the "sweet spots" in a reservoir. The output of this model is a sweet-spot map generated by evaluating multiple reservoir rock properties spatially. This map assists with combining all the different reservoir rock properties into a single representation that indicates the average "reservoir quality" of the formation in different areas. The Niobrara shale is used as a case study in this work to demonstrate how the proposed workflow is applied to a selected reservoir formation with enough historical production data available.
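A minimal sketch of the random-forest portion of the workflow is shown below; the dataset file and column names are hypothetical placeholders for the public petrophysical, completion, and production attributes the abstract mentions, not the authors' actual feature set.

```python
# Illustrative random-forest productivity model (hypothetical columns and file).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("niobrara_wells.csv")  # placeholder: one row per well

features = ["porosity", "water_saturation", "toc", "thickness",
            "lateral_length", "proppant_per_ft", "fluid_per_ft"]  # assumed names
target = "cum_oil_12mo"                                            # assumed target

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=500, random_state=42)
model.fit(X_train, y_train)
print("R2 on held-out wells:", r2_score(y_test, model.predict(X_test)))

# Feature importances give a tornado-plot style ranking of performance drivers.
for name, imp in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>16}: {imp:.3f}")
```

The trained model would then be scored on grid cells covering non-producing acreage to flag the most prolific sections, alongside the fuzzy-clustering sweet-spot map.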
Abstract Controlling excessive water production in mature oil fields has always been one major objective of the oil and gas industry. This objective calls for planning more effective water-control treatments with optimized designs to obtain more attractive outcomes. Unfortunately, planning such treatments still represents a dilemma for conformance experts due to the lack of systematic design tools in the industry. This paper proposes and makes available a new design approach for bulk gel treatments by grouping the designs of 62 worldwide field projects (1985-2018) according to gel volume-concentration ratio (VCR). After compiling them from SPE papers, the average gel volumes and polymer concentrations in the field projects were used to evaluate the gel VCR. Distributions of the field projects were examined according to gel VCR and formation type using stacked histograms. A comprehensive investigation was performed to identify the grouping criterion and design types of gel treatments. Based on a mean-per-group strategy, the average VCR was estimated for each channeling and formation type to build a three-parameter design approach. Two approximations for the average polymer concentration and two correlations for the minimum and maximum designs were identified and included in the approach. The study shows that the gel VCR is a superior design criterion for in-situ bulk gel treatments. Field applications tend to aggregate into three project groups with clear separating VCR cut-offs (<1, 1-3, >3 bbl/ppm). The channeling type is the dividing or distributing criterion of the gel projects among the three project groups. We identified that VCRs <1 bbl/ppm are used to treat conformance problems that exhibit pipe-like channeling, usually presented in unconsolidated and fractured formations with very long injection times (design type I). For fracture-channeling problems, frequently presented in naturally or hydraulically fractured formations, VCRs of 1-3 bbl/ppm are used (design type II). Large gel treatments with VCR >3 bbl/ppm are performed to address matrix channeling, often shown in matrix-rock formations and fracture networks (design type III). Results show that the VCR approach reasonably predicts the gel volume and the polymer concentration in the training (R values of 0.93 and 0.67) and validation (AAPE <22%) samples. Besides its novelty, the new approach is systematic, practical, and accurate, and will facilitate the optimization of gel treatments to improve their performance and success rates.
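Since the abstract quotes explicit cut-offs (<1, 1-3, >3 bbl/ppm), the grouping can be captured in a small helper; this is only a restatement of those published cut-offs, and the example treatment values are made up.

```python
# Map a bulk gel treatment to its VCR-based design type (cut-offs quoted in the abstract).
def gel_design_type(gel_volume_bbl: float, polymer_conc_ppm: float) -> str:
    vcr = gel_volume_bbl / polymer_conc_ppm  # volume-concentration ratio, bbl/ppm
    if vcr < 1.0:
        label = "Type I (pipe-like channeling)"
    elif vcr <= 3.0:
        label = "Type II (fracture channeling)"
    else:
        label = "Type III (matrix channeling / fracture networks)"
    return f"VCR = {vcr:.2f} bbl/ppm -> {label}"

# Hypothetical treatment: 20,000 bbl of gel at 5,000 ppm polymer.
print(gel_design_type(gel_volume_bbl=20000, polymer_conc_ppm=5000))
```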
Abstract This paper presents a large-scale experimental study of the compositional effect on produced bitumen properties in SAGD. The SAGD experiment used a sandpack in a cylindrical pressure vessel that was 1.22 m in length and 0.425 m in internal diameter. The pore volume of the sandpack was 58 L, and the porosity and permeability were 0.33 and 5.5 D, respectively. The sandpack was initially saturated with 93% bitumen and 7% deionized water. The SAGD experiment after preheating was operated mostly at a steam injection rate of 35 cm³/min (cold-water equivalent) at 3600 kPa (244°C). The produced fluids (gas, oil, and water) were analyzed; e.g., ten oil samples were analyzed in terms of carbon number distribution (CND), asphaltene content, density, and viscosity to investigate the compositional change of the produced bitumen. After the experiment, the sandpack was excavated and samples were taken for analysis of solid, water, oil, asphaltene, and sulfur contents. Experimental data (e.g., propagation of the steam chamber and production of oil and water) were history-matched using a numerical reservoir simulator. Results showed that the produced bitumen was lighter and contained 1 to 5 wt% less asphaltenes than the original bitumen. Also, the remaining oil inside the steam chamber contained 6 wt% more asphaltenes. As a result, the produced bitumen was 1 to 6 kg/m³ less dense than the original bitumen. In actual operations, bitumen is diluted with condensate to reduce the oil viscosity for pipeline shipping; this reduction in bitumen density corresponds to a reduction of the diluent cost by 5-10%. The produced bitumen became less dense with increasing steam-chamber volume. The history-matched simulation indicated that the progressively decreasing density of the produced bitumen can be attributed to the vaporization of the relatively volatile components in the remaining oil and the condensation of those components near the chamber edge. The history matching also indicated that varying flow regimes (counter-current and co-current flow of water and oil) affected oil recovery during the SAGD experiment.
Abstract Implementing the IIoT paradigm in classical oil & gas field OT systems is one of the essential concepts for Digital Oilfield 2.0. The transition in architecture and the corresponding technology changes can create a new cyber-physical security risk profile through alterations in the digital information structure of the oilfield OT system. With the onset of IIoT implementations in the industry, it is an opportune time to review and assess the emerging cyber-physical risk landscape. In this paper, we identified and compared the current oilfield OT logical structures with the designs emerging through IIoT implementations. The analysis includes extensive reviews of developing standards, such as those proposed by the Industrial Internet Consortium, and ongoing published experiences to find the primary points of transition. The security risks stemming from IIoT implementation appear to raise significant concerns with regard to potentially severe cybersecurity outcomes, which could materially impact the integrity and safety of oilfield operations. The study concentrated on the cybersecurity threats that could create negative physical and operational conditions resulting from loss of visibility and/or loss of control of the operational processes in field facilities. Extensive literature reviews were the basis for identifying the implications of cybersecurity risks in the ongoing stages of integrating the IIoT into the field. The reviews identified the modified strategies for cyber-physical systems, including potential threats and countermeasures for the field IIoT model. However, these proposed strategies still miss a fundamental denominator: the assessments generally ignore that it is the fundamental nature of the IIoT structure itself that creates cybersecurity vulnerabilities. To investigate further, we performed a contrasting analysis based on specific case studies of field IIoT devices, such as the pump-off controller, and OT architectures. Three foundational threat implications emerged from the transformation of IIoT architecture in the oilfield: 1) The exponential growth of connected distributed artificial intelligence (DAI) devices enormously increases the complexity of designing the software of each facility and system. 2) The cutting-edge machine-to-machine (M2M) characteristic of the IIoT model pushes the human out of the traditional control and monitoring loop. 3) The widespread scale of DAI devices with unique IP addresses in the network shifts cybersecurity risks to each connected endpoint. The distinctive IIoT attributes illustrated in the paper contribute to the potential loss of control, leading to the potential for seriously damaging operational outcomes in the field. The goal of this paper is to aid oilfield security planning and design processes through an improved recognition of the cyber-physical security impacts emerging from the implementation of IIoT architectures and the integration of these technologies into field OT domains.