In-situ combustion processes are largely a function of oil composition and rock mineralogy. The extent and nature of the chemical reactions between crude oil and injected air, as well as the heat generated, depend on the oil-matrix system. Laboratory studies using crude and matrix from a prospective in-situ combustion project should therefore be performed before designing any field operation. The chemical reactions associated with in-situ combustion are numerous and complex, and they occur over a broad temperature range.
Transmitting electrical current to the subsurface raises special considerations. Successful application of electromagnetic heating often requires a multidisciplinary approach combining electrical engineering and petroleum engineering. To assist petroleum engineers considering this approach, this article identifies some of the issues that an electrical engineer would normally anticipate and address. In most practical situations, we are concerned with fields that vary periodically in time (generally the sinusoidal steady state). In these cases, the electrical phenomena are properly described by Maxwell's equations in terms of complex vector electric and magnetic field intensities (E and H); complex vector electric flux, magnetic flux, and current densities (D, B, J); complex charge concentrations (ρc); and complex material parameters: conductivity, permittivity, and permeability (σ, ε, μ).
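For reference, in phasor form with an assumed e^(jωt) time dependence, these relations read: ∇ × E = −jωB; ∇ × H = J + jωD; ∇ · D = ρc; and ∇ · B = 0, with the constitutive relations D = εE, B = μH, and J = σE. At a given angular frequency ω, the ratio σ/(ωε) then indicates whether conduction or displacement currents dominate in the formation, which is central to selecting a heating frequency.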
The claim that the world is irresponsible in rapidly consuming irreplaceable resources ignores technical progress, market pressures, and the historical record. For example, the "Club of Rome," using exponential-growth assumptions and extrapolations under static technology, predicted serious commodity shortages before 2000, including massive oil shortages and famine. First, the new production technologies are proof that science and knowledge continue to advance and that further advances are anticipated. Second, oil prices will not skyrocket because technologies such as manufacturing synthetic oil from coal are waiting in the wings. Third, the new technologies have been forced to become efficient and profitable, even with unfavorable refining penalties. Fourth, exploration costs for new conventional oil production capacity will continue to rise in all mature basins, whereas technologies such as CHOPS can lower production costs in such basins. Fifth, technological feedback from heavy-oil production is improving conventional oil recovery. Finally, the heavy-oil resource in UCSS is vast. Although it is obvious that the amount of conventional (light) oil is limited, the impact of this limitation, while relevant in the short term (2000 to 2030), is likely to be inconsequential to the energy industry in the long term (50 to 200 years). The first discoveries in the Canadian heavy-oil belt were made in the Lloydminster area in the late 1920s. Typically, 10- to 12-mm diameter perforations were used, and pump jacks were limited by slow rod-fall velocity in the viscous oil to a maximum of 8 to 10 m³/d of production, usually less. Operators had to cope with small amounts of sand, approximately 1% in the more viscous oils. Small local operators learned empirically that wells that continued to produce sand tended to be better producers, and efforts to exclude sand with screens usually led to total loss of production.
Operators spread the waste sand on local gravel roads and, in some areas, the roadbeds are now up to 1.5 m higher because of repeated sand spreading. The sharp oil price increases in the 1970s and 1980s led to great interest in heavy-oil-belt resources (approximately 10 × 10⁹ m³). Many international companies arrived and introduced the latest screen and gravel-pack technology but, in all cases, the result was greatly impaired productivity or total failure to bring the well on production. To this day, there are hundreds of inactive wells with expensive screens and gravel packs. The advent of progressing cavity (PC) pumps in the 1980s changed the nonthermal heavy-oil industry in Canada. The first PC pumps had short lifespans and were not particularly cost-effective, but better quality control and continued advances led to longer life and fewer problems. The rate limits of beam pumps were no longer a barrier and, between 1990 and 1995, operators changed their view of well management.
Evaluation of reservoirs as candidates for cold heavy oil production with sand (CHOPS) requires an assessment of the reservoir and an understanding of the key success factors for this technology. This article discusses what is known in these areas. The range of reservoir characteristics for CHOPS, summarized in Table 1, comes largely from Canadian experience. Because the Venezuelan heavy-oil deposits in the Faja del Orinoco represent a huge oil reserve, it is worth noting that their physical properties and geological histories are similar.
Martini, Brigette (Corescan Inc.) | Bellian, Jerome (Whiting Petroleum Corporation) | Katz, David (Encana Corporation) | Fonteneau, Lionel (Corescan Pty Ltd) | Carey, Ronell (Corescan Pty Ltd) | Guisinger, Mary (Whiting Petroleum Corporation) | Nordeng, Stephan H. (University of North Dakota)
Hyperspectral core imaging studies of the Bakken-Three Forks formations over the past four years have revealed non-destructive, high-resolution, spatially referenced insight into mineralogy, both primary and diagenetically altered, that can be applied to reservoir characterization. While ‘big’ data such as co-acquired hyperspectral imagery, digital photography, and laser profiles can be challenging to analyze, synthesize, scale, visualize, and store, their value in providing mineralogical information, structural variables, and visual context at scales that lie between (and ultimately link) nano- and reservoir-scale measurements of the Bakken-Three Forks system is unique.
Simultaneous, co-acquired hyperspectral core imaging data (at 500 μm spatial resolution), digital color photography (at 50 μm spatial resolution), and laser profiles (at 20 μm spatial and 7 μm vertical resolution) were acquired over 24 wells for a total of 2,870 ft of core, seven wells of which targeted the Bakken-Three Forks formations. These Bakken-Three Forks data (~5.5 TB) represent roughly 175,000,000 pixels of spatially referenced mineralogical data. Measurements were performed at a mobile Corescan HCI-3 laboratory based in Denver, CO, while spectral and spatial analysis of the data was completed offsite in Perth, WA, Australia, using proprietary in-house spectral software. Synthesis of the spectral-based mineral maps and laser-based structural data with ancillary data (including Qemscan, XRD, and various downhole geophysical surveys) was completed in several software and modelling platforms.
The resulting spatial context of this hyperspectral imaging-based mineralogy is particularly compelling, both in small-scale micro-distributions and in borehole-scale mineralogical distributions related to both primary lithology and secondary alteration. These studies also present some of the first successful measurements and derivations of lithology from hyperspectral data. Relationships between hyperspectral-derived mineralogy and oil concentrations are presented, as are separately derived structural variables. Relationships between hyperspectral-based mineralogy and micro-scale reservoir characteristics (including those derived from Qemscan) were studied, as were relationships to larger-scale downhole geophysical data (resulting in compelling correlations between resistivity and hyperspectral mineralogy). Finally, basic net-to-gross calculations were completed using the hyperspectral imaging data, thereby extending the use of such data from geological characterization through to resource estimation.
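A net-to-gross calculation from pixel-wise mineral maps can be sketched as follows. This is a minimal illustration under assumed inputs: the class names and the choice of which classes count as reservoir-quality are hypothetical, not the authors' actual cutoffs.

```python
def net_to_gross(pixel_classes, reservoir_classes):
    """Fraction of classified pixels that fall in reservoir-quality classes.

    pixel_classes: iterable of per-pixel mineral/lithology labels
    reservoir_classes: set of labels treated as 'net' (an assumption here)
    """
    pixels = list(pixel_classes)
    n_net = sum(1 for c in pixels if c in reservoir_classes)
    return n_net / len(pixels)

# Illustrative use with hypothetical labels: 3 of 4 pixels are 'net'.
ntg = net_to_gross(["dolomite", "clay", "quartz", "dolomite"],
                   {"dolomite", "quartz"})
```

In practice the same ratio would be computed per depth interval over millions of pixels, with cutoffs calibrated against Qemscan or XRD control points.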
The high-fidelity mineralogical maps afforded by hyperspectral core imaging have not only provided new geological insight into the Bakken-Three Forks formations but can also improve well completion designs in those formations, as well as provide a framework for applying the technology to other important unconventional reservoir formations in exploration and development. The semi-automated nature of the technology also ushers in the ability to log mineralogy consistently and accurately from multiple wells and fields globally, allowing for advanced comparative analysis.
Swami, Vivek (CGG) | Tavares, Julio (CGG) | Pandey, Vishnu (CGG) | Nekrasova, Tatyana (CGG) | Cook, Dan (Bravo Natural Resources) | Moncayo, Jose (Bravo Natural Resources) | Yale, David (Yale Geomechanics Consulting)
In this study, a state-of-the-art, seismic-driven 3D geological model was built and calibrated to petrophysical and geomechanical analyses, in the form of 1D mechanical earth models (1D-MEMs), on selected wells within the Arkoma Basin of Oklahoma. The well information used in this study included basic wireline logs and core analysis, including X-ray diffraction (XRD) data. The traditional petrophysical analysis was augmented with advanced rock physics and statistical techniques to generate the necessary logs. Hydrostatic, overburden, and pore pressures were calculated with a petrophysical evaluation model. The 1D-MEMs were based on the Eaton/Olson/Blanton approach with the HTI (horizontal transverse isotropy) assumption and were calibrated to laboratory data (triaxial tests) and field observations (mud logs, wellbore failure, frac pressures). As a result, high confidence was achieved in Biot's coefficient, tectonic components, anisotropy, and the dynamic-to-static conversion factors for Young's modulus and Poisson's ratio. Seismic inversions were performed in different time windows and merged to generate high-resolution P- and S-impedance attributes from the surface down to the target interval after careful AVO-compliant gather preconditioning. A density volume estimate was calibrated to well data, accounting for different geological formations, to decouple P- and S-wave components as a 3D volume, as well as dynamic Young's modulus (E) and Poisson's ratio (PR). Dynamic E and PR were converted to static parameters using results from the 1D-MEMs, and 3D models of Biot's coefficient (α) and tectonic components were built to compute 3D fracture pressure volumes calibrated to well data. The final products were seismic-driven 3D pore pressure and fracture pressure volumes calibrated to the 1D-MEMs. The correlation between measured/estimated well logs and the corresponding seismic-derived pseudo-logs was more than 80%, which indicates good quality of the seismic inversion results and hence of the 3D-MEM.
Also, stress barriers, anisotropy, and brittleness indices were calculated at the well scale, which helps identify the best zones in which to place hydraulic fractures. The 3D geological model will aid in identifying sweet spots and optimizing hydraulic fractures.
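The dynamic E and PR referenced above are conventionally derived from sonic velocities and bulk density. A minimal sketch using the standard isotropic elasticity relations (SI units) is shown below; the dynamic-to-static conversion factors calibrated in the study are well-specific and are not reproduced here.

```python
def dynamic_moduli(vp, vs, rho):
    """Dynamic Young's modulus (Pa) and Poisson's ratio from
    P-wave velocity vp (m/s), S-wave velocity vs (m/s), and
    bulk density rho (kg/m3), assuming isotropic elasticity:
        nu = (vp^2 - 2 vs^2) / (2 (vp^2 - vs^2))
        E  = rho vs^2 (3 vp^2 - 4 vs^2) / (vp^2 - vs^2)
    """
    vp2, vs2 = vp * vp, vs * vs
    nu = (vp2 - 2.0 * vs2) / (2.0 * (vp2 - vs2))
    e = rho * vs2 * (3.0 * vp2 - 4.0 * vs2) / (vp2 - vs2)
    return e, nu

# Illustrative inputs (not from the study):
e_dyn, nu_dyn = dynamic_moduli(vp=4000.0, vs=2400.0, rho=2500.0)
```

A static estimate would then be obtained by applying the well-calibrated conversion factors mentioned in the abstract, e.g. E_static = k * E_dynamic with k from triaxial tests.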
Euzen, Tristan (IFP Technologies (Canada) Inc.) | Watson, Neil (Enlighten Geoscience Ltd.) | Chatellier, Jean-Yves (Tecto-Sedi Integrated Inc.) | Mort, Andy (Geological Survey of Canada) | Mangenot, Xavier (Caltech)
With the development of unconventional resources, the large number and high density of well data in the deep/distal parts of sedimentary basins offer new avenues for petroleum system analysis. Gas geochemistry is a widespread and inexpensive data source that can provide invaluable information to better understand unconventional plays. This paper illustrates the use of early production gas composition as a proxy for in-situ hydrocarbon phase distribution in the Montney play of westernmost Alberta and northeastern British Columbia. We demonstrate that careful stratigraphic allocation of the landing zone of horizontal wells is a key step toward meaningful interpretation and mapping of gas geochemical data. Regional mapping of the dryness of early production gas from the Montney Formation clearly delineates thermal maturity windows that are consistent with available carbon isotopic data from produced and mud gas. Integrating this mapping with pressure and temperature data also highlights gas migration fairways that are likely influenced by major structural elements and compartmentalization of the basin. In the wet gas window, reported condensate-gas ratios show that the liquid recovery from multistage fractured horizontal wells is highly variable and strongly influenced by variations in reservoir quality and stimulation design. Understanding in-situ fluid distribution can help narrow down the number of variables and identify key controls on liquid recovery. Several examples combining produced and mud gas data illustrate the use of geochemistry to better constrain geological and operational controls on productivity and liquids recovery in the Montney play.
With the rapid development of unconventional resources, a wealth of new data has been released from historically undrilled or poorly documented portions of sedimentary basins. The large number and high density of well data over extended areas of the deep/distal parts of these basins offer invaluable information and new perspectives for petroleum system analysis. In the Montney play of Western Canada, the distal unconventional part of the basin covers an area of approximately 65,000 square kilometers and has been penetrated by over 7,000 horizontal wells. Due to sustained low gas prices in North America over the past decade, most of the industry activity has been focused on the liquids-rich gas and light oil fairways of this resource play. Production data show that although a broad liquids-rich fairway can be defined at the basin scale, local variations in fluid distribution and reservoir quality strongly affect the liquid recovery from horizontal wells. The geochemical compositions of both produced gas and mud gas provide a powerful tool to investigate those variations, their geological controls, and their impact on well performance. While this paper focuses on fluid distribution, numerous studies have documented the influence of reservoir quality on liquid recovery in the Montney play (Chatellier and Perez, 2016; Kato et al., 2018; Akihisia et al., 2018; Iwuoha et al., 2018).
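The gas dryness used for the maturity mapping above can be sketched as follows. The abstract does not state the exact formula used; a common definition, assumed here, is the methane fraction of the C1-C5 hydrocarbon total.

```python
def gas_dryness(c1, c2, c3, c4, c5):
    """Dryness = C1 / (C1 + C2 + C3 + C4 + C5).

    Components are hydrocarbon concentrations on a consistent basis
    (e.g. mol%). Values near 1.0 indicate dry gas; lower values
    indicate progressively wetter gas.
    """
    total = c1 + c2 + c3 + c4 + c5
    return c1 / total

# Illustrative composition (mol%), not Montney data:
dryness = gas_dryness(c1=90.0, c2=5.0, c3=3.0, c4=1.0, c5=1.0)
```

Mapping this single scalar per well, after the stratigraphic allocation step described above, is what delineates the maturity windows discussed in the paper.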
Schumi, Bettina (OMV E&P) | Clemens, Torsten (OMV E&P) | Wegner, Jonas (HOT Microfluidics) | Ganzer, Leonhard (Clausthal University of Technology) | Kaiser, Anton (Clariant) | Hincapie, Rafael E. (OMV E&P) | Leitenmüller, Verena (Montan University Leoben)
Chemical enhanced oil recovery incurs substantial incremental costs over waterflooding of oil reservoirs. Reservoirs containing oil with a high total acid number (TAN) can be produced by injection of alkali, which can generate soaps in situ and emulsify the oil. However, the generated emulsions are not always stable.
Phase experiments are used to determine the initial amount of emulsion generated and, when measured over time, its stability. Based on the phase experiments, both the minimum alkali concentration and the concentration above which no significant increase in initial emulsion formation is observed can be determined.
Micro-model experiments are performed to investigate the effects at the pore scale. For injection of alkali into high-TAN oils, mobilization of residual oil after waterflooding is observed. The oil mobilization is due to the break-up of oil ganglia or the movement of elongated ganglia through the porous medium. As the oil is depleted of surface-active components, residual oil saturation is left behind either as isolated ganglia or on the down-gradient side of grains.
Simultaneous injection of alkali and polymers leads to higher incremental oil production in the micro-models, owing to larger pressure drops over the oil ganglia and, accordingly, more effective mobilization.
Core flood tests confirm the micro-model experiments, and additional data are derived from these tests. Alkali co-solvent polymer injection leads to the highest incremental oil recovery of the chemical agents tested, a difference that is difficult to resolve in micro-model experiments. Polymer adsorption is substantially reduced when alkali is injected with the polymers compared with polymer injection alone; the reason is the effect of pH on the polymers. As in the micro-models, the incremental oil recovery is also higher for alkali polymer injection than for alkali injection alone.
To evaluate the incremental operating costs of the chemical agents, equivalent utility factors (EqUF) are calculated. The EqUF takes the costs of the various chemicals into account. The lowest EqUF, and hence the lowest incremental chemical OPEX, is incurred by injection of Na2CO3; however, the highest incremental recovery factor is seen with alkali co-solvent polymer injection. It should be noted that the incremental oil recovery owing to macroscopic sweep efficiency improvement by polymer also needs to be taken into account to assess the efficiency of the chemical agents.
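One way such a cost-weighted utility factor can be computed is sketched below. The exact EqUF definition is not given in the abstract; the form assumed here follows the common utility-factor convention (mass of chemical injected per unit of incremental oil), with each chemical's mass weighted by its cost relative to a reference chemical.

```python
def equf(chem_masses_kg, chem_costs_per_kg, ref_cost_per_kg, incr_oil_m3):
    """Assumed EqUF form: cost-weighted kg of chemical per m3 of
    incremental oil.

    chem_masses_kg:   mass of each chemical injected (kg)
    chem_costs_per_kg: unit cost of each chemical
    ref_cost_per_kg:  unit cost of the reference chemical used for weighting
    incr_oil_m3:      incremental oil attributed to the injection (m3)
    """
    weighted_mass = sum(m * c / ref_cost_per_kg
                        for m, c in zip(chem_masses_kg, chem_costs_per_kg))
    return weighted_mass / incr_oil_m3

# Illustrative numbers only (not from the study):
# 1000 kg alkali at 2.0 $/kg plus 500 kg polymer at 4.0 $/kg,
# referenced to the alkali cost, per 100 m3 of incremental oil.
factor = equf([1000.0, 500.0], [2.0, 4.0], 2.0, 100.0)
```

Lower values indicate cheaper incremental oil; comparing agents on this basis is what identifies Na2CO3 as the lowest-OPEX option in the study.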
For decades, steam-assisted recovery processes have been successfully deployed in heavy oil reservoirs to extract bitumen and heavy oil. Current resource allocation practices mostly involve reservoir model-based open-loop optimization at the planning stage and its periodic recurrence. However, such decades-old strategies need a complete overhaul because they ignore dynamic changes in reservoir conditions and surface facilities, ultimately rendering heavy oil production economically unsustainable in a low-oil-price environment. Since steam supply costs account for more than 50% of total operating costs, a data-driven strategy that transforms the data available from various sensors into meaningful steam allocation decisions deserves further attention.
In this research, we propose a purely data-driven algorithm that maximizes an economic objective function by allocating an optimal amount of steam to different well pads. The method comprises two main components: forecasting and nonlinear optimization. A dynamic model relates variables in historical field data, measured at regular time intervals, that can be used to compute economic performance indicators (EPIs). The variables in the model are cumulative in nature, so that they capture temporal changes in reservoir conditions. Accurate prediction of the EPI is ensured by retraining the regression model on the latest available data. The predicted EPI is then optimized using a nonlinear optimization algorithm subject to amplitude and rate saturation constraints on the decision variables, i.e., the amount of steam allocated to each well pad.
The proposed steam allocation strategy is tested on two well pads (each containing 10 wells) of an oil sands reservoir located near Fort McMurray in Alberta, Canada. After exploratory analysis of the production history, an output error (OE) model is built between logarithmically transformed cumulative steam injection and cumulative oil production for each well pad. The commonly used net present value (NPV) is taken as the EPI to be maximized. Optimization of the objective function is subject to distinct operating conditions and realistic constraints. Comparing the results with the field production history shows that the optimal steam injection profiles for both well pads differ significantly from the field's. In fact, the proposed algorithm provides smooth and consistent steam injection rates, unlike the field injection history. A lower steam-oil ratio is also achieved for both well pads, ultimately translating into ~19% higher NPV compared with the field data.
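The forecast-then-optimize loop described above can be sketched as follows. This is a simplified stand-in, not the authors' OE model or NPV formulation: a log-log regression between cumulative steam and cumulative oil substitutes for the OE model, a one-step revenue-minus-steam-cost proxy substitutes for NPV, and a grid search substitutes for the nonlinear optimizer. The amplitude and ramp (rate-saturation) constraints on the decision variable are retained.

```python
import math

def fit_loglog(cum_steam, cum_oil):
    """Least-squares fit of log(oil) = a + b*log(steam); stand-in for
    the OE model on log-transformed cumulatives."""
    xs = [math.log(s) for s in cum_steam]
    ys = [math.log(o) for o in cum_oil]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def predict_oil(a, b, cum_steam):
    return math.exp(a) * cum_steam ** b

def best_rate(a, b, cum_steam, cum_oil, prev_rate,
              rate_min, rate_max, ramp, oil_price, steam_cost, n_grid=200):
    """Grid search for next-period steam allocation maximizing a one-step
    NPV proxy, subject to amplitude bounds (rate_min..rate_max) and a
    ramp constraint |q - prev_rate| <= ramp."""
    lo = max(rate_min, prev_rate - ramp)
    hi = min(rate_max, prev_rate + ramp)
    best_q, best_npv = lo, -float("inf")
    for i in range(n_grid + 1):
        q = lo + (hi - lo) * i / n_grid
        d_oil = predict_oil(a, b, cum_steam + q) - cum_oil
        npv = oil_price * d_oil - steam_cost * q  # illustrative prices
        if npv > best_npv:
            best_q, best_npv = q, npv
    return best_q, best_npv
```

In the retraining spirit of the paper, fit_loglog would be re-run each period as new cumulative data arrive before best_rate is called for each pad, which is what yields the smooth injection profiles described above.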
Inspired by state-of-the-art control techniques, the proposed steam allocation algorithm provides a generic data-driven framework that can accommodate any number of well pads, EPIs, and amounts of past data. It is computationally inexpensive because no numerical simulations are required. Overall, it can potentially reduce the energy required to extract heavy oil and increase revenue while incurring no additional capital cost and reducing greenhouse gas emissions.