Field development strategies in unconventional shale reservoirs have intensified over the last few decades, with completion design and well spacing as key variables in the incremental design process. Given this wide range of design and development strategies, assets across different basins may end up with wells from several design generations. This complicates type curve creation, since conventional approaches do not account for the impact of hydrocarbon drainage by older (parent) wells on newer (child) wells. The present paper tackles this issue by incorporating date-dependent spacing variables into type curve development to account for the dynamism of field development strategies over the years.
The present paper analyzes the impact of well spacing on type curve development in an asset. Type curve generation is a critical component of evaluation and subsequent planning, so de-risking this step is very valuable. Much of the analysis done in recent years has treated well spacing as a static variable; the present analysis instead treats spacing as a dynamic variable to account for time-series variations. Spacing is estimated with a 3-D algorithm that identifies multiple points along the lateral section of the wellbore for a true evaluation of pressure-transient propagation.
The present analysis showed the impact of date-dependent well spacing on type curve development. The underestimation of well spacing in well-developed acreage was brought to attention: mean spacing deviations of up to 0.7 standard deviations were found between the current-well-spacing and date-dependent-well-spacing scenarios analyzed. These deviations led to type curves with up to a 40% EUR differential between estimation processes, and PV10 differentials higher than 100% in some cases. While the degree of impact of time-series well spacing varied across the assets evaluated, quantifying the risk in type curve development and subsequent EUR estimation were key conclusions of the analysis.
The present paper presents a novel approach to type curve development for parent and child wells observed across different basins. It provides guidelines for creating highly accurate type curves and, by conducting the analysis in the high-well-density Middle Bakken formation, highlights errors that may arise from high well density and inter-well interaction.
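The core ideas above (sampling multiple points along each lateral and computing spacing only against wells already on production at a given date) can be sketched as follows. This is a minimal illustration, not the authors' algorithm; the function names and the simple point-to-point minimum distance are assumptions for the sketch.

```python
import numpy as np

def min_3d_spacing(lateral_a, lateral_b):
    """Minimum point-to-point distance between two laterals, each given
    as an (n, 3) array of (x, y, z) points sampled along the wellbore."""
    a = np.asarray(lateral_a, dtype=float)
    b = np.asarray(lateral_b, dtype=float)
    # Pairwise distances between all sampled points on both laterals.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min()

def date_dependent_spacing(wells):
    """For each well, spacing to the nearest well already on production
    at that well's first-production date. `wells` is a list of dicts
    with keys 'name', 'first_prod', and 'lateral' (hypothetical schema)."""
    spacings = {}
    for w in wells:
        older = [o for o in wells if o['first_prod'] < w['first_prod']]
        if not older:
            spacings[w['name']] = None  # no parent wells existed yet
            continue
        spacings[w['name']] = min(
            min_3d_spacing(w['lateral'], o['lateral']) for o in older)
    return spacings
```

A static-spacing calculation would instead compare every well against all of its final neighbors, which is exactly the underestimation the paper quantifies.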
Though seemingly straightforward, the concept of "net-to-gross" (NTG) is often a source of confusion, and its proper use is still debated in parts of the oil and gas industry. NTG is a method of accounting for non-reservoir-quality rock when calculating oil volumes within a reservoir, normally accomplished by applying cutoffs to calculated quantities, such as porosity, which are then excluded from the volumetric calculation. To the extent there have been recent discussions of this, the focus has been primarily on how to determine appropriate cutoffs; there has been very little mention of the implications of using NTG in flow equations within a reservoir simulator. The paper discusses the derivation of, and assumptions implied by, the simulator NTG formulation, examines possible errors, and proposes modifications to account for inconsistencies.
Resolving the NTG flow equations can be viewed as an upscaling problem, subject to implied assumptions about reservoir continuity. Many fine-scale reservoir simulations were run to test this and to calibrate the NTG equations. The underlying attributes were sampled from a bimodal distribution representing pay and non-pay. The results show the effects of NTG ratio, fine-scale attribute values, and spatial correlation on steady-state, single-phase effective permeability and on immiscible flow displacements. They demonstrate errors in effective horizontal and vertical permeability when using NTG within a simulator; these errors cause potentially significant differences in production responses between underlying detailed fine-scale models and coarser models. The results demonstrate a possible need for corrections to the simulator net-to-gross formulations due to underlying implied assumptions and inconsistencies, and some possible modifications are presented. Both standard and machine learning techniques were used to analyze the results.
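The cutoff-based NTG idea described above can be illustrated with a minimal sketch. The cutoff value and function names are hypothetical choices for illustration, not recommendations from the paper.

```python
import numpy as np

def ntg_and_pore_volume(porosity, cell_volume, phi_cutoff=0.06):
    """Apply a porosity cutoff to flag pay cells, then return the
    net-to-gross ratio and the net pore volume. The cutoff of 0.06
    and equal cell volumes are illustrative assumptions."""
    porosity = np.asarray(porosity, dtype=float)
    pay = porosity >= phi_cutoff           # cells counted as reservoir
    ntg = pay.mean()                       # fraction of cells that are pay
    net_pv = np.sum(porosity[pay]) * cell_volume
    return ntg, net_pv
```

In a simulator, the same NTG ratio also multiplies horizontal transmissibilities, which is where the implied-continuity assumptions the paper examines enter the flow equations.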
Determining the potential of shale-gas reservoirs involves an exhaustive process of calculating the volume of total gas, or original gas in place (OGIP). The calculation of total gas relies on calibrating wireline logs to core data, which are considered an empirical validation or ‘ground truth’. However, inconsistency in sample preparation and analytical techniques within and between laboratories creates significant uncertainty in calculating the free- and adsorbed-gas components, which constitute total gas. Here, we present an analytical program performed on samples of core to elucidate the causes of uncertainty in the calculation of total gas. The findings of this program are used to propose improved methods of calculating total gas from core.
Free gas calculated from properties, such as porosity and water saturation measured on core, was found to be highly dependent on laboratory analytical protocols. Differences in sample preparation and water extraction methods led to relative differences of 20% in water saturation and 10% in porosity observed between laboratories, leading to differences of 35% in calculations of free gas in place (FGIP).
Adsorbed gas was evaluated using methane adsorption testing to study the changes in Langmuir parameters in samples with a wide variety of water saturations, clay content, and total organic content over a range of temperatures. It was found that the storage capacity of adsorbed gas artificially increased by a factor of two to three when the experimental temperature exceeded the boiling point of water. This increase is related to the expulsion of clay-bound water and subsequent availability of clay surfaces for methane adsorption.
Total gas in place (TGIP) is the sum of the free- and adsorbed-gas volume estimates. The interaction and overlap of pore space between these two volume components are also important to consider. A simple monolayer-based correction is proposed that removes the volume occupied by adsorbed gas from the free-gas volume, based on a composite pore-size distribution from scanning electron microscopy (SEM) point-counting and nitrogen-adsorption data.
Pressurized sidewall-core samples were acquired at reservoir conditions to measure free- and adsorbed-gas volumes during controlled depressurization under laboratory conditions. This provided a baseline measurement for comparison with calculations from traditional measurements to understand which laboratory protocol and sample preparation technique provided the most robust results.
This study has elucidated methods to reduce the uncertainty in gas-in-place calculations and better understand resource distribution in dry-gas source rocks.
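The adsorbed-gas term and the pore-space overlap correction discussed above can be sketched as follows. The Langmuir isotherm is standard; the way the overlap is expressed here (as an assumed fraction of free-gas pore volume occupied by the adsorbed layer) is a simplification for illustration, not the paper's SEM/nitrogen-adsorption workflow.

```python
def langmuir_adsorbed_gas(p, v_l, p_l):
    """Langmuir isotherm: adsorbed storage capacity at pressure p,
    given Langmuir volume v_l and Langmuir pressure p_l."""
    return v_l * p / (p_l + p)

def total_gas_in_place(fgip, agip, adsorbed_phase_fraction):
    """TGIP with a monolayer-style overlap correction: the pore volume
    occupied by the adsorbed phase is removed from the free-gas term.
    `adsorbed_phase_fraction` is an assumed, illustrative input that
    would come from the composite pore-size distribution in practice."""
    return fgip * (1.0 - adsorbed_phase_fraction) + agip
```

Without the correction, the same pore space is double-counted by the free- and adsorbed-gas terms, biasing TGIP high.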
A mechanistic approach for calculation of oil-gas capillary pressure curves and relative permeabilities in unconventional reservoirs is presented. The approach accounts for reservoir fluid composition, contact angle wettability and pore size distribution of each specific reservoir and generates a unique set of relative permeability curves based on those inputs. This allows calculation of curves in reservoirs where historical production data is limited.
Phase behavior calculations are computed by coupling the Peng-Robinson equation of state and the Young-Laplace capillary pressure model. This coupling allows for inclusion of the effect of confinement of reservoir fluids on volumetric and transport properties.
The reservoir is modeled as a bundle of tubes with diameters representative of the pore size distribution found in the reservoir. A multi-step depletion is modeled, followed by gas injection and a secondary depletion, and separate capillary pressure results are obtained for each part of the process. After the capillary pressure curves are generated, an integration is performed on the capillary results to generate a set of relative permeability curves following the method of Nakornthap and Evans (1986).
The multi-step process allows recalculation of the relative permeability curves as the reservoir fluid composition changes due to the initial depletion and the subsequent gas injection. The approach yields a unique set of relative permeability results for each set of input parameters. The mechanistic approach is demonstrated on two different oil compositions: a black oil sample and a volatile oil. For each oil composition, two different injection gases are evaluated (methane and carbon dioxide). The intermediate calculations are summarized and the final permeability results are included in the paper. The results show that, for both oil samples evaluated, gas injection increases the oil relative permeability, and carbon dioxide is more effective at increasing it than methane. This suggests that carbon dioxide could be an effective option for enhanced oil recovery operations in unconventional reservoirs.
A unique element of the approach presented is that the calculation of relative permeability curves for the initial reservoir depletion is immediately followed by the calculation of new relative permeability curves as the reservoir composition changes due to gas injection. This allows prediction of relative permeability results in an unconventional reservoir for both the initial reservoir depletion and also for hypothetical enhanced oil recovery operations. Since the model can be run quickly and repeatedly, sensitivity analyses can be performed on the permeability curves as a function of initial reservoir conditions and injection gas compositions and amounts.
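The bundle-of-tubes capillary pressure construction described above can be sketched with the Young-Laplace relation. This is a minimal sketch under assumed simplifications (equal-length cylindrical tubes, so each tube's volume scales with radius squared); the function names and inputs are illustrative.

```python
import math

def young_laplace_pc(ift, contact_angle_deg, pore_radius):
    """Capillary pressure for a cylindrical tube (Young-Laplace):
    Pc = 2 * sigma * cos(theta) / r."""
    theta = math.radians(contact_angle_deg)
    return 2.0 * ift * math.cos(theta) / pore_radius

def pc_curve(ift, contact_angle_deg, pore_radii):
    """Bundle-of-tubes sketch: sort pores large-to-small; as capillary
    pressure rises, the non-wetting phase invades progressively smaller
    tubes. Returns (wetting-phase saturation, Pc) pairs assuming
    equal-length tubes (volume proportional to r**2)."""
    radii = sorted(pore_radii, reverse=True)
    vols = [r * r for r in radii]
    total = sum(vols)
    curve, invaded = [], 0.0
    for r, v in zip(radii, vols):
        invaded += v
        sw = 1.0 - invaded / total         # remaining wetting saturation
        curve.append((sw, young_laplace_pc(ift, contact_angle_deg, r)))
    return curve
```

In the paper's workflow, curves like these would then be integrated (per Nakornthap and Evans) to obtain relative permeabilities, and recomputed after each composition change.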
A set of numerical algorithms for non-Darcy flow models is developed and compared in order to assess the algorithms' functionality and their potential for embedding into existing reservoir simulation software. In addition, the question of using such updated software to study the applicability of various non-Darcy flow models to unconventional reservoirs is discussed.
The approaches are based on a generalization of the linear Darcy law in which the flow equation is modified by nonlinear expressions of the flow rate and other reservoir quantities, so that various formulations of non-Darcy flow from different research papers can be described as particular cases of this general formula. This generalized flow equation is then applied to the modified black-oil equations, but the flow rate cannot be eliminated as an unknown because of the properties of the generalization. A finite volume discretization and Newton linearization are performed, and several techniques for computationally efficient solution are examined.
A prototype reservoir simulation program based on the obtained mathematical model is constructed. Several numerical experiments are performed to verify the numerical solutions and the applied algorithms, and convergence rates of the different approaches to non-Darcy flow are studied. The most significant finding is the existence of common approaches for eliminating the discretized and linearized flow equations at each iteration of the nonlinear solver. This is important given the variety of non-Darcy models derived from different premises (such as the Forchheimer quadratic law and the power law for non-Newtonian fluids), which can be studied through the general algorithm as a research framework. Equally important, the developed approaches are practically efficient and could be implemented in previously developed software without significant rearrangement of its code and algorithms, in order to immediately gain practically useful simulations of non-Darcy flows or to explore their applicability, which remains an open issue.
The novelty of the considered approaches lies in the ability to embed non-Darcy flow models into existing reservoir simulation software while keeping most of the implemented algorithms and data structures. Because the algorithms are based on a generalized form of non-Darcy flow, a wide range of models can be calculated while preserving computational complexity.
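As a concrete instance of the generalized nonlinear flow law and its Newton linearization, the Forchheimer quadratic case can be sketched in one dimension. This is an illustrative sketch, not the authors' simulator code; the units and parameter values in the test are assumed consistent SI values.

```python
def forchheimer_velocity(dpdx, mu, k, beta, rho, tol=1e-12, max_iter=50):
    """Solve the Forchheimer law  dpdx = (mu/k)*v + beta*rho*v**2
    for the velocity v with Newton's method, where dpdx is the
    magnitude of the pressure gradient. With beta = 0 this reduces
    to the linear Darcy case."""
    v = dpdx * k / mu                      # Darcy velocity as initial guess
    for _ in range(max_iter):
        f = (mu / k) * v + beta * rho * v * v - dpdx
        fprime = mu / k + 2.0 * beta * rho * v
        step = f / fprime
        v -= step
        if abs(step) < tol * max(1.0, abs(v)):
            break
    return v
```

In the paper's framework, the same Newton update would be applied to whichever nonlinear rate expression the chosen non-Darcy model supplies, which is what allows one solver structure to serve several models.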
The Gulf of Mexico, and more precisely the Wilcox trend, has long been considered a challenging area for developing profitable hydrocarbon fields. The safe drilling of deep offshore wells must take into account the geological and geomechanical complexities generated by the sedimentological and tectonic events that accompanied the development of the Wilcox trend. For the Buckskin field, located in the Keathley Canyon protraction (Figure 1), we developed a workflow to overcome those challenges that ranks all the parameters related to the geometry, geology, rock quality, and geomechanical characteristics of the reservoir. The core of the workflow is a probabilistic method that assesses the uncertainty of the productivity index (PI), based on experimental design and Monte Carlo simulation. The proposed workflow allowed optimization of the well's PI thanks to a highly deviated reservoir section at a depth below 24,000', combined with an optimal hydraulic-fracturing job.
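The Monte Carlo assessment of PI uncertainty mentioned above can be sketched minimally. The PI formula used here is the standard pseudo-steady-state radial-inflow expression in field units, and every input distribution and constant below is purely illustrative, not data from the Buckskin study.

```python
import numpy as np

def pi_monte_carlo(n=10000, seed=42):
    """Monte Carlo sketch of productivity-index uncertainty:
    PI = k*h / (141.2*mu*B*(ln(re/rw) - 0.75 + s)) in field units.
    Distributions are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # permeability, md
    h = rng.uniform(80.0, 120.0, size=n)                      # net pay, ft
    s = rng.normal(2.0, 1.0, size=n)                          # skin
    mu, b_o, re, rw = 0.8, 1.3, 1500.0, 0.35                  # fixed inputs
    pi = k * h / (141.2 * mu * b_o * (np.log(re / rw) - 0.75 + s))
    return np.percentile(pi, [10, 50, 90])                    # P10 / P50 / P90
```

An experimental design would replace the independent draws above with a structured sampling of the ranked parameters, which is the refinement the workflow describes.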
Donadel Basso, Eduardo (UFRGS – Universidade Federal do Rio Grande do Sul) | André Perondi, Eduardo (UFRGS – Universidade Federal do Rio Grande do Sul) | Francisco Lisboa Santos, Hugo (Petrobras – Petróleo Brasileiro S/A) | Augusto Couto Barone, Dante (UFRGS – Universidade Federal do Rio Grande do Sul) | Luis da Silva Júnior, Anselmo (Instituto SENAI de Inovação em Sistemas Embarcados) | Viegas Wentz, André (Instituto SENAI de Inovação em Polímeros) | Mendel, Henrique (Mendel Serviços de Engenharia)
Obstruction of flexible lines by hydrates or paraffins is quite common in offshore production systems. To clear the pipes in these cases, many different techniques, usually time consuming and expensive, are currently used. This article presents the development of a new self-propelled robot, currently under design, intended as an alternative solution to this problem, which has been a research challenge for decades.
The use of robots with umbilical cables inside long obstructed pipes with several curves and bends is currently an important challenge in robotics, especially because of the increased traction forces that may occur. The main challenge is the force in the umbilical cable caused by friction between the cable and the inner pipe surface. This force can be described theoretically by the capstan (Euler-Eytelwein) equation, usually applied to a rope rounding a capstan. In an earlier study, a theoretical model for cable traction was developed and experimentally validated; this model was used to calculate a typical friction force in a standard pipeline. Several in-pipe movement strategies were analyzed and a suitable mechanism was defined as the main element of the robot traction system. We show in this study that the traction challenge can be overcome by proper cable material selection combined with a suitable traction-system design. A mechanism based on an inchworm-like movement was selected for its high traction capacity and, based on an extensive study, a hybrid (electric-hydraulic) system was developed. Electric power is transmitted through the umbilical, and electric motors drive pumps that supply hydraulic power to linear pistons, which move the mechanical elements and hence the entire system. Previous analysis and tests indicate that the designed system will be suitable for the necessary missions.
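The capstan (Euler-Eytelwein) relation cited above is simple enough to state directly. The sketch below shows the exponential growth of cable tension with accumulated bend angle; the friction coefficient and angles in the test are illustrative values, not the paper's measurements.

```python
import math

def capstan_tension(t_hold, mu, total_bend_angle_rad):
    """Euler-Eytelwein (capstan) equation: tension required to pull a
    cable over a cumulative bend angle theta with friction coefficient
    mu:  T_load = T_hold * exp(mu * theta)."""
    return t_hold * math.exp(mu * total_bend_angle_rad)

def cable_tension_through_bends(t0, mu, bend_angles_deg):
    """Accumulate the tension amplification over a sequence of pipe
    bends, treating each bend as a partial wrap of the capstan."""
    theta = math.radians(sum(bend_angles_deg))
    return capstan_tension(t0, mu, theta)
```

Because the amplification is exponential in total bend angle, adding bends multiplies the required traction, which is why a low-friction cable material and a high-traction inchworm mechanism are needed together.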
Straight-line analysis (SLA) methods, which are a sub-group of model-based techniques used for rate-transient analysis (RTA), have proven to be immensely useful for evaluating unconventional reservoirs. Transient data can be analyzed using SLA methods to extract reservoir/hydraulic fracture information, while boundary-dominated flow data can be interpreted for fluid-in-place estimates. Because transient flow periods may be extensive, it is also advantageous to evaluate the volume of hydrocarbons-in-place contacted over time to assist with reserves assessment. The new SLA method introduced herein enables reservoir/fracture properties and contacted fluid-in-place (CFIP) to be estimated from the same plot, which is an advantage over traditional SLA techniques.
The new SLA method utilizes the
Validation of the new SLA method for an undersaturated oil case is performed through application to synthetic data generated with an analytical model. The new SLA results in estimates of LFP and OFIP that are in excellent agreement with model input (within 2%). Further, the results are consistent with the traditional SLA methods used to estimate LFP (e.g. the square-root of time plot) and OFIP (e.g. the flowing material balance plot).
Practical application of the new SLA method is demonstrated using field cases and experimental data. Field cases studied include online oil production from a multi-fractured horizontal well (MFHW) completed in a tight oil reservoir, and flowback water production from a second MFHW, also completed in a tight oil reservoir. Experimental (gas) data generated using a recently introduced RTA core analysis technique were also analyzed using the new SLA method. In all cases, the new SLA method results are in excellent agreement with traditional SLA methods.
The new SLA method introduced herein is an easy-to-apply, fully-analytical RTA technique that can be used for both reservoir/fracture characterization and hydrocarbons-in-place assessment. This method should provide important, complementary information to traditionally-used methods, such as square-root of time and flowing material balance plots, which are commonly used by reservoir engineers for evaluating unconventional reservoirs.
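As background for the traditional techniques the abstract compares against, the square-root-of-time plot can be sketched as a straight-line fit of rate-normalized pressure against the square root of time during linear flow; the fitted slope is inversely related to the linear-flow parameter (LFP). This is a generic RTA sketch, not the new SLA method itself, and the function name is hypothetical.

```python
import numpy as np

def sqrt_time_slope(t, rnp):
    """Fit rate-normalized pressure (delta-p / q) versus sqrt(t) during
    the linear-flow period with simple least squares. The slope of
    this line is inversely related to LFP (A * sqrt(k))."""
    x = np.sqrt(np.asarray(t, dtype=float))
    y = np.asarray(rnp, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept
```

The advantage the abstract claims for the new method is that reservoir/fracture properties and contacted fluid-in-place come from one plot, rather than from separate straight-line constructions like this one and the flowing material balance plot.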
A method for modeling a multilateral well design that is completely independent of the simulation grid and fluid properties is proposed. The method takes into account friction in the lateral branches and crossflow between them. Well parameters, such as trajectory, perforation intervals, roughness, and diameter, are used directly to calculate the pressure distribution along the wellbore at the current fluid composition and tubing head pressure (THP).
In a finite volume approximation for the dynamic model, well connections with grid blocks must be created. The automatic creation of well connections during dynamic simulation, based on the specified well trajectory and completion intervals, is proposed. The connection factor is calculated at run time from the length of the completion's intersection with the block, the trajectory direction, and the rock properties. Well-known correlations are used to calculate the pressure drop along the well-track intervals between connections and between the top completion and the tubing head, using the current fluid composition in the wellbore at each connection together with the well trajectory, roughness, and diameter.
Such an approach eliminates the use of tabulated bottomhole pressure (BHP) as a function of tubing head pressure for a set of phase compositions. That traditional approach gives a non-physical response in compositional models, where the component composition of the product varies significantly throughout the life of the field. Using real coordinates (x, y, z) to set the well trajectory and perforation intervals, instead of the traditional grid block numbers (i, j, k), makes it possible to calculate layer intersections, connection factors, and the pressure distribution along the wellbore under arbitrary changes to the dynamic model grid, for example when introducing local grid refinement or the dynamic grid and rock property variation used to describe hydraulic fracturing.
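The run-time connection factor described above can be illustrated with a Peaceman-style well index that scales with the completed length intersecting each block. This is a minimal isotropic sketch under assumed simplifications (vertical intersection, no anisotropy correction), not the paper's full run-time formulation.

```python
import math

def connection_factor(perm, length, dx, dy, rw=0.1, skin=0.0):
    """Peaceman-style well-index sketch for one grid block: the
    connection factor scales with the completion length intersecting
    the block. The equivalent radius uses the isotropic Peaceman
    formula r_eq = 0.14 * sqrt(dx**2 + dy**2)."""
    r_eq = 0.14 * math.sqrt(dx * dx + dy * dy)
    return 2.0 * math.pi * perm * length / (math.log(r_eq / rw) + skin)
```

Because the intersection length is recomputed from the (x, y, z) trajectory each time the grid changes, local grid refinement automatically redistributes the connection factors without re-specifying (i, j, k) connections.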
The proposed method has been used successfully for modeling multilateral well designs in dynamic simulation, and the simulation results are consistent with real reservoir data.
When key geological scenario uncertainties, captured in multiple conceptual models, are combined with continuous parameters, the evaluation of a representative sample set quickly becomes unmanageable, laborious, and too time-consuming to execute. A workflow is presented that enables users to easily model conceptual as well as parametric uncertainties of the reservoir without any complex scripting. The chain of models for all concepts is presented in one view, providing an overview of the key differences between the concepts used. An ensemble of geologically sound samples can be created that takes into account parameter dependencies and concept probabilities, and the chain of models per concept can easily be (re)executed.
A case study is presented that consists of multiple concepts based on different hierarchical stratigraphic models in combination with different fault models, each with its own fluid (defined contacts per compartment), grid (sub-layering and areal resolution), and rock property models. Volumetric calculations are run on an ensemble to obtain static model observables such as GRV, pore volume, and oil-in-place, reported by multiple sub-regions of the model in combination with a lease boundary. (When coupled with dynamic simulation, observables such as ultimate recovery and break-through timing could also be obtained.) As thousands of realizations were run concurrently, run time was reduced from weeks to hours. Results reveal the distribution and dependency of observables such as GRV on top-structure-depth uncertainty and contact-level uncertainty. For in-place volumes, the full suite of concepts and other parametric uncertainties, including the stochastic uncertainties (i.e. seed), is analyzed. This also enables identification of the key uncertainties that impact equity the most, which can be of great commercial value during equity negotiations. This workflow demonstrates how, with the power of cloud computing, rigorous evaluation of multiple concepts combined with many parametric uncertainties has been achieved within practical turn-around times, overcoming the prohibitive hurdles that in the past often led to simplifications to save time and effort. The result is better decision quality in resource development decisions.
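The mixed sampling described above (discrete concepts with probabilities, plus continuous parameters per concept) can be sketched as follows. The concept names, probabilities, and parameter ranges are illustrative placeholders, not values from the case study, and real workflows would also encode parameter dependencies rather than independent uniform draws.

```python
import random

def sample_ensemble(concepts, n=1000, seed=7):
    """Draw an ensemble that mixes discrete geological concepts
    (weighted by probability) with continuous parameters sampled from
    per-concept ranges. `concepts` maps name -> {'prob', 'params'},
    a hypothetical schema for this sketch."""
    rng = random.Random(seed)
    names = list(concepts)
    weights = [concepts[c]['prob'] for c in names]
    samples = []
    for _ in range(n):
        c = rng.choices(names, weights=weights)[0]   # pick a concept
        params = {p: rng.uniform(lo, hi)             # then its parameters
                  for p, (lo, hi) in concepts[c]['params'].items()}
        samples.append({'concept': c, **params})
    return samples
```

Each sample would then drive one realization of the model chain for its concept; running the realizations concurrently in the cloud is what compresses weeks of evaluation into hours.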