Lin, Qingyang (Imperial College London) | Alhammadi, Amer M. (Imperial College London) | Gao, Ying (Imperial College London) | Bijeljic, Branko (Imperial College London) | Blunt, Martin J. (Imperial College London)
We combine steady-state measurements of relative permeability with pore-scale imaging to estimate local capillary pressure. High-resolution three-dimensional X-ray tomography enables the pore structure and fluid distribution to be quantified at reservoir temperatures and pressures with a resolution of a few microns. Two phases are injected through small cylindrical samples at a series of fractional flows until the pressure differential across the core is constant. High-quality images are then acquired, from which saturation is calculated using differential imaging to quantify the phase distributions in micro-porosity that cannot be explicitly resolved. The relative permeability is obtained from the pressure drop and fractional flow, as in conventional measurements. The curvature of the fluid/fluid interfaces in the larger pore spaces is measured, and the capillary pressure is then calculated from the Young-Laplace equation. In addition, the sequence of images of fluid distribution captures the displacement process. Observed gradients in capillary pressure – the capillary end effect – can be accounted for analytically in the calculation of relative permeability.
We illustrate our approach with three examples of increasing complexity. First, we compare the measured relative permeability and capillary pressure for Bentheimer sandstone, both for a clean sample and for a mixed-wet core that had been aged in reservoir crude oil after centrifugation. We characterize the distribution of contact angles to demonstrate that the mixed-wet sample has a wide range of angles centred approximately on 90°. We then study a water-wet micro-porous carbonate to illustrate the impact of sub-resolution porosity on the flow behaviour: here oil, as the non-wetting phase, is present in both the macro-pores and the micro-porosity. Finally, we present results for a mixed-wet reservoir carbonate. We show that the oil/water interfaces in the mixed-wet samples are saddle-shaped, with two opposite, but almost equal, curvatures in orthogonal directions. The mean curvature, which determines the capillary pressure, is low, but the shape of the interfaces ensures, topologically, well-connected phases, which helps to explain the favourable oil recovery obtained in these cases.
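As a minimal illustration of how measured interface curvature converts into capillary pressure via the Young-Laplace equation (the interfacial tension and curvature values below are our own illustrative assumptions, not measurements from the paper):

```python
# Young-Laplace: Pc = sigma * (k1 + k2), where sigma is the interfacial
# tension (N/m) and k1, k2 are the principal curvatures (1/m) of the
# fluid/fluid interface. Illustrative values only.
def capillary_pressure(sigma, k1, k2):
    """Capillary pressure (Pa) from interfacial tension and curvatures."""
    return sigma * (k1 + k2)

# Water-wet pore: both principal radii equal and positive (10 um), with
# an assumed oil/water interfacial tension of 0.025 N/m.
pc_water_wet = capillary_pressure(0.025, 1 / 10e-6, 1 / 10e-6)  # 5000 Pa

# Saddle-shaped interface in a mixed-wet pore: two nearly equal but
# opposite curvatures give a low mean curvature, and hence a low Pc,
# while the interface remains topologically well connected.
pc_mixed_wet = capillary_pressure(0.025, 1 / 10e-6, -1 / 11e-6)
```

The saddle case illustrates the abstract's central observation: a mixed-wet interface can have large curvatures in each direction yet a mean curvature, and therefore a capillary pressure, close to zero.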
We suggest that the combination of imaging and flow experiments – which we call iSCAL – represents a compelling development in special core analysis. This methodology provides the data traditionally acquired in SCAL studies, but with insight into displacement processes, rigorous quality control, and flexibility over sample selection, while generating detailed datasets for the calibration and validation of numerical pore-scale flow models.
The traditional definition of volumetric sweep efficiency lumps together the effects of fingering (arising from contrasts in mobility) and bypassing (arising from contrasts in permeability as well as well placement). Accordingly, poor sweep cannot be quantitatively attributed to either bypassing or fingering. Similarly, in EOR, incremental recovery cannot be quantitatively associated with the reduction of either effect. For such purposes, we rely on visualization and mapping of saturation profiles to quantify and characterize the remaining oil in place, including its distribution. In this work, we propose a complementary method to obtain instantaneous insight into the remaining oil distribution. We demonstrate the decomposition of fingering and bypassing effects and its utility. We first redefine recovery factors so as to decouple bypassing and fingering effects. We then validate these redefined sweep indicators by examining a five-spot waterflood and two idealized polymer floods. Later, we demonstrate the possible utility of these redefined sweep indicators through different examples. In one example, we compare the performance of a shear-thinning polymer to a recovery-equivalent Newtonian polymer. Using the fingering and bypassing sweep indicators, we can immediately conclude that the shear-thinning polymer exacerbates bypassing. We recommend the adoption of our redefined sweep indicators in any simulation suite. They provide an instant understanding of sweep and hence can complement standard practices of oil-saturation mapping; they are of special value when analyzing the results of multiple realizations and/or development scenarios.
Salinas, Pablo (Imperial College London) | Jacquemyn, Carl (Imperial College London) | Kampitsis, Andreas (Imperial College London) | Via-Estrem, Lluis (Imperial College London) | Heaney, Claire (Imperial College London) | Pain, Christopher (Imperial College London) | Jackson, Matthew (Imperial College London)
The use of dynamic mesh optimization (DMO) for multiphase flow in porous media has recently been proposed, showing very good potential to reduce computational cost by placing resolution where and when it is needed. Nonetheless, further work is needed to prove its usability in very large domains, where parallel computing with distributed memory (i.e. using MPI libraries) may be necessary. Here, we describe the methodology used to parallelize a multiphase porous media flow simulator in combination with DMO, and we study its performance. Because typical porous media simulations are complicated by high aspect ratios, we have included a fail-safe for parallel simulations with DMO that enhances the robustness and stability of the methods used to parallelize DMO in other fields (Navier-Stokes flows). The results show that DMO for parallel computing in multiphase porous media flows can perform very well, showing good scaling behaviour.
Drilling multiple horizontal wells from a single pad has become a common approach in many shale plays in response to the economic, real-estate, water-management, and regulatory challenges that operators face while developing such plays. The challenge of optimizing the landing zones of those wells depends, in part, on knowledge of the Stimulated Rock Volume (SRV) created during the fracturing jobs and on the ability to predict its evolution during production. The objective of this work is to show how to gain this understanding through a multidisciplinary workflow and how it helps to optimize a multi-landing-zone development in a field case in Vaca Muerta.
The first part of this work presents a single-well sensitivity study focusing on the key geological and geomechanical factors, with ranges based on data collected from well logs and field observations. These include the characteristics of the natural fracture network, facies, laminations, variations in petroelastic properties and principal stresses, and anisotropy. The impact of these parameters on the geometry of the SRV and on well productivity is presented using a pseudo-3D fracture model and a coupled fluid-flow/geomechanics simulation technique. Once the key parameters affecting SRV geometry and productivity are determined, the second part of this work shows the results of a multi-realization study (multiple scenarios and well landings) on greenfield and brownfield stimulations.
Analysis of the SRV geometry under undepleted and depleted conditions suggests that the stress change associated with production does impact the overall SRV generation and must be considered in a multi-well, multi-layer strategy. Horizontal stress anisotropy, pre-existing fractures, and laminations are the static properties with the most important control on the SRV dimensions and complexity. The SRV is dynamic and changes over time. Addressing these changes allows us to better plan a multi-landing-zone design (sequence, landing depth, well spacing, etc.) at a given period of time for this field case. The presented work goes beyond an ordinary investigation of the properties driving SRV creation: it allows for a better understanding of the uncertainties related to these properties and ultimately depicts their static and dynamic impact on production, in order to guide the optimization of well placement in the field-case development.
Scapolo, Matteo (KIDOVA France & Imperial College London) | Garcia, Michel H. (KIDOVA France) | Mathieu, Jean-Baptiste (KIDOVA France) | Siffert, Deborah (KIDOVA France) | Gosselin, Olivier R. (Imperial College London) | Ackerer, Philippe (LHyGeS)
Using geostatistical modelling to populate reservoir properties is nowadays the most common approach in the industry, and it has received a great deal of attention. A geostatistical reservoir model defines a space of spatial uncertainty, which can be explored by generating many equiprobable realisations of the reservoir properties, each a possible reservoir model complying with the static data. Among them, the relevant models are those that also match the dynamic data, which complete the available data for reservoir model calibration. Finding the relevant reservoir models in the space of spatial uncertainty is a time-consuming process that requires simulating the dynamic (flow) response of many reservoir models. A fast and reliable simulation method is therefore highly desirable to speed up reservoir model calibration. In this context, a new approach has been developed and tested. The method allows easy and fast comparison between interpreted well-test results and equivalent (average) reservoir model properties in terms of transmissivity (k.h) and permeability. The comparison can be used to validate or reject a reservoir model, and to obtain indications on how to modify it to fit the well-test data. This paper presents the method and the results obtained to evaluate its performance and to validate it.
Well-test-interpreted permeabilities (or transmissivities) are essentially weighted average permeabilities, calculated from permeabilities defined over properly chosen closed surfaces around the well, with weights that depend on the flow geometry. The proposed method is based on a steady-state flow simulation in which the tested well is made a source term (producing or injecting well) at the centre of a simulation domain (a region of the reservoir model). The domain must extend far enough to contain, or at least overlap, the stabilisation area of the well test in which average transmissivities are to be estimated. The method relies on three key aspects: defining a simulation domain (extension and shape) that is consistent with the actual well-test drainage area; defining boundary conditions that reproduce flow paths consistent with those generated by the actual well test; and using the new effective-gradient-based averaging method to compute average permeabilities over properly defined closed surfaces.
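To make the link between well-test transmissivity and permeabilities averaged over closed surfaces concrete, the following sketch uses the textbook steady-state radial Darcy solution (this is a standard simplification of our own, not the paper's effective-gradient averaging method; all values are illustrative):

```python
import math

# For steady radial Darcy flow to a fully penetrating well, the average
# permeability between two closed cylindrical surfaces at radii r1 and r2
# follows from the rate and pressure drop:
#     k = q * mu * ln(r2 / r1) / (2 * pi * h * dp)
# This is the kind of flow-geometry-weighted average a well test measures.
def radial_average_permeability(q, mu, h, r1, r2, dp):
    """Average permeability (m^2) over the annulus r1..r2 around the well."""
    return q * mu * math.log(r2 / r1) / (2.0 * math.pi * h * dp)

# Illustrative inputs (SI units): q = 1e-3 m^3/s, mu = 1e-3 Pa.s,
# h = 10 m, r1 = 0.1 m (wellbore), r2 = 100 m, dp = 1e6 Pa.
k = radial_average_permeability(1e-3, 1e-3, 10.0, 0.1, 100.0, 1e6)
kh = k * 10.0  # transmissivity k.h (m^3), comparable to a well-test k.h
```

In the paper's method, the surfaces need not be cylinders and the weighting comes from the simulated effective gradient, but the comparison target is the same quantity: a flow-weighted average k.h over surfaces enclosing the well.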
The method is tested on various synthetic and partly real field cases, for which the transient well-test responses are first simulated and interpreted, then compared with the transmissivities predicted using the new method. A sensitivity analysis is also carried out on the calculation parameters (flow simulation domain, flow rates…) to check the robustness of the method and identify avenues for improvement. All these results tend to confirm the effectiveness of the method, which combines speed and accuracy. The method is intended to be used as an objective function for automatic or assisted calibration of reservoir models against interpreted well-test data. It is expected to be particularly useful for calibrating naturally fractured reservoir models, for which permeability tensors are to be calculated from uncertain, locally defined fracture property statistics.
Engineers need to predict the production characteristics from hydraulically fractured wells in tight gas fields. Decline curve analysis (DCA) has been widely used over many years in conventional oil and gas fields. It is often applied to tight gas, but there is uncertainty regarding the period of production data needed for accurate prediction.
In this paper, decline curve analysis of simulated production data from models of hydraulically fractured wells is used to develop improved methods for calibrating decline curve parameters from production data. The well models were constructed using data from the Khazzan field in Oman. The impact of layering, permeability, and drainage area on well performance is also investigated, and the contribution of each layer to recovery, along with the mechanisms controlling that contribution, is explored.
The investigation shows that increasing the amount of production data used to fit a hyperbolic decline curve does not improve predictions of recovery unless that data spans many years (20 years for a 1 mD reservoir) of production. This is because there is a long period of transient flow in tight gas reservoirs that biases the fitting and results in incorrect predictions of late-time performance. Better predictions can be made by estimating the time at which boundary-dominated flow (BDF) is first observed (tb), omitting the preceding transient data, and fitting the decline curve to a shorter interval of data starting at tb. For single-layer cases, tb can be estimated analytically using the permeability, porosity, compressibility, and length scale of the drainage volume associated with the well. Alternatively, tb can be determined from the production data, allowing improved prediction of performance from two-layer reservoirs provided that (a) there is high cross-flow, or (b) there is no cross-flow and the lower-permeability layer either does not experience BDF during the field lifetime or establishes it quickly.
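The analytical estimate of tb mentioned above can be sketched using the standard dimensionless time based on drainage area; the threshold value and all input numbers below are our own hedged assumptions (BDF is conventionally taken to start near t_DA ≈ 0.1 for a well centred in its drainage area), not figures from the paper:

```python
# Hedged sketch: estimate the onset of boundary-dominated flow from the
# dimensionless time t_DA = k * t / (phi * mu * ct * A). Rearranging for
# t at the assumed BDF threshold t_DA ~ 0.1. All inputs in SI units.
def time_to_bdf(k, phi, mu, ct, area, t_da_bdf=0.1):
    """Estimated time (s) at which boundary-dominated flow begins."""
    return t_da_bdf * phi * mu * ct * area / k

# Illustrative tight-gas inputs: k = 1 mD (1e-15 m^2), phi = 0.08,
# gas viscosity mu = 2e-5 Pa.s, total compressibility ct = 5e-8 1/Pa,
# drainage area A = 4e6 m^2 (2 km x 2 km).
SECONDS_PER_YEAR = 365.25 * 24 * 3600
tb_seconds = time_to_bdf(1e-15, 0.08, 2e-5, 5e-8, 4e6)
tb_years = tb_seconds / SECONDS_PER_YEAR
```

The estimate scales linearly with drainage area and inversely with permeability, which is why tb can reach decades for low-permeability reservoirs with large drainage volumes.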
Lin, Qingyang (Imperial College London) | Bijeljic, Branko (Imperial College London) | Krevor, Samuel C. (Imperial College London) | Blunt, Martin J. (Imperial College London) | Rücker, Maja (Imperial College London) | Berg, Steffen (Imperial College London / Shell Global Solutions International BV) | Coorn, Ab. (Shell Global Solutions International BV) | van der Linde, Hilbert (Shell Global Solutions International BV) | Georgiadis, Apostolos (Shell Global Solutions International BV) | Wilson, Ove B. (Shell Global Solutions International BV)
In the context of digital rock analysis, pore-scale imaging of multiphase flow experiments using X-ray microtomography can be used to obtain fundamental insights into pore-scale displacement physics. This provides a basis to better calibrate numerical pore-scale simulators, or it can be used to understand local fluid distributions while simultaneously measuring average properties, equivalent to a traditional SCAL experiment. Imaging studies in the literature have historically been conducted on small water-wet plugs, using kerosene or another refined oil as the non-wetting phase. Prior to conducting waterflood experiments, the initial water saturation has been established by dynamic flooding. The disadvantage of this approach is that a nonuniform saturation profile is established due to the capillary end effect. This results in a higher average initial water saturation than standard SCAL techniques, such as the porous-plate method or centrifugation.
In this paper, a methodology for initializing multiple small rock samples to the same connate water saturation and wettability state has been developed by adopting best SCAL practices, namely the porous-plate method or centrifugation using crude oil, followed by aging. We drill multiple small plugs from a full-size SCAL core sample, without losing capillary continuity with the base of the original sample. In the example presented, for Bentheimer sandstone, the initial saturation was established using centrifugation. The experiment is designed to prevent a nonuniform saturation profile in the small plugs. We use in-situ imaging to determine the water saturation after primary drainage and show that it is indeed uniform across the sample with a value consistent with larger-scale SCAL measurements and the measured mercury-injection capillary pressure. We also show that a significant wettability alteration had occurred by measuring in-situ contact angles.
It has been demonstrated in both laboratory measurements and field applications that tertiary polymer flooding can enhance oil recovery from heterogeneous reservoirs, primarily through macroscopic sweep (conformance). This study quantifies the effect of layering on tertiary polymer flooding as a function of layer-permeability contrast, the timing of polymer flooding, the oil/water-viscosity ratio, and the oil/polymer-viscosity ratio. This is achieved by analyzing the results from fine-grid numerical simulations of waterflooding and tertiary polymer flooding in simple layered models.
We find that there is a permeability contrast between the layers of the reservoir at which maximum incremental oil recovery is obtained, and this permeability contrast depends on the oil/water-viscosity ratio, polymer/water-viscosity ratio, and onset time for the polymer flood. Building on an earlier formulation that describes whether a displacement is understable or overstable, we present a linear correlation to estimate this permeability contrast. The accuracy of the newly proposed formulation is demonstrated by reproducing and predicting the permeability contrast from existing flow simulations and further flow simulations that have not been used to formulate the correlation.
This correlation will enable reservoir engineers to estimate the combination of permeability contrast, water/oil-viscosity ratio, and polymer/water-viscosity ratio that will give the maximum incremental oil recovery from tertiary polymer flooding in layered reservoirs regardless of the timing of the start of polymer flooding. This could be a useful screening tool to use before starting a full-scale simulation study of polymer flooding in each reservoir.
The aim of this study is to determine to what extent the quality of a history-matched model is a good predictor of future production. The background is the common assumption that the better a model matches the production data, the better it is for forecasting, or, at the very least, the better its estimate of the uncertainty in future production. We demonstrate that the validity of this assumption depends on the lengths of the history-match period and of the forecasting period. It also depends on how heterogeneous the reservoir is.
The correlation between the quality of the history match and the quality of the forecast depends on various factors. For the same level of heterogeneity, one of the strongest factors is the water breakthrough time in the base and compared cases.
Broadly, if both the base and compared cases have water breakthrough before the end of the history-match period, then the forecasts are reasonable. However, there appears to be a very rapid transition from a reasonably good history match leading to a good forecast to a moderately good history match leading to a very poor forecast. If water breakthrough has not occurred, there is a very poor correlation between the quality of the history match and the quality of the forecast. So, the traditional belief that a well history-matched model will also produce a good forecast is not always true.
Two new non-intrusive reduced order modelling (NIROM) approaches to estimate time-varying spatial distributions of variables from arbitrary unseen inputs are introduced. One is a generalization of an existing 'dynamic' approach, which requires multiple surrogate evaluations to model the solutions at different time instances; the other is a 'steady-state' approach that evaluates all time instances simultaneously, reducing the local approximation error. The ability of these approaches to estimate the water saturation distributions expected during a gas flood through a 2D, dipping reservoir is investigated for a range of unseen input parameters. The range of these parameters has been chosen so that a range of flow regimes occurs, from a gravity tongue to a viscous-dominated Buckley-Leverett displacement. A number of practically relevant model error measures were employed, as opposed to the standard L2 (Euclidean) norm. The influence of the number and structure of training simulations was also investigated by employing two simple experimental design methods. The results show that POD-based NIROM approaches are prone to significant deviations from the true model. The main sources of error are the non-smooth variation of system responses in hyperspace, the transient nature of the flows, and the underlying dimensionality reduction. Since the first two sources are properties of the physical system modelled, similar problems are likely to arise independently of the interpolation method and the reduction process used.
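The general POD-based NIROM idea referred to above can be sketched as follows; this is a deliberately simplified illustration of our own (one scalar parameter, synthetic saturation-like fronts, plain linear interpolation of the reduced coefficients), not the authors' implementation:

```python
import numpy as np

# Training data: high-fidelity "snapshots" at sampled parameter values.
# Here each snapshot is a synthetic 1D saturation-like front whose
# position depends on the parameter p (a stand-in for real simulations).
params = np.linspace(0.0, 1.0, 5)
x = np.linspace(0.0, 1.0, 50)
snapshots = np.array([np.tanh(10.0 * (x - p)) for p in params]).T
# snapshots has shape (n_cells, n_train)

# POD via SVD: the leading left singular vectors form the reduced basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                          # number of retained POD modes
basis = U[:, :r]               # (n_cells, r) reduced basis
coeffs = basis.T @ snapshots   # (r, n_train) reduced coordinates

def rom_predict(p_new):
    """Non-intrusive prediction: interpolate each reduced coefficient
    to the unseen parameter, then reconstruct in the full space."""
    a = np.array([np.interp(p_new, params, coeffs[i]) for i in range(r)])
    return basis @ a

field = rom_predict(0.35)      # approximation at an unseen input
```

The two error sources the abstract highlights appear directly here: truncating to r modes (dimensionality reduction) and interpolating coefficients that vary non-smoothly with the parameter when the solution contains moving fronts.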