Summary Modeling the dynamic fluid behavior of low-salinity waterflooding (LSWF) at the reservoir scale is a challenge that requires coarse-grid simulation to enable prediction within a feasible time frame. However, evidence shows that low-resolution models produce a considerable mismatch compared with an equivalent fine-scale model, with the potential for strong, numerically induced pulses and other dispersion-related effects. This work examines two new upscaling methods applied to improve the accuracy of predictions in a heterogeneous reservoir where viscous crossflow takes place. In the first method, we shift the effective-salinity range of the coarse model using algorithms that we have developed to correct for numerical dispersion and associated effects. The second method uses appropriately derived pseudorelative permeability curves. The shape of these new curves is designed using a modified fractional-flow analysis of LSWF that captures the relationship between dispersion and the waterfront velocities; this approach removes the need for explicit simulation of salinity transport to model oil displacement. We applied both approaches in layered models and in models with permeability distributed as a correlated random field. Upscaling by shifting the effective-salinity range of the coarse-grid model gave a good match to the fine-scale scenario, whereas upscaling the absolute permeability alone left a considerable mismatch. For highly coarsened models, shifting the effective-salinity range also reduced the appearance of numerically induced pulses. Upscaling with a single set of (pseudo)relative permeability curves produced more robust results, with a very promising match to the fine-scale scenario. Both methods performed well when used to scale up fully communicating and noncommunicating layers as well as models with correlated random permeability. Unlike methods documented in the literature, these newly derived methods account mathematically for the substantial effects of numerical dispersion and effective concentration on the fluid dynamics. They could also be applied to other processes in which phase mobilities change as a result of an injected solute, such as surfactant flooding and alkaline flooding; models of such processes usually use two sets of relative permeability curves and switch between them as a function of solute concentration.
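As a concrete anchor for the fractional-flow reasoning invoked above, the following is a minimal Buckley-Leverett/Welge sketch in Python, assuming generic Corey-type relative permeability curves; the endpoints, exponents, and viscosities are illustrative placeholders, not the paper's pseudofunctions.

```python
import numpy as np

# Minimal Buckley-Leverett fractional-flow sketch with Corey-type curves.
# All parameter values below are illustrative assumptions.

def corey_krw(sw, swc=0.2, sor=0.2, krw_max=0.4, nw=2.0):
    """Water relative permeability (Corey form)."""
    s = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)
    return krw_max * s**nw

def corey_kro(sw, swc=0.2, sor=0.2, kro_max=0.9, no=2.0):
    """Oil relative permeability (Corey form)."""
    s = np.clip((1.0 - sw - sor) / (1.0 - swc - sor), 0.0, 1.0)
    return kro_max * s**no

def frac_flow(sw, mu_w=0.5e-3, mu_o=2.0e-3):
    """Water fractional flow fw = (krw/mu_w) / (krw/mu_w + kro/mu_o)."""
    mw, mo = corey_krw(sw) / mu_w, corey_kro(sw) / mu_o
    return mw / (mw + mo)

# Welge tangent: the shock-front saturation maximizes the chord slope
# (fw(sw) - fw(swi)) / (sw - swi); that slope is the dimensionless front
# velocity, the quantity a dispersion-aware pseudofunction must preserve.
swi = 0.2
sw = np.linspace(swi + 1e-3, 0.799, 500)
chord = (frac_flow(sw) - frac_flow(swi)) / (sw - swi)
i = int(np.argmax(chord))
print(f"shock saturation ~ {sw[i]:.3f}, front speed ~ {chord[i]:.3f} (PV units)")
```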
Summary Polymer flooding offers the potential to recover more oil from reservoirs but requires significant investments, which necessitate a robust analysis of economic upsides and downsides. Key uncertainties in designing a polymer flood are often reservoir geology and polymer degradation. The objective of this study is to understand the impact of geological uncertainties and history matching techniques on designing the optimal strategy for, and quantifying the economic risks of, polymer flooding in a heterogeneous clastic reservoir. We applied two different history matching techniques (adjoint-based and a stochastic algorithm) to match data from a prolonged waterflood in the Watt Field, a semisynthetic reservoir that contains a wide range of geological and interpretational uncertainties. Next, sensitivity studies were carried out to identify first-order parameters that impact the net present value (NPV). These parameters were then deployed in an experimental design study using Latin hypercube sampling (LHS) to generate training runs from which a proxy model was created using polynomial regression. A particle swarm optimization (PSO) algorithm was employed to optimize the NPV for the polymer flood. The same approach was used to optimize a standard waterflood for comparison. Optimizations of the polymer flood and waterflood were performed for the history-matched model ensemble and the original ensemble. The optimal strategy to deploy the polymer flood and maximize NPV varies based on the history matching technique. The average NPV and its variance are predicted to be higher for the stochastic history matching than for the adjoint technique. This difference is due to the ability of the stochastic algorithm to explore the parameter space more broadly, which created situations in which the oil in place is shifted upward, resulting in a higher NPV. Optimizing a history-matched ensemble leads to a narrower range in absolute NPV than optimizing the original ensemble. This difference is because the uncertainties associated with polymer degradation are not captured during history matching. The result of cross comparison, in which an optimal polymer design strategy for one ensemble member is deployed to the other ensemble members, predicted a decline in NPV but surprisingly still showed that the overall NPV is higher than for an optimized waterflood, even for suboptimal polymer injection strategies. This observation indicates that a polymer flood could be beneficial compared to a waterflood, even if geological uncertainties are not captured properly.
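The design-of-experiments workflow described above (LHS training runs, a polynomial-regression proxy, and PSO on the proxy) can be sketched generically in Python. The `npv_simulator` function and its three normalized design factors are hypothetical stand-ins for the study's full-physics NPV evaluations:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical stand-in for a full-physics NPV evaluation; in the study,
# each evaluation is a reservoir simulation run. Inputs are normalized
# design factors in [0, 1]: (polymer concentration, start time, well location).
def npv_simulator(x):
    conc, start, loc = x
    return -(conc - 0.6) ** 2 - 0.5 * (start - 0.3) ** 2 - 0.2 * (loc - 0.7) ** 2

# 1) Latin hypercube sample of the 3-factor design space -> training runs.
sampler = qmc.LatinHypercube(d=3, seed=42)
X = sampler.random(n=50)
y = np.array([npv_simulator(x) for x in X])

# 2) Polynomial-regression proxy fitted to the training runs.
proxy = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
proxy.fit(X, y)

# 3) Basic particle swarm optimization of the proxy (maximize predicted NPV).
rng = np.random.default_rng(0)
n_particles, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.random((n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), proxy.predict(pos)
gbest = pbest[np.argmax(pbest_val)]
for _ in range(100):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = proxy.predict(pos)
    better = val > pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmax(pbest_val)]
print("proxy-optimal design (normalized):", gbest.round(3))
```

In practice, the proxy optimum would be validated against further full-physics runs before being trusted, as the study describes.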
Summary Numerical fidelity is required when using simulations to predict enhanced-oil-recovery (EOR) processes. In this paper, we investigate the conditions that lead to numerical errors when simulating low-salinity (LS) waterflooding (LSWF). We also examine how to achieve more accurate simulation results by scaling up the flow behavior in an effective manner. An implicit finite-difference numerical solver was used to simulate LSWF. The accuracy of the numerical solution was examined as a function of grid-cell length and timestep size. Previously, we have shown that numerical dispersion induces a physical retardation such that the LS front slows down while the formation-water front speeds up. We also report for the first time that pulses can be generated as numerical artifacts in coarsely gridded simulations of LSWF. These effects reflect the interaction of dispersion, the effective-salinity range, and the use of upstream weighting during calculation, and they can corrupt predictions of flow behavior. The effect of timestep size was analyzed with respect to the Courant condition, which is traditionally associated with explicit numerical schemes and numerical stability. We also investigated some of the nonlinear elements of the simulation model, such as the difference between the connate-water salinity and the injected-brine salinity, the effective-salinity range, and the net change in fluid mobilities caused by the change in salinity. We report that meeting the Courant condition relating timestep size to cell size is necessary, but not sufficient, to avoid pulses. We have also developed two approaches that can be used to scale up simulations of LSWF and tackle the numerical problems. The first method depends on a mathematical relationship between the fractional flow, the effective-salinity range, and the Péclet number, and treats the effective-salinity range as a pseudofunction. The second method establishes an unconventional proxy method equivalent to pseudorelative permeabilities: a single table of pseudorelative permeability data can be used for a waterflood instead of the two tables usual for LSWF. This is a novel approach that removes the need for relative permeability interpolation during the simulation. Overall, by avoiding numerical errors, we help engineers assess the potential for improving oil recovery using LSWF more efficiently and accurately, and thus optimize field development. We also avoid the numerical pulses inherent in the traditional LSWF model.
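For context, the standard truncation-error relations for 1D advective transport connect the quantities named above (Courant number, numerical dispersion, and the Péclet number). These are classical finite-difference results stated as a reminder, not the paper's own derivation:

```latex
% Courant number and condition (v = front velocity):
C = \frac{v\,\Delta t}{\Delta x} \le 1 \quad \text{(required for stability of explicit schemes)}

% Numerical dispersion from single-point upstream weighting:
D_{\mathrm{num}} \approx \frac{v\,\Delta x}{2}\,(1 - C) \quad \text{(explicit)}, \qquad
D_{\mathrm{num}} \approx \frac{v\,\Delta x}{2}\,(1 + C) \quad \text{(fully implicit)}

% Effective P\'eclet number over system length L:
\mathrm{Pe} = \frac{v\,L}{D_{\mathrm{num}}}
```

Note that an implicit solver is unconditionally stable but its numerical dispersion grows with the timestep, which is consistent with the observation above that meeting the Courant condition is necessary but not sufficient to avoid artifacts.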
Beteta, Alan (Heriot-Watt University) | Nurmi, Leena (Kemira Oyj) | Rosati, Louis (Kemira Chemicals Inc.) | Hanski, Sirkku (Kemira Oyj) | McIver, Katherine (Heriot-Watt University) | Sorbie, Ken (Heriot-Watt University) | Toivonen, Susanna (Kemira Oyj)
Summary Polymer flooding is a mature enhanced oil recovery (EOR) technology that has seen increasing interest over the past decade. Copolymers of acrylamide (AMD) and acrylic acid (AA) have been the most prominent chemicals to be applied, whereas sulfonated polymers containing 2-acrylamido-tertiary-butyl sulfonic acid (ATBS) have been used for higher temperature and/or salinity conditions. The objective of this study was to generate guidelines to aid in the selection of appropriate polyacrylamide chemistry for each field case. Our focus was on sandstone fields operating at the upper end of AA-AMD temperature tolerance, where there is a decision as to whether sulfonation is required. The performance of the polymer throughout its whole residence time in the reservoir was considered because the macromolecule can undergo changes over this period. Several key properties of nine distinct polymer species were investigated. The polymers consisted of AA-AMD copolymers, AMD-ATBS copolymers, and AMD-AA-ATBS terpolymers (up to 15 mol% ATBS). The polymer solutions were studied both in their original state, as they would be during injection (initial viscosity, initial adsorption, and in-situ rheology), and in the state in which they are expected to be after the polymer has aged in the reservoir (i.e., in a different state of hydrolysis, with corresponding changes in viscosity retention and adsorption after aging for various time periods). We note that the combination of viscosity retention and adsorption during the in-situ aging process has not typically been investigated in the previous literature, and this is a key novel feature of this work. Each of the above parameters has an impact on the effectiveness and the economic efficiency of a polymer flooding project. The majority of the work was carried out in seawater (SW) at a temperature of 58°C. Under these conditions, AMD-AA samples showed similar solution viscosity at 5 to 30% AA. When the AA-AMD polymer solutions were aged at elevated temperature, the AA content steadily increased because of hydrolysis reactions. Once the AA content reached 30 mol% or higher, the viscosity started to decrease, and the adsorption started to increase as the polymer solution was aged further. Thermal stability improved when ATBS was included in the polymer structure. In addition, sulfonated polyacrylamide samples showed constant initial viscosity yields and decreasing initial adsorption with increasing ATBS content. The samples showed that the maximum observed apparent in-situ viscosity increased when the bulk viscosity and relaxation time of the solution increased. The information generated in this study can be used to aid in the selection of the optimal polyacrylamide chemistry, which may not be the standard 30% AA and 70% AMD copolymer, for sandstone fields operating with moderate/high-salinity brines at the upper end of AA-AMD temperature tolerance.
Summary In this work, we evaluate different algorithms to account for model errors while estimating the model parameters, especially when the model discrepancy (used interchangeably with “model error”) is large. In addition, we introduce two new algorithms that are closely related to some of the published approaches under consideration. Considering all these algorithms, the first calibration approach (the base case) relies on Bayesian inversion using iterative ensemble smoothing with annealing schedules, without any special treatment of the model error. In the second approach, the residual obtained after calibration is used to iteratively update the total error covariance, combining the effects of both model errors and measurement errors. In the third approach, a principal component analysis (PCA)-based error model is used to represent the model discrepancy during history matching. This leads to a joint inverse problem in which both the model parameters and the parameters of the PCA-based error model are estimated. For the joint inversion within the Bayesian framework, prior distributions have to be defined for all the estimated parameters, and the prior distribution for the PCA-based error model parameters is generally hard to define. In this study, the prior statistics of the model discrepancy parameters are estimated using the outputs from pairs of high-fidelity and low-fidelity models generated from the prior realizations. The fourth approach is similar to the third approach; however, an additional covariance matrix, that of the difference between the PCA-based error model and the corresponding actual realizations of the prior error, is added to the covariance matrix of the measurement error. The first newly introduced algorithm (the fifth approach) relies on building an orthonormal basis for the misfit component of the error model, which is obtained from the difference between the PCA-based error model and the corresponding actual realizations of the prior error. The misfit component of the error model is subtracted from the data residual (the difference between observations and model outputs) to eliminate the incorrect relative contributions of the physical model and the error model to the prediction. In the second newly introduced algorithm (the sixth approach), we use the PCA-based error model as a physically motivated bias-correction term together with an iterative update of the covariance matrix of the total error during history matching. All the algorithms are evaluated using three forecasting measures, and the results show that a good parameterization of the error model is needed to obtain a good estimate of the physical model parameters and to provide better predictions. In this study, the last three approaches (i.e., the fourth, fifth, and sixth) outperform the other methods in terms of the quality of the estimated model parameters and the prediction capability of the calibrated imperfect models.
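A minimal sketch of the PCA-based discrepancy parameterization discussed above, assuming paired high- and low-fidelity outputs are already available for the prior realizations; the synthetic arrays and the 95% variance cutoff are illustrative choices, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_real, n_data = 200, 50                     # prior realizations x data points

# Stand-ins for paired model outputs; in practice these come from running
# the high- and low-fidelity simulators on the same prior realizations.
hi = rng.normal(size=(n_real, n_data))
lo = hi - 0.3 * np.sin(np.linspace(0.0, 3.0, n_data)) \
        - 0.1 * rng.normal(size=(n_real, n_data))

# Prior model-discrepancy realizations and their principal components.
delta = hi - lo
mean_delta = delta.mean(axis=0)
U, s, Vt = np.linalg.svd(delta - mean_delta, full_matrices=False)

# Retain enough modes to explain ~95% of the discrepancy variance.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1
basis = Vt[:k]                               # (k, n_data) orthonormal rows

# Error model: discrepancy ~ mean_delta + xi @ basis, where the coefficients
# xi become extra parameters estimated jointly during history matching.
# Their prior statistics come from projecting the prior discrepancies.
xi_prior = (delta - mean_delta) @ basis.T    # (n_real, k) coefficient samples
print(f"{k} PCA modes retained; prior std of xi:", xi_prior.std(axis=0).round(3))
```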
Abstract This work presents a new open-access carbonate reservoir case study that uniquely considers the major uncertainties inherent to carbonate reservoirs, using one of the most prolific aggradational parasequence carbonate formation sets in the U.A.E., the Upper Kharaib Member (Early Cretaceous), as an analogue. The ensemble considers a range of interpretational scenarios and geomodelling techniques to capture the main components of its reservoir architecture, stratal geometries, facies, pore systems, diagenetic overprints and wettability variations across its shelf-to-basin profile. Fully anonymised data from 43 wells across 22 fields in the Bab Basin in the U.A.E., from different geo-depositional settings and heights above the free water level (FWL), were used. The data comprise a full suite of open-hole logs and core data which have been anonymised, rescaled, repositioned and structurally deformed; FWLs were normalised and the entire model was placed in a unique coordinate system. The resultant static and dynamic models capture the geological setting and reservoir heterogeneities of selected fields, but now at a manageable scale. Synthetic production data have been generated by adding wells to an undisclosed ‘truth case’ model to obtain field-wide and well-by-well production data (oil, gas, and water rates, bottom-hole pressures, etc.) from simulation runs. The original oil in place (OOIP) and reserves computed from these models are synthetic and unique. Here we present an initial field development plan and corresponding reservoir simulations that showcase the heterogeneity inherent to the model and demonstrate the variability of the flow and storage capacity of the different reservoir architectures found in and around the Bab Basin. This is an example application of how synthetic production data can be used to improve our understanding of flow behaviours in carbonates. The novelty of our work is the provision of a unique open-access dataset which enables reproducible science in the field of reservoir characterisation and simulation, and helps train new generations of geoscientists and reservoir engineers in the art of characterising, simulating and predicting the performance of carbonate reservoirs under different recovery processes.
Jafarizadeh, Babak (Heriot-Watt University)
Abstract Decisions to drill exploration wells depend on the uncertainty we perceive. With some simplification, this uncertainty concerns the value of producible hydrocarbons. To drill a prospect, the expected benefits from producing and selling hydrocarbons should outweigh the costs. This principle also applies to clusters of interrelated exploration targets. Here, the benefits of drilling each prospect are information about the neighbouring prospects, proof of producible hydrocarbons, or both; these benefits should outweigh the aggregate costs. In practice, estimating the expected benefit can be challenging, especially when multiple smaller discoveries are developed under a joint development scheme. Given these interrelationships, what is the optimal drilling strategy? And what is the economic value of a group of prospects? This paper discusses the effect of joint-development economics on perceived uncertainty and drilling decisions. We suggest a framework for the valuation of correlated prospects.
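To make the interdependence concrete, here is a small worked example with hypothetical numbers: drilling prospect A first updates the success probability of a positively correlated prospect B via the law of total probability, and that update can change whether B clears an economic hurdle.

```python
# Two-prospect example with hypothetical probabilities and values: drilling
# prospect A first yields information about a positively correlated prospect B.
p_a = 0.30                  # prior chance of success at A (assumed)
p_b = 0.25                  # prior chance of success at B (assumed)
p_b_given_a = 0.45          # chance of B succeeding if A succeeds (assumed)

# Law of total probability pins down B's chance if A comes up dry:
# p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a
p_b_given_not_a = (p_b - p_a * p_b_given_a) / (1.0 - p_a)
print(f"P(B succeeds | A dry) = {p_b_given_not_a:.3f}")   # ~0.164

# Simple screen: drill B only if its success chance clears an economic hurdle.
value_b, cost_b, hurdle = 100.0, 20.0, 0.20  # arbitrary monetary units
def ev_drill_b(p):
    return p * value_b - cost_b if p >= hurdle else 0.0

# Expected value of the sequential strategy "drill A, then decide on B".
ev_seq = p_a * ev_drill_b(p_b_given_a) + (1.0 - p_a) * ev_drill_b(p_b_given_not_a)
print(f"EV of B decision after A: {ev_seq:.1f} vs. drilling B blind: {ev_drill_b(p_b):.1f}")
```

In this toy case the sequential strategy is worth more than drilling B blind (7.5 vs. 5.0 units); that gap is precisely the information value a joint valuation framework needs to capture.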
Abstract Polymer flooding offers the potential to recover more oil from reservoirs but requires significant investments, which necessitate a robust analysis of economic upsides and downsides. Key uncertainties in designing a polymer flood are often reservoir geology and polymer degradation. The objective of this study is to understand the impact of geological uncertainties and history matching techniques on designing the optimal strategy for, and quantifying the economic risks of, polymer flooding in a heterogeneous clastic reservoir. We applied two different history matching techniques (adjoint-based and a stochastic algorithm) to match data from a prolonged waterflood in the Watt Field, a semi-synthetic reservoir that contains a wide range of geological and interpretational uncertainties. An ensemble of reservoir models is available for the Watt Field, and history matching was carried out for the entire ensemble using both techniques. Next, sensitivity studies were carried out to identify first-order parameters that impact the Net Present Value (NPV). These parameters were then deployed in an experimental design study using a Latin hypercube to generate training runs from which a proxy model was created. The proxy model was constructed using polynomial regression and validated using further full-physics simulations. A particle swarm optimization algorithm was then used to optimize the NPV for the polymer flood. The same approach was used to optimize a standard water flood for comparison. Optimizations of the polymer flood and water flood were performed for the history-matched model ensemble and the original ensemble. The sensitivity studies showed that polymer concentration, the location of polymer injection wells, and the time to commence polymer injection are key to optimizing the polymer flood. The optimal strategy to deploy the polymer flood and maximize NPV varies based on the history matching technique. The average NPV is predicted to be higher for the stochastic history matching than for the adjoint technique. The variance in NPV is also higher for the stochastic history matching technique. This is due to the ability of the stochastic algorithm to explore the parameter space more broadly, which created situations where the oil in place is shifted upwards, resulting in higher NPV. Optimizing a history-matched ensemble leads to a narrower variance in absolute NPV than optimizing the original ensemble. This is because the uncertainties associated with polymer degradation are not captured during history matching. The result of cross comparison, where an optimal polymer design strategy for one ensemble member is deployed to the other ensemble members, predicted a decline in NPV but surprisingly still shows that the overall NPV is higher than for an optimized water flood. This indicates that a polymer flood could be beneficial compared to a water flood, even if geological uncertainties are not captured properly.
Abstract Geothermal energy refers to the heat stored in the subsurface that can be extracted by producing the hot fluids (water and/or steam) in contact with the hot formation. A major issue that may restrict the extraction of geothermal energy is the precipitation of mineral scales, which can occur within the reservoir, inside the wellbore, or in surface facilities. The objective of this paper is to find the most efficient scale-treatment strategy to prevent mineral scaling. Continuous injection of chemical scale inhibitor (SI) downhole in the production well is the most common method to prevent mineral scale in geothermal plants. This method, although effective, does not protect the near-wellbore area, where the highest pressure drop is expected. To address this issue, two methods are studied: bullheading the production well with SI, commonly known as a squeeze treatment, and injecting SI in the injection well. Optimum designs for both methods were identified considering different levels of SI adsorption as well as permeability variation in fractured and non-fractured formations. As expected, the volume of SI required for continuous injection in the producer was lower than for the other two methods. However, in cases where the highest risk of precipitation is in the near-wellbore area or below the continuous-injection point, it is necessary to apply one of the suggested methods. While the squeeze treatment protects only the formation around the producer well, treatments deployed in injector wells protect the whole system, and this extra protection may offset the extra volume of chemical required. The application of SI in the injector well was studied in both continuous and batch modes with different injection frequencies. For continuous injection, it was shown that even though less SI volume is used, the SI breakthrough time at the producer can be so long that a series of squeeze treatments might be required to protect the well in the meantime. The simulation results showed that in high-adsorption formations, squeeze treatment is more efficient than deploying SI in the injector well. However, in cases of low adsorption and fractured reservoirs, the scenario commonly found in geothermal plants, SI injection at the injector is preferable. In summary, two scale-treatment methodologies were studied in geothermal wells, squeeze treatment in the producer and SI injection in the injector, and the results were compared with continuous SI injection in the producer, which is the most common current treatment in geothermal wells. It was shown that in fractured geothermal reservoirs with relatively low levels of adsorption, SI injection in the injector is the optimal treatment and can effectively protect the whole plant from scaling.
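As a rough illustration of why SI injected at the injector can take so long to break through at the producer, the sketch below applies a linear-isotherm retardation factor; every property value is an assumption for illustration, not a result from the paper.

```python
# Back-of-envelope sketch: adsorption retards scale-inhibitor (SI) transport,
# so SI injected at the injector breaks through at the producer much later
# than the water front. Linear isotherm assumed; all values are illustrative.
rho_bulk = 2100.0          # kg/m^3, bulk rock density (assumed)
phi = 0.20                 # porosity (assumed)
kd = 1.0e-4                # m^3/kg, linear adsorption coefficient (assumed)

# Retardation factor for linear adsorption: R = 1 + (rho_bulk / phi) * Kd.
retardation = 1.0 + (rho_bulk / phi) * kd

water_transit_years = 2.0  # injector-to-producer water transit time (assumed)
si_breakthrough_years = retardation * water_transit_years
print(f"R = {retardation:.2f}; SI breakthrough ~ {si_breakthrough_years:.1f} years")
```

With these assumed values, R is about 2, so the SI arrives roughly twice as late as the water front; stronger adsorption lengthens the unprotected window at the producer accordingly.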
Abstract Carbonates are known to be heterogeneous. In this paper, we focus on characterising inter-well connectivity by applying a multi-dimensional approach to production data analysis, along with the integration of inter-disciplinary data, for a Brazilian carbonate reservoir. Results of the analysis are used for the interpretation of noisy 4D seismic data to locate sweet spots. This unique integrated approach characterises inter-well connectivity from four perspectives: (1) determining reservoir quality; (2) identifying the source of water production; (3) tracking the injected fluid's flow paths; and (4) verifying the impact on the 4D seismic response. We will show how the quality of a carbonate reservoir and its aquifer strength can be verified with well logs, pressure-depletion rates and production behaviour. The use of sensitivity analysis in mechanistic models to analyse the impact of heterogeneity on production behaviour will also be shared. Using Chan's (1995) water-oil-ratio diagnostic plots, the source of water production will be identified as well. Analysis of well chronology and bubble maps of water cut will also be explained for tracking injected-fluid flow paths. Finally, the interaction of production parameters between well pairs and resistance modelling will be used to evaluate inter-well connectivity, verified against 4D seismic data. Findings from all the analyses in the integrated approach are summarised into an inter-well connectivity metric, which is used as a reference for production and seismic history matching and for the interpretation of the noisy 4D seismic data. The integrated data analysis shows that the sweet spot corresponds with softening on the 4D seismic map and is un-swept by injectors, as it is located on a structural high southeast of the reservoir. This paper offers a comprehensive analysis to characterise important reservoir characteristics (such as thief zones and tight streaks). It also emphasises ways to integrate inter-disciplinary data and showcases various visualisation perspectives that reinforce the importance of data integration.
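The Chan-style diagnostic referenced above is simple to reproduce: compute the water-oil ratio (WOR) and its time derivative and inspect their log-log trends (a persistently rising derivative suggests channeling, a flattening one suggests coning, per Chan 1995). The production series below is synthetic, for illustration only.

```python
import numpy as np

# Synthetic production series for illustration only.
t = np.linspace(1.0, 3000.0, 300)            # time, days
qo = 500.0 * np.exp(-t / 1500.0)             # oil rate (decline)
qw = 50.0 + 0.25 * t                         # water rate (ramping up)

# Water-oil ratio and its time derivative: the two Chan diagnostic curves,
# normally plotted against time on log-log axes.
wor = qw / qo
wor_prime = np.gradient(wor, t)              # dWOR/dt

# The log-log slope of WOR vs. time is the quantity usually inspected.
loglog_slope = np.gradient(np.log10(wor), np.log10(t))
print(f"late-time log-log WOR slope ~ {loglog_slope[-10:].mean():.2f}")
```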