High-resolution discretizations can be advantageous in compositional simulation to reduce excessive numerical diffusion that tends to mask shocks and fingering effects. In this work, we outline a fully implicit, dynamic, multilevel, high-resolution simulator for compositional problems on unstructured polyhedral grids. We rely on four ingredients: (i) sequential splitting of the full problem into a pressure and a transport problem, (ii) ordering of grid cells based on intercell fluxes to localize the nonlinear transport solves, (iii) higher-order discontinuous Galerkin (dG) spatial discretization with order adaptivity for the component transport, and (iv) a dynamic coarsening and refinement procedure. For purely cocurrent flow, and in the absence of capillary forces, the nonlinear transport system can be permuted to a lower block-triangular form. With counter-current flow caused by gravity or capillary forces, the nonlinear system of discrete transport equations will contain larger blocks of mutually dependent cells on the diagonal. In either case, the transport subproblem can be solved efficiently cell-by-cell or block-by-block because of the natural localization in the dG scheme. In addition, we discuss how adaptive grid and order refinement can effectively improve accuracy. We demonstrate the applicability of the proposed solver through a number of examples, ranging from simple conceptual problems with PEBI grids in two dimensions, to realistic reservoir models in three dimensions. We compare our new solver to the standard upstream-mobility-weighting scheme and to a second-order WENO scheme.
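The flux-based cell ordering in ingredient (ii) can be illustrated with a small sketch (not the paper's implementation): treating the directed intercell fluxes as graph edges, a topological sort yields a cell-by-cell solve order for cocurrent flow, while a cycle signals counter-current flow whose cells would instead be grouped into a diagonal block (e.g., via strongly connected components). The cell indices and fluxes below are hypothetical.

```python
from collections import defaultdict, deque

def flux_ordering(num_cells, fluxes):
    """Topological (Kahn) ordering of cells from directed intercell fluxes.

    fluxes: iterable of (upstream_cell, downstream_cell) pairs.
    Returns a solve order for purely cocurrent flow (acyclic graph);
    raises ValueError if counter-current flow creates a cycle, in which
    case cells would instead be grouped into larger diagonal blocks.
    """
    succ = defaultdict(list)
    indeg = [0] * num_cells
    for up, down in fluxes:
        succ[up].append(down)
        indeg[down] += 1
    queue = deque(c for c in range(num_cells) if indeg[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for d in succ[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    if len(order) != num_cells:
        raise ValueError("counter-current flow: cycle detected")
    return order

# Four cells with cocurrent flow 0 -> 1 -> 3 and 0 -> 2 -> 3
print(flux_ordering(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # -> [0, 1, 2, 3]
```

Solving the transport equations in this order means each single-cell (or single-block) nonlinear problem sees only already-computed upstream values.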
As the oil and gas industry continues to operate in more complex and deeper-water environments, downhole scale control via scale-squeeze treatments becomes an ever-increasing technical challenge. It is therefore essential that effective scale management strategies are adopted that incorporate suitable scale inhibitor (SI) selection, analysis, and treatment design procedures to provide optimal and cost-effective squeeze treatment lifetimes, maximise oil production, and reduce well intervention costs.
In this paper, key factors are evaluated to provide guidance for selecting a suitable treatment strategy for downhole scale control in co-mingled sub-sea wells, and the impact of chemical retention, minimum inhibitor concentration (MIC), limit of quantifiable detection (LOQD), and well dilution factors on treatment design and strategy is discussed. The pros and cons of the following three treatment strategies are presented:
(i) treating all wells with the same chemical, over-designing the chemical treatment lifetime (e.g., 18 months), and then re-treating all wells after 12 months;
(ii) treating individual wells with tagged versions of the same scale inhibitor chemical;
(iii) treating individual wells with different scale inhibitors.
Options (ii) and (iii) offer the ability to design similar treatment lifetimes for each well while retaining the flexibility to monitor wells individually and re-squeeze when required.
Examples are provided for treatment options (ii) and (iii) based upon a field example to illustrate the design concepts for fluorescent (F) and phosphorus (P) tagged polymers in two co-mingled wells and a theoretical example for treating three co-mingled wells with different scale inhibitors, one of which could be a phosphonate with two tagged polymers.
This paper presents an overview of the key factors that influence chemical selection and treatment design for co-mingled wells in the same flow line. In addition, it will highlight important concepts to provide guidance for the design of effective treatment strategies for squeezing co-mingled wells in sub-sea and deepwater environments.
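The well-dilution arithmetic referred to above can be sketched as follows. In a co-mingled flow line, the tagged inhibitor returning from one well is diluted by the production from the other wells, so the concentration measured at the commingled sample point must be back-calculated to a wellhead concentration before comparison with the MIC. The rates and concentrations below are hypothetical.

```python
def commingled_concentration(rate, conc, rates_all):
    """Concentration of one well's tagged inhibitor in the commingled stream."""
    return rate * conc / sum(rates_all)

def back_calculated_concentration(measured, rate, rates_all):
    """Invert the dilution: wellhead concentration from the commingled sample."""
    return measured * sum(rates_all) / rate

rates = [5000.0, 15000.0]  # produced water from wells A and B, bbl/d (hypothetical)
c_well_a = 8.0             # inhibitor returning from well A, ppm
c_mixed = commingled_concentration(rates[0], c_well_a, rates)
print(c_mixed)                                                  # diluted 4x: 2.0 ppm
print(back_calculated_concentration(c_mixed, rates[0], rates))  # recovers 8.0 ppm
```

This four-fold dilution is why the LOQD of the analysis technique constrains treatment design: an 8 ppm wellhead return near the MIC may appear as only 2 ppm at the flow line.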
Accurate numerical modeling of fluid transport is essential in reservoir management. Higher-order methods help to improve accuracy by reducing the numerical diffusion that is common to all first-order methods. In this paper, we present an implementation of a MUSCL-type second-order finite volume method and demonstrate its capabilities on 2D and 3D unstructured grids, including the corner-point grids typically used in reservoir modeling.
The second-order finite volume method is compared with the standard first-order method in terms of accuracy, performance, and the ability to handle nonlinearities. There are several ways to build a second-order finite volume method. In this paper, we choose an optimization-based strategy to compute the steepest possible linear reconstruction, with a steepness-limiting procedure included in the optimization as a constraint. This ensures that the steepest possible reconstruction that does not lead to oscillations is computed. As a result, sharper fronts are obtained compared with standard schemes.
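A minimal 1D analogue of this constrained reconstruction can be sketched as follows; this is an illustration of the idea, not the OPM implementation, which solves a small optimization problem per cell on unstructured grids. For each interior cell the steepest slope is chosen such that the reconstructed face values stay within the bounds set by the neighbouring cell averages, and the reconstruction is flattened at local extrema.

```python
def limited_slopes(u):
    """Steepest non-oscillatory slope per interior cell (unit cell width).

    The face values u[i] +/- slope/2 are kept within the range of the
    neighbouring cell averages, which caps the slope at 2*min(|dl|, |dr|).
    """
    slopes = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        dl = u[i] - u[i - 1]       # one-sided differences
        dr = u[i + 1] - u[i]
        if dl * dr <= 0.0:         # local extremum: flat reconstruction
            slopes[i] = 0.0
        else:                      # steepest slope respecting both bounds
            s = 2.0 * min(abs(dl), abs(dr))
            slopes[i] = s if dl > 0 else -s
    return slopes

u = [0.0, 0.0, 0.25, 1.0, 1.0]     # a smeared front
print(limited_slopes(u))           # -> [0.0, 0.0, 0.5, 0.0, 0.0]
```

The non-zero slope steepens the reconstruction across the front while the zero slopes at the plateaus prevent new extrema, which is the mechanism behind the sharper fronts reported above.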
The paper demonstrates the described method on several benchmark cases, with emphasis on test cases relevant for practical reservoir simulation. In particular, we use the Norne field open data set, which enables cross-validation with other implementations. We test the method on a transport case with a known analytical solution to verify the convergence behavior and to isolate the errors. Furthermore, the performance of the first- and second-order methods is compared on multiphase flow problems typical of improved oil recovery: solvent and CO2 injection. The second-order method shows superior performance in terms of accuracy.
This paper verifies the desirable properties of higher-order methods for reservoir simulation. Moreover, all the described implementations are available in the open-source reservoir simulator Open Porous Media (OPM). As a result, these methods are accessible to reservoir engineers and can be used with industry-standard modeling setups.
Modelling multiscale, multiphysics geology at field scales is non-trivial due to limits on computational resources and data availability. At such scales it is common to use implicit modelling approaches, as they remain a practical method of understanding the first-order processes of complex systems. In this work we introduce a numerical framework for the simulation of geomechanical dual-continuum materials. Our framework is written as part of the open-source MATLAB Reservoir Simulation Toolbox (MRST). We discretise the flow and mechanics problems using the finite volume method (FVM) and virtual element method (VEM), respectively. The result is a framework that ensures local mass conservation with respect to flow and is robust with respect to gridding. Solution of the coupled linear system can be achieved with either fully coupled or fixed-stress split solution strategies. We demonstrate our framework on an analytical comparison case and on a 3D geological grid case. In the former we observe a good match between analytical and numerical results, for both fully coupled and fixed-stress split strategies. In the latter, the geological model is gridded using a corner-point grid that contains degenerate cells as well as hanging nodes. For the geological case, we observe physically plausible and intuitive results given the boundary conditions of the problem. Our initial testing with the framework suggests that the FVM-VEM discretisation has potential for conducting practical geomechanical studies of multiscale systems.
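The contrast between the fully coupled and split solution strategies can be sketched on a toy linear system. Here the coupled block system [[A, B], [C, D]] [p; u] = [f; g] stands in for the discretised flow/mechanics equations, and the split alternately updates the flow unknown p (with the mechanics contribution lagged) and the mechanics unknown u. The matrices are arbitrary illustrative numbers, not a discretised model, and the true fixed-stress scheme stabilises the flow step with a stress-dependent term rather than plain lagging.

```python
import numpy as np

# Toy coupled system: A p + B u = f (flow), C p + D u = g (mechanics).
A = np.array([[4.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[3.0]])
f = np.array([5.0]);   g = np.array([4.0])

p = np.zeros(1); u = np.zeros(1)
for _ in range(50):                      # sequential (split) iterations
    p = np.linalg.solve(A, f - B @ u)    # flow step with u lagged
    u = np.linalg.solve(D, g - C @ p)    # mechanics step with updated p

# Fully coupled reference solve of the same system.
coupled = np.linalg.solve(np.block([[A, B], [C, D]]), np.concatenate([f, g]))
print(p, u, coupled)                     # split iterates converge to the coupled solution
```

The split trades one large coupled solve for repeated smaller solves, which is attractive when separate, well-tuned solvers exist for each subproblem.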
Scale inhibitor (SI) analysis is an extremely important part of scale management and, in recent years, much work has been done on the development of specialist scale inhibitor analysis techniques, such as liquid chromatography-mass spectrometry (LC-MS), to push the boundaries of low-level scale inhibitor detection. However, LC-MS requires costly and complex instrumentation, and there was therefore still a need for the development of other advanced techniques, such as fluorescence (F) and time-resolved fluorescence (TRF), that can be used on site to provide near "on-line" data.
Fluorescence techniques are particularly suited to tagged polymers and naturally fluorescent molecules such as polyamines, whereas the operating principle of TRF is based on interactions between lanthanide ions and various functional groups of polymer or phosphonate scale inhibitors.
Both techniques work individually or in combination, which provides a distinct advantage for the analysis of multiple scale inhibitors in produced brines and enables the design of packages of different products for specific field applications. In addition, fluorescence and TRF offer the capability of on-site detection, unlike the majority of scale inhibitor analysis techniques and other advanced methods such as LC-MS.
The ability to detect both phosphonate and polymeric scale inhibitors at very low MIC (<1 ppm) has the potential to significantly extend scale-squeeze lifetimes. This has now also allowed highly efficient F-tagged polymers to be used in field situations where scale squeezing had either been stopped or the lifetime significantly compromised because of a lack of confidence in the residuals analysis.
Specific field and theoretical examples from both sub-sea and conventional wells will be presented where the application of both advanced fluorescence and TRF techniques has shown significant improvements in scale management.
This paper will compare and contrast the pros, cons, and limitations of both fluorescence and TRF techniques for phosphonate and polymeric scale inhibitors. In addition, it will highlight examples where scale management is significantly improved through the application of fluorescence and/or TRF scale inhibitor analysis techniques in complex production scenarios.
Ensemble-based methods (especially various forms of iterative ensemble smoothers) have been proven effective in calibrating multiple reservoir models so that they are consistent with historical production data. However, due to the complex nature of hydrocarbon reservoirs, model calibration is never perfect: the model is always a simplified version of reality, with coarse representation and unmodeled physical processes. This flaw in the model, which causes a mismatch between actual observations and simulated data even when 'perfect' model parameters are used as input, is known as 'model error'. Assimilating data without accounting for this model error can result in incorrect adjustments to model parameters, underestimation of prediction uncertainties, and bias in forecasts.
In this paper, we investigate the benefit of recognising and accounting for model error when an iterative ensemble smoother is used to assimilate production data. The correlated 'total error' (the combination of model error and observation error) is estimated from the data residual after standard history matching using the Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML). This total error is then used in further data assimilations to improve the model prediction and uncertainty quantification from the final updated model ensemble. We first illustrate the method using a synthetic 2D five-spot case, where some model errors are deliberately introduced and the results are closely examined against the known 'true' model. The Norne field case is then used to evaluate the method further.
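A stripped-down, non-iterative ensemble-smoother update shows where the estimated total-error covariance enters: it is added to the data covariance in the Kalman-type gain, so directions dominated by model error receive smaller parameter corrections. This sketch omits the Levenberg-Marquardt damping and iteration of LM-EnRML, and the toy forward model and dimensions are hypothetical.

```python
import numpy as np

def smoother_update(M, D, d_obs, C_total):
    """One ensemble-smoother update with a total-error covariance term.

    M: (n_params, n_ens) parameter ensemble; D: (n_obs, n_ens) simulated data;
    d_obs: (n_obs,) observations; C_total: (n_obs, n_obs) total-error covariance.
    """
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    n = M.shape[1] - 1
    C_md = dM @ dD.T / n                       # parameter-data cross-covariance
    C_dd = dD @ dD.T / n                       # simulated-data covariance
    K = C_md @ np.linalg.inv(C_dd + C_total)   # gain inflated by the total error
    return M + K @ (d_obs[:, None] - D)

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 50))                    # 3 parameters, 50 ensemble members
D = 2.0 * M[:2] + 0.1 * rng.normal(size=(2, 50))  # toy linear forward model
d_obs = np.array([1.0, -0.5])
M_new = smoother_update(M, D, d_obs, 0.01 * np.eye(2))
print(M_new.shape)
```

Enlarging C_total relative to the observation-error covariance down-weights the data misfit, which is the mechanism by which the estimated model error tempers the parameter adjustment.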
The Norne model has previously been history matched using the LM-EnRML.
Wang, Wendong (China University of Petroleum) | Zhang, Kaijie (China University of Petroleum) | Su, Yuliang (China University of Petroleum) | Tang, Meirong (PetroChina Oil & Gas Technology Research Institute of Changqing Oil field) | Zhang, Qi (China University of Geosciences) | Sheng, Guanglong (China University of Petroleum)
In the development of shale oil and gas reservoirs, hydraulic fracture treatments may induce complex network configurations that are very challenging to characterize. Existing fracture-property interpretation methods mostly rely on simplifying assumptions and are typically empirical in nature. The aim of this work is therefore to introduce an integrated framework involving fractal theory, inverse analysis of microseismic events (MSE), and rate-transient analysis to map the heterogeneity and distribution of fracture properties. In this work, a general framework is proposed to characterize both the geometric configuration and the flow properties of the complex fracture network (CFN). The CFN characterization framework naturally divides into two stages: characterizing the fracture network geometry from microseismic data, and characterizing the dynamic fracture properties from production data. In the configuration-characterization stage, a stochastic fractal fracture model based on an L-system fractal geometry is applied to describe the CFN geometry, and a genetic algorithm (GA) is applied as a mixed-integer programming (MIP) solver to find the most probable fracture configuration from the microseismic data. In the flow-property characterization stage, we introduce the embedded discrete fracture model (EDFM) for computational efficiency, and a Bayesian framework is used to quantify the dynamic fracture properties (e.g., conductivity, porosity, and a pressure-dependent multiplier) by assimilating the production data. In addition, rate-transient analysis is applied to calibrate the total fracture length and to estimate the effective stimulated-reservoir volume (ESRV). To validate this framework, a synthetic numerical case is developed. The results indicate that our integrated framework is able to characterize both the CFN configuration and its properties by assimilating microseismic and production data sequentially.
The proposed workflow shows that the characterized CFN model yields reasonable probabilistic predictions of unconventional production rates.
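The L-system construction underlying the fractal fracture geometry can be sketched with a toy rewrite. The axiom, rules, and symbols here are hypothetical (F = fracture segment, [ and ] = branch push/pop, + and - = turns); the paper's stochastic model would additionally draw rule choices, lengths, and angles randomly before matching against the microseismic cloud.

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite the axiom string using per-symbol production rules."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)  # symbols без rules pass through
    return s

# A deterministic branching rule: each fracture segment sprouts two branches.
rules = {"F": "F[+F][-F]"}
print(lsystem("F", rules, 2))  # -> "F[+F][-F][+F[+F][-F]][-F[+F][-F]]"
```

Each iteration triples the number of fracture segments here, giving the self-similar branching that a turtle-graphics interpretation turns into a planar network.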
Is Surfactant Environmentally Safe for Offshore Use and Discharge?
Designing Cement Jobs for Success - Get It Right the First Time!
Connected Reservoir Regions Map Created From Time-Lapse Pressure Data Shows Similarity to Other Reservoir Quality Maps in a Heterogeneous Carbonate Reservoir. X. Du, Y. Jin, X. Wu, U. of Houston; Y. Liu, X. Wu, O. Awan, J. Roth, K.C. See, N. Tognini, Shell Intl.
By International Petroleum Technology Conference (IPTC) Monday, 25 March 0900-1600 hours Instructors: Olivier Dubrule and Lukas Mosser, Imperial College London Deep Learning (DL) is already bringing game-changing applications to the petroleum industry, and this is certainly the beginning of an enduring trend. Many petroleum engineers and geoscientists are interested to know more about DL but are not sure where to start. This one-day course aims to provide this introduction. The first half of the course presents the formalism of Logistic Regression, Neural Networks and Convolutional Neural Networks and some of their applications. Much of the standard terminology used in DL applications is also presented. In the afternoon, the online environment associated with DL is discussed, from Python libraries to software repositories, including useful websites and big datasets. The last part of the course is spent discussing the most promising subsurface applications of DL.
Vazquez, Oscar (Heriot Watt University ) | Ross, Gill (Chrysaor) | Jordan, Myles Martin (Nalco Champion) | Baskoro, Dionysius Angga Adhi (Heriot-Watt University) | Mackay, Eric (Heriot-Watt University) | Johnston, Clare (Nalco Champion) | Strachan, Alistair (Nalco Champion)
Oilfield-scale deposition is one of the important flow-assurance challenges facing the oil industry. There are a number of methods to mitigate oilfield scale, such as reducing sulfates in the injected brine, reducing water flow, removing damage by using dissolvers or physically by milling or reperforating, and inhibition, which is particularly recommended if a severe risk of sulfate-scale deposition is present. Inhibition consists of injecting a chemical that prevents the deposition of scale, either by stopping nucleation or by retarding crystal growth. The inhibiting chemicals are either injected in a dedicated continuous line or bullheaded as a batch treatment into the formation, commonly known as a scale-squeeze treatment. In general, scale-squeeze treatments consist of the following stages: preflush to condition the formation or act as a buffer to displace tubing fluids; the main treatment, where the main pill of chemical is injected; overflush to displace the chemical deep into the reservoir; a shut-in stage to allow further chemical retention; and placing the well back in production. The well will be protected as long as the concentration of the chemical in the produced brine is greater than a certain threshold, commonly known as minimum inhibitor concentration (MIC). This value is usually between 1 and 20 ppm. The most important factor in a squeeze-treatment design is the squeeze lifetime, which is determined by the volume of water or days of production where the chemical-return concentration is greater than the MIC.
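The squeeze-lifetime definition in the last sentence can be sketched numerically: given a declining chemical-return curve, the lifetime is the produced-water volume (or days of production) at which the return concentration first drops below the MIC, here found by linear interpolation. The decline data and units below are hypothetical.

```python
def squeeze_lifetime(volumes, concentrations, mic):
    """Produced-water volume at which the return concentration crosses the MIC."""
    for i in range(len(volumes) - 1):
        c0, c1 = concentrations[i], concentrations[i + 1]
        if c0 >= mic > c1:  # crossing lies within this interval
            v0, v1 = volumes[i], volumes[i + 1]
            return v0 + (c0 - mic) * (v1 - v0) / (c0 - c1)
    # Still protected at the last sample, or never protected at all.
    return volumes[-1] if concentrations[-1] >= mic else volumes[0]

vols = [0, 100, 200, 300, 400]        # cumulative produced water, Mbbl (hypothetical)
conc = [500.0, 50.0, 10.0, 4.0, 1.0]  # return concentration, ppm
print(squeeze_lifetime(vols, conc, 5.0))  # ~283 Mbbl, where returns fall below 5 ppm
```

Because the return curve flattens into a long low-concentration tail, a small change in MIC (or in the detection limit of the residuals analysis) can shift the computed lifetime substantially.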
The main purpose of this paper is to describe the automatic optimization of squeeze-treatment designs using an optimization algorithm, in particular particle-swarm optimization (PSO). The algorithm provides a number of optimal designs, which result in squeeze lifetimes close to the target. To determine the most efficient of the optimal designs identified by the algorithm, the following objectives were considered: operational-deployment costs, chemical cost, total-injected-water volume, and squeeze-treatment lifetime. Operational-deployment costs include the support vessel, pump, and tank hire. There might not be a single design optimizing all objectives, and thus the problem becomes a multiobjective optimization. Therefore, a number of Pareto-optimal solutions exist. These designs are not dominated by any other design and cannot be improved upon in every objective at once. Calculating the Pareto front is essential to identify the most efficient (i.e., the most cost-effective) design.
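The Pareto filter implied above can be sketched directly: a design is kept if no other design is at least as good in every objective and strictly better in at least one. Objectives are minimized here, and the design tuples (chemical cost, injected-water volume) are hypothetical, not field data.

```python
def dominates(a, b):
    """True if design a is at least as good as b everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep the non-dominated designs (all objectives minimized)."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

designs = [(10.0, 500.0), (12.0, 400.0), (15.0, 450.0), (9.0, 700.0)]
print(pareto_front(designs))  # (15.0, 450.0) is dominated by (12.0, 400.0)
```

A PSO run would populate `designs` with candidate treatment designs and their evaluated objectives; the surviving front is then ranked by the cost criteria listed above.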