Capillary pressure is a crucial input for defining and distributing reservoir properties during static and dynamic modelling. It is a key input to the saturation height modelling (SHM) process, to understanding fluid distribution and to reservoir rock typing. Capillary pressure models provide insight into field dynamics for the identification of swept zones and offer an additional calibration alongside log-calculated saturations. Capillary pressure curves tend to be more complex in carbonates than in sandstone reservoirs because post-depositional processes alter the rock's flow properties, producing complex pore-throat size distributions (uni-modal, bi-modal or tri-modal). Accurate determination of this property is therefore a cornerstone of the reservoir characterization process.
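MICP data are conventionally converted between intrusion pressure and an equivalent pore-throat radius via the Young-Laplace (Washburn) relation, Pc = 2σ|cos θ|/r. A minimal sketch of that conversion is shown below; the default surface tension and contact angle are the conventional mercury-air values (σ = 485 mN/m, θ = 140°), and any laboratory-specific values would replace them:

```python
import math

def micp_pore_throat_radius(pc_psi, sigma_n_per_m=0.485, theta_deg=140.0):
    """Equivalent pore-throat radius (micrometres) from mercury intrusion
    pressure via the Young-Laplace (Washburn) equation:

        Pc = 2 * sigma * |cos(theta)| / r

    Defaults are the conventional mercury-air values (sigma = 485 mN/m,
    contact angle = 140 degrees); lab-specific values may differ.
    """
    pc_pa = pc_psi * 6894.757  # psi -> Pa
    r_m = 2.0 * sigma_n_per_m * abs(math.cos(math.radians(theta_deg))) / pc_pa
    return r_m * 1e6  # metres -> micrometres
```

With the default constants, an intrusion pressure of roughly 108 psi corresponds to a pore-throat radius of about 1 micrometre, which is a convenient rule-of-thumb check on unit handling.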
Capillary pressure can be obtained using several experimental techniques, such as mercury injection (MICP), centrifuge (CF) and porous plate (PP). Each method has its own inherent advantages and disadvantages. The MICP method tends to be faster and cheaper and provides the full spectrum of pore-throat sizes in a plug, whereas the PP method can be carried out at reservoir conditions with minimal corrections required.
In this paper, a detailed workflow for quality control of capillary pressure data is discussed. The workflow is sub-divided into three main parts: the instrumental and experimental level, the core measurement level and the log level. The experimental level starts with proper design of the actual procedure of the capillary pressure experiment. Parameters such as pore volume, bulk volume and grain density are investigated at the core measurement level. In a geological-petrography montage, all petrography data, namely X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM), thin sections and computed tomography (CT) scans, are used alongside the capillary pressure curve for assessment. Comparison of the different experimental techniques carried out on twin plugs, where these exist, is also investigated. Capillary pressure data that pass the preceding QC steps are used as input to a saturation-point comparison as the log-level QC. The saturation calculated from capillary pressure is compared to log-derived water saturation, eliminating any issues with the porosity and permeability of the trims and providing insight into the uncertainty level in the model. As an additional step, the MICP measurements are fitted with bi-modal Gaussian basis functions, with two practical benefits. First, the quality of this fit is a useful indicator for evaluating pore-structure complexity and identifying erroneous measurements. Second, the fitting parameters are useful inputs for geological interpretation, rock typing and SHM. This rapid and automated workflow is a useful tool for screening, processing and integrating large-scale capillary pressure data sets, a key step in integrated reservoir description, characterization and modelling.
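The bi-modal Gaussian fitting step described above can be sketched as a least-squares fit of two Gaussian basis functions to an incremental intrusion curve. The snippet below is a minimal illustration on synthetic data, assuming a log10 pore-throat-radius axis; the paper's exact basis parameterization, constraints and quality metric are not specified, so the names and initial guesses here are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def bimodal_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian basis functions, fitted here to an incremental
    mercury-intrusion curve on a log10(pore-throat radius) axis."""
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2)
    return g1 + g2

# Synthetic example: micro- and macro-pore modes near 0.1 and 10 micrometres.
x = np.linspace(-3.0, 2.0, 200)               # log10(radius in micrometres)
y_true = bimodal_gaussian(x, 0.6, -1.0, 0.3, 0.4, 1.0, 0.4)
y_noisy = y_true + 0.01 * np.random.default_rng(0).standard_normal(x.size)

p0 = [0.5, -1.0, 0.5, 0.5, 1.0, 0.5]          # initial guess per mode
popt, _ = curve_fit(bimodal_gaussian, x, y_noisy, p0=p0)

# Fit quality (RMSE) flags overly complex pore systems or bad measurements;
# the fitted (amplitude, mode, spread) triplets feed rock typing and SHM.
rmse = float(np.sqrt(np.mean((bimodal_gaussian(x, *popt) - y_noisy) ** 2)))
```

A poor RMSE on real data would suggest either a tri-modal (or more complex) pore system needing extra basis functions, or an erroneous measurement to be flagged, which is the screening logic the abstract describes.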
Flow zonation and permeability estimation are common problems in reservoir characterization, usually addressed by integrating openhole log data with conventional and special core analysis. We present a Bayesian method for identifying hydraulic flow units in uncored wells using the theory of Hydraulic Flow Units (HFU), and subsequently compute permeability from wireline log data.
First, we use the F-test and Akaike's information criterion, coupled with a nonlinear optimization scheme based on the probability plot, to determine the optimal number of HFUs present in the core dataset; the regression match yields the pertinent statistical parameters of each flow unit. Second, we cluster the core data into their respective HFUs using Bayes' rule. Finally, we apply an inversion algorithm based on Bayesian inference to predict permeability using only wireline data.
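The core of this procedure can be illustrated with the standard HFU formulation, in which the Flow Zone Indicator FZI = RQI / φz with RQI = 0.0314·√(k/φ) and φz = φ/(1−φ), and samples are assigned to units by Bayes' rule. The sketch below is a simplified stand-in: the per-unit priors, means and standard deviations are hypothetical placeholders for the parameters that the paper's F-test/AIC regression step would deliver, and log10(FZI) is assumed Gaussian within each unit:

```python
import math

def fzi(k_md, phi):
    """Flow Zone Indicator from the standard HFU formulation:
    RQI = 0.0314*sqrt(k/phi) (micrometres, k in mD, phi as a fraction),
    phi_z = phi/(1-phi), FZI = RQI/phi_z."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

def classify_hfu(log_fzi, units):
    """Assign a sample to the HFU with the highest posterior probability
    via Bayes' rule, assuming Gaussian log10(FZI) within each unit.
    `units` is a list of (prior, mean, std) tuples, which in the paper's
    workflow would come from the F-test/AIC regression step."""
    weighted = []
    for prior, mu, sigma in units:
        likelihood = math.exp(-0.5 * ((log_fzi - mu) / sigma) ** 2) / sigma
        weighted.append(prior * likelihood)
    total = sum(weighted)
    posteriors = [w / total for w in weighted]
    best = max(range(len(units)), key=lambda i: posteriors[i])
    return best, posteriors

# Hypothetical two-unit model and one core sample (k = 100 mD, phi = 0.20).
units = [(0.5, -0.3, 0.15), (0.5, 0.4, 0.20)]
unit_index, probs = classify_hfu(math.log10(fzi(100.0, 0.20)), units)
```

Permeability prediction in uncored wells then inverts the same relation: given a log-derived porosity and the posterior HFU membership, each unit's mean FZI implies a permeability estimate.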
We illustrate the application of the procedure on a carbonate reservoir with extensive core data. The results showed that the Bayesian clustering and inversion technique delivered permeability estimates in agreement with core data as well as with results obtained from pressure transient analysis.
Applications of the presented workflow include better productivity index assessments, enhanced petrophysical evaluations and improved reservoir simulation models. Coupling nonlinear optimization with Bayesian inference proves a robust way to perform data clustering and provides unbiased estimates.
The initialization of a reservoir simulator calls for populating a three-dimensional dynamic grid-cell model using subsurface data and interrelational algorithms that have been synthesized to be fit for purpose. These prerequisites are rarely fully satisfied in practice. This paper sets out to strengthen initialization through four key thrusts. The first addresses representative data acquisition, including the key-well concept as a framework for the cost-effective incorporation of free-fluid porosity and permeability within an initialization database. The second concerns the preparation of these data and their products for populating the static and dynamic models; important elements are dynamically-conditioned net-reservoir cut-offs, recognition of primary flow units, and the establishment of interpretative algorithms at the simulator grid-cell scale for application over net-reservoir zones. The third thrust is directed at the internal consistency of capillary character, relative permeability properties and petrophysically-derived hydrocarbon saturations over net reservoir; this exercise is central to the simulation function and is an integral component of hydraulic data partitioning. The fourth concerns the handling of formation heterogeneity and anisotropy, especially from the standpoint of directional parametric averaging and interpretative algorithms. These matters have been synthesized into a workflow for optimizing the initialization of reservoir simulators. In so doing, a further important consideration is the selection of the appropriate procedures that are available within, and specific to, different software packages.
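The directional parametric averaging mentioned in the fourth thrust can be illustrated with the textbook result for a stack of homogeneous layers: the thickness-weighted arithmetic mean governs flow parallel to bedding (kh) and the harmonic mean governs flow perpendicular to it (kv). The sketch below assumes this simple layer-cake geometry; real upscaling inside a simulator package would use its own (often more elaborate) averaging options:

```python
def upscale_layered_perm(k_layers, h_layers):
    """Directional permeability averaging for a stack of homogeneous layers:
    thickness-weighted arithmetic mean for flow parallel to bedding (kh),
    thickness-weighted harmonic mean for flow across bedding (kv)."""
    h_total = sum(h_layers)
    kh = sum(k * h for k, h in zip(k_layers, h_layers)) / h_total
    kv = h_total / sum(h / k for k, h in zip(k_layers, h_layers))
    return kh, kv

# Three equally thick layers of 100, 10 and 1 mD: the arithmetic average is
# dominated by the best layer, the harmonic average by the worst.
kh, kv = upscale_layered_perm([100.0, 10.0, 1.0], [1.0, 1.0, 1.0])
```

The large kh/kv contrast that even this simple example produces is exactly why directional averaging, rather than a single scalar mean, matters for simulator grid-cell properties.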
The implementation of these thrusts has demonstrably enhanced the validation of reservoir simulators through more readily attainable history matches with less tuning required. This outcome is attributed to a more systematic initialization process with a lower risk of artefacts. Of course, these benefits feed through to more assured estimates of ultimate recovery and thence hydrocarbon reserves.
Yanc, Jianguo (Chengdu University of Technology) | Zhao, Zhou (Chengdu University of Technology) | Wen, Xiaotao (Chengdu University of Technology) | Tang, Xiang Rong (Chengdu University of Technology) | Gu, Wen (Chengdu University of Technology)