The accurate identification of the different components of unconventional tight-oil reservoirs is fundamental to determining reservoir quality. Existing methodologies for identifying and separating these components from unconventional shale cores are both time-consuming and destructive; high-field nuclear magnetic resonance (NMR) is neither and may be a viable alternative. Low-field NMR downhole logging can assist with a subset of these measurements based on 2D NMR T1-T2 maps. Diffusion measurements play a unique role in identifying the producible fluid, which has relaxation times of several tens of milliseconds.
A comparison of 2D NMR T1-T2 measurements on the Upper Bakken source rock with the Middle Bakken and Three Forks intervals at 2-MHz and 400-MHz Larmor frequencies reveals clear differences between the T1/T2 ratios of the different fluids. The bitumen T1/T2 response is sensitive to its environment, varying from 20 to 26 in the kerogen-rich Upper Bakken interval to about 2 to 6 in the Middle Bakken and lower Three Forks at 2 MHz. One of the main challenges for low-field NMR T1-T2 relaxometry is the insufficient contrast between the bitumen and clay-bound-water signals. High-field NMR T1-T2 maps at 400-MHz Larmor frequency enable separation of the kerogen/bitumen and the clay-associated water by exploiting the frequency dependence of T1. This separation is possible because the T1/T2 ratios of kerogen and bitumen exceed 1,000, whereas clay-associated water has a relatively lower T1/T2 ratio, on the order of a few hundred, at 400-MHz 1H Larmor frequency. Fluids in the inorganic porosity relaxing at a few tens of milliseconds have insufficient T1/T2 contrast to enable their identification. Diffusion measurements can fill this gap in fluid typing, as demonstrated for the Middle Bakken and Three Forks samples.
We provide experimental findings showing that the NMR T1/T2 ratio of the oil phase in a mixed-saturated rock strongly correlates with wettability. NMR laboratory data acquired using refined oil (Soltrol) demonstrate a linearly decreasing trend between the T1/T2 ratio of the oil phase and the rock wettability measured by the USBM technique. For downhole applications, the main challenge is to separate the NMR signals of oil and water. To address this challenge, a two-step workflow is used. First, the NMR signal is separated into oil-phase and water-phase contributions using 2D diffusivity vs. T2 analysis, where the overall T2 distributions of oil and water are determined. These distributions guide the subdivision of individual slices of a 3D D-T2-T1/T2 cube into water and oil areas and the calculation of their partial pore volumes. The T1/T2 ratio for each fluid is calculated as an average T1/T2 weighted by its partial porosity at each T1/T2 slice. This T1/T2 ratio is converted to rock wettability by the laboratory correlation. We also discuss data acquisition limitations and potential improvements of the workflow.
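The porosity-weighted averaging step can be sketched as follows (synthetic data throughout; the cube, the bin values, and the oil/water mask are hypothetical stand-ins for the D-T2-T1/T2 inversion output and the 2D D-T2 fluid separation):

```python
import numpy as np

# Hypothetical discretized D-T2-T1/T2 cube: axes are (diffusivity bins,
# T2 bins, T1/T2 bins); values are partial porosities per cell.
rng = np.random.default_rng(0)
cube = rng.random((8, 16, 12)) * 0.01

# T1/T2 bin centers (log-spaced; illustrative values only)
t1t2_bins = np.logspace(0, 2, 12)

# Hypothetical oil/water mask in the D-T2 plane from the 2D D-T2 analysis:
# True where a (D, T2) cell is attributed to oil.
oil_mask = np.zeros((8, 16), dtype=bool)
oil_mask[:4, 8:] = True  # placeholder oil region

def weighted_t1t2(cube, mask, t1t2_bins):
    """Porosity-weighted average T1/T2 over the masked (D, T2) region."""
    partial = cube[mask].sum(axis=0)  # partial porosity per T1/T2 slice
    return float((partial * t1t2_bins).sum() / partial.sum())

oil_t1t2 = weighted_t1t2(cube, oil_mask, t1t2_bins)
water_t1t2 = weighted_t1t2(cube, ~oil_mask, t1t2_bins)
```

In a real application the per-fluid T1/T2 value obtained this way would then be mapped to wettability through the laboratory correlation.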
Wettability is one of a few critical reservoir properties fundamental to reservoir description and engineering, as exemplified by Morrow (1990). There are many established laboratory wettability testing methods (Amott, 1959; Donaldson et al., 1969; Ma et al., 1999), each with its inherent advantages and disadvantages. The biggest concern for any laboratory wettability test is the uncertainty over how representative the tested samples are of the targeted reservoir, even after great effort in core preservation or wettability restoration (Ma and Amabeoku, 2014).
It has been proposed that reservoir wettability may be characterized downhole using measurements such as reservoir pressure profiling with a formation tester (Desbrandes and Gualdron, 1988). Its application has been rare, possibly because of the difficulty of reducing the noise in in-situ capillary pressure measurements caused by pressure gauges, mud properties, the extent of filtrate invasion, and other issues related to pretest pressure measurements (Proett et al., 2015).
Andersen, Pål Østebø (University of Stavanger and The National IOR Center of Norway) | Skjæveland, Svein Magne (University of Stavanger and The National IOR Center of Norway) | Standnes, Dag Chun (University of Stavanger)
Primary drainage by centrifuge is considered, where a core fully saturated with a dense wetting phase is rotated at a given rotational speed and a less dense, nonwetting phase enters. The displacement is hindered by a positive drainage capillary pressure, and equilibrium is approached with time. We present general partial differential equations describing the setup and consider a multispeed drainage sequence from one equilibrium state (at a given rotational speed) to the next. By appropriate simplifications we derive that the process is driven by the distance from the equilibrium state, as measured by the deviations of the capillary pressure at the inner radius and of the position of the threshold pressure (the transition from two-phase to one-phase flow) from their equilibrium values. Further, an exponential solution is derived analytically to describe the transient production phase. Using representative input saturation functions and system parameters, we solve the general equations with commercial software and compare the results with the predicted exponential solutions. The match is excellent, and variations in timescale are well captured. The rate is slightly underestimated at early times and overestimated at late times, which can be related to changes in total mobility during the cycles for the given input.
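The transient behavior described by the exponential solution can be sketched as follows (q_eq and tau are hypothetical placeholders; in the paper the timescale follows from the saturation functions and system parameters):

```python
import numpy as np

def centrifuge_production(t, q_eq, tau):
    """Cumulative produced volume during one speed step: exponential
    approach toward the new equilibrium production q_eq with timescale tau."""
    return q_eq * (1.0 - np.exp(-np.asarray(t, dtype=float) / tau))

# roughly 95% of the step's production is recovered by t = 3*tau
q = centrifuge_production(3.0, q_eq=1.2, tau=1.0)
```

This single-exponential form is what makes multispeed experiments tractable: each speed change restarts the same relaxation toward a new equilibrium.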
Measuring capillary pressure using the centrifuge approach (Ruth and Chen, 1995) is one of several methods available for obtaining such curves. Other methods include the use of membranes/porous disks (Lenormand et al., 1996; Hammervold et al., 1998) and mercury injection capillary pressure (MICP) (Purcell, 1949). Each method has benefits and disadvantages regarding time consumption and interpretation of the measured data. Most of the methods consist of exposing a rock sample saturated with a representative fluid composition to various pressure conditions where the equilibrium state corresponds to a unique distribution of fluids and hence capillary pressure in the sample (Forbes, 1997). The fluid saturations corresponding to these equilibrium states are determined by the observed fluid production.
Zhang, Quanying (China University of Petroleum) | Zhang, Feng (China University of Petroleum) | Liu, Juntao (China University of Petroleum) | Wu, He (China University of Petroleum) | Wu, Guoli (China University of Petroleum) | Jia, Wenbao (Nanjing University of Aeronautics and Astronautics) | Ti, Yongzhou (Isotope Research Institute of Henan Academy of Sciences Co. Ltd., Zhengzhou) | Li, Jing (Guta Branch School of Mongolian Senior High School)
Pulsed-neutron gamma density (NGD) logging, an emerging density-measurement technology, is significant for radioprotection and for the technological development of logging while drilling (LWD). Compared to gamma-gamma density (GGD), the NGD technique offers health, safety, and environmental advantages. However, due to the lack of a theoretical foundation, the quantitative relationship between the inelastic gamma field distribution and formation parameters has not been resolved so far. Current data-processing methods are mainly based on empirical formulas obtained from experiments and simulations.
To quantitatively clarify the logging mechanism and develop a new density algorithm on a theoretical basis, the concept of a coupled field is introduced to NGD logging. Based on the theories of fast-neutron scattering and gamma attenuation, a fast neutron-gamma coupled field theory is put forward to describe the distribution of the inelastic gamma field. This distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, and the formation density; the influence of the formation parameters on the field distribution is quantified, and a new density algorithm is derived from the coupled theory. The new algorithm avoids the complex hydrogen-index correction and simplifies the density measurement process. In addition, the coupled field theory and the new density algorithm are verified by Monte Carlo simulation. The research not only clarifies the NGD mechanism but also provides theoretical guidance for NGD logging.
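As a purely illustrative sketch of how a count-rate-based density algorithm might be applied (the log-linear form and the calibration constants a and b below are hypothetical and are not the coupled-field result derived in the paper):

```python
import numpy as np

def density_from_counts(near, far, a=1.0, b=0.35):
    """Apparent density (g/cm3) from near/far inelastic count rates,
    assuming a hypothetical calibration ln(near/far) = a + b*rho."""
    return (np.log(near / far) - a) / b

# Illustrative count rates only
rho = density_from_counts(5000.0, 800.0)
```

In practice the coupled-field theory ties such coefficients to the inelastic scattering cross section and the fast-neutron scattering free path rather than to an empirical fit.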
Formation density is one of the most important parameters for formation evaluation, particularly in oil and gas exploration. The traditional gamma-gamma density (GGD) technique, employing a Cs-137 radioisotope as a gamma source, has raised certain health, safety, and environmental (HSE) concerns. Companies working with radioisotope sources have to follow rigorous standards and incur enormous costs for the packaging, storage, transportation, handling, and disposal of the materials (Badruzzaman et al., 2004; Alakeely and Meridji, 2014).
Zuo, Julian (Schlumberger) | Gisolf, Adriaan (Schlumberger) | Pfeiffer, Thomas (Schlumberger) | Achourov, Vladislav (Schlumberger) | Chen, Li (Schlumberger) | Mullins, Oliver C. (Schlumberger) | Edmundson, Simon (Schlumberger) | Partouche, Ashers (Schlumberger)
Formation fluid properties are critical inputs for field development planning. Acquisition of representative, low-contamination, formation fluid samples is key to obtaining accurate fluid properties from laboratory analysis. Quantification of oil-based-mud (OBM) or water-based-mud (WBM) filtrate contamination of hydrocarbon or water samples is still one of the biggest challenges, both during real-time formation-tester sampling operations and with surface laboratory techniques. Laboratory sample analysis uses either the skimming or the subtraction method to quantify OBM filtrate contamination of hydrocarbon samples, whereas tracers are typically required to quantify WBM filtrate contamination of water samples. Recently, a new real-time workflow has been developed to quantify OBM or WBM filtrate contamination of hydrocarbon or water samples with downhole multisensor formation-tester measurements. When discrepancies are observed between laboratory-derived and real-time contamination quantification, it can be challenging to uncover the source of the difference or to identify the most accurate method. This paper evaluates the applicability of different methods.
Surface laboratory methods to quantify OBM filtrate contamination crucially assume that the mole fractions of components in the C8+ portion of uncontaminated reservoir fluids and the corresponding molecular weights (or carbon numbers) follow an exponential relation. When actual fluid compositions deviate from the assumed behavior, a large error in OBM filtrate contamination quantification can occur. In this paper, more than 20 laboratory-created mixtures of formation fluid and mud filtrate are analyzed to validate the laboratory methods. Errors of 2 to 3 wt% in OBM filtrate contamination quantification were observed for virgin reservoir fluids that follow the assumed behavior. However, much larger errors may be observed for biodegraded oils, oils with multiple charges from different sources, or oils whose compositional range overlaps that of the OBM filtrate.
A new workflow allows quantification of OBM or WBM contamination using multiple downhole sensors, for real-time measurement, with unfocused and focused sampling tools for water, oil, and gas condensate. The new workflow comprises five steps: (1) data preprocessing; (2) endpoint determination for a pure native formation fluid using flow-regime identification; (3) endpoint determination for pure mud filtrate and quality control of all endpoints using linear relations between measured fluid properties; (4) contamination determination in vol% and wt% on the basis of live fluids and stock-tank liquids; and (5) decontamination of the fluid properties including gas/oil ratio, density, optical density, formation volume factor, resistivity, and compositions.
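For a fluid property that mixes linearly by volume, the contamination determination in step 4 reduces to an endpoint calculation between the pure native fluid and pure filtrate values established in steps 2 and 3. A minimal sketch (the property values below are invented for illustration):

```python
def contamination_volfrac(measured, native, filtrate):
    """Volumetric contamination fraction from a linearly mixing fluid
    property (e.g., density), given the two pure-fluid endpoint values."""
    if native == filtrate:
        raise ValueError("endpoint properties must differ")
    return (native - measured) / (native - filtrate)

# Illustrative numbers: native live-oil density 0.72 g/cm3, OBM filtrate
# density 0.80 g/cm3, measured flowline density 0.73 g/cm3
eta = contamination_volfrac(0.73, 0.72, 0.80)  # 0.125, i.e., 12.5 vol%
```

The same endpoint arithmetic, applied per sensor, is what allows cross-checking the endpoints against the linear relations between measured fluid properties mentioned in step 3.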
The new workflow has been applied to a large number of field cases, with very good results. For most of the cases, the downhole analysis is consistent with the surface laboratory results.
When discrepancies between methods are observed, a thorough understanding of the limitations of each technique, as described in this paper, will help to determine which data to bring forward and what to discard.
Downhole formation pressure gradients are often estimated by defining the relationship between formation pressures and depths using ordinary least-squares (OLS) regression. These lines are fit under the implicit assumption of no depth measurement errors when a pressure-on-depth regression model is used. For almost all pressure surveys, depth measurement errors are non-negligible and can significantly impact the estimation of the pressure gradient, sometimes at the level of 0.1 psi/ft. Because OLS pressure-on-depth regression models do not account for depth measurement errors, they produce pressure gradient estimates that are too low. Conversely, OLS depth-on-pressure regression models (pressure measurement errors and no depth measurement errors) produce pressure gradient estimates that are too high. Both models yield uncertainty estimates that are too narrow. Therefore, a modified OLS regression model (MOLS) was developed to account for errors in both measurements and to make better uncertainty estimates. MOLS extends OLS regression modeling in that it incorporates the limits, or boundary conditions, defined by the OLS regression models and applies the method of moments to obtain a more accurate estimate of the true pressure gradient. The MOLS methodology was validated against orthogonal regression models, which incorporate error in both measurements, via synthetic datasets in which the errors in depth and pressure are known. Once MOLS was validated, these same models were used to quantify the impact of measurement error stemming from common pressure-survey design components and operational overprints. The number of pretests acquired and the interval thickness are directly related to the overall certainty of a given pressure gradient. Fewer pretests are needed to achieve the same level of certainty in a thicker interval relative to a thinner interval. Acquiring one or two additional pretests does not have a high probability of reducing uncertainty.
The potential of nonrepresentative measurement error, for a specific pressure survey, also has a significant impact on the accuracy and uncertainty of a pressure gradient. If the absolute sum of the measurement error is large, biased estimates of a pressure gradient result.
For actual pressure-survey data, nonrepresentative measurement error is difficult to identify, and the acquisition of additional pretests on subsequent logging runs or passes does not reduce this risk. Repeating individual pressure measurements, through multiple drawdowns at specific depths, also does not significantly reduce uncertainty or the probability of nonrepresentative measurement error. Breaking larger intervals into smaller segments potentially introduces larger uncertainties and increases the risk of nonrepresentative measurement error. Additionally, if any of those intervals are associated with depth recalibrations, the probability of inaccurate estimates of the pressure gradient increases. Therefore, segmentation of an interval can increase the potential for false interpretations of vertical flow boundaries driven by an apparent multiple-gradient solution. All of these factors make the differentiation of pressure gradients between sands difficult. Under ideal conditions, the difference between two pressure gradients must be larger than the sum of their standard deviations to be statistically distinguishable.
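The opposing biases of the two OLS fits, and an errors-in-variables estimate lying between them, can be reproduced on synthetic data. The Deming-style slope below, which uses a known error-variance ratio, is a stand-in for the MOLS method-of-moments estimator, not the authors' implementation; the gradient and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
true_grad = 0.45                          # psi/ft
depth = np.linspace(10000.0, 10100.0, n)  # true depths, ft
pressure = 4000.0 + true_grad * (depth - depth[0])

# Add measurement error to BOTH variables
d_obs = depth + rng.normal(0.0, 2.0, n)     # depth error, ft
p_obs = pressure + rng.normal(0.0, 0.5, n)  # pressure error, psi

C = np.cov(d_obs, p_obs)
slope_pd = C[0, 1] / C[0, 0]   # OLS pressure-on-depth: biased low
slope_dp = C[1, 1] / C[0, 1]   # inverted OLS depth-on-pressure: biased high

# Deming (errors-in-variables) slope using the known error-variance ratio
lam = 0.5**2 / 2.0**2
s = C[1, 1] - lam * C[0, 0]
slope_ev = (s + np.sqrt(s**2 + 4.0 * lam * C[0, 1]**2)) / (2.0 * C[0, 1])
```

For any positive error-variance ratio, the errors-in-variables slope falls between the two OLS slopes, which is exactly the bracketing behavior the boundary conditions of MOLS exploit.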
Xu, Guangping (Schlumberger-Doll Research and Sandia National Laboratories) | McCormick, David (Schlumberger-Doll Research) | Herron, Michael (Schlumberger-Doll Research) | Cheshire, Stephen (EXPEC Advanced Research Center) | Al-Salim, Ahmed (EXPEC Advanced Research Center) | Almarzouq, Anas (EXPEC Advanced Research Center)
Mineralogy is important in the evaluation of reservoir quality and completion quality, and thus mineral modeling is of great interest to the industry. The inversion of major-element data from either XRF or nuclear spectroscopy tools to obtain mineralogy logs is a common approach to the interpretation of these data. The inversion method requires one to take into account all major minerals in the samples, which is often challenging to achieve with high accuracy and precision. An alternative approach, forward mineral modeling, uses a limited calibration dataset to derive mineral composition from elemental abundance.
This paper uses a forward-mineral-modeling method to correlate mineral content with all eight major elements (Si, Al, Ca, Mg, Na, K, Fe, and S), in which each individual mineral or a group of minerals is solved independently. The coefficients between mineral and element are obtained through the local calibration. This local calibration method can solve for most of the minerals sought even in situations where some minerals cannot be separately measured with confidence, such as illite and smectite from XRD measurements.
We present a calibrated algorithm using least-squares regression that is optimized by regularization and singular value decomposition applied to a set of samples from the Lower Silurian Qusaiba Member of the Qalibah Formation of Saudi Arabia. The optimized algorithm estimates mineral composition using elemental concentrations from either core or log measurements. With as few as 10 representative samples in the calibration dataset, the optimized algorithm has the ability to predict minerals with an accuracy of a few percent.
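A minimal sketch of such a calibration for a single mineral (synthetic data; the Si-to-quartz coefficient and the noise level are invented, and Tikhonov-damped SVD stands in for the paper's regularized inversion):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration set: 10 samples, 8 element concentrations (wt%)
# in the order Si, Al, Ca, Mg, Na, K, Fe, S, plus measured quartz (wt%).
elements = rng.random((10, 8)) * 30.0
true_coeffs = np.array([2.14, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
quartz = elements @ true_coeffs + rng.normal(0.0, 0.5, 10)

def ridge_svd(A, b, alpha):
    """Least squares with Tikhonov regularization, solved via SVD."""
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    d = sv / (sv**2 + alpha**2)        # damped inverse singular values
    return Vt.T @ (d * (U.T @ b))

coeffs = ridge_svd(elements, quartz, alpha=0.1)
pred = elements @ coeffs               # predicted quartz for the cal set
```

The damping stabilizes the coefficients when elements are nearly collinear, which is the practical reason for combining regularization with SVD in a small, ten-sample calibration dataset.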
Mineralogy affects many petrophysical parameters, including porosity, permeability, water saturation, and attributes related to rock strength, which are crucial for evaluating the reservoir and completion quality of potential reservoir rocks. The mineralogy of unconventional hydrocarbon shale reservoirs is more complex, but has been less extensively characterized, than that of conventional sandstone and carbonate reservoirs. For unconventional reservoirs, the common industry practice is to measure mineralogy on a set of selected samples to calibrate petrophysical data. The mineralogy can also be estimated from geochemical logging data.
Dang, Son (University of Oklahoma) | Gupta, Ishank (University of Oklahoma) | Chakravarty, Aditya (University of Oklahoma) | Bhoumick, Pritesh (University of Oklahoma) | Taneja, Shantanu (University of Oklahoma) | Sondergeld, Carl (University of Oklahoma) | Rai, Chandra (University of Oklahoma)
Mechanical characterization of an isotropic rock requires the measurements of at least two elastic constants. Dynamic constants are obtained using ultrasonic techniques and static constants are obtained from the stress-strain response of the rock; both techniques can be used at elevated pressures and temperatures. These methods typically involve the use of cylindrical plugs; however, the existence of natural fractures or fissility of shale formations precludes the extraction of cores. The challenge is to improve reservoir characterization by measuring elastic properties using irregular, but ubiquitous smaller rock samples. We propose measuring two elastic parameters, i.e., Young’s modulus and bulk modulus through nanoindentation and mercury injection capillary pressure (MICP) experiments, respectively. With these two constants and the assumption of isotropy, all other isotropic elastic constants can be derived. The idea is to infer Young’s modulus (Enano) using nanoindentation and estimate bulk modulus (KMICP) using MICP data; neither measurement requires core plugs and can be carried out on irregularly shaped rock fragments. We assume the fragments are representative of the formation of interest; confirmation comes from establishing statistics. We measured Woodford, Haynesville, Eagle Ford, Wolfcamp, Bakken, Utica and Green River shale core samples. These values are compared to values obtained in ultrasonic-pulse transmission experiments. Ultrasonic values of K measured at 5,000 psi confining pressure agree well with the values of KMICP at 5,000 psi. Similarly, Enano shows a 1:1 correlation with ultrasonically derived Young’s modulus at 5,000 psi confining pressure. At a confining pressure of 5,000 psi, the influence of cracks is reduced.
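Given the two measured moduli and the isotropy assumption, the remaining constants follow from standard elasticity identities; a minimal sketch (the input moduli are illustrative, in consistent units):

```python
def isotropic_constants(E, K):
    """Poisson's ratio and shear modulus of an isotropic solid from
    Young's modulus E and bulk modulus K (same units for both)."""
    nu = (3.0 * K - E) / (6.0 * K)        # Poisson's ratio
    G = 3.0 * K * E / (9.0 * K - E)       # shear modulus
    return nu, G

# e.g., Enano = 30 GPa from nanoindentation, KMICP = 25 GPa from MICP
nu, G = isotropic_constants(30.0, 25.0)
```

The result is internally consistent with the usual identity G = E / (2(1 + ν)), which provides a quick check on any pair of derived constants.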
The ubiquitous use of hydraulic fracturing to stimulate unconventional reservoirs drives the need for improved methodologies to compute the mechanical properties of rock. Mineralogical variability in shale (Rickman et al., 2008, 2009; Passey et al., 2010) should be considered when deciding on the placement of laterals. Ductility is a function of mineralogy, TOC richness, and the in-situ stress profile. Within a stimulation zone, where principal stresses vary minimally, mineralogical variability directly affects elastic properties (Al-Tahini et al., 2006), brittleness, and ductility (Bai, 2016): high concentrations of clay make shale more ductile, while a predominance of quartz is associated with brittleness. Jarvie et al. (2007) related brittleness directly to mineralogy.
Besov, Alexander (University of Oklahoma) | Tinni, Ali (University of Oklahoma) | Sondergeld, Carl (University of Oklahoma) | Rai, Chandra (University of Oklahoma) | Paul, William (Encana Services Company Ltd.) | Ebnother, Danielle (Encana Services Company Ltd.) | Smagala, Thomas (Encana Services Company Ltd.)
Wells producing from the Tuscaloosa Marine Shale (TMS) have an initial production over 1,000 BOPD. Despite such significant hydrocarbon rates, the true potential and the factors controlling the production of hydrocarbon in the TMS remain elusive. Formation evaluation by logging and laboratory petrophysical measurements was performed to understand storage and production of hydrocarbons from this resource play.
A well was drilled, logged and cored through the Tuscaloosa Marine Shale formation. Field NMR and resistivity-based image logs were acquired. Laboratory NMR, mineralogy, total organic carbon (TOC), crushed rock porosity and SEM images were obtained on the core samples. The NMR measurements were conducted at the same echo spacing as the logging tool on “as-received” samples, after brine and dodecane imbibition, as well as on dodecane-saturated samples at 5,000 psi. T1-T2 maps were generated on the as-received and brine-imbibed samples.
The total clay content over the depth of investigation ranged from 25 to 81 wt%, averaging 63 ± 14 wt%. The dominant clays are illite and mixed-layer clays. Measured LECO™ TOC content is 1.6 ± 0.6 wt%. SEM images reveal that the organic matter is generally nonporous. Crushed helium porosity varies between 3.7 and 6.6%. The NMR porosity measurements show good agreement between field and laboratory data.
NMR results from imbibition and pressure saturation experiments reveal that the pore network is inaccessible to dodecane due to strong affinity for water and high capillary entry pressure. Vertical fractures are apparent in the image log in addition to microfractures observed in the SEM images. Based on the laboratory measurements, it appears that pores in the TMS cannot store hydrocarbons. Unlike other shale plays, hydrocarbons in TMS are likely stored and produced from microfractures, rather than from organic or inorganic porosity.
Kausik, Ravinath (Schlumberger-Doll Research) | Kleinberg, Robert (Schlumberger-Doll Research) | Rylander, Erik (Schlumberger) | Lewis, Richard (Schlumberger) | Sibbit, Alan (Schlumberger-Doll Research) | Westacott, Andrew (Kerogen Resources)
The total gas-in-place (TGIP) in a gas-shale resource is a direct measure of the reservoir quality of the play. Knowledge of TGIP as a function of depth enables the identification of gas-bearing zones and aids in the determination of sweet spots for landing horizontal drainholes. We describe a new method to determine TGIP directly from nuclear magnetic resonance (NMR) logs, and demonstrate how this can be further improved with multisensor analysis.
Gas-shale resources are more difficult to evaluate than traditional gas reservoirs, for which gas volume is estimated simply as the product of porosity and saturation, with natural gas tables supplying the gas hydrogen index. In gas shales, the hydrocarbon exists not only as pore-filling free gas but also as gas adsorbed on high-surface-area kerogen. These two forms have different densities, their downhole NMR signals cannot be separated, and the effective hydrogen index of the hydrocarbon phase therefore cannot be determined. The current method of evaluating TGIP, based on the use of the Langmuir isotherm to estimate adsorbed gas, requires laboratory investigations on core, which are rarely performed in a time frame appropriate for production decisions, and the limited number of measured isotherms is often insufficient to characterize the heterogeneity of the resource.
We introduce the TGIP-NMR method, which converts NMR measurements into TGIP by counting hydrogen nuclei directly. The method circumvents the need to know the hydrogen index, pore sizes, pore pressures, and formation temperature in order to use natural gas tables. We show examples comparing gas-in-place estimates from the current model (free and adsorbed gas) with TGIP-NMR; the latter yields about 20% more gas, most likely because organic shales have very small pore sizes and, as a consequence, the free gas may be denser than natural gas tables predict.
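As a rough illustration of the hydrogen-counting principle (not the authors' implementation): assuming pure methane, ideal-gas standard conditions, and a water-calibrated NMR apparent porosity for the gas signal, the conversion is a simple hydrogen budget:

```python
# Physical constants (water has 2 H per molecule, MW ~18.015 g/mol;
# ideal gas occupies 379.5 scf per lb-mol at standard conditions)
MOL_H_PER_CM3_WATER = 2.0 / 18.015      # ~0.111 mol H per cm3 of water
STD_MOLAR_VOLUME_FT3 = 379.5 / 453.59   # ft3 of ideal gas per gram-mole
CM3_PER_FT3 = 28316.8

def tgip_scf_per_ft3(phi_app, h_per_molecule=4):
    """Standard gas volume (scf per ft3 of rock) from the water-calibrated
    NMR apparent porosity of the gas signal (fraction). Assumes methane."""
    mol_h = phi_app * MOL_H_PER_CM3_WATER * CM3_PER_FT3  # mol H per ft3 rock
    return mol_h / h_per_molecule * STD_MOLAR_VOLUME_FT3

# Illustrative input: 5 p.u. of water-calibrated gas signal
gip = tgip_scf_per_ft3(0.05)
```

Because the NMR amplitude is already a hydrogen count, no hydrogen-index or natural-gas-table correction enters this arithmetic, which is the central point of the TGIP-NMR method.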