The accurate identification of the different components of unconventional tight-oil reservoirs is fundamental to determining reservoir quality. The existing methodologies for identifying and separating these components from cores in unconventional shale are both time-consuming and destructive; high-field nuclear magnetic resonance (NMR) is neither and may be a viable alternative. Low-field NMR downhole logging can assist with a subset of these measurements based on 2D NMR T1-T2 maps. Diffusion measurements play a unique role in identifying the producible fluid, which has relaxation times of several tens of milliseconds.
A comparison of 2D NMR T1-T2 measurements on the Upper Bakken source rock with the Middle Bakken and Three Forks intervals at 2-MHz and 400-MHz Larmor frequencies reveals clear differences between the T1/T2 ratios of the different fluids. The bitumen T1/T2 response is sensitive to its environment, varying from 20 to 26 in the kerogen-rich Upper Bakken interval to about 2 to 6 for the Middle Bakken and lower Three Forks at 2 MHz. One of the main challenges for low-field NMR T1-T2 relaxometry is the insufficient contrast between the bitumen and clay-bound-water signals. High-field NMR T1-T2 maps at 400-MHz Larmor frequency enable separation of the kerogen/bitumen and the clay-associated water by exploiting the frequency dependence of T1. This separation is enabled by the high T1/T2 ratios of the kerogen and the bitumen, more than 1,000, in comparison with the relatively lower T1/T2 ratio of clay-associated water, on the order of a few hundred at 400-MHz 1H Larmor frequency. Fluids in the inorganic porosity relaxing at a few tens of milliseconds have insufficient T1/T2 contrast to enable their identification. Diffusion measurements can fill this gap in fluid typing, as demonstrated for the Middle Bakken and Three Forks samples.
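As an illustration of how such ratios are extracted, the T1/T2 ratio of a component is commonly computed as an amplitude-weighted logarithmic mean over the corresponding region of the T1-T2 map. A minimal sketch on a synthetic map (the grid, peak position, and peak width are hypothetical, not the Bakken data):

```python
import numpy as np

# Hypothetical log-spaced relaxation-time grids (seconds) for a T1-T2 map.
t2 = np.logspace(-4, 1, 50)
t1 = np.logspace(-4, 1, 50)
T2, T1 = np.meshgrid(t2, t1)

# Synthetic amplitude map: one Gaussian peak (in log space) centered at
# T2 = 1 ms, T1 = 20 ms, i.e. T1/T2 ~ 20, a bitumen-like response.
amp = np.exp(-((np.log10(T2) + 3.0)**2 + (np.log10(T1) + 1.7)**2) / 0.05)

# Amplitude-weighted logarithmic mean of the T1/T2 ratio over the map.
log_mean_ratio = np.exp(np.average(np.log(T1 / T2), weights=amp))
```

For this synthetic peak the weighted log-mean ratio lands near 20, consistent with the bitumen-like response described above.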
We provide experimental findings proving that the NMR T1/T2 ratio of the oil phase in a mixed-saturated rock strongly correlates with wettability. NMR laboratory data acquired using refined oil (Soltrol) demonstrate a linearly decreasing trend between the T1/T2 ratio of the oil phase and the rock wettability measured by the USBM technique. For downhole applications, the main challenge is to separate the NMR signals of oil and water. To address this challenge, a two-step workflow is used. First, the NMR signal is separated into those from the oil phase and water phase using 2D diffusivity vs. T2 analysis where the overall T2 distributions of oil and water are determined. These distributions guide the subdivision of individual slices of a 3D D-T2-T1/T2 cube into water and oil areas and the calculation of their partial pore volumes. The T1/T2 ratio for each fluid is calculated as an average T1/T2 weighted by its partial porosity at each T1/T2 slice. This T1/T2 ratio is converted to rock wettability by the laboratory correlation. We also discuss data acquisition limitations and potential improvements of the workflow.
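The weighting and conversion steps of this workflow can be sketched as follows; the slice values, partial porosities, and linear-correlation coefficients below are placeholders, not the Soltrol calibration reported here:

```python
import numpy as np

# Hypothetical T1/T2 slices of the D-T2-T1/T2 cube and the partial
# porosity assigned to the oil phase in each slice (placeholder values).
t1t2_slices = np.array([1.5, 2.0, 3.0, 4.5])
phi_oil = np.array([0.01, 0.03, 0.02, 0.01])  # v/v

# Porosity-weighted average T1/T2 of the oil phase.
t1t2_oil = np.sum(t1t2_slices * phi_oil) / np.sum(phi_oil)

# Convert to a USBM-style wettability index with an assumed linear lab
# correlation W = a - b * (T1/T2); a and b are placeholders, not the
# paper's calibration.
a, b = 1.2, 0.4
wettability_index = a - b * t1t2_oil
```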
Wettability is one of a few critical reservoir properties that is fundamental to reservoir description and engineering, as exemplified by Morrow (1990). There are many established laboratory wettability testing methods (Amott, 1959; Donaldson et al., 1969; Ma et al., 1999), but each one has its inherent advantages and disadvantages. The biggest concern with any laboratory wettability test is uncertainty about how representative the tested samples are of the targeted reservoir, even with great effort in core preservation or wettability restoration (Ma and Amabeoku, 2014).
It was proposed that reservoir wettability may be characterized downhole using measurements such as reservoir pressure profiling with a formation tester (Desbrandes and Gualdron, 1988). Its application has been rare, possibly because of the difficulty of reducing the noise in in-situ capillary pressure measurements caused by pressure gauges, mud properties, the extent of filtrate invasion, and other issues related to pretest pressure measurements (Proett et al., 2015).
Andersen, Pål Østebø (University of Stavanger and The National IOR Center of Norway) | Skjæveland, Svein Magne (University of Stavanger and The National IOR Center of Norway) | Standnes, Dag Chun (University of Stavanger)
Primary drainage by centrifuge is considered, where a core fully saturated with a dense wetting phase is rotated at a given rotational speed and a less dense, nonwetting phase enters. The displacement is hindered by a positive drainage capillary pressure, and equilibrium is approached with time. We present general partial differential equations describing the setup and consider a multispeed drainage sequence from one equilibrium state (at a given rotational speed) to the next. By appropriate simplifications we derive that the process is driven by the distance from the equilibrium state, as described by the deviations of the capillary pressure at the inner radius and of the position of the threshold pressure (the transition from two-phase to one-phase) from their equilibrium values. Further, an exponential solution is derived analytically to describe the transient production phase. Using representative input saturation functions and system parameters, we solve the general equations using commercial software and compare with the predicted exponential solutions. The match is excellent, and variations in timescale are well captured. The rate is slightly underestimated at early times and overestimated at late times, which can be related to changes in total mobility during the cycles for the given input.
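The exponential transient described above amounts to a single-timescale relaxation from the previous equilibrium toward the new one; a sketch of that assumed functional form, with placeholder parameters:

```python
import numpy as np

def production(t, n0, n_eq, tau):
    """Cumulative produced volume during one centrifuge cycle, relaxing
    exponentially from the previous equilibrium n0 toward the new
    equilibrium n_eq with timescale tau (assumed exponential form)."""
    return n_eq - (n_eq - n0) * np.exp(-t / tau)

# Placeholder values: volumes in cm^3, time in hours.
t = np.linspace(0.0, 10.0, 200)
Np = production(t, n0=2.0, n_eq=5.0, tau=1.5)
```

The curve starts at the previous equilibrium volume, rises monotonically, and asymptotes to the new equilibrium; the timescale tau is what varies between speed cycles.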
Measuring capillary pressure using the centrifuge approach (Ruth and Chen, 1995) is one of several methods available for obtaining such curves. Other methods include the use of membranes/porous disks (Lenormand et al., 1996; Hammervold et al., 1998) and mercury injection capillary pressure (MICP) (Purcell, 1949). Each method has benefits and disadvantages regarding time consumption and interpretation of the measured data. Most of the methods consist of exposing a rock sample saturated with a representative fluid composition to various pressure conditions where the equilibrium state corresponds to a unique distribution of fluids and hence capillary pressure in the sample (Forbes, 1997). The fluid saturations corresponding to these equilibrium states are determined by the observed fluid production.
Zhang, Quanying (China University of Petroleum) | Zhang, Feng (China University of Petroleum) | Liu, Juntao (China University of Petroleum) | Wu, He (China University of Petroleum) | Wu, Guoli (China University of Petroleum) | Jia, Wenbao (Nanjing University of Aeronautics and Astronautics) | Ti, Yongzhou (Isotope Research Institute of Henan Academy of Sciences Co. Ltd., Zhengzhou) | Li, Jing (Guta Branch School of Mongolian Senior High School)
Pulsed-neutron gamma density (NGD) logging, as an emerging density measurement technology, is significant for radiation protection and the technological development of logging while drilling (LWD). Compared to gamma-gamma density (GGD), the NGD technique has health, safety, and environmental advantages. However, owing to the lack of a theoretical basis, the quantitative relationship between the inelastic gamma field distribution and formation parameters has not been resolved so far. The current data-processing methods are mainly based on empirical formulas obtained by experiments and simulations.
In order to quantitatively clarify the logging mechanism and theoretically develop a new density algorithm, the concept of a coupled field is introduced to NGD logging. Based on the theories of fast-neutron scattering and gamma attenuation, a fast neutron-gamma coupled field theory is put forward to describe the distribution of the inelastic gamma field. The inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, and the formation density; the influence of the formation parameters on the field distribution is quantified, and a new density algorithm is derived from the coupled theory. The new density algorithm avoids the complex hydrogen-index correction and simplifies the density measurement process. In addition, the coupled field theory and the new density algorithm are verified by Monte Carlo simulation. The research not only clarifies the NGD mechanism but also provides theoretical guidance for NGD logging.
Formation density is one of the most important parameters for formation evaluation, particularly in oil and gas exploration. The traditional gamma-gamma density (GGD) technique, employing a Cs-137 radioisotope as a gamma source, has raised certain health, safety, and environmental (HSE) concerns. Companies working with radioisotope sources have to follow rigorous standards and incur enormous costs for the packaging, storage, transportation, handling, and disposal of the materials (Badruzzaman et al., 2004; Alakeely and Meridji, 2014).
Zuo, Julian (Schlumberger) | Gisolf, Adriaan (Schlumberger) | Pfeiffer, Thomas (Schlumberger) | Achourov, Vladislav (Schlumberger) | Chen, Li (Schlumberger) | Mullins, Oliver C. (Schlumberger) | Edmundson, Simon (Schlumberger) | Partouche, Ashers (Schlumberger)
Formation fluid properties are critical inputs for field development planning. Acquisition of representative, low-contamination, formation fluid samples is key to obtaining accurate fluid properties from laboratory analysis. Quantification of oil-based-mud (OBM) or water-based-mud (WBM) filtrate contamination of hydrocarbon or water samples is still one of the biggest challenges, both during real-time formation-tester sampling operations and with surface laboratory techniques. Laboratory sample analysis uses either the skimming or the subtraction method to quantify OBM filtrate contamination of hydrocarbon samples whereas tracers are typically required to quantify WBM filtrate contamination of water samples. Recently, a new real-time workflow has been developed to quantify OBM or WBM filtrate contamination of hydrocarbon or water samples with downhole multisensor formation-tester measurements. When discrepancies are observed between laboratory-derived and real-time contamination quantification, it can be challenging to uncover the source of the difference or to identify the most accurate method. This paper evaluates the applicability of different methods.
Surface laboratory methods to quantify OBM filtrate contamination crucially assume that the mole fraction of components in the C8+ portion of uncontaminated reservoir fluids and the corresponding molecular weights (or carbon numbers) follow an exponential relation. When actual fluid compositions deviate from the assumed behavior, a large error in OBM filtrate contamination quantification can occur. In this paper, more than 20 laboratory-created mixtures of formation fluid and mud filtrate are analyzed to validate the laboratory methods. Errors of 2 to 3 wt% in OBM filtrate contamination quantification were observed for virgin reservoir fluids that follow the assumed behavior. However, much larger errors may be observed for biodegraded oils, oils with multiple charges from different sources, or oils whose compositions span a range similar to that of the OBM filtrate.
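The exponential C8+ assumption can be sketched as follows: fit ln(mole fraction) against carbon number outside the filtrate carbon range and treat the excess over the extrapolation as a crude contamination proxy. The composition, filtrate carbon range, and spike factor below are all synthetic, for illustration only:

```python
import numpy as np

# Synthetic "virgin oil": exponential molar distribution z_n = A*exp(-alpha*n),
# so ln(z_n) is linear in carbon number n for the C8+ fraction.
n = np.arange(8, 21)                 # carbon numbers C8..C20
z_true = 0.5 * np.exp(-0.25 * n)

# Add a hypothetical OBM-filtrate spike concentrated around C14-C18.
z_obs = z_true.copy()
filtrate = (n >= 14) & (n <= 18)
z_obs[filtrate] *= 1.6

# Fit ln(z) vs n on components outside the filtrate range and treat the
# excess over the extrapolation as a crude contamination proxy.
slope, intercept = np.polyfit(n[~filtrate], np.log(z_obs[~filtrate]), 1)
z_fit = np.exp(intercept + slope * n)
excess = np.sum(z_obs - z_fit)
```

When the virgin fluid truly is exponential, the fit recovers the underlying distribution and the excess isolates the filtrate; when the fluid deviates from exponential behavior (biodegradation, multiple charges), the extrapolation itself is wrong, which is the error mode quantified above.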
A new workflow allows quantification of OBM or WBM contamination using multiple downhole sensors, for real-time measurement, with unfocused and focused sampling tools for water, oil, and gas condensate. The new workflow comprises five steps: (1) data preprocessing; (2) endpoint determination for a pure native formation fluid using flow-regime identification; (3) endpoint determination for pure mud filtrate and quality control of all endpoints using linear relations between measured fluid properties; (4) contamination determination in vol% and wt% on the basis of live fluids and stock-tank liquids; and (5) decontamination of the fluid properties including gas/oil ratio, density, optical density, formation volume factor, resistivity, and compositions.
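For a fluid property that mixes linearly by volume, step 4 reduces to a lever rule between the two endpoints from steps 2 and 3; a minimal sketch with hypothetical densities:

```python
def contamination_volfrac(prop_measured, prop_fluid, prop_filtrate):
    """Lever-rule estimate of filtrate contamination for a property that
    mixes linearly by volume (e.g., live-fluid density), given the pure
    formation-fluid and pure-filtrate endpoints."""
    return (prop_measured - prop_fluid) / (prop_filtrate - prop_fluid)

# Hypothetical densities in g/cm^3: formation oil 0.75, OBM filtrate 0.82,
# flowline reading 0.764 -> 20 vol% contamination.
eta = contamination_volfrac(0.764, 0.75, 0.82)
```

Properties that do not mix linearly in volume (e.g., some compositional quantities) require the appropriate mixing basis, which is why the workflow reports contamination in both vol% and wt%.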
The new workflow has been applied to a large number of field cases, with very good results. For most of the cases, the downhole analysis is consistent with the surface laboratory results.
When discrepancies between methods are observed, a thorough understanding of the limitations of each technique, as described in this paper, will help to determine which data to bring forward and what to discard.
Downhole formation pressure gradients are often estimated by defining the relationship between formation pressures and depths using ordinary least-squares (OLS) regression. These lines are fit based on the implicit assumption of no depth measurement errors when using a pressure-on-depth regression model. For almost all pressure surveys, depth measurement errors are non-negligible and can significantly impact the estimation of the pressure gradient, sometimes at a level of 0.1 psi/ft. Because OLS pressure-on-depth regression models do not account for depth measurement errors, these models produce estimates of pressure gradients that are too low. Conversely, OLS depth-on-pressure regression models (pressure measurement errors and no depth measurement errors) produce pressure gradient estimates that are too high. Both models result in uncertainty estimates that are too narrow. Therefore, a modified OLS regression model (MOLS) was developed to account for errors in both measurements and to make better uncertainty estimates. MOLS extends OLS regression modeling: it incorporates the limits, or boundary conditions, defined by the OLS regression models and uses the method of moments to make a more accurate estimate of the true pressure gradient. The MOLS methodology was validated against orthogonal regression models, which incorporate error in both measurements, via synthetic datasets in which the errors in depth and pressure are known. Once MOLS was validated, these same models were used to quantify the impact of measurement error stemming from common pressure-survey design components and operational overprints. The number of pretests acquired and the interval thickness are directly related to the overall certainty of a given pressure gradient. Fewer pretests are needed to achieve the same level of certainty in a thicker interval relative to a thinner interval. Acquiring one or two additional pretests does not have a high probability of reducing uncertainty.
The potential of nonrepresentative measurement error, for a specific pressure survey, also has a significant impact on the accuracy and uncertainty of a pressure gradient. If the absolute sum of the measurement error is large, biased estimates of a pressure gradient result.
For actual pressure-survey data, nonrepresentative measurement error is difficult to identify, and the acquisition of additional pretests on subsequent logging runs or passes does not reduce this risk. Repeating individual pressure measurements, through multiple drawdowns at specific depths, also does not significantly reduce uncertainty or the probability of nonrepresentative measurement error. Breaking larger intervals into smaller segments potentially introduces larger uncertainties and increases the risk of nonrepresentative measurement error. Additionally, if any of those intervals are associated with depth recalibrations, the probability of inaccurate estimates of the pressure gradient increases. Therefore, segmentation of an interval can increase the potential for false interpretations of vertical flow boundaries driven by an apparent multiple-gradient solution. All of these factors make the differentiation of pressure gradients between sands difficult. Under ideal conditions, the difference in pressure gradients must be larger than the sum of their standard deviations to be statistically distinguishable.
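The bracketing behavior described above (pressure-on-depth OLS biased low, depth-on-pressure biased high, an errors-in-both estimate in between) can be sketched with an orthogonal (Deming) regression on synthetic data with known errors; all numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey: true gradient 0.45 psi/ft over a 100-ft interval.
true_grad = 0.45
depth_true = np.linspace(10000.0, 10100.0, 15)                  # ft
press_true = 5000.0 + true_grad * (depth_true - 10000.0)        # psi

# Add measurement error to BOTH variables.
sigma_d, sigma_p = 2.0, 0.5                                     # ft, psi
depth = depth_true + rng.normal(0.0, sigma_d, depth_true.size)
press = press_true + rng.normal(0.0, sigma_p, press_true.size)

# OLS pressure-on-depth is biased low; inverted depth-on-pressure is
# biased high; any errors-in-both estimate lies between them.
g_low = np.polyfit(depth, press, 1)[0]
g_high = 1.0 / np.polyfit(press, depth, 1)[0]

# Deming (orthogonal) slope with error-variance ratio lam = sigma_p^2/sigma_d^2.
x, y = depth - depth.mean(), press - press.mean()
sxx, syy, sxy = (x * x).sum(), (y * y).sum(), (x * y).sum()
lam = (sigma_p / sigma_d) ** 2
g_deming = (syy - lam * sxx +
            np.sqrt((syy - lam * sxx) ** 2 + 4.0 * lam * sxy ** 2)) / (2.0 * sxy)
```

This is the standard Deming estimator, not the MOLS method of moments itself; the abstract uses orthogonal regression in exactly this validating role, since with synthetic data the true gradient and both error variances are known.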
Neutron porosity measurements are heavily used in formation evaluation together with density and resistivity logs. Although this measurement provides a good porosity value for a clean formation, it is easily influenced by conditions in the borehole and in the formation being evaluated. The temperature and pressure of the formation are two factors that can significantly influence the final readings. For example, at 100 °C, a 20-pu limestone formation may appear around 3 pu lower due to the elevated temperature, while for the same depth (assuming 26 °C/km thermal gradient, 9.8-MPa/km pressure gradient), the change in the apparent porosity will be about 1 pu in the other direction, due to the effect of pressure compressing the pore fluid. Clearly, these are significant effects that must be considered. Although corrections for temperature and pressure exist, the way in which they are devised and used is far from being a universal standard. This paper discusses the assumptions that should be used in developing these corrections and the physics involved in the process, resulting in better use of these corrections.
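The sign bookkeeping in the example above can be laid out explicitly; the shift magnitudes are the approximate values quoted in the text:

```python
# Apparent-porosity shifts for the 20-pu limestone example: elevated
# temperature lowers the reading by ~3 pu, while elevated pressure at the
# same depth raises it by ~1 pu (values approximate, from the text).
true_porosity = 20.0          # pu
temperature_shift = -3.0      # pu
pressure_shift = +1.0         # pu

apparent_porosity = true_porosity + temperature_shift + pressure_shift
correction = -(temperature_shift + pressure_shift)
corrected_porosity = apparent_porosity + correction
```

The two effects partially cancel here, but only partially; applying a temperature correction without the pressure correction (or vice versa) would leave a residual bias of 1 to 3 pu.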
In general, the temperature and pressure will change many parameters in the formation and in the nuclear interactions taking place. One of these changes is the density of fluids in the pore space of reservoir rocks and in the borehole. Density changes alter the nuclear reaction rates, resulting in different porosity values. Elevated temperatures also affect nuclear reactions and thermal neutron transport through Doppler broadening and elastic scattering kinematics. Specifically, elevated temperatures increase the average velocity of thermal neutrons and also change the thermal motion of target nuclei. These effects can significantly impact the porosity value read from neutron porosity logs in such environments, even if the net density change is minimal. Recent developments have provided more options for correctly treating the Doppler broadening and the thermal neutrons in high-temperature environments.
In this paper we discuss the relevant physics and then show the impact of applying new simulation tools on neutron porosity measurements. We provide examples of how individual assumptions influence the logs such as whether the formation and borehole fluid temperatures are in equilibrium. The impact of pressure on various cases will also be shown. These effects will be illustrated with changes in apparent porosity that are determined using different assumptions. With this information, an analyst will have a better understanding of how to implement temperature and pressure corrections more successfully.
Imaging features away from the wellbore using energy propagated into the formation by wireline acoustic tools has recently become an accepted formation evaluation technology. This technology applies seismic imaging techniques to energy reflected back to the wellbore by planar features in the formation with acoustic impedance contrasts. Since its first application by Hornby in 1988, using reflected compressional energy to image the formation, subsequent developments by Tang, Zheng, and Patterson in 2007 introduced the use of shear energy for imaging, with many associated benefits for depth of investigation and image quality.
However, the existing techniques have some limitations. For example, the depth of imaging is limited by the relatively short waveform acquisition time of wireline acoustic tools. Imaging of deeper features can be limited by signal attenuation and waveform mode conversion (especially when monopole energy is used). Images can be blurred by ringing that is inherent in the waveform data, and feature orientation can often be uncertain because of the waveform propagation patterns and lack of azimuthal sensitivity of the measurement.
Recent developments and new methodologies have minimized or mitigated many of these limitations. For example, extended waveform recording enables a considerable increase in the depth of investigation. Advanced filtering techniques and mathematical deconvolution of the waveform data result in considerably sharper images, and mathematical rotation of the cross-dipole waveforms to the directions of maximum and minimum reflected energy enables estimation of the orientation of reflecting features. The end result is an improved reflection image compared to those derived using previous techniques.
In this paper we present and discuss several of these new techniques, and present a case study where these techniques have been applied to image features in a complex carbonate formation. We then compare these images to those generated using the previous techniques and discuss the advantages and applications of the recent developments in the technology.
Ojha, Shiv Prakash (The University of Oklahoma) | Misra, Siddharth (The University of Oklahoma) | Sinha, Ankita (The University of Oklahoma) | Dang, Son (The University of Oklahoma) | Sondergeld, Carl (The University of Oklahoma) | Rai, Chandra (The University of Oklahoma)
There are limited laboratory-based techniques to estimate or measure pore network complexity, pore connectivity, pore size distribution, residual saturations, and saturation-dependent relative permeability in organic-rich shale samples. Moreover, there is growing interest in correlating the variations in the above-mentioned pore characteristics and transport-specific properties across the early-oil, oil, condensate, and late-condensate windows.
To that end, low-pressure nitrogen adsorption-desorption measurements were performed on 80 shale samples obtained from various maturity windows in Woodford, Bakken, and Wolfcamp formations. The measured adsorption-desorption isotherms (ADI) were then interpreted to estimate pore connectivity, pore complexity, pore size distribution, irreducible saturations, and relative permeability curves.
The shale samples had total organic content and porosity in the ranges of 1 to 9 wt% and 0.01 to 0.1 porosity units, respectively. First, the pore size distributions (PSD) of the samples were computed by processing the ADI using a modified Barrett-Joyner-Halenda (BJH) method. Following that, the PSD and ADI were jointly processed to determine the fractal dimension, coordination number, percolation cluster length, and critical saturations of the pore network in the samples. Finally, saturation-dependent relative permeability and residual saturations of the wetting and nonwetting phases in the shale samples were calculated using percolation and critical path analysis models based on the parameters computed in the previous two steps of the interpretation methodology. The measurements were repeated after cleaning the samples with a methanol-toluene mixture to remove bitumen and soluble dead hydrocarbons.
Estimates of irreducible water and residual hydrocarbon saturations increase from 30% to 60% and from 20% to 40%, respectively, with the increased maturity of the sample-specific window. We estimate an increase in pore network complexity, quantified in terms of a change in the fractal dimension of the samples from 2.4 to 2.9, with an increase in kerogen maturity. Twenty percent of the samples exhibit poor pore connectivity, quantified in terms of a low coordination number of around 4.3, indicating the existence of a permeability jail at the corresponding formation depths. With an increase in the maturity of the sample-specific window, there is an increase in long-range pore connectivity and a decrease in the volume fraction of dead-end pores, quantified in terms of an increase in percolation cluster length from 2.7 to 5. The cumulative pore volume increases from an average value of 0.01 cc/g in the oil window to 0.05 cc/g in the late-condensate window.
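For illustration only, a generic Brooks-Corey form shows how residual saturations like those reported above bound the mobile saturation range of the relative permeability curves; the study itself derives relative permeability from percolation and critical path analysis models, and the exponents below are placeholders:

```python
import numpy as np

def brooks_corey_kr(sw, swirr, sor, nw=2.0, no=2.0):
    """Generic Brooks-Corey relative permeability curves. Shown only to
    illustrate how the irreducible-water (swirr) and residual-hydrocarbon
    (sor) saturations bound the mobile range; exponents are placeholders."""
    se = np.clip((sw - swirr) / (1.0 - swirr - sor), 0.0, 1.0)
    return se ** nw, (1.0 - se) ** no  # (wetting, nonwetting)

# Residual saturations at the low-maturity end of the reported ranges.
sw = np.linspace(0.0, 1.0, 101)
krw, kro = brooks_corey_kr(sw, swirr=0.30, sor=0.20)
```

Raising swirr and sor toward the high-maturity end of the reported ranges (60% and 40%) shrinks the mobile window, which is one practical consequence of the maturity trend described above.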
In vertical wells through horizontally laminated formations, the apparent resistivities from wireline electrical logs read very close to the horizontal resistivity. At the same time, the electric-current lines of these tools flow vertically through the formation. This apparent contradiction became known as the “paradox of anisotropy”, and in 1932, it was mathematically proven for the limit of negligibly small boreholes. The original proof was long and difficult; Coulomb’s law in generally anisotropic media offers a much simpler proof.
In such horizontally laminated formations, modern laterolog-array tools also read quite close to the horizontal resistivity. These tools may be combined with array-induction tools. In such conditions, array-induction tools generate current loops that truly are parallel to the laminations and thus reliably provide the horizontal resistivity. The apparent laterolog and induction resistivities show a small difference; the laterolog resistivities read systematically higher. This difference permits a joint inversion for horizontal and vertical resistivity, which quantitatively characterizes the laminated formation's resistivity anisotropy.
The paradox of anisotropy states that such an inversion is impossible; the laterolog-array cannot measure the vertical resistivity. On the other hand, field observations show laterolog sensitivity to the vertical resistivity. The profound contradiction between these two concepts is quantitatively analyzed and reconciled in this paper.
The paradox of anisotropy as a rigorous, mathematical theorem assumes a negligibly small borehole. On the other hand, real laterolog-array tools operate in salt-water-filled boreholes with finite size. They emit electric current lines that locally fill the borehole to saturation.
Somewhere, a first current line will refract from the hole and enter the rock. The refraction angle is defined by the borehole size and the ratio of formation to borehole resistivity. This refraction angle is sensitive to the vertical formation resistivity. In the case of a negligibly small borehole, there is no refraction; the ratio becomes meaningless and the paradox is satisfied. In the presence of the borehole, the current-line refraction causes non-negligible sensitivity to the vertical resistivity, which is exploited in the joint array-induction-laterolog-array inversion. In this way, we reconcile the apparent contradiction between the paradox of anisotropy as a rigorous theorem and the actual laterolog-array anisotropy response.
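In the isotropic case, the refraction of a current line at the borehole wall follows the standard tangent law; a sketch with hypothetical mud and formation resistivities (the anisotropic generalization of this law is where the sensitivity to vertical resistivity enters):

```python
import math

def refraction_angle_deg(theta1_deg, rho_borehole, rho_formation):
    """Refraction of a current line at the borehole wall (isotropic
    tangent law, angles measured from the interface normal):
        tan(theta2) = (rho_borehole / rho_formation) * tan(theta1)."""
    t1 = math.radians(theta1_deg)
    return math.degrees(math.atan((rho_borehole / rho_formation) * math.tan(t1)))

# Hypothetical salt-mud borehole (0.05 ohm-m) and formation (10 ohm-m):
# a current line running almost along the hole (80 deg from the normal)
# enters the formation nearly radially.
theta2 = refraction_angle_deg(80.0, 0.05, 10.0)
```

The larger the formation-to-borehole resistivity ratio, the more sharply the current lines bend toward the radial direction, which is why salt-mud boreholes of finite size retain the vertical-resistivity sensitivity that vanishes in the zero-borehole limit.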
Several field-log examples of joint array-induction-laterolog-array inversion confirm the viability of the anisotropy response. Even while triaxial induction tools today provide more accurate resistivity anisotropy, the combined array induction and laterolog array offer sufficient anisotropy sensitivity to render this combined service interesting for the interpretation of low-resistivity pay zones.