Resource volume assessment is an integral part of prospect evaluation. A prospect, in its simplest form, is an unpenetrated reservoir block, sometimes with seismic as the only available data. Hence, a prediction of the most likely hydrocarbon accumulation is required. Recovery factor evaluation for a prospect predicted via basin modelling to most likely contain non-associated gas (NAG) under high pressure requires an assessment of the abandonment reservoir pressure, primarily from analogues or comparable reservoirs.
Current practice assumes pressure gradient ranges, which are then used to estimate the abandonment pressure for high-pressure gas prospectivity evaluation; it therefore does not account for the dynamics of flow (and the flow regime). Moreover, the assumed pressure gradient ranges may not be robust enough for the evaluation.
This paper discusses an approximate methodology for evaluating abandonment reservoir pressure for a high-pressure NAG prospect where suitable analogues are not readily available. Two methods were proposed and modelled accordingly using the available seismic data: pressure gradient and pressure sensitivity. The results showed that the pressure gradient modelling approach gave a wider range of recovery factors, wide enough for the resource class maturity.
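For a volumetric dry-gas (NAG) reservoir, the sensitivity of recovery factor to the assumed abandonment pressure follows directly from the p/z material balance. A minimal sketch (the pressures and z-factors are hypothetical, not values from the paper):

```python
def gas_recovery_factor(p_i, z_i, p_ab, z_ab):
    """Volumetric dry-gas material balance: RF = 1 - (p_ab/z_ab) / (p_i/z_i)."""
    return 1.0 - (p_ab / z_ab) / (p_i / z_i)

# Hypothetical high-pressure NAG prospect: initial pressure 9000 psia (z = 1.30);
# abandonment pressure 1500 psia (z = 0.90) estimated from an assumed gradient.
rf = gas_recovery_factor(p_i=9000.0, z_i=1.30, p_ab=1500.0, z_ab=0.90)
print(f"Recovery factor ~ {rf:.2f}")
```

Sweeping the abandonment pressure across the assumed gradient range is what produces a range of recovery factors, which is the sensitivity the paper examines.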
The experimental design method is very useful in green-field development. It helps in understanding the impact of uncertainties on ultimate recovery and hence gives guidance for business decision making. Outputs from the process (Pareto/Tornado charts, selected models) are good exhibits for the uncertainty management plan, which also drives data acquisition and work plans for future stages/phases of the project.
This work shares some lessons from using experimental design for development planning of two Non-Associated Gas (NAG) condensate reservoirs. It demonstrates the importance of selecting appropriate design for proxy generation/Monte Carlo simulation runs – and eventual model selection.
This paper presents two case studies: (i) gas condensate Reservoir "A", with a total of 20 parameters (9 discrete and 11 continuous variables), and (ii) gas condensate Reservoir "B", with 22 parameters.
A Folded Plackett-Burman (FPB) design was first used in both cases. However, owing to the limited number of runs, only linear proxies could be created. This did not meet the objective of the process, because it did not allow interaction terms among parameters to be generated. The FPB runs were therefore used only as screening studies, while 3-level D-Optimal designs were subsequently used for response surface model (proxy) generation.
Four sensitivities were run on Reservoir "A": (i) 20 parameters with 300 D-Optimal runs; (ii) 14 parameters with 300 D-Optimal runs (after screening out parameters with less impact on the objective functions); (iii) 20 parameters with 500 D-Optimal runs; and (iv) sensitivity (i) with an additional 200 fresh D-Optimal runs. The two sensitivities run on Reservoir "B" were: (i) 22 parameters with 200 runs; and (ii) 15 parameters with 300 runs. Only sensitivities (ii) and (iii) for Reservoir "A", and sensitivity (ii) for Reservoir "B", yielded meaningful proxies.
In conclusion, using Folded Plackett-Burman (FPB) designs alone in cases with many variables, as shown in this work, may not yield meaningful proxies (especially when there are interactions among parameters) because the design is restricted to linear proxies. It is also important to have an adequate number of 3-level design (D-Optimal) runs for both process efficiency and proxy generation: too few runs result in unreliable proxies, whereas too many consume more time and computing resources. In addition, carrying a large number of variables into the 3-level design stage requires more runs and leads to more cumbersome proxies.
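The limitation of a linear proxy in the presence of parameter interactions can be demonstrated with a small least-squares example. This is a generic illustration, not the paper's reservoir models; the factors and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-level design: three scaled factors taking values in {-1, 0, +1}.
X = rng.choice([-1.0, 0.0, 1.0], size=(60, 3))
x1, x2, x3 = X.T

# "True" (noise-free) response with an interaction term, which a
# linear proxy from a 2-level FPB screening design cannot capture.
y = 5.0 + 2.0 * x1 - 1.5 * x2 + 3.0 * x1 * x2 + 0.5 * x3

def fit(design, y):
    """Least-squares proxy; returns coefficients and R^2."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    pred = design @ coef
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot

ones = np.ones(len(y))
_, r2_linear = fit(np.column_stack([ones, x1, x2, x3]), y)          # linear proxy
_, r2_inter = fit(np.column_stack([ones, x1, x2, x3, x1 * x2]), y)  # with interaction

print(f"linear proxy R^2 = {r2_linear:.3f}, with interaction R^2 = {r2_inter:.3f}")
```

The linear proxy leaves the interaction contribution unexplained, while the proxy with the x1*x2 term fits the response exactly, which is why a 3-level design with enough runs to support interaction terms was needed.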
Awasthi, Amit (Shell Nigeria Exploration and Production Company) | Ikwueze, Obinna (Shell Nigeria Exploration and Production Company) | Itua, Osazua J. (Shell Nigeria Exploration and Production Company) | Tendo, Fidelis (Shell Nigeria Exploration and Production Company) | Osayande, Francesca (Shell Nigeria Exploration and Production Company)
For reservoirs that are not fully appraised, the fluid type and resource volumes above the Shallowest Known Oil (SKO) or Deepest Known Oil (DKO) are often unknown. This paper discusses the workflow adopted to predict the fluid type above the SKO with reasonable certainty, enabling SPE-compliant resource volumes (SCRV) above the SKO to be proposed for booking and up-dip wells to be justified.
As part of the SCRV estimation for Field-X to book hydrocarbon volumes above the SKO, the oil properties, reservoir pressure data and seismic information for the candidate reservoirs were integrated, taking account of the limitations and uncertainties around each dataset. The fluid properties (specifically the bubble point pressure) are the most essential element in this analysis, yet are sometimes unavailable or of insufficient quality. This article discusses the methodology and workflow adopted to use the existing limited information, with its corresponding uncertainties, to circumvent these limitations.
This methodology was demonstrated as a reliable technology supporting the SCRV for Field-X, resulting in an increase of ~25-30% in the oil SCRV. Furthermore, it has the potential to be applied to other fields in similar circumstances.
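One screening element of such a workflow, comparing reservoir pressure extrapolated up-dip along the oil gradient against the bubble point pressure, can be sketched as follows. All numbers and the 0.35 psi/ft gradient are hypothetical; the paper's actual workflow integrates additional pressure and seismic data:

```python
OIL_GRADIENT = 0.35  # psi/ft, hypothetical in-situ oil gradient

def pressure_at_depth(p_sko, depth_sko, depth):
    """Extrapolate reservoir pressure up-dip along the oil gradient."""
    return p_sko - OIL_GRADIENT * (depth_sko - depth)

def fluid_above_sko(p_sko, depth_sko, depth, p_bubble):
    """If the extrapolated pressure stays above the bubble point up to the
    evaluation depth, an oil column (no free gas cap) is consistent with
    the data; otherwise a gas cap cannot be ruled out."""
    p = pressure_at_depth(p_sko, depth_sko, depth)
    return ("oil column consistent" if p > p_bubble else "gas cap possible"), p

# Hypothetical case: SKO at 9000 ft with 4200 psia; evaluate 200 ft up-dip.
verdict, p = fluid_above_sko(p_sko=4200.0, depth_sko=9000.0,
                             depth=8800.0, p_bubble=3900.0)
print(verdict, p)
```

The uncertainty in the bubble point discussed above would, in practice, turn the single comparison into a probabilistic one.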
It has generally been recognized by previous investigators that grain size is a fundamental independent variable controlling permeability in unconsolidated sediments. Several models have been developed for estimating permeability from grain size. However, these equations are based on empirical studies, and the results are not necessarily transferable from one location to another. It is therefore important to determine which permeability equations are appropriate for use in the Niger Delta basin. The author seeks to establish the use of grain size data from sieve analysis to determine the permeability of sediments from the Niger Delta area.

Sieve analysis experiments were performed on samples from different wells and depths, and the grain size data retrieved. Permeability was also determined using routine core analysis methods. The regression analysis tool in Excel was used to derive an empirical relationship predicting permeability from grain size data. Two models were developed, representing different permeability regimes (k < 1000 md and k ≥ 1000 md). The accuracy of the method was checked by correlation with standard routine core analysis permeability data: statistical indices showed percentage errors of -0.0627% and 0.0008%, and correlation coefficients of r = 0.9844 and 0.631, respectively. Sensitivity analysis on the model variables showed that grain diameter and sorting are the key variables controlling the determination of permeability from these models.
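The regression step described above can be reproduced in outline with ordinary least squares on log-transformed variables. The sieve-analysis values below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical sieve-analysis results: median grain diameter d50 (mm) and
# Trask sorting coefficient, with core-measured permeability (md).
d50     = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35])
sorting = np.array([1.8, 1.6, 1.5, 1.4, 1.3, 1.2])
k_core  = np.array([120., 350., 700., 1400., 2600., 4300.])

# Fit log10(k) = a + b*log10(d50) + c*log10(sorting), analogous in form to
# the empirical grain-size models in the paper (coefficients are illustrative).
A = np.column_stack([np.ones_like(d50), np.log10(d50), np.log10(sorting)])
coef, *_ = np.linalg.lstsq(A, np.log10(k_core), rcond=None)

k_pred = 10 ** (A @ coef)
r = np.corrcoef(np.log10(k_core), np.log10(k_pred))[0, 1]
print(f"correlation coefficient r = {r:.3f}")
```

The paper's two-regime split (k < 1000 md vs k ≥ 1000 md) would correspond to fitting two such models on the partitioned data.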
Idogun, Akpevwe Kelvin (Federal Polytechnic of Oil and Gas) | Iyagba, Elijah Tamuno (Federal Polytechnic of Oil and Gas) | Ukwotije-Ikwut, Rowland Peter (Federal Polytechnic of Oil and Gas) | Aseminaso, Abiye (Federal Polytechnic of Oil and Gas)
The utilization of nanoparticles (NPs) as agents for improved oil recovery (IOR) and enhanced oil recovery (EOR) has attracted a lot of attention over the past decade. NPs have high mechanical and thermal stability and can be tailored to the specifications of a given reservoir system. As a result, unlike polymers, NPs are suitable options for EOR processes, as they can withstand the high temperatures, pressures, shear and salinity of the harsh environments often prevalent in subsurface oil reservoirs. Researchers have reported promising results at laboratory scale, showing that nanoparticles have the capacity to reduce residual oil in the rock matrix and thus increase ultimate recovery. NPs have been observed to lower interfacial tension (IFT), reduce the viscosity of oil and alter wettability. This study presents an in-depth review of the oil displacement mechanisms that favour nanoparticle-enhanced residual oil recovery. It goes on to review the causes and effects of NP retention in porous media. Processes carried out at laboratory scale and properly designed for reservoir fluids may still fail when implemented at reservoir scale as a result of geological factors; hence the review concludes with recommendations for successful NP EOR projects.
Taiwo, Oluwaseun Ayodele (Petroleum Engineering Department, University of Benin) | Uzezi, Orivri (Petroleum Engineering Department, University of Benin) | Mamudu, Abbas (Petroleum Engineering Department, University of Benin) | Onuoha, Sean (Petroleum Engineering Department, University of Benin) | Adijat, Ogienagbon (Petroleum Engineering Department, University of Benin) | Olafuyi, Olalekan (Petroleum Engineering Department, University of Benin)
The fractional or mixed wettability of porous media has been recognized as a ubiquitous condition in the petroleum literature. Fractional wettability refers to the fraction of the total pore surface area that is preferentially water-wet or oil-wet, and this wetting condition of a reservoir rock plays a significant role in determining oil recovery.
This paper presents a laboratory analysis of the effect of wettability alteration on the recovery mechanism and the effects of Teepol at different wettability conditions, using a glass bead pack. The oil-wet condition was established by the use of kerosene, and other fractional wettability conditions were also established. Six experiments were performed; in each, imbibition, drainage, water flooding, surfactant flooding and polymer flooding were carried out on the porous medium. 0.7 PV of surfactant solution (Teepol) at 0.9 wt% concentration and 1.1 PV of polymer solution (gum Arabic) at 5 wt% concentration were used. In experiments A, B, C, D, E and Z (control), the porous medium was 100% water wet; 25% water wet/75% oil wet; 50% water wet/50% oil wet; 75% water wet/25% oil wet; 100% oil wet; and 100% water wet, respectively.
The results show that equal-percentage wettability of both water and oil, and complete wettability by either oil or water, yield more incremental oil than the 25% water wet/75% oil wet and 75% water wet/25% oil wet fractional wettability conditions. Teepol is effective in lowering the oil-water IFT in all porous media, with recovery ranging from about 75.5 to 90% of the residual oil saturation (ROS). Experiment C (50-50 fractional wettability) gave the highest incremental oil recovery, attributed to grain-to-grain interactive forces, while experiment Z gave the lowest (about 41% of ROS). This strongly suggests that mixed or fractional wettability reservoirs are good candidates for chemical EOR techniques. However, equal fractional contributions to the mixed wettability by both phases are required for optimum process performance.
Usan, a deepwater field located offshore Nigeria, commenced production in February 2012, with development drilling to be completed in 2016.
The field has two subsea flowline loops with four risers, giving the flexibility to flow wells through two different risers. Owing to the constrained flowline capacity, optimization of well routing and drillwell sequencing became critical to sustaining the remaining drillwell campaign while maximizing base production and incremental uplift. The approach adopted by the Production and Reservoir Engineering teams was to use the Integrated Production Modelling (IPM) software suite (PROSPER, GAP & RESOLVE) to ensure an optimal well mix and flow stability.
A systematic approach of continuously testing various routing scenarios for wells within the different production loops was applied to minimize production back-out losses due to system hydraulics during the streaming of new drillwells. Different ramp-up scenarios were also tested for the new wells to minimize back-out on base wells. Owing to variations in fluid properties, reservoir pressures, depletion strategies and loopline distances, the system needs to be re-evaluated frequently. With over 50 routing sensitivities assessed at each evaluation, identifying and implementing the optimal routing/configuration became critical.
Production optimization using the IPM model has led to higher FPSO incremental rates from new drillwells due to optimized well routing within the production risers. This has also helped to sustain the ongoing drilling campaign in the Usan field and improved the economics of the overall project.
This paper describes how IPM software was used to maximize incremental production from new drillwells.
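While the optimization described above was performed with the IPM suite, the underlying routing problem, assigning each well to one of two risers so that capacity-constrained total production is maximized, can be illustrated with a brute-force sketch (the well rates and capacity are hypothetical, and real flowline hydraulics are far richer than a simple capacity cap):

```python
from itertools import product

# Hypothetical well potentials (kbbl/d) and a per-riser capacity limit.
wells = {"W1": 12.0, "W2": 9.0, "W3": 15.0, "W4": 7.0, "W5": 11.0}
RISER_CAPACITY = 30.0  # kbbl/d per riser (illustrative)

def produced(rates, capacity):
    """Total flow through a riser, backed out to capacity when exceeded."""
    total = sum(rates)
    return total if total <= capacity else capacity

# Enumerate every well-to-riser assignment and keep the best total.
best = None
for routing in product(["R1", "R2"], repeat=len(wells)):
    r1 = [q for r, q in zip(routing, wells.values()) if r == "R1"]
    r2 = [q for r, q in zip(routing, wells.values()) if r == "R2"]
    total = produced(r1, RISER_CAPACITY) + produced(r2, RISER_CAPACITY)
    if best is None or total > best[0]:
        best = (total, routing)

print(f"best total = {best[0]} kbbl/d via routing {best[1]}")
```

With 50+ sensitivities per evaluation, as noted above, automating this kind of enumeration inside an integrated model is what keeps frequent re-evaluation tractable.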
Obinna, Chudi (Shell Petroleum Development Company) | Anaevune, Austin (Shell Petroleum Development Company) | Lewis, Helen (Heriot Watt University United Kingdom) | Stow, Dorrik (Heriot Watt University United Kingdom)
A widely accepted belief among petroleum engineers and geoscientists in the oil industry is that early hydrocarbon emplacement in sandstone reservoirs can inhibit quartz cementation, thereby preserving porosity. Results from this study of the timing and degree of cementation of a deep Oligocene potential clastic reservoir are used to predict: (1) the onset and volume of quartz cementation in the potential reservoir, and (2) the effect of early hydrocarbon emplacement on reservoir quality. The burial and thermal history of untapped deeper Oligocene plays lying below the highly productive clastic Miocene plays of the western deep-offshore Niger Delta is investigated using basin modelling enhanced by quartz cementation modelling. Of particular interest are the timing of charge to the reservoirs and the timing of diagenesis. Quartz cementation was modelled using the Walderhaug quartz cementation algorithm, a precipitation-rate-limited reaction controlled by rock fabric and temperature. Model calibration was primarily achieved using physical properties measured from well A1, the only well that penetrated the top of the Oligocene sequence.
Simulation results indicate that less than 0.5% of the pore space of the shallow, proven hydrocarbon-bearing Miocene sands has been cemented, these sands having been exposed only to low temperatures unfavourable for quartz precipitation. This agrees with measured porosity values exceeding 30% and permeabilities in the Darcy range. The deeper Oligocene succession has been buried to depths where temperatures are simulated to exceed 70 °C, favouring quartz cementation. The cementation model of the Oligocene sediments indicates that less than 14% of the total porosity has been occluded by quartz cement. However, the charge model simulates peak hydrocarbon saturation being reached when less than 7% of the pore space, in all four simulated accumulations, was quartz cemented. The rate of quartz cementation is therefore likely to have reduced from the time of peak oil charge onwards, preserving a significant fraction of the porosity in the potential Oligocene reservoirs.
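The Walderhaug-style rate law used in such cementation modelling can be sketched as a simple time-temperature integration. The constants a and b below are the commonly quoted Walderhaug (1996) calibration values, but the burial history and quartz surface area are purely illustrative, and the full model (unlike this sketch) lets the surface area shrink as porosity is occluded:

```python
# Walderhaug-style rate law: r = a * 10**(b*T)  [mol quartz / (cm^2 s)]
A_CONST = 1.98e-22        # mol/(cm^2 s), Walderhaug (1996) calibration
B_CONST = 0.022           # 1/degC, Walderhaug (1996) calibration
MOLAR_VOL = 60.09 / 2.65  # cm^3/mol quartz (M = 60.09 g/mol, rho = 2.65 g/cm^3)

def quartz_cement(temp_start, temp_end, duration_ma, surface_area, steps=1000):
    """Integrate precipitated quartz volume (cm^3 per cm^3 rock) over a
    linear heating history, holding the quartz surface area constant."""
    seconds = duration_ma * 1e6 * 365.25 * 24 * 3600
    dt = seconds / steps
    volume = 0.0
    for i in range(steps):
        # Midpoint temperature of this time step on the linear heating path.
        temp = temp_start + (temp_end - temp_start) * (i + 0.5) / steps
        rate = A_CONST * 10 ** (B_CONST * temp)  # mol/(cm^2 s)
        volume += rate * surface_area * MOLAR_VOL * dt
    return volume

# Illustrative case: a sand heated from 70 to 120 degC over 20 Ma,
# with 100 cm^2 of quartz surface per cm^3 of rock.
v = quartz_cement(70.0, 120.0, 20.0, surface_area=100.0)
print(f"quartz cement ~ {v:.3f} cm^3 per cm^3 rock")
```

The exponential temperature dependence is why the shallow, cool Miocene sands accumulate negligible cement while the deeper, hotter Oligocene succession does not.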
The integration of all the techniques adopted in this study into a holistic outcome is believed to represent a novel approach to reservoir quality prediction. If the Oligocene play is proven, it would represent a significant new play likely to extend beyond the study area across the western offshore Niger Delta.
During times of cost constraint, expensive seismic data acquisition is often seen as expendable in a bid to reduce the cost of reservoir management in deepwater offshore fields, where such activities can easily add hundreds of millions of dollars to the capital expenditure. It therefore becomes imperative for a field management team to devise innovative ways to monitor their field areally, in order to manage the subsurface uncertainties that limit the effectiveness of the injector wells supporting production through water flooding.
One method involves the use of a borehole recording system (DAS-VSP) that can acquire wellbore seismic in wells with fibre-optic cables installed. The seismic around the wellbore can be acquired without a rig or a receiver installation on the sea floor. Field studies in the Gulf of Mexico indicate that 4D data acquired by DAS-VSP are comparable in signal quality to other acquisition methods. An additional advantage is the system's low acquisition footprint; surveys can therefore be acquired before injection start-up in new wells, and more frequently, to determine the efficacy of injection in the short term.
A second method involves the use of i4D, which leverages ocean bottom node technology. It is an intelligent way of acquiring 4D signals in short time stamps (weeks) to obtain dynamic information with fewer nodes around problem wells. i4D identifies changes due to pressure and water flooding while they are taking place. The cost implication of i4D is an order of magnitude less than a full-field OBN acquisition because fewer nodes are used in a smaller shot box.
These methods will not totally replace the value of a full field 4D survey, but they can be used to provide the necessary information in-between full field surveys.
Individually, these methods are limited in scope but by combining them, 4D monitoring is achieved cost effectively to close knowledge gaps over time.
One major requirement of many governments for allowing production commingling is the ability to allocate production to the individual reservoirs from which a well produces. Despite the continuous stream of data generated by the downhole gauges in an intelligent well system, operators may still find production allocation difficult.
This work proposes a technique for real-time production prediction for individual reservoirs, with data from pressure and temperature gauges, using software developed for this purpose.
The software uses the modified Sachdeva choke model to predict the mass rates through the downhole control valves of each well. The equations are implicit and must be solved numerically. First, the software determines whether the flow across each downhole valve is critical or subcritical; second, it determines the rate across each valve.
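The Sachdeva model solves an implicit two-phase equation for the critical pressure ratio; the critical/subcritical branching it drives can be illustrated with the simpler single-phase isentropic analogue (a simplification for illustration, not the model used by the software):

```python
def critical_pressure_ratio(k):
    """Single-phase isentropic critical ratio: y_c = (2/(k+1))**(k/(k-1))."""
    return (2.0 / (k + 1.0)) ** (k / (k - 1.0))

def choke_regime(p_upstream, p_downstream, k=1.28):
    """Classify flow across a valve: if the actual pressure ratio is at or
    below the critical ratio, flow is critical (rate independent of
    downstream pressure); otherwise it is subcritical.  The two-phase
    Sachdeva model replaces y_c with the solution of an implicit equation,
    but the same comparison drives the rate calculation."""
    y = p_downstream / p_upstream
    y_c = critical_pressure_ratio(k)
    return ("critical" if y <= y_c else "subcritical"), y_c

# Hypothetical gauge pressures across a downhole control valve.
regime, y_c = choke_regime(p_upstream=5000.0, p_downstream=2000.0)
print(regime, round(y_c, 3))
```

In the critical branch the rate calculation uses the critical ratio in place of the measured downstream pressure, which is why the regime test must come first.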
The working of the software is demonstrated on a hypothetical two-zone intelligent well completion system with multiphase production. A comparison is made between this methodology and one that uses the gauge values directly without correcting for the pressure difference; a significant difference is observed between the two methods.