Identifying the fluid fill history is essential to the development strategy of any field, particularly in the Middle East, where tectonic history is often reported to affect fluid distribution and contacts in many fields. The fluid fill concept for a low permeability carbonate field has been re-evaluated and modified from a tilted-contact interpretation with imbibition of the deepest unit to a field-wide flat contact with a primary drainage saturation distribution. The oil volumes in the reservoir under study are sensitive to minor changes in structure and fluid fill because of the relatively low structural dip and the low permeability, transitional nature of the reservoir. The paper highlights the importance of removing preconceptions in data analysis and of ensuring consistency of interpretations across the different available data sources. It also demonstrates how data quality can completely change the fluid fill concept.
The three main reservoir units, the Lower Shuaiba A, Lower Shuaiba B and Kharaib, have been charged by two oil migration events. Structural changes after the first primary drainage are revealed by regional seismic images of the shallower horizons. Because of the rock's low permeability, water saturations are above the irreducible value and the whole interval lies in the "transition zone". The Kharaib unit was believed to have been imbibed by the aquifer after charge and was not developed. Three possible fluid fill scenarios were investigated: a) a tilted contact due to structural changes post-charge, b) imbibition of the deeper interval, and c) primary drainage with a field-wide flat contact related to the second charge pulse. Each scenario affects the development of the three units positively or negatively. Plots of water saturation logs versus true vertical depth were the main diagnostic tool used to rule out fluid fill scenarios; they were used to recognise lateral changes in the saturation profile and to investigate imbibition signatures. Production data were also used to cross-check the expected fluid fill scenario, and the resistivity tool types and mud resistivities were examined.
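The saturation-versus-depth diagnostic described above can be sketched as a simple free-water-level (FWL) inversion: if log saturations from several wells are explained by one saturation-height function and a single FWL, a field-wide flat contact is plausible. The Brooks-Corey-style function, its coefficients, and the synthetic well data below are all illustrative assumptions, not field values:

```python
import numpy as np

def sw_drainage(height_above_fwl, swirr=0.15, entry_h=5.0, lam=2.0):
    """Illustrative primary-drainage saturation-height function
    (Brooks-Corey form); height is metres above the free-water level."""
    h = np.maximum(height_above_fwl, 1e-6)
    sw = swirr + (1.0 - swirr) * (entry_h / h) ** lam
    return np.clip(sw, swirr, 1.0)

def fwl_misfit(fwl_tvdss, tvdss, sw_log):
    """Sum-of-squares misfit between a log Sw profile and the model
    for a trial FWL depth (TVDSS increases downward)."""
    return np.sum((sw_log - sw_drainage(fwl_tvdss - tvdss)) ** 2)

# Synthetic logs from two wells sharing one flat FWL at 2500 m TVDSS
rng = np.random.default_rng(0)
true_fwl = 2500.0
tvd_a = np.linspace(2400, 2480, 40)
tvd_b = np.linspace(2410, 2470, 40)
sw_a = sw_drainage(true_fwl - tvd_a) + rng.normal(0, 0.01, tvd_a.size)
sw_b = sw_drainage(true_fwl - tvd_b) + rng.normal(0, 0.01, tvd_b.size)

# Grid-search a single FWL jointly over both wells; a low joint misfit
# supports the flat-contact, primary-drainage scenario
trials = np.arange(2480.0, 2520.0, 0.5)
misfits = [fwl_misfit(f, tvd_a, sw_a) + fwl_misfit(f, tvd_b, sw_b)
           for f in trials]
best_fwl = trials[int(np.argmin(misfits))]
```

If the best-fit FWL differed systematically between wells, that would instead point toward a tilted contact or local imbibition, mirroring the scenario test in the abstract.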
This challenging reservoir characterization case study is defined by the interaction between two reservoirs with different production mechanisms: a fractured basement reservoir and an overlying sandstone reservoir. The existing static geologic concept has been significantly enhanced by integrating pressure data from a unique three-year shut-in period to aid modeling of fractured reservoir connectivity. Previously, the seismic dataset was predominantly used to model the fault and fracture network and guide well planning. In the current approach, the full field data set, including all drilling parameters and new reservoir surveillance data were integrated to address uncertainty in the connected hydrocarbon volume and the relative importance of each production mechanism. The result is a reservoir management tool with which to test re-development concepts and effectively manage pressure decline and increasing gas/oil ratio (GOR) and water production.
To achieve a fully integrated history-matched model, the first step was a thorough review of the existing detailed seismic interpretation, vintage production logging tool runs (PLTs), wireline logs (including borehole image (BHI) logs) and drilling data to find a causal link between hydraulically conductive fractures and well production behavior. In parallel, a material balance exercise was run to incorporate the new pressure data acquired during the field's shut-in period. The results of the material balance analysis were combined with seismic and well data to define the distribution of connected fractures across the field. Additionally, the material balance analysis was used to determine the connected hydrocarbon volume, the distribution of initial oil in place and the relative hydrocarbon contribution from each production mechanism.
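As a rough illustration of the material balance step, a single-tank balance above the bubble point relates cumulative production and the observed pressure decline to the connected in-place volume. The equation form is standard, but every number below is hypothetical and not taken from the field in the abstract:

```python
def connected_stoiip(np_cum, bo, boi, ct_eff, delta_p):
    """Single-tank material balance above the bubble point:
    N * Boi * ct_eff * delta_p = Np * Bo  =>  solve for N (STB).
    ct_eff is the effective (oil + water + pore) compressibility, 1/psi."""
    return np_cum * bo / (boi * ct_eff * delta_p)

# Illustrative (hypothetical) inputs: 5 MMstb produced, 800 psi decline
n = connected_stoiip(np_cum=5.0e6,   # cumulative oil, STB
                     bo=1.25,        # current oil FVF, rb/STB
                     boi=1.22,       # initial oil FVF, rb/STB
                     ct_eff=1.2e-5,  # effective compressibility, 1/psi
                     delta_p=800.0)  # pressure decline, psi
```

A build-up of static pressure during a long shut-in, as in this field's three-year period, is what makes `delta_p` well constrained and the resulting connected-volume estimate meaningful.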
The field is covered by multi-azimuth 3D seismic and 43 vertical to highly deviated development wells, providing significant static and dynamic data for characterizing the distribution of connected fractures. Despite this high-quality, diverse, field-wide dataset, prior modeling iterations struggled to sufficiently describe the production behavior seen at the well level, making it a major challenge to predict the production behavior of new development wells and to plan for reservoir management challenges. Capturing the complex interaction between production variables (including lithology, matrix versus fracture network, geomechanical stresses, reservoir damage and pressure depletion) at the field level instead of at the individual well level resulted in a unified static and dynamic model that reconciles all scales of observation.
This oilfield represents a unique reservoir characterization opportunity. The result is a key example of how iterative, integrated geological and engineering driven reservoir modeling can be used to inform the development in a complex, mature field. This case study provides an excellent analogue for the reservoir characterization of other fractured Basement fields and/or Basement-cover reservoir couplet fields in the early to late phases of their development.
Raghunathan, Murali (ADNOC - Al Dhafra Petroleum Company) | Alkhatib, Mohamad (ADNOC - Al Dhafra Petroleum Company) | Al Ali, Abdulla Ali (ADNOC - Al Dhafra Petroleum Company) | Mukhtar, Muhammad (ADNOC - Al Dhafra Petroleum Company) | Doucette, Neil (ADNOC - Al Dhafra Petroleum Company)
A novel workflow was developed to select an optimal field development plan (FDP) that accounts for the uncertainties associated with an oil greenfield concession having a limited number of wells and limited production data and information. The FDP was revisited and updated to incorporate the additional data acquired during the field delineation phase. The study in Ref-1 demonstrates the comprehensive uncertainty analysis performed and the resulting optimized FDP, which was developed to minimize economic risk and uncertainty. Further field delineation activities revealed north and south extensions, increasing the hydrocarbon accumulation by 115%. The reservoir dynamic model was updated to reflect the increased hydrocarbon volume and input data from 17 wells. A workflow has been created with a suitable development option to cover the recently appraised areas, comprising:
- updated saturation height functions (SHFs), which improve the match between newly drilled wells and water saturation logs;
- updated reservoir models based on well tests and new analytical interpretations;
- history matching of well test data against the newly acquired data;
- optimized field development options covering the additional areas;
- inputs to the reservoir surveillance plan.
Following this extensive analysis, the most robust development concept was selected and will now be implemented in the field.
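Selecting the "most robust" concept under uncertainty can be sketched as scoring each development option across its uncertainty realizations and penalising downside spread. The scoring rule, the option names, and the NPV figures below are all hypothetical illustrations, not the study's actual method or results:

```python
import statistics

def robust_score(npvs, risk_aversion=0.5):
    """Mean NPV penalised by spread: a simple robustness metric
    (higher is better; risk_aversion is a placeholder weight)."""
    return statistics.mean(npvs) - risk_aversion * statistics.pstdev(npvs)

# Hypothetical NPV realisations (MM$) per option across scenarios
options = {
    "phased_north_south": [310, 240, 420, 280],
    "full_field_upfront": [520, 90, 610, 150],
}
best = max(options, key=lambda k: robust_score(options[k]))
```

Here the phased option wins despite a lower mean, because the up-front option's wide spread is heavily penalised; a real study would use full economic models and a richer risk measure.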
Yesterday’s practices are being superseded by a universal trend towards the extensive use of historical and real-time data to understand, learn and predict all well intervention operations. This course explores the impact of data analytics on well operations. Drawn from the presenter’s extensive experience in data analysis, it examines, in easily understandable terms, today’s data management processes targeting process improvement.
Workplace safety is a main objective of any company working in the oil and gas business. The processes have been developed and established over the past decades based on individual experiences and causal pathways. The exhaustion of technical and administrative barriers has led to the introduction of behavioral safety. Recent advances in data technology and machine learning have disrupted many businesses and processes and can lead to a new paradigm in workplace safety as well.
In this case study we demonstrate the application of data science and predictive analytics to aid the HSE function and prevent accidents. We have analyzed operational and accident data from the past 10 years at a leading oil and gas company to quantify the effectiveness of their safety programs.
We have determined how many accidents each program actually prevents, and how many it could prevent in an optimal setting. We have also determined the optimal level of engagement for each program, and the level at which diminishing returns set in.
We have further developed a predictive model to forecast the occurrence of accidents one month ahead of time. In this way the HSE function is able to focus on 15% of locations to control 69% of the accidents. The forecast was also able to predict accidents at locations where one would traditionally not expect accidents to happen, such as locations with low activity.
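The "focus on 15% of locations to control 69% of the accidents" claim is a standard risk-coverage calculation: rank locations by predicted risk and count what fraction of realized accidents the top slice captures. The helper below is an illustrative sketch with made-up scores and counts, not the authors' model:

```python
def coverage(pred_risk, actual_accidents, top_frac=0.15):
    """Fraction of accidents captured by focusing on the highest-risk
    fraction of locations, per the model's predicted risk scores."""
    order = sorted(range(len(pred_risk)), key=lambda i: -pred_risk[i])
    k = max(1, round(top_frac * len(order)))
    covered = sum(actual_accidents[i] for i in order[:k])
    total = sum(actual_accidents)
    return covered / total if total else 0.0

# Hypothetical: 20 locations, model risk scores and next-month counts
risk   = [0.9, 0.8, 0.75, 0.1, 0.05] + [0.02] * 15
actual = [5,   3,   4,    0,   1   ] + [0]    * 15
frac = coverage(risk, actual)   # top 3 of 20 locations monitored
```

In this toy example the top 15% of locations capture 12 of 13 accidents; the abstract's 15%/69% figure is the same computation on the company's real data.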
This paper shows the potential for improvement that is possible with the emerging big data, artificial intelligence and machine learning tools specifically in the field of workplace safety.
Presently, drilling riser joints are inspected every five years. This is usually accomplished by rotating 20% of the joints onshore each year to be disassembled and inspected. This requires extensive boat trips from a mobile offshore drilling unit (MODU) to shore and trucking of the risers to the inspection facility. Typically, 20 riser joints from each riser system are transported by boat, and one riser per truck is moved to an inspection facility each year, making the logistics of a drilling riser inspection complex and costly.
A laser-based measurement for inspection together with monitoring of riser systems has been implemented with a new standard process for collecting critical riser data that is ABS approved. The aim is to mitigate the costs and time associated with essential MODU drilling riser inspections, by empowering operators to reliably determine the condition of drilling riser joints, consistently predict when vital components will require service and accurately assess remaining component life.
The approach utilizes a life-cycle, condition-based monitoring, maintenance and inspection system that can be deployed on a MODU, enabling resources to be deployed only when necessary instead of on a calendar interval. The solution consists of performing a baseline inspection on the riser joints to assess their present state, collecting environmental and operating data while the rig is on site drilling, and feeding those data into a digital twin. The tuned digital twin can then be used to predict future damage.
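One common way a digital twin converts measured operating data into predicted riser damage is Miner's-rule fatigue accumulation over rainflow-counted load cycles. The S-N coefficients and cycle counts below are placeholders for illustration, not the qualified system's actual model:

```python
def miner_damage(cycle_counts, stress_ranges, a=1.0e12, m=3.0):
    """Miner's-rule fatigue damage from measured load cycles.
    Allowable cycles follow an S-N curve N_allow = a * S**(-m);
    a and m here are placeholder coefficients, not a qualified curve."""
    damage = 0.0
    for n, s in zip(cycle_counts, stress_ranges):
        n_allow = a * s ** (-m)   # allowable cycles at stress range s
        damage += n / n_allow     # fraction of fatigue life consumed
    return damage

# Hypothetical rainflow-counted cycles: (count, stress range in MPa)
d = miner_damage(cycle_counts=[2.0e5, 5.0e4],
                 stress_ranges=[20.0, 40.0])
```

Damage accumulating toward 1.0 would flag a joint for service, which is how the twin supports condition-based rather than calendar-based inspection.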
The approach removes uncertainties surrounding damage to riser joints and allows the owner to determine whether a riser should be redeployed or replaced. This is the only ABS-approved process for condition-based monitoring of drilling riser systems. The system is compatible with all present owners' maintenance programs and ensures that maintenance requirements are supported by robust engineering.
The North Sea Oil and Gas industry counts over 7,800 wells drilled. The industry is now entering an era of well abandonment and decommissioning. Current barrier verification for P&A requires appropriate pressure testing and includes surface and downhole monitoring.
Globally, Spectral Noise Logging (SNL) has been utilized in many thousands of cases to detect fluid movement behind completion tubulars and/or across cement barriers.
In November 2017, full-scale verification tests were conducted at the International Research Institute of Stavanger (IRIS). These tests were conducted in a controlled environment to verify current technology thresholds, and showed that the technique can validate cement barrier integrity during pressure tests and can diagnose channeling as low as 9 ml/min behind the casing. The threshold matrix of cement defect size versus pressure and flow rate allowed the technology to be used in support of the positive qualification of barrier elements.
Utilizing a purpose-built test assembly of standard oilfield tubulars and cement with fitted end caps, a series of pressure test operations was conducted to identify the pressures and associated leak rates in conjunction with the SNL. The results clearly demonstrated that the logging tool can provide evidence of barrier verification over a wide range of well applications. Barrier qualification requires that three conditions are met: first, cement behind casing is in place and displays neither a micro-annulus nor any form of fluid movement behind pipe; second, a cement plug holds pressure with no fluid leak; and finally, natural shale barriers are active and create a sufficient barrier. The technology is currently in its 10th generation and, since the IRIS tests, has been used in many wells, covering onshore and offshore oil and gas wells as well as wells in highly sensitive environmental areas. In each case the logging operations were used to verify well status before and after barrier establishment via cement squeeze or section milling and, in several cases, clearly demonstrated that the barrier remained ineffective, with hidden defects, and that further remedial work was required.
This paper discusses the downhole passive noise listening and spectral analysis technique used to prove that effective cement barriers are in place. The concept, the methodology and its applications, which have been successfully tested via yard and field tests, are presented.
This article describes a practical approach to applying predictive analytics techniques against safety incident and near-miss data to generate actionable insights that change safety outcomes in the field. Examples illustrate three critical ways to use safety data: 1) predicting where incidents are most likely to occur, informing where to place additional resources and effort; 2) understanding the combinations of causes and sub-causes that are creating incidents, improving the focus of safety programs; and 3) revealing which proactive safety activities will best mitigate incident types predicted to occur, increasing the effectiveness of preventive measures. The authors discuss typical data and implementation challenges and encourage companies to stop waiting for "perfect" data and, instead, start applying predictive analytics to deliver targeted safety insights to supervisors and workers in the field. Are you ready to take the first step? According to the latest statistics published by Great Britain's Health and Safety Executive, the fatality rate has remained broadly flat across industries since 2012, claiming the lives of 144 workers in the UK during the 2017/2018 reporting period alone. When combined with 550,000 nonfatal injuries during the same time, it seems clear that current approaches to preventing occupational injuries are not working.
This paper will discuss when it is advantageous (in the context of an offshore oil and gas environment) to process data at the network edge (in close proximity to equipment assets) or to stream data to a cloud-based Internet of Things (IoT) platform for analysis. It will offer an objective assessment of both approaches and provide recommendations for securing data in both cases, as part of an overarching cybersecurity strategy.
IoT has opened the door to significant efficiency gains in the oil and gas industry. This is particularly the case in the offshore sector, where there is a pressing need to reduce costs and maximize equipment availability. In some cases, it is advantageous to process data in close proximity to equipment assets (i.e., at the edge). In others, it makes more sense to securely stream data to a cloud-based IoT platform and harness artificial intelligence (AI) to aid in decision making. In certain cases, both architectures can be utilized in complement to one another.
Many factors need to be taken into consideration when evaluating an edge or cloud-based approach. Some of these include data volume, transmission and processing speed, control of data, cost, etc. Edge computing can be used to streamline and enhance the efficiency of data analytics. In certain applications, this can mean the difference between analyzing a performance failure after the fact, and pre-empting it in the first place, which in the offshore environment could potentially translate into millions of dollars per day.
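The edge-versus-cloud factors listed above can be sketched as a naive placement rule: hard latency requirements or an uplink that cannot carry the stream force edge processing, while fleet-wide analytics pull toward the cloud. The thresholds and parameters below are illustrative assumptions only, not a recommendation from the paper:

```python
def placement(latency_req_ms, data_rate_mbps, uplink_mbps, fleet_analytics):
    """Naive rule-of-thumb for edge vs cloud placement (illustrative).
    Real designs weigh cost, data ownership and security as well."""
    if latency_req_ms < 100 or data_rate_mbps > uplink_mbps:
        return "edge"    # hard real-time, or link can't carry the stream
    if fleet_analytics:
        return "cloud"   # cross-asset optimisation needs pooled data
    return "either"

# Vibration monitoring needing pre-emptive trips: edge
p1 = placement(latency_req_ms=10, data_rate_mbps=500,
               uplink_mbps=50, fleet_analytics=False)
# Slow telemetry feeding fleet-wide optimisation: cloud
p2 = placement(latency_req_ms=2000, data_rate_mbps=5,
               uplink_mbps=50, fleet_analytics=True)
```

The first case corresponds to the pre-emptive failure analysis described above, where round-tripping data to a remote platform would be too slow; the second is the fleet-optimisation case discussed next.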
On the other hand, there are situations where it is beneficial to store large volumes of data on a cloud-based platform. For example, if the goal is to leverage advanced IoT-based industrial analytics to optimize an entire fleet of a certain type of equipment, the cloud may be the best solution. Cybersecurity is another consideration. Attacks on critical infrastructure have risen significantly over the course of the past year. As more Intelligent Electronic Devices (IEDs) are deployed in the oil and gas industry to optimize efficiency, Industrial Control Systems (ICSs) are increasingly vulnerable. As a result, the threat extends beyond proprietary data to mission-critical operational technology (OT) assets and equipment.
Cybersecurity standards and layered, defense-in-depth models have grown in response to the frequency and sophistication of cyber attacks. Additionally, recent advances in cyber defense technology incorporate small, kilobit-sized embedded software agents to monitor networks for anomalies that could signal an intrusion. This paper will explore new cybersecurity threats to oil and gas assets, as well as strategies operators can employ to defend against them, whether using an edge or cloud-based platform, or both.
Life-cycle safety and integrity management of offshore structures is a critical activity owing to the adverse consequences of structural failure, ranging from loss of life and financial consequences to environmental pollution. Historically, integrity management of substructures such as jacket structures has been the subject of more detailed investigations than the integrity management of topside structures. For instance, more specific risk- and reliability-based methodologies exist for integrity assurance and inspection planning of jacket structures than of topside structures. This article presents a practical methodology for risk-based inspection planning of large-scale topside structural systems under different limit states (ultimate, accidental, fatigue, and serviceability) and degradation mechanisms (e.g., corrosion and fatigue crack growth), with a view to data analytics and digitalization. The main advantage of the presented methodology is its capability to systematically rank the different structural elements/areas relative to one another based on their assessed risk of failure, i.e. a risk-based differentiation, and to plan inspections and repairs accordingly for large-scale structural systems. Such an integrated approach results in efficient and economical management of offshore topside structural assets and can be used as a consistent and coherent basis for lifetime extension or decommissioning of offshore platforms. Integrity management of offshore structures is known to involve the analysis and management of large amounts of data and information over the lifetime of the structure.
Therefore, insights are provided in the article regarding how the presented risk-based methodologies can be integrated into a digitalized and data-driven interface, a topic currently under active investigation across the petroleum industry, facilitating the analysis and management of the involved data and information in an efficient and verifiable manner.
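The core of the risk-based differentiation described above is ranking elements by risk, conventionally the product of probability of failure (PoF) and consequence of failure (CoF), and directing inspection effort down that list. The element names and numbers below are hypothetical illustrations, not the article's case data:

```python
def rank_by_risk(elements):
    """Rank structural elements by risk = PoF x CoF, descending,
    so inspection effort is allocated to the highest-risk items first."""
    return sorted(elements, key=lambda e: -(e["pof"] * e["cof"]))

# Hypothetical topside elements: annual PoF and a CoF score (1-10)
elements = [
    {"id": "deck_girder_G3",   "pof": 1e-4, "cof": 9},
    {"id": "flare_boom_brace", "pof": 5e-3, "cof": 4},
    {"id": "crane_pedestal",   "pof": 2e-3, "cof": 7},
]
ranked = rank_by_risk(elements)
```

Note that the highest-consequence element is not the highest risk; it is this PoF-times-CoF differentiation, updated as inspection data arrive, that a digitalized interface would automate across thousands of topside elements.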