Kanvinde, Sanjay Sharadkumar (Schlumberger) | Heidenreich, Katy M. (Schlumberger) | Parsons, Barry Gerard (Schlumberger) | Pearson, Greg (Schlumberger) | Kristiansen, Tore (Statoil ASA) | Andersen, Ketil (Statoil)
This paper presents a framework and a systematic top-down approach for implementing a company-wide operator-service company integration program for well construction services called integrated competence. The paper describes the key aspects of implementing the integrated competence program: goals, objectives, critical success factors, levels of integration, asset selection and targeting, value creation and key performance indicators (KPIs), multiskill roles, onshore and offshore team configurations, training, and change management.
The integrated competence program developed by StatoilHydro and Schlumberger is an initiative under StatoilHydro's Integrated Operations (IO) corporate initiative and was applied to StatoilHydro's standardized well construction process. The joint team configurations in StatoilHydro's Onshore Operations Centers (OOC) improved collaboration in all phases of the well construction process, and the Schlumberger Support Center provided remote support of drilling operations. In addition, the paper describes two case studies used in the development of the wider program.
The framework, program approach, challenges, and results presented in this paper provide the E&P industry with an example of operator-service company integration, with possible implications for their own current and future digital initiatives, particularly those focused on the well construction process.
A novel sensor system, integrated into a rotary steerable system (RSS), has led to a new level of drilling optimization by providing the concurrent real-time measurement of near-bit borehole caliper and near-bit vibration. This new sensor system gathers all the information 2 to 3 meters above the drill bit, whereas conventional MWD/LWD sensor measurements are made 20 to 40 meters away from the bit.
Integration of these sensors with an RSS resulted in consistent measurements of borehole caliper and the three principal types of downhole vibration (torsional, lateral, and axial), which has helped facilitate comparative analysis of drill bit and BHA performance for different BHAs and wells. Additionally the real-time drilling data was available to a multi-disciplinary drilling optimization team consisting of engineers and researchers. Remote satellite communication and Internet technologies enabled the team to monitor the downhole drilling conditions around the clock and to provide critical feedback to rig site engineers for minimizing vibration-related failures and for improving drilling efficiencies.
This paper will discuss the application of real-time near-bit caliper and vibration measurements while drilling with RSS of various hole sizes. One unique feature of this particular RSS is a shop-configurable point-the-bit or push-the-bit mode, depending upon the application. This paper also compares borehole conditions and vibration between point-the-bit and push-the-bit modes using an identical RSS, with consistent stiffness, weight, force-application capability, and control system.
Through the integrated sensor and team approach, the ability to detect borehole washout conditions and/or BHA instability due to vibration allows RSS operators to take timely remedial action before the occurrence of downhole RSS failure. As a result, the performance and survivability of the RSS has been remarkably improved.
The use of both point-the-bit and push-the-bit RSS has been widely accepted on directional drilling projects. RSS is often used in challenging and complex wells where daily rig rates are very high. Operators have high expectations for RSS performance to reduce drilling days and cut costs, a demand that pushes the envelope of RSS reliability through more aggressive drilling parameters. To complete an RSS project successfully, it is crucial to monitor real-time drilling conditions and optimize drilling parameters, both to increase drilling efficiency and to avoid vibration-related RSS failures. However, rig-site monitoring of downhole dynamic conditions often has limited benefit if the information is not shared with multi-disciplinary off-site experts.
As a generation of oil industry professionals matures along with the oil fields, a high percentage of drilling experts will reach retirement age in the coming years, and the industry will lose their expertise. Knowledge sharing by experienced experts across geographical boundaries will become more important. For example, drilling dynamics knowledge, local field knowledge, and "good old" field experience must be brought together to overcome upcoming drilling challenges. Real-time near-bit data must be delivered to the people with the most expertise with minimum delay. A team of experts in each operating region (e.g., in Houston, Texas, USA) should be able to review the data and contribute to real-time drilling optimization decisions, as illustrated in Figure 1.
Digital Oil Field, Computer Assisted Operations, Smart Fields, and iFields are some of the names coined by different E&P companies for processes that promise to use real-time information technology to radically change the way the oil and gas business is run and, in so doing, enable significant business benefit.
The Digital Oil Field (DOF) has been the subject of much publicity and further development in recent years. However, there was also much DOF-related activity over prior decades, in which oil companies made their operations progressively smarter using a combination of IT, instrument engineering, telecommunications, and change management.
The intent of this document is to chronicle some of the learnings from DOF's history, initially from a Shell E&P perspective, as this is the author's bias. However, it is hoped that this will spark similar historic learnings from other Shell individuals and other E&P companies, and that these thoughts and lessons will be used to "grow" towards a more comprehensive view.
Winston Churchill said, "Study history, study history. In history lies all the secrets." Looking at DOF through the lens of one person's experience is hardly history, but it may give some clues regarding how to do "it" and how not to do "it."
This paper will trace DOF historic developments from the author's personal perspective and tease-out the issues then and now, providing an insight into where we have been, where we are now and what can be practically achieved.
Definition of DOF for Oil/Gas Fields from Reservoir to Point-of-Sale
To provide context for this analysis, it is appropriate to define what DOF aspires to achieve. DOF is the deployment of people, processes, and technology, in pursuit of production, safety, and technical integrity improvement, through the timely, effective, and sustained use of sufficient, good production information.
DOF is inherently multi-disciplinary, drawing on, e.g., Production, Instrument, Reservoir, and Process Engineering, Petroleum Technology, Telecoms, and Information Technology.
Chronology and Associated Learnings
In the following, the term "Digital Oil Field" is used throughout for consistency, even though the term did not come into use until the millennium. DOF has evolved over the last 40 years as follows:
- Electronic instrumentation on the wells and RTU/SCADA ("devices" to digitize, serialize, and modulate telecommunications signals to a remote computer)
- Electronic instrumentation on the surface process (separators, compressors, pumps) acquired by SCADA
- Distributed Control Systems starting to replace/complement SCADA in the mid-eighties
- Subsea Control Systems from the mid-nineties
- Downhole instrumentation from the mid-nineties
- Growing emphasis on management of change issues progressively over the last 30 years
A brief overview of DOF experiences over the last four decades follows, summarizing key learnings for each era.
With multi-processor cluster computing, modular stochastic workflows, and a dedicated project team, the turn-around time for project execution has been significantly reduced. Using these concepts, it is now possible to conduct integrated studies successively in a continuous chain, as if on a conveyor belt. For example, field development planning studies for ten reservoirs, some with production histories of more than 20 years, have been completed within one year.
The three main enabling technologies for the rapid execution of integrated studies are cluster computing, modular workflows, and the stochastic concepts on which those workflows are based.
Clusters have been deployed because of their established advantage in improving performance, which in this case translates into a significant reduction in simulation times. A modular workflow enables the various tasks in an integrated study to be assigned to project team members, facilitates the flow of task outcomes between team members, and creates enormous flexibility during project execution by permitting tasks to be swapped between members. Furthermore, introducing statistical methods to data handling, history matching, and risk analysis streamlines the activities and reduces the turn-around time for the various tasks.
For most of the ten years since market introduction, the majority of intelligent completion systems have been custom-designed and manufactured to meet the specific requirements of the customer. Customization was a natural step in the evolution of the technology, as operators tested usability and reliability in individual wells that reflected unique environments and challenges. Each situation was different, and the technology developed more or less in answer to those distinct demands (Figure 1); consequently, the experience base grew and the capabilities expanded.
As intelligent completion technology has "crossed the chasm"1 from the early adopter stage to mainstream market acceptance - growing from its early usage almost entirely for intervention avoidance to its current use as a primary component in optimizing production - demands for customization have kept pace (Figure 2). Customers are looking today for the best well solutions available - not just the latest technology. Meeting this demand often means providing more holistic solutions, usually with some degree of customization factored in, and generally at the most cost-effective rate.
But an interesting, and perhaps game-changing, trend has developed in parallel. Like users of most popular technologies, operators are now demanding that intelligent completion systems be designed more rapidly, manufactured more cheaply, and delivered sooner - all at lower prices. To these users, the technology has evolved from a costly customized solution to a commodity that can be cost-effectively and rapidly produced and delivered.
This emerging theme of dual manufacturing models - one for customization, one for commodity - leaves suppliers of intelligent completion technology with a puzzle: how to satisfy a market that demands customization at a commodity price, while still delivering high-quality, innovative products that can support the industry in the future.
The challenges to achieve real-time production optimization (RTPO) of oil and gas fields lie in the integration of asset-wide operations at multiple time scales, knowledge of reservoir phenomena, and efficient data management. Traditional approaches to production optimization workflows often make simplifying assumptions and work within artificial boundaries, to lower the complexity of an all-encompassing optimization problem. While this decomposition creates manageable workflows, it does not adequately support the integration of production optimization at multiple levels.
We propose a methodology to achieve hierarchical decomposition of the overall production optimization problem at different time scales, where real-time data are consistently used to identify reservoir performance and optimize production. The optimization tasks at each of these levels are organized through automated transactions of targets, constraints, and aggregate measurements. For example, strategic decisions such as long-term (e.g., yearly, monthly) injection targets, production plans etc. calculated using a full-physics reservoir model are resolved into tactical decisions for short-term (e.g., weekly, daily) production planning.
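The transaction of targets and constraints between time scales can be sketched roughly as follows; the function name, figures, and allocation rule below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: resolving a long-term (monthly) injection target,
# as might come from a full-physics reservoir model, into short-term
# (daily) targets subject to a facility-capacity constraint.

def resolve_monthly_target(monthly_target, daily_capacity, days=30):
    """Split a monthly injection target into daily targets, clipping each
    day at facility capacity and carrying any shortfall forward."""
    plan = []
    remaining = monthly_target
    for day in range(days):
        days_left = days - day
        target = remaining / days_left        # spread what is left evenly
        actual = min(target, daily_capacity)  # short-term constraint
        plan.append(actual)
        remaining -= actual
    return plan, remaining  # remaining > 0 signals an infeasible target

# A feasible month: 3000 units against 110/day capacity over 30 days.
plan, shortfall = resolve_monthly_target(3000.0, 110.0, days=30)
```

If the long-term target exceeds what the short-term constraints allow, the residual `shortfall` would be fed back to the strategic level for re-planning, mirroring the automated transactions described above.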
A moving-horizon based parametric model is proposed to provide fast predictions for production optimization in the short-term framework. Since the model structure is based on the decomposition of a full-physics reservoir model, it is reasonable to expect that the parametric model will be robust enough to be used for extrapolation outside the range of history data, a property needed for optimization purposes. In this paper, we present an analysis of the structure of the physics-compliant empirical model, the model's range of applicability, techniques that can be used for parameter identification, and use of the model for short-term production optimization. The paper presents a number of case studies to illustrate the benefits of the proposed methodology and its application in typical workflows for closed-loop reservoir management.
Even with the experience of delivering nearly forty real time/collaboration centers worldwide for oil companies or for its own offices, there was little sense of the routine as Schlumberger Information Solutions approached the design, construction, and use of its newest center for its Houston headquarters. Completed in 2007, the center promotes real-time and cross-discipline workflows for E&P teams while managing the competing needs of R&D to experiment and create new, more efficient workflows to optimize the search for and recovery of hydrocarbons. This facility meets needs for both intra and inter-company use and is designed with evergreen capabilities and maximum flexibility. Lessons learned as the center moved from initial design to daily use are shared along with the business side of managing one of the world's newest intelligent energy centers.
Whether called centers of excellence, collaboration centers, real-time operational rooms, or immersive visualization and interpretation environments, these facilities are designed to help oil companies or oil service companies solve today's oil and gas challenges through the use of innovative technologies and shared views. Several hundred have been built since the late 1990s, with varying degrees of impact on oil companies. The facilities range in size and complexity from single rooms with one to two overhead projectors to energy centers with complex display environments and multiple-use facilities.
Schlumberger Information Solutions (SIS) has built nearly 40 of the latter around the world, either for clients desiring visualization and/or operational centers or for its own use. An existing executive briefing room in Houston was lost as a result of the corporate relocation from New York into the building at 5599 San Felipe. The decision was made to open a new, enhanced center in 2007 with visualization technology and real-time operations capabilities for oil companies and for internal use. While selecting projectors and network configurations would be important, SIS personnel first reviewed challenges and lessons learned from immersive centers built and used by other companies during the previous nine years.
The majority of immersive centers with high-end display and computing technologies were constructed after 1998 with an especially rapid uptake and wide acceptance of their value by 2002. Early debates on design centered on rear versus front projection systems and relative benefits of curved versus flat screens for display of subsurface images. In addition, most centers were designed with a bias for use for peer review/integration of different technical disciplines but some were dedicated interpretation centers for high profile/large seismic volume projects.
While most were successful, of particular interest were the ones that either had been removed because they were underutilized or where teams failed to realize the expected benefits. One such center was built by a large independent in the western United States for over one million USD yet was dismantled in less than two years. In another case, one of the largest multinational oil companies had two teams utilizing the same new center, yet one team was twice as effective as the other even though both had identical access and decision support. These were reminders that investing in changing company culture and aligning teams remains as important as the spend on technology and furniture.
Other challenges were long term viability of centers and adapting them to changing conditions and business needs. Finally, our review showed that even the best centers had to proactively limit the use of the high end display rooms for large but static slide shows.
Well and Facility Operations make operating decisions based on processing huge amounts of data. However, there is a practical limit to the number of optimization moves an operator can make (due to changing operating constraints, plant disturbances or interactions, fundamental process delays and dynamics, and the remoteness of wells). Automatic Process Control enhances the speed and accuracy with which decisions can be made and is essential for optimization. However, the advantages of automatic process control are often underestimated; hence the discipline is under-staffed and under-utilized.
Process Control also plays a crucial role in plant safety and availability, as stable wells and facilities are operated more frequently within the design window. Stable operation results in fewer shutdowns, less breakdown maintenance, less deferment, less flaring, lower operational cost, and sometimes even higher ultimate recovery. The impact of process instabilities on overall well or facility performance is often not recognized; a single trip may wipe out months of optimization benefits.
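A back-of-the-envelope calculation, using hypothetical figures rather than numbers from the paper, illustrates why a single trip can erase months of optimization benefit:

```python
# Hypothetical figures for illustration only.
rate = 20_000.0            # facility production, bbl/d
uplift = 0.01              # 1% gain from continuous optimization
trip_deferment_days = 3.0  # production deferred by one shutdown/restart

daily_gain = rate * uplift               # extra bbl/d from optimization
deferred = rate * trip_deferment_days    # bbl lost to the trip
days_to_recover = deferred / daily_gain  # days of uplift needed to break even
```

At these assumed figures, one trip defers 60,000 bbl, which takes 300 days of a 1% uplift to win back; the exact numbers vary by asset, but the order of magnitude is the point.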
Achieving the full potential of process control in oil and gas production requires change management: Process Control technology skills need to be used throughout the whole project lifecycle and integrated with the various processes, from field development planning to surveillance and optimization.
This paper provides examples where the application of automated control technology has improved production surveillance, management, and optimization.
The role of Process Control in Exploration and Production
Most engineers and managers confuse Process Control with Instrumentation; i.e., they think Process Control is only about the video screens in the control room and the associated computers. However, Process Control is the rationale behind the visible systems (instruments and valves) and the strategy for realizing process objectives. Design decisions are made when devising the control strategy: if the level in a vessel is higher than desired, the strategy is either to open the valve on the liquid outlet stream or to throttle a valve in the incoming stream. The right choice depends on the process itself and the process objectives of the integrated facility (i.e., the consequences of the strategy upstream or downstream of the unit must be considered). Certain control objectives will conflict or have different priorities. It is also important to consider the dynamics of the process, the drifts and changes in the underlying equipment and processes, the measurement noise, and the possible disturbances that can occur.
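As a minimal sketch of the outlet-throttling strategy, the code below pairs a proportional-integral controller with a toy vessel model; the controller form, gains, and vessel dynamics are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (hypothetical gains and vessel model): hold vessel
# level at a setpoint by throttling the liquid-outlet valve -- one of the
# two strategies discussed above.

def pi_controller(setpoint, kp, ki, dt):
    """Return a stateful PI controller mapping level -> valve opening (0..1)."""
    integral = 0.0
    def step(level):
        nonlocal integral
        error = level - setpoint           # level above setpoint -> open outlet
        integral += error * dt
        output = kp * error + ki * integral
        return min(1.0, max(0.0, output))  # clamp to the physical valve range
    return step

# Toy vessel simulation: an inflow disturbance versus controlled outflow.
controller = pi_controller(setpoint=0.5, kp=2.0, ki=0.5, dt=1.0)
level = 0.5
for t in range(60):
    inflow = 0.08 if t > 10 else 0.05      # step disturbance at t = 11
    valve = controller(level)
    outflow = 0.1 * valve                  # outflow proportional to opening
    level += (inflow - outflow) * 1.0      # mass balance, unit area
```

Had the strategy instead been to throttle the *incoming* stream, the controller structure would be the same but the consequences would propagate upstream, which is exactly the integrated-facility trade-off the text describes.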
A web based system has been developed to improve workflow optimisation, collaboration and the communication of a business process. This new generation project management application greatly enhances the ability of an organisation to comply with external standards and implement consistent internal systems. It helps people of all levels of experience to share information across geographical and organisational boundaries, can reduce cycle or reactivation time and helps to implement controls such as stage gates. The user interface displays the workflow and current status of single or multiple projects with unparalleled clarity.
All companies have management systems or work processes, some more formal than others, to help maintain consistency and quality in their business delivery and to ensure compliance with corporate and regulatory requirements. There is a common need across industry sectors and disciplines to define a way of working and then to ensure that this "business process" is used. Management systems may be inefficient or may even fail for a number of reasons, including lack of detail or too much detail, poorly defined requirements, or inadequate information technology.
The simple but highly innovative web-based system described in this paper was completed and implemented six months after coding began. An existing process had been mapped to the new software, so users were immediately familiar with the task descriptions and terms used. The application delivered on its basic promise: improved process visibility, consistency, and compliance with defined standards.
This paper contains a summary of the system attributes that led to a successful launch in Talisman Norway (TENAS) and suggests that a high degree of commonality exists between processes in different organisations. This paper will be of interest at all levels of an organisation - Managerial, Technical and Administrative.
The monitoring system for reservoir pressures in Kuwait Oil Company (KOC) consists of several components in the workflow. These include processes such as the wellbore surveillance plan, data acquisition from wellbore surveys, conversion of data into well-designed formats, data validation checks, flow of the formatted data into the database, and processing and analysis of the total data for appropriate reporting. The reports on reservoir behavior may then be used for making informed decisions.
A review of the existing pressure monitoring system in the organization revealed some opportunities for improving the information system. This was found to be especially the case once it was recognized that the corporate database, which serves as a common platform for storing many different classes of data, could now be utilized efficiently to support an improved information system. Potential areas of improvement were found to lie in increasing the speed of process flows, improving the value of the information itself, and improving the user-friendliness of the overall system. These factors were taken into account while formulating an approach for system improvement.
Accordingly, a series of software applications was developed in-house to improve the pressure monitoring system. The overall design was tightly integrated with data support from the corporate database. In terms of technology, the new system offered solutions through an integrated interface and provided flexibility in information generation on the basis of a variety of criteria that can be selected online. Such features gave the system a versatile diagnostic capability.
Deployment of this new monitoring system will be complemented by a training program in which potential users will be shown how to access the system with ease and how to derive value-enhanced information through a faster workflow.