The paper describes an innovative approach to performance improvement using Causal Learning (CL), a method based on the general observation that business performance is largely the outcome of the organization, its processes and procedures, ways of working, constraints and norms: the systems that the business applies to itself. These system causes are often remote from the physical causes of equipment failures and, as such, remain hidden until revealed by appropriate analysis. The objective of CL is to discover the system causes that ultimately lead to an undesired outcome or event. CL helps us "learn" the performance system, develop insights from these discoveries and recognize the specific aspects of a system that must change to shift business performance. The Company adopted this approach to improve problem solving and root cause analysis of machinery failures. The initial decision to apply CL followed several outages of power generation systems that continued to occur despite previous analyses of similar events. An Enhanced Problem Solving Team (EPST) was established and trained to apply Causal Learning principles to reveal the underlying system causes of these outages. Since that first analysis, the tools and techniques of CL have been applied to other undesired or unexpected business outcomes, including HSE and project work with little or no direct technical content. CL reveals the contribution of well-intended human behaviours to unwanted outcomes.
This paper explains how half a billion hours of service data for pressure safety valves (PSVs) can be analysed and presented to allow optimisation of maintenance intervals and management of PSV inspections. The large amount of data creates challenges, but also opportunities for an enhanced methodology that can be expanded to other equipment types.
A methodology has been developed that uniquely combines qualitative and quantitative analysis. The quantitative analysis ensures that the risk from PSV failure is determined and lies below set criteria. The drawback of this criterion alone is that it is based on average data from a large set of PSVs, and the average may not be applicable everywhere, especially if failures are not random but have an underlying, potentially unknown, cause. Therefore, each PSV is also qualitatively assessed. To bridge the gap between individual PSVs and large sets, the methodology also identifies groups of PSVs whose data is collated and used across the whole group; the careful analysis required to define these groups, which must have similar properties or performance, is described. This multi-layered assessment gains the most information possible from the data. A key part of the process is presenting this data for review and analysis, which is achieved through a digital, cloud-based interactive dashboard.
The analysis has shown that maintenance intervals can be reduced significantly while risk is simultaneously reduced by concentrating effort on the worst-performing PSVs. Not least, a dashboard presentation of the risk-based inspection (RBI) results, showing the calculated inspection intervals, changes to those intervals and failures, allows a clear picture of PSV performance to be developed. Maintenance planning also becomes easier, and the information required for deferral assessments is available in seconds rather than hours. The analysis shows where poorer performance can occur, which is often applicable across different assets. The way in which the approach can be expanded to other equipment types will be described.
The novel approach in the assessment is the multi-layered combination of qualitative and quantitative analysis and the presentation of a large amount of data through the cloud to be used by maintenance teams, technical authorities and operators. It also shows the benefits of collecting half a billion service hours of data and that this need not be an onerous task.
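The quantitative layer of such an RBI assessment can be illustrated with a standard reliability calculation. The sketch below is not the paper's actual methodology; it assumes random (constant-rate) failures and the common approximation that the average probability of failure on demand of a periodically proof-tested device is PFD_avg ≈ λτ/2. All numbers are hypothetical.

```python
# Sketch: setting a PSV proof-test interval from pooled service data.
# Assumptions (not from the paper): constant failure rate, and
# PFD_avg ~ lambda * tau / 2 for a periodically tested device.

def failure_rate(failures: int, service_hours: float) -> float:
    """Pooled failure rate (per hour) for a group of similar PSVs."""
    return failures / service_hours

def max_test_interval(rate: float, pfd_target: float) -> float:
    """Longest proof-test interval (hours) keeping PFD_avg <= target."""
    return 2.0 * pfd_target / rate

# Hypothetical group: 12 fail-to-open results over 50 million pooled
# service hours, assessed against a PFD target of 1e-2.
lam = failure_rate(12, 50e6)        # 2.4e-7 per hour
tau = max_test_interval(lam, 1e-2)  # ~83,000 hours, roughly 9.5 years
```

Grouping matters precisely because λ is a pooled average: a poorly defined group mixes populations with different underlying rates, which is why the qualitative, per-valve layer remains essential.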
The realization that fossil fuels are a limited resource, and the growing awareness of the negative impact their emissions have on the planet, have impacted every oil and gas major. The global challenge is expressed in the "energy trilemma" of: Enough Energy, Affordable Energy and Sustainable Energy.
The industry must adapt, in terms of cost and environmental footprint. In this paper we discuss how digitalization and renewable sources can drive innovation to meet these challenges.
We will use current long-range forecasts to understand how the global energy mix is expected to change over time, and illustrate how different scenarios are likely to affect the offshore industry. We also study how digitalization, and hybridization with technologies such as offshore wind and power-from-shore, can reduce costs, energy consumption and emissions.
There are many trends accelerating the introduction of new energy sources. These include:
Global population growth and changing dynamics: "Millennials" bring with them their own expectations about technology, the pace of work and accountability. Equally influential, is the challenge to feed and power the 2 billion poorest and the extra 2 billion people expected by 2050.
Transportation changes: Road, aviation and shipping account for more than 60 percent of the world's oil consumption and are key to limiting the impact on the climate.
Energy generation revolution: The grid needs to cope with increased power demands and to incorporate and expand the contribution of renewables.
Rise in distributed generation: Hybridization pilot projects use offshore wind turbines to power, for example, water injection systems.
A range of technologies is described that will provide the transformational step change enabling companies to transition into the broader energy ecosystem. However, the real game changer lies in integrating these technologies in a way that drives the evolution from connected operations to collaborative operations and, ultimately, autonomous operations to achieve maximum value.
We will describe how, by properly using digital technologies, the sector can not only reduce capital and operating expenditures by up to 30 percent but also use energy optimization and hybridization with renewable energy sources to reduce emissions and help oil and gas operators do their part in addressing "The Energy Trilemma".
No matter the industry or activity, whenever a human performs a task there is a possibility that the person could make an error. Numerous studies show that the contribution of poor procedures to incidents involving human error ranges between 65% and 90%. Process safety management legislation such as Seveso III and OSHA 1910.119 requires the use of procedures when executing safety-critical tasks, and regulators have thereby recognized the importance of a set of good-quality procedures as part of the management of human factors.
As companies begin to embrace the concepts of digitalization and big data, the main challenge remains: ‘how do we make a step change in reducing human error in heavily paper-based operating and maintenance procedures?’
This paper will provide examples of how poor procedures have led to human error causing incidents across industry, introduce the background to human factors with respect to procedures and explain some of the human error categories to which people are susceptible. The paper will then explain the road-map approach that the UK regulator (the UK Health and Safety Executive) has adopted as part of its Human Factors Delivery Guide. The paper then shows how the energy industry's approach of returning these procedures to a paper format fails to take advantage of available digital technologies to make step changes in reducing human error.
This paper shows that incidents continue to occur in all industries due to human error in procedures, and how the drive from the regulator to perform Critical Task Analysis can actually lead to procedures becoming less usable if these reviews are not performed correctly. The paper will then show how taking a digital approach to meeting these new regulatory requirements provides the opportunity to digitize existing operating and maintenance procedures, enabling a structured, efficient and auditable approach to these assessments. The paper will also show how the adoption of available digital technologies provides new performance-influencing techniques that are not available in paper-based systems.
The paper will also show how emerging technologies such as Augmented Reality can further enable the transition to these new technologies, and how big data can provide continuous improvement of procedures, ensure appropriate competencies are in place for field workers performing tasks and introduce significant efficiencies that lower operating costs.
Human error continues to contribute significantly to incidents in the energy and other industries. To address this, regulators such as the UK Health and Safety Executive are placing new requirements on operating companies to ensure the risks associated with errors in procedures are managed more effectively. The opportunity to make a step change in reducing human error, whilst also providing an efficient workflow, will lead to safer working environments, reduce potential impacts on the environment and provide efficiencies for operation and maintenance teams, leading to savings in Operational Expenditure.
This paper advocates utilising the diagnostic data available from digital field devices to help reduce operating costs for end users.
In recent years, companies across multiple industrial sectors have invested in improving their understanding of both the historical and live data they produce. The source of the data is specific to each process, but the objective for all remains the same: to use statistical techniques to develop a toolset that can predict performance based on live and historical data.
For the oil and gas industry, the continued adoption of digital device transmitters has increased the volume of data available from instruments such as flow meters, temperature probes and pressure sensors. Typically, this additional data provides information on the integrity or quality of the associated device. However, with the appropriate level of facility and instrument knowledge it is also possible to infer information with respect to the process stream.
Furthermore, this data, if correctly interpreted, can be used to predict maintenance and calibration requirements, resulting in reduced staff effort and shutdowns. The need for physical intervention due to device failure is also reduced, which in turn minimises the potential for accidental hydrocarbon release when a device is removed for repair or replacement.
NEL are currently undertaking research projects with the primary objective of developing definitive correlations between process effects, meter condition and diagnostic data response. The paper provides details of this research, with particular reference to the data science and mathematical techniques currently being trialled for the analysis stage. The techniques, when fully developed, will be specific to each metering technology and therefore offer a level of insight into facility and meter performance that is not currently available in industry. The toolsets developed will in turn provide end users with the knowledge and confidence to make cost-saving decisions with respect to planned maintenance, as well as improving facility efficiency through a more comprehensive understanding of their own data sets.
Presently, drilling riser joints are inspected every five years. This is usually accomplished by rotating 20% of the joints onshore each year to be disassembled and inspected. This requires extensive boat trips from the mobile offshore drilling unit (MODU) to shore and trucking of the risers to the inspection facility. Typically, 20 riser joints from each riser system are transported by boat, and one riser joint per truck, to an inspection facility each year, making the logistics of drilling riser inspection complex and costly.
A laser-based measurement system for inspection, together with monitoring of riser systems, has been implemented, along with a new, ABS-approved standard process for collecting critical riser data. The aim is to mitigate the costs and time associated with essential MODU drilling riser inspections by empowering operators to reliably determine the condition of drilling riser joints, consistently predict when vital components will require service and accurately assess remaining component life.
The approach utilizes a life-cycle, condition-based monitoring, maintenance and inspection system that can be deployed on a MODU, enabling resources to be deployed only when necessary instead of on a calendar interval. The solution consists of: performing a baseline inspection of the riser joints to assess their present state; collecting environmental and operating data while the rig is on site drilling; and feeding that data into a digital twin. The tuned digital twin can then be used to predict future damage.
The approach removes uncertainties surrounding damage to riser joints and allows the owner to determine whether a riser should be redeployed or replaced. This is the only ABS-approved process for condition-based monitoring of drilling riser systems. The system is compatible with all present owners’ maintenance programs and ensures that maintenance requirements are supported by robust engineering.
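The damage-prediction step of such a digital twin commonly rests on cumulative fatigue damage. The sketch below is a generic illustration using Miner's rule (D = Σ nᵢ/Nᵢ) with an assumed power-law S-N curve; the curve constants and cycle counts are hypothetical and not taken from the paper or from any ABS-approved procedure.

```python
# Hedged sketch: fatigue damage accumulation with Miner's rule.
# Assumed S-N curve N = a * S**(-m); constants are illustrative only.

SN_A = 1e12   # assumed S-N intercept
SN_M = 3.0    # assumed S-N slope

def cycles_to_failure(stress_range_mpa: float) -> float:
    """Cycles to failure at a given stress range from the assumed curve."""
    return SN_A * stress_range_mpa ** -SN_M

def miners_damage(cycle_counts: dict[float, int]) -> float:
    """Total damage from {stress range (MPa): counted cycles} bins."""
    return sum(n / cycles_to_failure(s) for s, n in cycle_counts.items())

# Hypothetical counted cycles for one deployment (e.g. from rainflow
# counting of the measured environmental and operating data):
bins = {20.0: 500_000, 50.0: 40_000, 100.0: 1_000}
damage = miners_damage(bins)   # failure is predicted when damage reaches 1.0
```

Tuning the twin against baseline inspection results then reduces to adjusting such model parameters until predicted and observed degradation agree.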
Objective/Scope: How will verification schemes of the future give clarity within risk management and process safety while providing increased value and cost savings to the operator? Verification was first introduced in the UK in 1998 and is now required throughout Europe following the introduction of the 2013 EU Directive on Offshore Safety. It is intended to provide operators, the regulator and other stakeholders reassurance that Safety and Environmental Critical Elements (SECEs) are operating as intended and that risks related to major accident hazards are therefore managed. Much has changed in the industry over the last 30 years: our collective understanding of Major Accident Hazards (MAH) and the contribution made by SECEs has improved, safe systems of work have become more advanced, computerised maintenance management systems have developed and offshore communication and technology are unrecognisable compared to the 1990s. This paper will explore whether the role and scope of the verifier has moved with the times and offer suggestions as to how maximum benefit and value can be achieved through verification.

Methods/Procedures/Process: We have undertaken a critical review of the verifier role and whether it still meets the original intent of the legislation, addressing the following specific questions: - Has our technical scope of work transformed since Certification of Fitness days?
Free-text and hand-written reports are fast losing ground to digitization; however, many hours of effort are still lost across the industry to the manual creation and analysis of these data types. Work orders in particular contain valuable information, from failure rates to asset health, but at the same time present operators with such analytical difficulties and lack of structure that many are missing out on the value completely. This research challenges the current mainstream practice of manual work order analysis by presenting a methodology fit for today’s context of efficiency and digitization.
A prototype text mining software for work order analysis was developed and tested in a user-oriented approach in cooperation with industrial partners. The final prototype combines classical machine learning methods, such as hierarchical clustering, with the operator’s expert knowledge obtained via an active learning approach. A novel distance metric in this context was adapted from information-theoretical research to improve clustering performance.
Using the prototype tool in a case study with real work order data, analytical effort for certain datasets was reduced by 90% - from two working weeks to a day. In addition, the active learning framework resulted in an approach that end users described as "practical" and "intuitive" during testing. An in-depth review was also conducted regarding the uncertainty of the results – a key factor for implementation in a decision-making context.
The outcomes of this work showcase the potential of machine learning to drive the digitization of not only new installations, but also older assets, where as a result the large amount of unstructured historical data becomes an advantage rather than a hindrance. User testing results encourage a wider uptake of machine learning solutions in the industry, and particularly a shift towards more accessible in-house analytical capabilities.
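To make the clustering idea concrete, the sketch below groups free-text work orders using normalized compression distance (NCD), a well-known information-theoretic metric, with greedy single-linkage agglomeration. This is only an illustration: the paper's actual metric, clustering algorithm and active-learning loop are not specified here, and the work orders and threshold are invented.

```python
# Minimal stdlib-only sketch: clustering work-order text with the
# normalized compression distance (an information-theoretic metric).
import zlib

def ncd(a: str, b: str) -> float:
    """Normalized compression distance between two strings."""
    ca = len(zlib.compress(a.encode()))
    cb = len(zlib.compress(b.encode()))
    cab = len(zlib.compress((a + b).encode()))
    return (cab - min(ca, cb)) / max(ca, cb)

def cluster(texts: list[str], threshold: float) -> list[set[int]]:
    """Greedy single-linkage: merge clusters whose closest members
    lie within the NCD threshold."""
    clusters = [{i} for i in range(len(texts))]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(ncd(texts[a], texts[b])
                        for a in clusters[i] for b in clusters[j])
                if d < threshold:
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

# Invented example work orders: the two pump-seal orders should group.
orders = [
    "pump P-101 seal leak, replace mechanical seal",
    "replace mechanical seal on pump P-102, seal leaking",
    "calibrate pressure transmitter PT-2001",
]
groups = cluster(orders, threshold=0.5)
```

In the research described above, an operator's expert labels would then refine such automatically formed groups via active learning, rather than the threshold being fixed by hand.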
Digitalization is the transformation of business models and activities through the strategic use of digital technologies. Despite technological advancements in machine learning (ML), artificial intelligence (AI), and virtual reality (VR), there remains a low maturity of digitalization across the oil and gas industry, especially in offshore operations. There are many roadblocks on the way to digitalization, from data silos to legacy systems. Operational inefficiency is one of the most painful byproducts of these problems.
To complete a single maintenance task, for example, on-site workers may need to access several separate systems to get the required data. They rely on printing out the information they need in order to complete the maintenance activities, and after taking notes on pieces of paper, they have to return to their desktop computer to log the performed tasks.
Not having the data readily accessible contributes to overall inefficiency, and offshore workers often run back and forth while performing maintenance tasks, increasing the hours they spend in challenging conditions.
This paper will outline an application design philosophy for oil and gas companies that combines academic and practical insights, an emphasis on continually testing products in development, and an overall goal of creating value.
This paper will describe how a Nordic software company is using the design philosophy to help an oil and gas operator in Northern Europe optimize on-site operations (including increasing efficiency and safety) on its offshore installations on the Norwegian Continental Shelf.
Specifically, the paper will show how the software company ingested and contextualized operational data from the operator's assets and made historical data available to field workers via an application for computers and smart devices. This included access to sensor data and historical equipment performance data; all documentation related to maintenance, including procedures, drawings, piping and instrumentation diagrams (P&IDs), and maintenance logs; and interactive 3D models of installations and equipment.
After only three months, the crew at one of the operator's oil installations saw significant increases in the number of monthly maintenance jobs (up to 10% for certain tasks) and reduction of the time spent on certain routine inspections (in some cases up to 50%).
This paper will discuss when it is advantageous (in the context of an offshore oil and gas environment) to process data at the network edge (in close proximity to equipment assets) or to stream data to a cloud-based Internet of Things (IoT) platform for analysis. It will offer an objective assessment of both approaches and provide recommendations for securing data in both cases, as part of an overarching cybersecurity strategy.
IoT has opened the door to significant efficiency gains in the oil and gas industry. This is particularly the case in the offshore sector, where there is a pressing need to reduce costs and maximize equipment availability. In some cases, it is advantageous to process data in close proximity to equipment assets (i.e., at the edge). In others, it makes more sense to securely stream data to a cloud-based IoT platform and harness artificial intelligence (AI) to aid in decision making. In certain cases, both architectures can be utilized in complement to one another.
Many factors need to be taken into consideration when evaluating an edge or cloud-based approach, including data volume, transmission and processing speed, control of data and cost. Edge computing can be used to streamline and enhance the efficiency of data analytics. In certain applications, this can mean the difference between analyzing a performance failure after the fact and pre-empting it in the first place, which in the offshore environment could potentially translate into millions of dollars per day.
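A common edge pattern implied above is to pre-filter sensor streams locally and forward only anomalous readings to the cloud, cutting both bandwidth and cloud processing cost. The sketch below is a hypothetical illustration using a rolling z-score test; the class name, window size and threshold are assumptions, not part of any specific IoT platform.

```python
# Hypothetical edge-side pre-filter: stream only statistically
# anomalous sensor readings to the cloud, discard routine ones.
from collections import deque
from statistics import mean, stdev

class EdgeFilter:
    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # rolling window of recent values
        self.z_threshold = z_threshold

    def process(self, value: float) -> bool:
        """Return True if the reading should be streamed to the cloud."""
        anomalous = False
        if len(self.readings) >= 5:  # need enough history for stable stats
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

f = EdgeFilter()
stream = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 9.8, 10.0, 55.0, 10.1]
to_cloud = [v for v in stream if f.process(v)]  # only the 55.0 spike
```

The trade-off follows the factors listed above: the edge agent needs little compute and no connectivity to act, while fleet-wide optimization of the kind discussed next still requires the aggregated history that only the cloud holds.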
On the other hand, there are situations where it is beneficial to store large volumes of data on a cloud-based platform. For example, if the goal is to leverage advanced IoT-based industrial analytics to optimize an entire fleet of a certain type of equipment, the cloud may be the best solution. Cybersecurity is another consideration. Attacks on critical infrastructure have risen significantly over the course of the past year. As more Intelligent Electronic Devices (IEDs) are deployed in the oil and gas industry to optimize efficiency, Industrial Control Systems (ICSs) are increasingly vulnerable. As a result, the threat extends beyond proprietary data to mission-critical operational technology (OT) assets and equipment.
Cybersecurity standards and layered, defense-in-depth models have grown in response to the frequency and sophistication of cyber attacks. Additionally, recent advances in cyber defense technology incorporate small, kilobit-sized embedded software agents to monitor networks for anomalies that could signal an intrusion. This paper will explore new cybersecurity threats to oil and gas assets, as well as strategies operators can employ to defend against them, whether using an edge or cloud-based platform, or both.