Digital twins are 3D digital replicas of physical things. They have existed since computer-aided design became mainstream in the 1990s, but they remained standalone replicas for the next 20 years, until augmented reality (AR) became prominent in the gaming and entertainment industries. As TechNewsWorld notes, AR, often referred to as mixed reality, is an immersive and "interactive experience of a real-world environment where computer-generated perceptual information enhances real-world objects." The technology expands our physical world by adding a digital layer on top of it.
DNV GL has published the oil and gas industry's first recommended practice (RP) on how to build and quality-assure digital twins. Developed in collaboration with TechnipFMC, DNVGL-RP-A204: Qualification and Assurance of Digital Twins sets a benchmark for the sector's varying approaches to building and operating the technology. "However, there has been no requirement for their digital counterparts to go through the same procedures," said Liv Hovem, CEO, DNV GL–Oil & Gas. "It is time to prove that twins can be trusted, and that the investments made in them give the right return," she added. The RP is the first systematic plan to show how a digital twin is delivering as expected.
The focus on digitalization, and on how to reap its benefits, is increasing in the oil and gas industry. To make this happen subsea by applying machine learning and neural networks to mechanical products, we need to understand how to digitize them.
In this paper we will show how we can generate a true digital twin that could enable a step change in how we monitor and understand mechanical products placed out of reach of normal preventive maintenance.
We look at the industry's current information philosophy for mechanical products, from idea through development and testing to installation and operation. Today, products are designed to end up as a nominal 3D model; components are tested, and verification that they are fit for purpose is made through a test scope simulating certain scenarios defined by the industry. The products are then installed, and their condition is assumed to be fine until evidence of the opposite is apparent, by which time the window to perform mitigative intervention is gone.
Creating true digital twins of mechanical products will require more data. In recent tests we have focused on generating data points to understand the response and behaviour of the products in multiple scenarios, and we have used this data to describe the behaviour accurately and numerically; from this we can generate a digital representation of the state the products are in.
The results are based on recent work in developing a digital twin for mechanical products. In this work we have proven, through documented testing and quantitative analysis, that we can generate a validated numerical model for mechanical products. This forms the basis for understanding the state of the products and predicting when intervention or maintenance is needed. Using this model and method, condition monitoring of mechanical products can be enabled with relatively few datapoints extracted at time intervals, and through its numerical representation an actual condition history of the product itself can be established.
Further, this model would enable a deeper understanding of the actual operational effects to which the product is exposed during its lifetime, leading to more precise and cost-efficient industry requirements and system knowledge.
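As a rough illustration of the approach described above, the sketch below fits a simple numerical behaviour model to test data and flags later field measurements that deviate from it. All data, names, and the choice of a polynomial fit are invented for illustration; the paper's actual model is not disclosed here.

```python
import numpy as np

# Hypothetical test campaign: applied load (kN) vs. measured deflection (mm)
loads = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
deflections = np.array([0.0, 1.1, 2.3, 3.6, 5.0])

# Fit a low-order polynomial as the numerical behaviour model
model = np.poly1d(np.polyfit(loads, deflections, deg=2))

def condition_flag(load, measured, tol=0.5):
    """Flag a field measurement whose deviation from the validated
    model exceeds tol (mm) -- a possible change in product state."""
    return abs(measured - model(load)) > tol

print(condition_flag(120.0, 2.8))   # consistent with the model
print(condition_flag(120.0, 4.2))   # deviates: candidate for intervention
```

A real twin would of course use a physics-informed model validated against a full test scope, but the principle is the same: sparse in-service datapoints are compared against the numerical representation to build a condition history.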
The complexity and volume of data generated during a major CapEx Project is huge. There can be hundreds of thousands, potentially millions, of documents and records with repeated references to the same data in different formats. The success of a project relies on effective sharing of this data to ensure all team members get the data they need to do their tasks. This only gets more complex as project teams manage changes and details evolve through the life of the project. Maintaining control of the schedule and budget for a Mega Project depends on understanding the status of all this information.
This paper discusses how a data-centric approach to project data paired with robust engineering information management and contextualized in a Project Command Center, acts as a digital project hub to ensure Project teams can stay on top of their information and ensure project success by keeping schedule and costs on track.
The Digital Project Hub is the "Google Maps" of a Mega Project, using click-through, contextual 3D models and hot-spotted 2D drawings to provide access to relevant engineering. Digitalization of engineering systems enables this approach, leveraging common identifiers across systems along with the attributes, status, and relationships associated with them. The Project Command Center acts as a data aggregator, with APIs/gateways that allow data to be published from multiple systems into a single portal. The Hub is a true data aggregator, leaving all records in their originating systems and acting as a read-only publishing site.
The system can then be used for reporting, searching, filtering and running consistency checks and analysis between systems of record to catch potential misalignment in data, and inconsistencies in naming conventions, formats and other key data points.
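A consistency check of the kind described above can be sketched as follows. The record structures and system names here are hypothetical, assuming each system of record exposes its register as a mapping keyed by a common identifier:

```python
# Minimal sketch of a cross-system consistency check between two
# systems of record (hypothetical data; a real Project Command Center
# would pull these via each system's API/gateway).

def check_consistency(engineering: dict, procurement: dict) -> list:
    """Return (tag, field, eng_value, proc_value) tuples for mismatches."""
    issues = []
    for tag, eng_rec in engineering.items():
        proc_rec = procurement.get(tag)
        if proc_rec is None:
            issues.append((tag, "missing", eng_rec, None))
            continue
        # Compare only fields both systems carry for this tag
        for field in eng_rec.keys() & proc_rec.keys():
            if eng_rec[field] != proc_rec[field]:
                issues.append((tag, field, eng_rec[field], proc_rec[field]))
    return issues

eng = {"P-101": {"service": "crude feed", "rating": "300#"}}
proc = {"P-101": {"service": "crude feed", "rating": "150#"}}
print(check_consistency(eng, proc))
```

Because the hub is read-only, a check like this reports misalignments for correction in the originating systems rather than editing records itself.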
Traditional file and document management systems consume a huge amount of project resource. Typically, project team members can spend 30-50% of their day looking for data. Having the data readily available in the Project Command Center virtually eliminates this delay. Projects are also impacted by the accumulated inefficiencies created as teams and individuals wait for the data they need to complete their tasks, and by the rework caused by errors propagated across records in different systems.
The approach described here is unique in the industry as it offers a true view of a project, simultaneously accessing data from multiple teams and crossing contractual and scope boundaries. The Project Command Center becomes the central location to visualize the entire project, enabling collaboration between teams, from engineers to operators. Coupling technical information with status and relationship information enables multi-dimensional filtering and pivoting of project data. The result is a central location where data is up to date and correct; searchable; visible in context; trustworthy; and easily accessible.
Note: The above abstract was updated on August 25, 2020 to reflect a solution name change from ‘Digital Hub’ to ‘Project Command Center’.
Asset Performance Management (APM) aims to improve the reliability and availability of physical assets while minimizing risks and operating costs, optimizing productivity to increase return on asset investment. Traditional APM combines IT data and OT data with big data analytics to define a course of action that will improve business outcomes.
This paper introduces a new concept for APM, Visual Asset Performance Management (Visual APM), that adds an asset information layer to APM. A Visual APM solution uses navigable 2D and 3D models to deliver "living" digital twins of equipment, machinery, and processes. It creates a centralized information repository to provide a seamless, contextualized view of data.
This approach integrates interactive visualizations of equipment and plants with real-time data and analytics that teams can leverage to inspect asset health and monitor business performance in real time.
Visual APM represents a paradigm shift for conventional Asset Performance Management, presenting a new way for engineers, operators, and maintenance teams to interact with asset information throughout its entire life cycle to reduce CAPEX and OPEX. The ability to quickly access asset and plant information and visualize asset performance can help increase the overall health of a facility significantly.
A top 10 Oil & Gas company that has adopted this approach achieved the following results:
Cut maintenance costs such as corrosion under insulation by 10–20%
Shortened the time spent in the planning inspections process by up to 60% while improving the quality of the planning
Achieved overall improvement in the effectiveness of campaigns and minimized rework
Reduced health, safety, and environmental hazards
Digitized the input of inspection results through handheld devices
Increased the accuracy of data captures in the field
Enhanced reporting capabilities
This paper explains how advanced digitalization concepts were employed in the development and construction of the Taweelah Gas Compression Plant in the United Arab Emirates (UAE). The plant began operation in late 2018 and is one of the largest and most modern compression facilities in the world. It is owned and operated by the Abu Dhabi National Oil Company (ADNOC) and comprises three compression trains, each with a processing capacity of 225 million standard cubic feet per day (mmscfd). Two operate at any one time, with the third on standby, giving the plant 450 mmscfd of total production throughput.
Using the Taweelah Gas Compression Plant as an example, the paper describes how onshore oil and gas compression stations can be built efficiently and economically by leveraging advanced digital technologies, such as Digital Twins. Other concepts/strategies that the paper will discuss which can help accelerate project schedules and reduce costs include:
Sophisticated hydraulic modeling software;
Large power blocks to reduce the number of compression trains;
Sole-source provisioning of compression drive trains;
‘Plug and play’ equipment packages that required minimal onsite commissioning;
Remote diagnostics and analytics for condition monitoring and condition-based maintenance to ensure maximum uptime and availability;
Using the combination of the above concepts, along with extensive collaboration/co-creation between Siemens Energy and ADNOC, the Taweelah plant was able to achieve first gas just 16 months after front-end engineering design (FEED).
This paper focuses on solutions and strategies for conserving weight and space, reducing emissions, and leveraging data to optimize the performance of rotating equipment on floating production, storage, and offloading (FPSO) vessels. It discusses design considerations for gas turbines in offshore applications (e.g., dry low-emissions technology, use of lightweight components). The paper also outlines a holistic digital lifecycle approach to FPSO topsides, which can help reduce capital and operating expenses, shorten project development cycles, and decrease offshore manpower requirements.
For illustrative purposes, the paper discusses specific power and compression solutions that were implemented on various offshore projects in 2017-2018, ranging from Offshore Brazil to the Bering Sea. It outlines how the equipment configurations helped operators meet horsepower requirements and emissions targets, as well as CAPEX and OPEX objectives. Additionally, the paper discusses how digital transformation can be leveraged to optimize FPSO lifecycle performance, delivering benefits such as a 4-12 week reduction in project cycle times, a ~$7 million reduction in CAPEX, and a $60-$100 million reduction in OPEX over a 10-year period.
Compression units installed on a sales gas network face a wide range of operating modes owing to varying supply and demand scenarios and the associated network dynamics. It is very challenging to ascertain real performance in such applications due to changing specific energy consumption. This paper presents the development of a novel and robust monitoring system that enables real-time energy performance monitoring of dynamic compression and reveals realistic opportunities for energy savings.
The methodology comprised a review of design and OEM data for the compression units, followed by a review of the operating envelope. We subsequently developed a thermodynamic model of the compression units encompassing all operating modes, then embedded the actual performance curves from the OEM in the thermodynamic digital twin and validated the model with actual operating data. Site visits and discussions with technical teams were carried out as part of the comprehensive approach. A separate mathematical model was also developed to enable operators to monitor compressor performance off-line when the thermodynamic digital twin is unavailable.
Detailed performance analysis of centrifugal compressors is essential to ascertain their condition and functioning. A decrease in performance can be an indication of internal wear or fouling, which, if allowed to continue, may result in reduced throughput, excessive energy consumption, or even unscheduled outages. Thus, performance is not just an indication of energy or operating cost but also reflects other vital aspects such as reliability. The integrated thermodynamic digital twin developed for large sales gas compression units, with a total throughput capacity of more than 500 MMSCFD, has enabled and demonstrated effective energy performance monitoring even under changing operating scenarios. It facilitated real-time comparison of actual performance (specific energy consumption) with model-based expected performance. It also aided real-time trending of polytropic efficiency as well as real-time display of potential energy savings opportunities. The digital twin has proven to be a reliable and low-cost tool to predict compressor performance for various operating modes in real time. The data from the compression digital twin can be tied into process simulation models for process optimization. The model can complement supervisory, diagnostic, and control capabilities, and even help predict failures ahead of time.
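The polytropic-efficiency trending mentioned above can be illustrated with a textbook calculation from suction and discharge measurements. This is a simplified ideal-gas sketch, not the paper's actual model, which embeds OEM curves and a full thermodynamic treatment:

```python
import math

def polytropic_efficiency(p1, p2, t1, t2, k=1.30):
    """Polytropic efficiency from suction/discharge pressure (any
    consistent units) and absolute temperature (K), assuming ideal-gas
    behaviour with isentropic exponent k. Illustrative only: a
    production digital twin would use a real equation of state and
    the OEM performance curves."""
    # Measured polytropic exponent: T2/T1 = (p2/p1)^((n-1)/n)
    n_ratio = math.log(t2 / t1) / math.log(p2 / p1)
    # Efficiency = isentropic temperature exponent / measured exponent
    return ((k - 1.0) / k) / n_ratio

# Hypothetical operating point: 40 -> 80 bar, 300 K -> 360 K
eta = polytropic_efficiency(40.0, 80.0, 300.0, 360.0)
print(f"polytropic efficiency ~ {eta:.3f}")
```

Trending this quantity against the OEM-expected value at the same operating point is what exposes fouling or wear as a sustained efficiency deficit.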
This paper describes a data-centric approach for increasing asset reliability and performance through the creation of a single-source-of-truth environment (i.e., an Evergreen Digital Twin).
For many oil and gas facilities today, asset data is spread across a multitude of applications and databases, and most are not connected. This paper will show how data can be aggregated and contextualized, and combined with powerful simulation, analytics, and asset performance management (APM) tools to enhance performance, reliability, and maintenance.
For illustrative purposes, an oilfield use case will be presented involving the implementation of an open and flexible operational intelligence platform (OIP) and an advanced simulation platform. In the proposed scenario, the OIP makes it possible to integrate and contextualize data from the simulator, making it accessible and useful for field operations personnel. The paper outlines the benefits and capabilities the combination of these tools can enable, including visualization and optimization of well production and the ability to simulate "what if" scenarios. It also discusses how artificial intelligence (AI) and/or APM tools could be integrated within the digital asset portal to provide early prediction of equipment failures and conduct failure mode analyses. The solution has been conceptually and technically proven; however, it has not yet been implemented by the field operator.
Overall, the use case highlights the value that can be realized by integrating data from disparate sources within a one-stop environment to make better operational decisions regarding complex oil and gas assets and facilities.
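A "what if" comparison of the kind described in this use case can be sketched as below. The rate model is a stand-in invented for illustration; in the actual solution the OIP would call the simulation platform rather than a local function:

```python
# Hedged sketch of a "what if" scenario check in a digital asset
# portal. predicted_rate is a hypothetical stand-in for the simulator.

def predicted_rate(choke_pct: float) -> float:
    """Toy well-rate model (bbl/d) vs. choke opening; a real OIP
    would invoke the simulation platform's API instead."""
    return 5000.0 * (choke_pct / 100.0) ** 0.8

def what_if(settings):
    """Compare candidate choke settings side by side."""
    return {pct: round(predicted_rate(pct), 1) for pct in settings}

print(what_if([40, 60, 80]))
```

The value of the portal is that such scenario results appear in the same contextualized view as live field data, so operations personnel can weigh them without switching tools.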
Falsetta, Anthony (2H Offshore Engineering) | Whiteley, Elaine (2H Offshore Engineering) | Dickinson, Craig (2H Offshore Engineering) | Zhou, George (2H Offshore Engineering) | Sundararaman, Shankar (2H Offshore Engineering)
The objective of this paper is to present a method for monitoring the natural frequency of a fixed offshore platform and to utilize the raw data to proactively monitor and predict structural degradation of the platform using machine learning. This can enable operators to predict critical member failure and guide timing of operational decisions and interventions, thus helping reduce cost of ownership, particularly for ageing assets.
A Natural Frequency Response Monitoring (NFRM) system utilizes accelerometers and Fast Fourier Transform (FFT) to measure the sway and torsional frequencies of a fixed offshore platform. A significant structural event is detected by a shift in the natural response of the platform.
A machine learning model (or digital twin) can be trained on this accelerometer data to replicate the platform response and determine the fatigue damage accumulation in platform members. The model, which can be re-trained to maintain accuracy, acts as a near real-time fatigue tracker with the NFRM system acting as a verification tool.
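The core NFRM measurement, extracting a structure's dominant natural frequency from accelerometer records and watching for a shift, can be sketched as follows. The sampling rate, record length, and mode frequencies are invented for illustration:

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Dominant frequency (Hz) of an acceleration record sampled at
    fs Hz, taken from the peak of the real-FFT magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(accel - np.mean(accel)))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Hypothetical 60 s records at 50 Hz: a sway mode near 0.32 Hz that
# drops after a stiffness loss (e.g., a failed member)
fs = 50.0
t = np.arange(0.0, 60.0, 1.0 / fs)
healthy = np.sin(2 * np.pi * 0.32 * t)
damaged = np.sin(2 * np.pi * 0.29 * t)

shift = dominant_frequency(healthy, fs) - dominant_frequency(damaged, fs)
# A sustained downward shift flags a significant structural event
```

In practice the records are noisy multi-mode responses, so longer windows, averaging, and mode tracking are needed; the machine learning model is then trained on the same accelerometer data to map measured response to member-level fatigue damage.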
A proof of concept case study is presented and demonstrates how Finite Element Analysis (FEA) simulations and measured data can be used to develop a machine learning model.
The combination of an NFRM system with machine learning has the potential to assist operators in the prediction and identification of structural changes prior to them becoming critical. Knowledge of the actual fatigue damage in key platform members on a near real-time basis allows for proactive planning of inspections or repairs, as well as confidence in the structural performance of the platform, which can justify life extension.