There is a significant trend away from traditional 2D design towards the sole development of 3D models. This paper details how algorithms can be developed to automate large aspects of a design review. These techniques significantly increase efficiency, ensure consistency and optimise the accuracy of the design, leading to reduced project costs.
By utilising the 3D model's enriched metadata and developing independent algorithms, it is possible to create a cyber-physical model that enables automation of the design review. For example, the geometrical data in the 3D model can be used to check a hazard with respect to a detector, confirming that the detector is located close to the hazard. There are multiple checks similar to this example; cataloguing and scripting these checks can be managed within PLM software.
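A geometric check of this kind can be sketched in a few lines. The record layout, tag names and the 10 m coverage radius below are illustrative assumptions, not the paper's actual data model:

```python
import math

# Hypothetical metadata records as they might be exported from a PLM system;
# the field names and the coverage radius are illustrative assumptions.
hazards = [{"tag": "P-101", "type": "gas_leak_source", "xyz": (12.0, 4.0, 3.0)}]
detectors = [{"tag": "GD-01", "type": "gas_detector", "xyz": (14.0, 5.0, 3.0)}]

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def check_detector_coverage(hazards, detectors, max_range=10.0):
    """Flag hazards with no detector within max_range; return review comments."""
    comments = []
    for h in hazards:
        nearest = min(distance(h["xyz"], d["xyz"]) for d in detectors)
        if nearest > max_range:
            comments.append(f"{h['tag']}: no detector within {max_range} m "
                            f"(nearest at {nearest:.1f} m)")
    return comments

print(check_detector_coverage(hazards, detectors))
```

An empty list means the check passes; any returned strings would become comments on the design, as described above.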
Using algorithmic automation techniques reduces the overall design hours of a project and checks the consistency of the design. Getting it right first time reduces the number of changes later in the project lifecycle, avoiding expensive rework costs. During the first phase of this initiative, we have found that automation reduces design hours by 10% and increases the accuracy and consistency of the design review.
This first phase of automation uses the metadata in the 3D model, where the output from each check leads to a comment on the design. Scaling the pilot to encompass other data sources will further enrich the cyber-physical model. Ultimately, by creating a decisions database and using Artificial Intelligence we will be able to close the loop, leading to a design that is fully evaluated before it leaves the designer. It is also possible to automate in other phases of the project lifecycle, where image recognition will compare the real asset to the model.
This level of automation is unique; other low-level forms of automation exist, but this technology has, to our knowledge, not been attempted in the Oil and Gas sector. The development and scaling of this technology is novel and will have a significant impact on the way future projects are executed.
This paper explains how half a billion hours of service data for pressure safety valves (PSVs) can be analysed and presented to allow optimisation of maintenance intervals and to manage the inspection of PSVs. The large amount of data creates challenges, but also opportunities for an enhanced methodology expandable to other equipment types.
A methodology has been developed that uniquely combines qualitative and quantitative analysis. The latter ensures that the risk from PSV failure is determined and shown to be below set criteria. The drawback with this criterion alone is that it is based on average data from a large set of PSVs, and the average may not be applicable everywhere, especially if failures are not random but have an underlying, potentially unknown, cause. Therefore, each PSV is also qualitatively assessed. To bridge the gap between individual PSVs and "large sets", groups of PSVs are also identified in the methodology, and data for them is collated and used within the whole group; the careful analysis required to define the groups, which must have similar properties or performance, is described. This multi-layered assessment extracts the most information possible from the data. A key part of the process is presenting this data for review and analysis, which is achieved through a digital, cloud-based interactive dashboard.
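The group-level quantitative step can be sketched as pooling service hours and failures within each group and comparing the resulting rate with a criterion. The records, group labels and the failure-rate criterion below are illustrative assumptions, not the published methodology:

```python
# Illustrative sketch of a group-level quantitative check; the field names,
# group labels and the tolerable failure-rate criterion are assumptions.
psvs = [
    {"tag": "PSV-001", "group": "spring-loaded/clean", "hours": 1.2e5, "failures": 0},
    {"tag": "PSV-002", "group": "spring-loaded/clean", "hours": 0.9e5, "failures": 1},
    {"tag": "PSV-003", "group": "pilot/fouling", "hours": 1.1e5, "failures": 3},
]

def group_failure_rates(psvs):
    """Pool service hours and failures per group; return failures per hour."""
    pools = {}
    for v in psvs:
        h, f = pools.get(v["group"], (0.0, 0))
        pools[v["group"]] = (h + v["hours"], f + v["failures"])
    return {g: f / h for g, (h, f) in pools.items()}

rates = group_failure_rates(psvs)
criterion = 1e-5  # assumed tolerable failures per service hour
flagged = [g for g, r in rates.items() if r > criterion]
print(flagged)
```

Groups exceeding the criterion would then be candidates for the qualitative, per-valve assessment described above.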
The analysis has shown that maintenance intervals can be reduced significantly, while risk is simultaneously reduced by concentrating effort on the worst-performing PSVs. Not least, a dashboard presentation of the risk-based inspection (RBI) results, showing the calculated inspection intervals, changes to those intervals and failures, allows a clear picture of PSV performance to be developed. Maintenance planning also becomes easier, and the information required for deferral assessments is available in seconds rather than hours. The analysis shows where poorer performance can occur; this is often applicable across different assets. The way in which the approach can be expanded to other equipment types is also described.
The novel approach in the assessment is the multi-layered combination of qualitative and quantitative analysis and the presentation of a large amount of data through the cloud to be used by maintenance teams, technical authorities and operators. It also shows the benefits of collecting half a billion service hours of data and that this need not be an onerous task.
The Health and Safety Executive's analysis shows poor hazard identification and risk analysis is a causal factor in 12 out of 14 recent major hydrocarbon releases, demonstrating that major accidents could be prevented if workers had a better understanding of major accident hazards (MAHs). Therefore, it is proposed that improving awareness of MAHs across the workforce, both onshore and offshore, would lead to better MAH management and a reduction in major accidents.
Once the domain of process engineers, major accident hazard management has been largely overlooked by much of industry. It was acknowledged as a problem but ignored in the hope that specialists had it under control.
Step Change in Safety's Major Accident Hazard Understanding workgroup responded to this by identifying different job roles (onshore and offshore), evaluating the resources to develop MAH understanding already available and creating a suite of resources to fill the gaps.
These resources include an e-learning tool for onshore (office-based) personnel, bowtie lunch-and-learn sessions, gap analysis tools to identify the training requirements of offshore jobs, senior leaders' workshops and a MAH Awareness programme. The MAH Awareness programme consists of short films and presentations which can be customised to suit specific worksites and job roles. Each of the four packs explores different aspects of major accident management, including MAH identification and analysis; bowties and safety and environmental critical elements; barrier maintenance, assurance and verification; and the importance of taking responsibility for, or 'owning', your barrier.
Analysis of questionnaires completed before and after exposure to the programme demonstrates that knowledge of MAH management increased by approximately 30%. Additionally, the data demonstrates that elected safety representatives have a greater base knowledge of MAHs than the general offshore workforce, as do technical staff compared to non-technical and those employed by operators compared to contractor employees.
Whether this increased knowledge gained through taking part in the MAH Awareness programme is retained, or impacts the number of major accidents, has not yet been analysed, but data such as the number of major accidents, including hydrocarbon releases, will be examined over forthcoming years to evaluate the effectiveness of the resources developed.
Objective/Scope: How will Verification Schemes of the future give clarity within risk management and process safety while providing increased value and cost savings to the Operator? Verification was first introduced to the UK in 1998 and is now required throughout Europe following the introduction of the 2013 EU Directive on Offshore Safety. It is intended to provide operators, the regulator and other stakeholders with reassurance that the safety and environmental critical elements (SECEs) are operating as intended and therefore that risks related to major accident hazards are managed. Much has changed in the industry over the last 30 years: our collective understanding of Major Accident Hazards (MAH) and the contribution made by SECEs has improved, safe systems of work have become more advanced, computerised maintenance management systems have developed, and offshore communication and technology is unrecognisable compared to the 1990s. This paper will explore whether the role and scope of the verifier has moved with the times and offer suggestions as to how maximum benefit and value can be achieved from Verification.

Methods/Procedures/Process: We have undertaken a critical review of the Verifier role and whether it still meets the original intent of the legislation, addressing the following specific questions:

- Has our technical scope of work transformed since Certification of Fitness days?
Risk Assessments are used to assess the impact of alternative designs, changes during operations, and compliance of offshore installations against tolerability criteria. Typically, asset information is used to develop a mathematical model; this can be updated to reflect changes during the facility's lifecycle. This paper examines how the use of cloud-based technology to develop a Digital Twin improves efficiency. Allowing project stakeholders full access to the QRA model also enables greater understanding of hazards.
Digital technology pervades all areas of business and society and offers great advantages to safety engineering relative to traditional approaches. This paper demonstrates how cloud-based tools can turn the traditional static QRA process into a living QRA which can be updated throughout an installation's lifecycle by creating a digital twin. This type of living QRA allows project stakeholders to change key parameters and assess the effect of these changes on risk levels. In addition, the results can be interrogated down to fundamental levels using a Microsoft Power BI dashboard.
The outputs of QRAs are usually static reports providing an overview of the detailed work undertaken and a high-level summary of the results, which are compared with tolerability criteria or used to demonstrate ALARP. This paper demonstrates how customised internet browser tools utilising 2D and 3D graphics may be built on top of the QRA to extract more detail than previously possible and communicate risks in a flexible and interactive way. It also shows how consistent data management can form a basis for innovating beyond the traditional approach. This allows a wider range of stakeholders to determine risk drivers, isolate single accident scenarios and filter results to a greater depth than is possible through a paper report, and allows a greater understanding of their hazards.
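The kind of interrogation a dashboard exposes, such as ranking scenarios by their contribution to risk, can be sketched simply. The scenario records and fields below are illustrative assumptions, not output from any particular QRA tool:

```python
# Illustrative scenario-level QRA results; ids, fields and values are assumed.
scenarios = [
    {"id": "S1", "area": "process", "event": "jet fire", "freq": 3e-5, "fatalities": 2},
    {"id": "S2", "area": "process", "event": "explosion", "freq": 1e-6, "fatalities": 10},
    {"id": "S3", "area": "utilities", "event": "pool fire", "freq": 5e-6, "fatalities": 1},
]

def risk_drivers(scenarios, top=2):
    """Rank scenarios by their contribution to potential loss of life (PLL)."""
    ranked = sorted(scenarios, key=lambda s: s["freq"] * s["fatalities"], reverse=True)
    return [(s["id"], s["freq"] * s["fatalities"]) for s in ranked[:top]]

print(risk_drivers(scenarios))
```

In an interactive dashboard the same query would typically be combined with filters (by area, event type or scenario id) so that single accident scenarios can be isolated, as described above.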
Digitalisation is an increasingly ‘hot topic’ in the process industry. Making use of new technologies to provide greater insights can aid in better and more timelyhazard management whilst reducing costs to stakeholders. Examples of innovations which promote better assessment are provided.
A considerable number of fixed offshore platforms around the world have either already surpassed their design life or are approaching it. In the Middle East, more than 70% of platforms have been operating for more than 25 years; some assets have even been operating for more than 40 years. High oil prices and advances in technology planned to increase the productivity of reservoirs have led to significant investment, in terms of cost and resource, to manage these assets. There is still a lot of recoverable oil and gas in the reservoirs; hence, there is an increased need to extend the life of these facilities while managing the associated risk. This technical paper details a methodology for the assessment of remaining life and describes some of the degradation and life-extension issues for fixed offshore assets. As it is extremely difficult to cite all degradation issues, some of the critical ones, based on experience and knowledge, have been covered in this paper.
This paper describes an original approach to drastically reduce the analysis time needed for fatigue design of structures by using machine learning techniques. The approach is applied to the spectral fatigue analysis of various structural details on a converted FPSO hull, where design iterations are usually time consuming. For the structural detail example used in the present study, numerical results show that, by including the right inputs to the machine learning algorithm, the predicted fatigue life compares well with the spectral fatigue analysis output, with a score of up to 0.997. For the critical elements with high fatigue damage, the predicted fatigue life is found to be up to 2.5 times the actual value. Overall, an estimated 5.5 hours (out of 6 hours) are saved for one iteration of spectral fatigue analysis.
Ocean developments for oil and gas and renewables involve site-specific floater designs. For such assets, detailed structural analysis is required at the design stage to consider all the specificities of the structure and its environment, and to make sure the floater can operate safely throughout its design life. For fatigue design, the state-of-the-art approach for such floaters is spectral fatigue analysis.
The calculation of fatigue damage using a full spectral direct calculation approach is labor intensive, especially when design iterations are needed. The complete assessment procedure comprises hydrodynamic analysis to compute wave-induced loads; structural analysis for both global coarse-mesh and local fine-mesh finite element models; statistical analysis to calculate the short-term stress response based on the environmental conditions on site; and lastly fatigue strength evaluation using the applicable fatigue S-N curve. This analysis approach can be rather time consuming in cases where many structural details are assessed or where multiple iterations are needed to reach a satisfactory design.
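The idea of a surrogate model trained on completed spectral analyses can be illustrated with a toy example. Here ordinary least squares stands in for the paper's (unspecified) machine learning algorithm, and both the inputs and the synthetic "true" response are invented for illustration:

```python
import numpy as np

# Toy surrogate for spectral fatigue output: least squares stands in for the
# machine learning algorithm; inputs and the synthetic response are assumed.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))  # e.g. normalised geometric/load features
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]  # synthetic log10 fatigue life

# Fit on 150 "analysed" details, predict the remaining 50 without reanalysis.
A = np.c_[np.ones(len(X)), X]
coef, *_ = np.linalg.lstsq(A[:150], y[:150], rcond=None)
pred = A[150:] @ coef

# Coefficient of determination on the held-out details; the paper reports
# scores up to 0.997 for its algorithm on real spectral fatigue output.
ss_res = np.sum((y[150:] - pred) ** 2)
ss_tot = np.sum((y[150:] - np.mean(y[150:])) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

The time saving comes from replacing the full hydrodynamic/structural/statistical chain with a cheap prediction for each new design iteration, rerunning the full analysis only for verification of critical details.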
In recent years, there have been several cases of mooring line failures on various floating offshore units. In several of those cases, the failures were identified months or years after they initially occurred. Since most assets are designed for a single line failure only, the risk of a catastrophic failure of the whole mooring system is quite high if the failure of a single line cannot be detected reliably. To mitigate that risk, class societies such as DNV GL have introduced requirements for the use of line tension monitoring systems as part of the mooring class notations (POSMOOR). However, most of the line tension monitoring systems available on the market today have proven unable to remain functional after more than 2 years in operation, due to the harsh conditions and loads they are exposed to. For that reason, an alternative system for line failure detection is needed.
In this paper, a system is developed to reliably detect a single line failure based solely on data from GPS and motion sensors installed on the asset. The GPS and motion time series are used to train a neural network which can then reproduce any motion signal as a function of the others, capturing all the complex nonlinear correlations between the wave-frequency and drift motions of the asset along its 6 degrees of freedom. Any change in the mooring system properties, such as a line break, has an impact on those correlations; this change is captured by the neural network, enabling it to detect a line failure. The accuracy of the system is demonstrated using numerical simulations for an FPSO in various sea states, where a line break occurs at one time instant during the simulation.
A moored floater is a 6-DOF system whose loads are stochastic environmental loads: wind, waves and current. The behavior of the system in a given sea state is governed by its inertia, damping and stiffness properties. The failure of one or several mooring lines has an obvious effect on the stiffness, but also on the damping of the whole system, since drag on the lines is one of the damping contributions. The floater displacement in the same sea state will therefore be different for an intact or a damaged system. Those differences might not be easily detectable: depending on the mooring system configuration and the sea state, a line failure might not significantly change the basic statistics of the motions, such as the mean and standard deviation. In those cases, changes in the higher-order statistics, as well as in the correlations between the motions in different degrees of freedom, need to be detected to identify a line failure.
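The detection principle can be demonstrated on synthetic data: reconstruct one motion signal from another using a model trained on the intact condition, then monitor the reconstruction residual. In this toy sketch a linear regressor stands in for the neural network, two invented signals stand in for the 6 DOFs, and the break time, correlation shift and alarm threshold are all illustrative assumptions:

```python
import numpy as np

# Toy demonstration of residual-based line failure detection; all signals,
# the break time (t = 1500) and the 4x alarm threshold are assumptions.
rng = np.random.default_rng(1)
t = np.arange(2000)
surge = np.sin(0.05 * t) + 0.1 * rng.standard_normal(len(t))
# Intact mooring: sway tracks 0.5*surge; after the simulated line break the
# system stiffness changes and the correlation shifts to 0.9*surge.
sway = np.where(t < 1500, 0.5 * surge, 0.9 * surge) + 0.05 * rng.standard_normal(len(t))

# Train on an intact segment: reconstruct sway from surge.
A = np.c_[np.ones(1000), surge[:1000]]
coef, *_ = np.linalg.lstsq(A, sway[:1000], rcond=None)

# Monitor the reconstruction residual in 100-sample windows.
resid = np.abs(sway - (coef[0] + coef[1] * surge))
window_rms = np.sqrt(np.mean(resid.reshape(-1, 100) ** 2, axis=1))
baseline = window_rms[:10].mean()
alarm_window = int(np.argmax(window_rms > 4 * baseline))
print(alarm_window * 100)  # first flagged sample, near the simulated break
```

The neural network in the paper plays the same role as the regressor here, but captures the nonlinear, multi-DOF correlations that a linear model cannot.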
Using soil strength for the prediction of axial pile capacity as an example, this paper outlines a procedure for reliability-based calibration of the minimum confidence needed when estimating characteristic soil strength, defined as the mean value estimated with confidence, for use in offshore pile design. Assuming perfect knowledge of the soil strength and prescribing load factors for loads, we calibrate the required material factor on the characteristic soil strength by tuning a reliability analysis to meet a prescribed target failure probability. Keeping this calibrated material factor unchanged, the reliability analysis is repeated with the stochastic model for soil strength altered to include the statistical uncertainty owing to limited soil data. A reduced "cautious" value of the characteristic soil strength is then determined such that the failure probability resulting from the analysis remains equal to the target. Based on this reduced value, the minimum confidence level needed for characteristic value estimation is interpreted.
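The first calibration step, tuning a material factor until a reliability analysis meets a target failure probability, can be sketched with a Monte Carlo model. The lognormal distributions, their parameters, the load factor and the 1e-4 target below are all illustrative assumptions, not the paper's stochastic model:

```python
import math
import random

# Monte Carlo sketch of material-factor calibration; distribution parameters,
# the load factor and the target failure probability are assumed.
random.seed(0)
target_pf = 1e-4

def failure_probability(gamma_m, n=100_000):
    """Pf of a design just satisfying the check s_k / gamma_m >= gamma_f * l_k."""
    gamma_f, s_k, cov_s, cov_l = 1.3, 1.0, 0.25, 0.15
    l_k = s_k / (gamma_m * gamma_f)  # load level allowed by the design check
    fails = 0
    for _ in range(n):
        s = random.lognormvariate(math.log(s_k), cov_s)  # soil strength (median s_k)
        l = random.lognormvariate(math.log(l_k), cov_l)  # load (median l_k)
        fails += s < l
    return fails / n

# Bisection on the material factor until the target failure probability is met.
lo, hi = 1.0, 4.0
for _ in range(16):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if failure_probability(mid) > target_pf else (lo, mid)
print(round(hi, 2))  # calibrated material factor for these assumed inputs
```

The second step of the procedure would repeat the same analysis with statistical uncertainty added to the strength distribution, holding the calibrated factor fixed and adjusting the characteristic value instead.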
Although Trinidad and Tobago has an abundant supply of relatively pure CO2 and more than 1 billion barrels of heavy oil deposits, there are no active enhanced oil recovery (EOR) projects using carbon dioxide (CO2).
In this paper, we performed black oil simulation studies to evaluate several injection strategies with carbonated water, varying the salinity and viscosity of the injected water. The salinity was varied between 1,000 and 35,000 ppm. The viscosity was increased by adding 0.1 weight percent polymer to the injected water. The investigation was carried out using a commercial reservoir simulator. The simulation grid represents the properties of a quarter five-spot of the Lower Forest sand of the Forest Reserve Field. The reservoir simulation components used are water, polymer, H+, Na+, Cl-, dead oil, solution gas and CO2. The Stone #1 three-phase relative permeability model was used to calculate three-phase relative permeabilities from two-phase data. In addition, a factorial experimental design was utilized: twelve simulation runs were performed, along with nine benchmark runs for comparison with other EOR methods.
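A full factorial run matrix of this kind is straightforward to enumerate. The salinity levels below come from the study; the polymer levels and strategy labels are assumed for illustration and happen to reproduce the twelve runs mentioned above:

```python
from itertools import product

# Sketch of a full factorial run matrix; the salinities are from the study,
# while the polymer levels and strategy labels are illustrative assumptions.
factors = {
    "salinity_ppm": [1_000, 35_000],
    "polymer_wt_pct": [0.0, 0.1],
    "strategy": ["continuous CW", "CW/polymer cycling", "CW slug"],
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 2 x 2 x 3 = 12 runs
```

Each dictionary in `runs` defines the inputs for one reservoir simulation case, making it easy to script the simulator over the whole design.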
From the results obtained, the following was concluded: water salinity has no effect on either oil recovery or carbon dioxide storage, while polymer injection increases both oil recovery and carbon dioxide storage. We found the optimal injection strategy to be a cycle of carbonated water injection alternating with polymer injection.