Southwest Research Institute’s Maria Araujo led the development of the Smart Leak Detection (SLED) system, which uses machine learning techniques to analyze sensor data and autonomously identify liquid pipeline leaks in real time. Leaks in the US liquid hydrocarbon pipeline network exceeded 100,000 barrels each year between 2007 and 2012. Using algorithms to process images from sensors scanning the infrastructure, SLED autonomously pinpoints small hazardous leaks before they become major problems, with minimal false alarms. These sensors can be positioned at sensitive pipeline junctures or deployed on drones to cost-effectively fly over pipeline networks, according to Southwest Research Institute (SwRI). The system can differentiate between a hydrocarbon leak and an innocuous puddle of water.
This paper examines both the method and results of a leak detection sensitivity analysis for a liquid pipeline. A fractional factorial design is used to quantify both primary effects and confounding effects between parameters. The analysis examines the impact of uncertainty and bias in pressure and flow measurements, as well as of spatial and temporal discretization, on leak flow estimation. These are considered under conditions of transient pressures, in the presence of a leak, and with altered SCADA poll frequency. The results of the parametric study and the applicability of the general approach are discussed.
INTRODUCTION AND BACKGROUND
The ability of pipeline operators to swiftly detect pipeline leaks is critical to the safeguarding of public and environmental interests. One of the prevalent tools for achieving this ability within industry is the use of a real time transient model of the pipeline. A primary benefit of utilizing a real time transient model for pipeline leak detection is the ability to accurately represent the pressure profile of the pipeline under transient conditions (Learn, 2015). A more accurate representation of pipeline transients leads to a more accurate estimation of linepack and hence a lower error in the leak flow estimate. As a result, alarm threshold values can be lowered without increasing false alarm frequency, and a better leak detection sensitivity can be achieved.
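The linepack correction described above is the core of the material-balance calculation that an RTTM supports. As a minimal sketch (the function and variable names are hypothetical, not from any vendor implementation), the estimated leak flow is the metered imbalance corrected by the model's linepack rate of change:

```python
# Hypothetical sketch of the mass-balance principle behind RTTM-based
# leak detection: the estimated leak flow is the measured flow imbalance
# corrected by the modelled linepack (stored volume) rate of change.

def leak_flow_estimate(flow_in, flow_out, linepack, dt):
    """Estimate leak flow over consecutive time steps.

    flow_in, flow_out -- lists of metered volumetric flows (m^3/s)
    linepack          -- list of modelled linepack volumes (m^3)
    dt                -- SCADA poll interval (s)
    """
    estimates = []
    for k in range(1, len(flow_in)):
        dlp_dt = (linepack[k] - linepack[k - 1]) / dt  # linepack rate of change
        estimates.append(flow_in[k] - flow_out[k] - dlp_dt)
    return estimates

# Steady pipeline with a 0.05 m^3/s discrepancy and constant linepack:
est = leak_flow_estimate([1.0, 1.0, 1.0], [0.95, 0.95, 0.95],
                         [500.0, 500.0, 500.0], dt=10.0)
```

An accurate transient model keeps the `dlp_dt` term honest during pressure swings, which is precisely why thresholds can be lowered without raising the false alarm rate.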
One of the more challenging roles for a leak detection engineer is to assess and understand the multitude of parameters affecting the error in leak flow estimation. The most widely applied standard, API 1149 (1993), provided an excellent theoretical framework for estimating leak flow uncertainty as a function of the time averaging window and telemetry uncertainty. However, the most recent update to this standard recognizes that many different parameters can affect leak flow uncertainty and recommends a perturbation approach against a reference model (Salmatanis, 2015).
Given the number of parameters which may affect leak detection sensitivity, a more efficient method is needed to assess their impact. Assessing all the potential effects of all parameters across a large number of scenarios can be time consuming. It can be onerous to perform this analysis on pipelines in the early stages of project development, during which certain other design assumptions are yet to be confirmed. In addition, many projects may never progress beyond the prospecting stage despite significant design and analysis.
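The fractional factorial design mentioned above is what makes this screening tractable: by deliberately aliasing a factor with a high-order interaction, the run count is halved. A minimal sketch follows; the factor names are hypothetical stand-ins for the parameters studied, not the actual design used in the paper.

```python
from itertools import product

# Illustrative 2^(4-1) fractional factorial design: four two-level factors
# screened in 8 runs instead of 16 by aliasing the fourth factor with the
# three-way interaction (generator D = ABC). Factor names are hypothetical.
factors = ["pressure_bias", "flow_uncertainty", "dx", "poll_interval"]

runs = []
for a, b, c in product((-1, +1), repeat=3):
    d = a * b * c                      # generator: D aliased with ABC
    runs.append(dict(zip(factors, (a, b, c, d))))

# 8 runs estimate all four main effects, with a known aliasing structure
# that tells the analyst which confounded interactions to watch for.
```

Each run dictionary maps a parameter to its low (-1) or high (+1) level; the simulation model is then exercised once per run and the leak flow error regressed against the levels.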
Nicholas, Ed (Nicholas Simulation Services) | Carpenter, Philip (Great Sky River Enterprises LLC) | Henrie, Morgan (MH Consulting, Inc.) | Hung, Daniel (Enbridge Pipelines, Inc.) | Kundert, Kris (Enbridge Pipelines, Inc.)
Testing of pipeline leak detection systems can be challenging. It is also a critical activity which provides key information on the system's capability for communication to regulators and key stakeholders. The authors describe an API RP 1130 compliant test method that relies on the development of a limited number of realistic "leak signatures" that are superimposed on archived SCADA data in a way that preserves not only a faithful representation of the leak, but the real-world impacts of noise, calculation uncertainties, and measurement errors as well. In addition to maintaining high hydraulic fidelity, coverage and flexibility, this procedure is performed at low cost while potentially providing a greater degree of insight into the detailed performance of the leak detection system than can be achieved with other methods.
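The superposition idea can be sketched simply: the leak profile is subtracted from the archived downstream flow series, so the archive's genuine noise and measurement errors survive intact. The function below is a hypothetical illustration of that principle only; the actual signatures in the method described are hydraulically derived, not a bare ramp.

```python
# Hypothetical sketch of superimposing a leak signature on archived SCADA
# data: a ramped leak profile is subtracted from the downstream flow
# series so that real measurement noise in the archive is preserved.

def superimpose_leak(downstream_flow, leak_start, leak_rate, ramp_steps=3):
    """Return a copy of the archived flow series with a ramped leak applied.

    downstream_flow -- archived downstream meter readings (m^3/s)
    leak_start      -- sample index at which the simulated leak begins
    leak_rate       -- full leak flow rate (m^3/s)
    ramp_steps      -- samples over which the leak ramps to full rate
    """
    modified = list(downstream_flow)
    for k in range(leak_start, len(modified)):
        ramp = min(1.0, (k - leak_start + 1) / ramp_steps)
        modified[k] -= leak_rate * ramp   # downstream meter sees less flow
    return modified

archived = [1.00, 1.01, 0.99, 1.00, 1.02, 1.00]
with_leak = superimpose_leak(archived, leak_start=3, leak_rate=0.06)
```

Feeding `with_leak` (together with correspondingly adjusted pressures) back to the LDS then tests whether the system alarms on a signature embedded in real operational data.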
INTRODUCTION AND BACKGROUND
The Need for Testing of Pipeline Leak Detection Systems
A leak detection system (LDS) is a safety- and integrity-critical component of an operating pipeline that is designed to help mitigate negative consequences following an unplanned commodity release. Its intended purpose is to reduce the potential negative impacts of a breach in pipeline hydraulic integrity (e.g., a leak with its resulting spill). Reducing these potential negative impacts is achieved by rapidly detecting the leak and determining its most probable location. Determining these factors in as short a time frame as possible provides key information that enables the pipeline operator to respond faster, more effectively, and with greater precision. Note that the most commonly applied method for leak detection is via Computational Pipeline Monitoring (CPM) systems, which are the explicit focus of this document.
As part of the operator’s overall spill response plan, the organization should be able to quantify the leak detection system’s predicted performance. This allows the operator to identify areas where further leak detection improvements are desirable and to refine location-specific response plans. It also provides a mechanism by which LDS performance can be monitored and tracked over time.
Quantifying the leak detection performance requires testing. As stated in the American Petroleum Institute recommended practice 1130 (API 1130), “[t]he primary purpose of testing [quantifying] is to assure that the CPM system will alarm if a commodity release occurs.” Note that while API 1130 is specific to Computational Pipeline Monitoring leak detection systems, the quantification of system testing is applicable to all leak detection systems.
Healing leaks is always a priority in the oil and gas industry and a major concern for human safety. The time required to fix a leak directly determines the damage caused to the environment and industry and, most importantly, the number of lives lost to catastrophic pipe failures. Detecting leak size and location in pipelines with high accuracy presents major challenges to operators. This paper presents an innovative solution to detect leaks, or potential future leaks, and heal them instantly using Twin Balls Technology. The solution is based on establishing a relationship between a leak and its associated sound level. Knowing those sound levels precisely, and how they propagate over time from the leak source, makes it possible to build the solution with high precision. The solution consists of inserting two free-flowing smart balls into the pipeline. The first smart ball receives acoustic data through its onboard sensors. Once a sound level threshold, established through multiple experiments, is surpassed, the smart ball immediately sends a signal to a second flowing ball responsible for ejecting a healing fluid. The healing fluid flows towards the leaking outlet and closes it instantly, preventing further damage. The first ball also instantly alerts supervisors via Wi-Fi monitoring and text messages describing the leak size and location. Twin Balls Technology could be used in pipelines of different sizes and with different flowing fluids.
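The acoustic trigger described above could be sketched as a moving-RMS level check against a calibrated threshold. The window length, threshold value, and function names below are assumptions for illustration only; the paper's actual detection logic and calibration are not specified here.

```python
import math

# Minimal sketch of the acoustic trigger concept: the sensing ball
# computes a short moving-RMS sound level and fires once that level
# exceeds an experimentally calibrated threshold. Window length and
# threshold are hypothetical placeholder values.

def first_trigger(samples, window=4, threshold=0.5):
    """Return the sample index at which the moving-RMS level first
    exceeds the threshold, or None if it never does."""
    for k in range(window, len(samples) + 1):
        chunk = samples[k - window:k]
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        if rms > threshold:
            return k - 1   # index of the last sample in the window
    return None

# Quiet background, then a sustained acoustic burst:
signal = [0.1, -0.1, 0.1, -0.1, 0.9, -0.8, 0.9, -0.9]
idx = first_trigger(signal)
```

On the trigger index, the first ball would signal the second ball and report the event, as described above.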
This work focuses on the simulation of gas pipeline dynamic models with a view to developing a leakage detection tool. The gas dynamics in the pipes are represented by a system of nonlinear partial differential equations, which is linearized and reduced to a transfer function model. Taking advantage of an electrical analogy, a pipeline can be represented by a two-port network in which gas mass flows behave like electrical currents and pressures like voltages. Hence, four transfer function quadripole models are found to describe the gas pipeline dynamics, depending on the variable of interest at the boundaries. These models are simple enough to be used in the control and management of the network. They have been validated using operational data and used to simulate a leakage.
Natural gas sustains one quarter of global energy needs, with the world's gas networks comprising many kilometers of pipelines, manifold branches, pumps, compressors, and valves, among other components; they are therefore large-scale, complex systems to control and maintain. The control and safe operation of these systems are crucial due to the grave consequences that may result from faulty operation, mainly leaks.
Model-based (MB) software methods continually measure pressure and/or mass flow signals at the intake and outtake of the pipeline, so instrumentation is usually limited to the extremes of the pipeline and is consequently economical to implement. The most reliable among these methods are based on nonlinear partial differential equations (PDE) that describe the gas dynamics. These PDE are not modular, and their resolution can lead to inefficient, highly complex models; nevertheless, the PDE can be linearized, leading to simple and accurate models [6,3].
In this paper, we propose a model focused on the modelling and simulation of pipeline dynamics and able to improve leakage detection/location techniques. We therefore describe four quadripole models which are derived using an electrical analogy. These models provide boundary values of pressure and mass flow, as well as intermediate values, allowing calculated mass flow and pressure profiles to be obtained. Analysis of the discrepancies between calculated and measured values allows the effect of a leakage to be investigated.
The article is organized as follows:
In Section 2, transfer function (TF) models are derived from a nonlinear first-order hyperbolic PDE. Four different TF quadripole models are described in Section 3. In Section 4, the described models are validated using real data. An approach to determine intermediate pressure and mass flow values is presented in Section 5 and adapted to the leakage situation in Section 6. The last section provides some brief conclusions and directions for future work.
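The electrical analogy described above can be sketched with the classical transmission-line (ABCD/chain) form of a two-port: pressures play the role of voltages, mass flows the role of currents, and the segment is characterized by a propagation constant and a characteristic impedance. The per-unit-length parameter values and variable names below are hypothetical placeholders, not the coefficients derived in the paper.

```python
import cmath

# Illustrative sketch of the two-port (quadripole) analogy: in the
# frequency domain a pipe segment is represented by an ABCD (chain)
# matrix built from hyperbolic functions, exactly as for an electrical
# transmission line. All parameter values are hypothetical.

def pipe_two_port(omega, length, r, l, c):
    """ABCD matrix entries of one pipe segment at angular frequency omega.

    r -- resistance per unit length (friction analogue)
    l -- inertance per unit length (gas inertia analogue)
    c -- capacitance per unit length (compressibility analogue)
    """
    z = r + 1j * omega * l           # series impedance per unit length
    y = 1j * omega * c               # shunt admittance per unit length
    gamma = cmath.sqrt(z * y)        # propagation constant
    zc = cmath.sqrt(z / y)           # characteristic impedance
    g = gamma * length
    return (cmath.cosh(g), zc * cmath.sinh(g),
            cmath.sinh(g) / zc, cmath.cosh(g))

A, B, C, D = pipe_two_port(omega=0.01, length=1.0e4, r=1e-3, l=1.0, c=1e-6)

# Chain convention: inlet phasors follow from (hypothetical) outlet phasors.
p_out, q_out = 1.0, 0.5
p_in = A * p_out + B * q_out
q_in = C * p_out + D * q_out
```

Which of the four boundary variables are treated as inputs determines which of the four quadripole transfer-function arrangements is used; the chain matrix above is the common starting point for all of them.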
Operator Training Systems (OTS) are traditionally used for training pipeline controllers to deal with normal and abnormal operating conditions; however, an OTS can serve multiple purposes in an Oil & Gas pipeline software environment. One of these is as a test platform for a liquid Computational Pipeline Monitoring Leak Detection System (CPM LDS).
This paper demonstrates such a use and outlines the benefits of providing a testing environment that takes into consideration normal operating conditions (NOC) as well as abnormal operating conditions (AOC) and their impact on LDS performance. LDS performance maps can then be generated for most anticipated pipeline conditions. Details of hydraulic modeling, control system emulation, and SCADA telemetry simulation will be discussed, as well as their impact on the fidelity of the OTS response and the subsequent accuracy of LDS predicted performance. The range of abnormal operating conditions considered, their implementation on the OTS, and the resulting OTS behavior will also be outlined. Lastly, the determination of expected LDS performance for a range of normal and abnormal operating conditions will be summarized.
Introduction and Background
The use of offline methods to establish performance metrics for leak detection systems has been common in the pipeline simulation industry. Regardless of the leak detection method, the necessity to define the anticipated performance and validate the design of the system is important. Several papers have addressed this topic. A previous study used historical SCADA data provided to the LDS and analysed the response of the LDS with the use of statistical methods to superimpose leak events (1). In the absence of real pipeline data, as in the case of greenfield projects, the LDS vendor is left to wait until such data are available to create an initial performance map for the LDS. Conditions of the pipeline that are not normally encountered are also problematic, as the historical data would not exist. This pertains to failure modes not normally encountered or to combined effects of transients and failure modes. To address this, we propose to use the OTS as a leak detection test bed.
The use of the operator training environment as a test bed for defining LDS performance is based on the assumption that the training environment provides a reliable platform that allows for realistic recreation of pipeline operating conditions. Based on Schneider Electric’s SimSuite Pipeline™ modeling tool, the test bed used to illustrate this has several components of the real pipeline operating environment. These are the duplication of the SCADA system and its functionality, the duplication of the field response in terms of PLC/RTU algorithms and device switchgear response, and lastly the modeling of the pipeline hydraulics. All these components integrated together serve as a test bed that provides a high level of fidelity to the real pipeline response as seen by an LDS in the production environment. Figure 1 shows the functional architecture of these components.
The American Petroleum Institute (API) publication number 1149 (first published in 1993) is perhaps the first accepted industry procedure for the numerical assessment of uncertainty in software-based Computational Pipeline Monitoring (CPM) leak detection systems (LDS). This publication remains valid and extremely valuable within its range of applicability. Generally speaking, it is designed for crude oil and refined products pipelines. It also focuses (while discussing other ancillary issues) on single, straight pipelines and on the Material Balance method of CPM, particularly under steady state conditions.
A recent initiative sanctioned through the American Petroleum Institute (API) and sponsored by the Pipeline Research Council International (PRCI) has developed a revised procedure for the assessment of uncertainty in CPM techniques, in light of a number of recent technological developments and operational requirements. It is also directed at engineering uncertainty factors that prove, in practice, to have a significant effect on LDS uncertainty but that were not thoroughly addressed in the 1993 version.
The new procedure aims to follow the standard API and ASME measurement uncertainty practice of defining a Reference Value, a Bias and a Precision for the LDS, just as with any other industrial measurement system. In particular, it is possible for the reference conditions of the pipeline to be estimated using a transient pipeline system simulation model – the Reference Model – that takes all the relevant engineering uncertainty factors into account. In this respect, the procedure is similar to a formalized, statistical Leak Sensitivity Study (LSS) as is often performed as part of the requirements analysis and design of a software-based LDS. This paper provides an overview of the procedure, with a focus on the utilization of transient pipeline simulation models as the Reference Model. The process of identifying and prioritizing the key areas of input uncertainty is highlighted. In particular, experiences in running the procedure for LVL liquids, HVL liquids and natural gas pipelines are discussed. Other areas of discussion and comment include how the new API 1149 update, issued as a technical report, can be used for a relative benefit analysis of different candidate CPM techniques for a specific pipeline, and how it might fit in with industry best practices as an API Technical Report (TR).
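Reporting a Bias and a Precision against a Reference Value follows the general API/ASME measurement-uncertainty pattern. As a hedged illustration of that pattern only (the combination rule, coverage factor, and numbers below are assumptions, not values from the procedure), the two terms can be rolled into a single uncertainty band by root-sum-square:

```python
import math

# Illustrative combination of a Bias estimate and a Precision (1-sigma)
# estimate into a single uncertainty band, following the general
# API/ASME practice of characterizing a measurement system. The RSS
# rule U = sqrt(B^2 + (k*S)^2) with coverage factor k = 2 (~95%) and
# the percentage values below are assumptions for illustration.

def leak_flow_uncertainty(bias, precision_sigma, coverage_factor=2.0):
    """Return a single uncertainty band from bias and precision terms."""
    return math.sqrt(bias ** 2 + (coverage_factor * precision_sigma) ** 2)

# Hypothetical example: 0.3% bias and 0.4% (1-sigma) precision of flow:
u = leak_flow_uncertainty(bias=0.3, precision_sigma=0.4)
```

In the procedure, the Reference Model supplies the reference conditions against which such bias and precision contributions are evaluated for each engineering uncertainty factor.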
Evaluating the effectiveness of a CPM implementation via leak testing is paramount to confirm that the performance of the CPM system is acceptable based on a pipeline company's risk profile for detecting leaks. However, leak testing of a CPM system is challenging due to the complexity of the CPM design, as well as the need to stress test the CPM over the breadth of operational scenarios to assess its robustness, where test coverage includes steady state threshold sensitivity, transient threshold sensitivity and the threshold switching action. This paper reviews the leak testing challenges encountered during CPM implementation and evaluation, outlines its limitations, and proposes a novel approach to an API RP 1130 recommended test method that can be applied to stress test CPM sensitivity, providing an evaluation of CPM robustness over a range of varying operating scenarios. The concept of the new testing methodology, along with a feasibility study on the automation of the test process, is discussed. Extensive tests are carried out to evaluate and assess the new testing methodology, and a comparison is made with other API RP 1130 recommended leak test methodologies such as parameter manipulation tests, simulated leak tests, and fluid withdrawal tests. The results indicate that the proposed technique has far wider testing coverage than existing approaches to leak testing while providing similar sensitivity measurement results, and it appears promising for stress testing the sensitivity of CPM systems to gain an understanding of CPM robustness; it has in turn improved the sensitivity and robustness of Enbridge's current leak detection systems.
This paper examines the feasibility of Real Time Transient Model (RTTM) based methods for gas pipeline leak detection, elucidates the factors that must be managed for effective gas pipeline leak detection, and examines factors that impact leak detection and location sensitivity.
A growing regulatory focus on minimizing the impacts of ruptures in natural gas commodity pipelines is increasing the pressure on the operators of such systems to provide means of rapidly detecting and locating such leaks. Leak detection systems have become standardized components of liquid commodity pipelines over the last few decades, but have not been emphasized for natural gas systems.
Although many methods have been used to detect leaks in liquid systems, the most commonly used approach uses real time transient models and a mass-balance approach to detect commodity losses. The approach is extensible to gas systems in a fairly straightforward manner, and this paper will discuss such an implementation. However, it is worth noting that gas systems have certain differences that make them distinct from liquid pipeline systems. One difference is that gas pipelines, especially if they are part of or support gas distribution, have the potential to be far more highly networked, branched and looped than liquid transportation systems. Gas is a far more compressible commodity than most liquids, and this has ramifications for the desired level of instrumentation and speed of response. Finally, gas pipelines are more typically subject to maintenance requirements that can interfere with or degrade the performance of RTTM systems.
Another significant difference between liquid and gas pipeline leak detection requirements is that, in a liquid line, a very large leak may generally be quickly identified by rate-of-change alarms. In contrast, a large leak in a gas pipeline, because of the compressibility of the gas, will cause much slower changes in the pipeline pressure. A gas pipeline, therefore, may need to rely on an RTTM-based leak detection system even for the reliable and timely detection of very large leaks. This paper attempts to illuminate these issues and equip the reader to understand and deal with them.
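The compressibility issue discussed above is visible directly in the mass-balance arithmetic: for gas, linepack must be tracked as stored mass rather than volume. A minimal sketch follows, using the real-gas law to convert average line pressure to stored mass; all parameter values and names are hypothetical, not from the paper.

```python
# Illustrative extension of the mass-balance approach to a gas pipeline:
# linepack is tracked as stored mass via the real-gas law
# rho = p*M / (Z*R*T), and the imbalance is mass-in minus mass-out minus
# the linepack rate of change. All parameter values are hypothetical.

R = 8.314          # J/(mol*K), universal gas constant

def gas_density(p, T, M=0.016, Z=0.9):
    """Real-gas density (kg/m^3) for pressure p (Pa), temperature T (K),
    molar mass M (kg/mol) and compressibility factor Z."""
    return p * M / (Z * R * T)

def gas_imbalance(m_in, m_out, p_avg, T, volume, dt):
    """Mass imbalance (kg/s) over consecutive time steps.

    m_in, m_out -- metered mass flows (kg/s)
    p_avg       -- list of average line pressures (Pa) per step
    volume      -- geometric pipeline volume (m^3)
    """
    imbalance = []
    for k in range(1, len(m_in)):
        lp_prev = gas_density(p_avg[k - 1], T) * volume
        lp_now = gas_density(p_avg[k], T) * volume
        imbalance.append(m_in[k] - m_out[k] - (lp_now - lp_prev) / dt)
    return imbalance

# Constant line pressure, so the imbalance equals the 2 kg/s discrepancy:
imb = gas_imbalance([50.0, 50.0], [48.0, 48.0],
                    [6.0e6, 6.0e6], T=288.0, volume=1.0e4, dt=60.0)
```

Because the linepack term scales with the large stored gas mass, small pressure or Z-factor errors translate into large imbalance noise, which is why a leak produces only a slow pressure response and why an RTTM is needed to keep the linepack estimate accurate.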
Significant financial and environmental consequences often result from line leakage of oil product pipelines. Product can escape into the surrounding soil, as even the smallest leak can lead to rupture of the pipeline. From a health perspective, water supplies may be tainted by oil migrating into aquifers. A joint academic-industry research initiative funded by the Pipeline and Hazardous Materials Safety Administration (PHMSA)(1) has led to the development and refinement of a free-swimming tool capable of detecting leaks as small as 0.01 L/min (0.03 gallons) in oil product pipelines. The tool swims through the pipeline being assessed and delivers results to the end user at a significantly reduced cost compared to current leak detection methods. Above Ground Markers (AGMs) capture low frequency acoustic signatures and digitally log the passage of the tool through a pipeline. A tri-axial accelerometer system gives the odometric position of the ball with the accuracy of standard instrumented pigs. Several other types of sensors, such as temperature and pressure, are also present in the ball and collect useful data.