Mracka, Igor (Mathematical Institute, Slovak Academy of Sciences) | Somora, Peter (Altova GmbH) | Hajossy, Rudolf (Mathematical Institute, Slovak Academy of Sciences) | Žácik, Tibor (Mathematical Institute, Slovak Academy of Sciences)
The article deals with a rupture localization system based on massively parallelized simulations on GPGPU (General-Purpose computing on Graphics Processing Units). The presented method is suitable for the localization of accidents both on a single pipeline and on a complex pipeline network. It can also be used in challenging situations where pressures at some inner network nodes are not available, or in scenarios where emergency shutdown valves are closed and only one pressure sensor exists in the damaged pipeline section.
The main advantage of the suggested approach is the combination of highly precise simulation (to achieve localization accuracy) and massive parallelization (to obtain the result fast enough). The described concept has been tested on both simulated rupture scenarios and data from a real rupture.
Fast and precise location of a rupture is important as it can prevent human casualties and environmental disasters after an accident. Prompt closing of shut-off valves minimizes the inevitable gas losses by stopping the delivery of gas to the rupture from the joined pipeline network.
There are many software-based leak detection and location methods, but only a few for rupture location (the Time-of-Flight method based on the speed of rarefaction waves, or a method based on a comparison of Real-Time Transient Model data with those from SCADA). A rupture location can be considered mathematically as an inverse problem to a pipeline simulation.
Highly nonlinear gas flow equations can be solved analytically, but only approximately. Once the first quick processes in a damaged pipeline just after a rupture have disappeared, the equations can be simplified to a heat transfer equation with known solutions. Using these solutions for the pressure along the damaged pipeline in the rupture aftermath, the inverse problem can be solved analytically, leading to four localization methods for specific cases. As was shown, only two methods, for shut-off sections of a damaged pipeline, are sufficiently precise and fast.
The aim of this article is to solve the inverse localization problem numerically. The presented approach is based on a comparison of real pressure measurements in the aftermath of a real pipeline rupture with the corresponding pressure outputs from simulations of ruptures at various positions. The location of the real rupture is determined by the best agreement between the measured data and the simulated data. The employed simulation has to be able to describe precisely the highly dynamic pressure changes caused by a pipeline rupture.
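The best-agreement search can be sketched as a grid search over candidate rupture positions. The forward model below is an invented stand-in for the full transient simulation, and all numbers are illustrative, not taken from the paper:

```python
import math

# Hypothetical forward model standing in for a full transient gas-flow
# simulation of a rupture at position x_rupture (km): the pressure drop
# seen at a sensor decays with the rupture-sensor distance.
def simulated_pressure(x_rupture, sensor_pos, t):
    return 50.0 - 5.0 * math.exp(-abs(sensor_pos - x_rupture) / 10.0) * (1.0 - math.exp(-t))

def locate_rupture(measured, sensor_pos, times, candidates):
    """Return the candidate rupture position whose simulated pressure
    trace best matches the measurements (least-squares misfit). Each
    candidate costs one simulation run; the paper evaluates these
    candidates in parallel on the GPU."""
    best_x, best_err = None, float("inf")
    for x in candidates:
        err = sum((m - simulated_pressure(x, sensor_pos, t)) ** 2
                  for m, t in zip(measured, times))
        if err < best_err:
            best_x, best_err = x, err
    return best_x

times = [0.1 * k for k in range(50)]
measured = [simulated_pressure(37.0, 10.0, t) for t in times]   # "real" data
candidates = [0.5 * k for k in range(201)]                      # every 0.5 km
print(locate_rupture(measured, 10.0, times, candidates))        # → 37.0
```

With synthetic measurements generated at 37 km, the search recovers exactly that candidate; with noisy real data, the minimum-misfit candidate is the location estimate.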
Model calibration is the act (some might say “art”) of adjusting model parameters in such a way that the model’s behavior matches as closely as possible the behavior of the real-world system that it represents. In order to successfully calibrate a hydraulic model, certain hydraulic conditions must be known so that the calibration has a defined solution. Pipes that run parallel to each other (i.e., from the same upstream location to the same downstream location in roughly the same right-of-way) can pose serious difficulties to this requirement, especially when no inline flow measurement exists on any of the parallel lines: without knowing the exact flow distribution between the parallel lines, the calibration problem either has no finite solution or one that is exceedingly difficult to determine.
A potential solution to this problem is to use multiple data sets. Each data set admits a particular range of possible solutions, and by intersecting the solution ranges of multiple data sets, a single solution can be found. This paper describes this method and provides examples with the intent of enabling readers to apply the methodology to their own hydraulic calibration challenges.
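The interval-intersection idea can be sketched directly (the feasible ranges below are invented for illustration):

```python
# Each data set constrains the unknown flow split between two parallel
# pipes to an interval of feasible values; intersecting the intervals
# from several data sets narrows the split toward a single answer.

def intersect_ranges(ranges):
    """Intersect a list of (low, high) feasible intervals."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    if lo > hi:
        raise ValueError("data sets are inconsistent: empty intersection")
    return lo, hi

# Feasible fraction of total flow carried by pipe A, one interval per data set:
ranges = [(0.40, 0.70), (0.55, 0.80), (0.50, 0.60)]
print(intersect_ranges(ranges))   # → (0.55, 0.6)
```

Each additional data set can only shrink (never widen) the intersection, which is why combining operating conditions converges on a usable calibration.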
INTRODUCTION AND BACKGROUND
Most engineers involved with hydraulic simulation are probably quite familiar (too familiar?) with the Darcy-Weisbach flow equation, which describes head loss in terms of flow, pipe length, and pipe diameter. A form of the equation is shown below, as understanding it is crucial to understanding the fundamental difficulty of calibrating parallel pipes.
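A commonly cited form of the Darcy-Weisbach relation is:

```latex
h_f = f \,\frac{L}{D}\,\frac{V^2}{2g}
```

where h_f is the head loss, f the friction factor, L the pipe length, D the pipe diameter, V the mean flow velocity, and g the gravitational acceleration. With the head loss fixed between two common endpoints, every parallel pipe satisfies this relation simultaneously, which is what leaves the flow split underdetermined when no line has an inline flow measurement.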
Assessing the volume discharged during third-party digging damage with efficiency and acceptable accuracy is a challenge for multiple reasons. It involves complex physical phenomena, requires the right assumptions, and has to be performed relatively quickly, because the number of cases to compute each year is large. This paper summarizes the application of a comprehensive methodology used to perform this task.
The first part deals with the required field measurements, the selection of an adequate physics equation as a function of the flow regime, and the linkage between an analytical equation and commercial CFD software to obtain a valid network pressure at the damage point.
In the second section, a validation attempt between computed results and field measurements is made using two different sets of data. First, for some very specific incidents where the pipeline damage is close to a gate station with SCADA recording, an hourly flow profile at the break can be obtained. Second, simple pipe-rupture configurations have been replicated in the laboratory and tested with air. For most cases, the described methodology shows a good match with experimental data, with typical discharge coefficient values in the range of 0.61 to 0.92.
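The compressible-flow side of such an estimate can be sketched with the standard choked-orifice relation. The gas properties and break geometry below are assumptions for illustration; only the 0.61–0.92 discharge coefficient range comes from the text:

```python
import math

def choked_mass_flow(Cd, area_m2, p0_pa, T0_K, gamma=1.31, R=518.3):
    """Choked (sonic) mass flow of gas through a break opening.
    Cd is the empirical discharge coefficient (the paper reports
    typical values of 0.61-0.92); gamma and R here are rough methane
    values and are assumptions, not taken from the paper."""
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return Cd * area_m2 * p0_pa * math.sqrt(gamma / (R * T0_K)) * crit

# Example: 20 mm diameter puncture at 300 kPa(a), 288 K, Cd = 0.61
A = math.pi * (0.020 / 2) ** 2
m_dot = choked_mass_flow(0.61, A, 300e3, 288.0)
print(round(m_dot, 3), "kg/s")
```

Integrating such a rate over the repair duration, with the network pressure at the damage point supplied by the analytical/CFD linkage described above, yields the discharged volume.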
INTRODUCTION AND BACKGROUND
Gaz Metro (“GM”) is the main natural gas distributor in the province of Quebec, in eastern Canada. The territory is connected to the TCPL Mainline (figure 1) at the very end of the transmission system.
GM distributes 97% of all natural gas in Quebec (figure 2) to over 200,000 customers located in more than 300 municipalities. This is achieved with 10,000 km of underground pipeline network, more than 90% of which consists of distribution mains and services, mostly small-diameter plastic pipes. Along this distribution network, third-party digging damage is by far the primary reason for unplanned emergency response and accounts for an important part of the total non-fugitive annual gas loss. Each year, 300 to 400 rupture cases are reported and analyzed.
This paper will focus on the utilization of advanced pipeline simulation software to proactively identify and rank distribution regulator stations based on their criticality to meet customer demand in an interconnected distribution system. A particularly complex distribution network modeled as a single system containing multiple subsystems operating at various pressure levels is analyzed over a large number of scenarios using PG&E’s Batch Analysis Tool (BAT).
BAT has the capability to run numerous simulations in a time-efficient manner to discover critical regulator stations within an interconnected distribution system in one hydraulic model. A fundamental issue with performing regulator station failure analysis is that shutting in regulator stations can cause the system to “crash,” that is, the model will not balance. If the model does not balance, it is not necessarily clear from the software’s error log which portion of the system is affected. For example, if a model does not balance when an upstream regulator station is shut in, it may be difficult to conclude whether the upstream or the downstream subsystem creates the hydraulic problem. With the help of BAT, system performance can be observed as temperatures decrease (demand increases) and the point at which the system crashes can be recorded. Furthermore, BAT can track the pressure at multiple locations to check at what point a certain area crashes.
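The sweep-and-record idea can be sketched in a few lines. The model_balances stand-in and all demand numbers below are invented; a real run would invoke the hydraulic simulator at each step:

```python
# Sweep demand upward (colder temperature => higher demand), run the
# model at each step, and record the last demand at which the system
# still balances and the first at which it crashes.

def model_balances(demand, capacity=120.0):
    """Stand-in for a hydraulic simulation run: here the 'model'
    balances while demand stays within a deliverable capacity."""
    return demand <= capacity

def find_crash_point(demands):
    last_ok = None
    for d in demands:                 # increasing demand sweep
        if model_balances(d):
            last_ok = d
        else:
            return last_ok, d         # (last balancing, first crashing)
    return last_ok, None              # never crashed within the sweep

demands = [80, 90, 100, 110, 120, 130, 140]
print(find_crash_point(demands))      # → (120, 130)
```

Repeating the sweep with each regulator station shut in, and tracking pressures at multiple monitor points, ranks stations by how early their closure causes a crash.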
The goal of proactive gas system planning is to understand operating risks before they occur on the system. PG&E’s Gas System Planning department aims to utilize hydraulic simulation to gain broad system intelligence over a range of conditions and to identify the facilities that are most critical to system operations.
Hydraulic simulation requires as inputs the Heating Degree Day (HDD), the Peak Hour Factor (PHF) and a demands file containing customer usage loads. PG&E designs its gas hydraulic system to Abnormal Peak Day (APD) conditions, an extremely cold day of a severity recorded once in a 90-year time frame. A previous PG&E paper on the BAT showed that BAT can automate the entire loading process of an individual simulation. This paper will look at the impact of BAT on critical distribution regulator stations.
This paper examines both the method and results of a leak detection sensitivity analysis for a liquid pipeline. A fractional factorial design is used to quantify both primary effects as well as confounding effects between parameters. The analysis examines the impact of uncertainty and bias in pressure and flow measurements, as well as spatial and temporal discretization on leak flow estimation. These are considered under conditions of transient pressures, the presence of a leak and with altered SCADA poll frequency. The results of the parametric study as well as the applicability of the general approach are discussed.
INTRODUCTION AND BACKGROUND
The ability of pipeline operators to swiftly detect pipeline leaks is critical to the safeguarding of public and environmental interests. One of the prevalent tools for achieving this ability within industry is the use of a real time transient model of the pipeline. A primary benefit of utilizing a real time transient model for pipeline leak detection is the ability to accurately represent the pressure profile of the pipeline under transient conditions (Learn, 2015). A more accurate representation of pipeline transients leads to a more accurate estimation of linepack and hence a lower error in the leak flow estimate. As a result, alarm threshold values can be lowered without increasing false alarm frequency, and a better leak detection sensitivity can be achieved.
One of the more challenging roles for a leak detection engineer is to assess and understand the multitude of parameters affecting the error in leak flow estimation. The most widely applied standard, API 1149 (1993), provided an excellent theoretical framework for estimating leak flow uncertainty as a function of time averaging window and telemetry uncertainty. However, the most recent update to this standard recognizes that potentially many different parameters affect leak flow uncertainty and recommends a perturbation approach against a reference model (Salmatanis, 2015).
Given the number of parameters which may affect leak detection sensitivity, a more efficient method is needed to assess the impact of such parameters. Assessing all the potential effects of all parameters within a large quantity of scenarios can be time consuming. It can be onerous to perform this analysis on pipelines in the early stages of project development, during which certain other design assumptions are yet to be confirmed. In addition, many projects may never progress beyond the prospecting stage despite significant design and analysis.
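As an illustration of the fractional factorial machinery applied here, the sketch below builds a two-level 2^(3-1) design with generator C = AB and estimates main effects from an assumed response function. The factor names and the response are invented, not the paper's parameters:

```python
# A 2^(3-1) fractional factorial halves the number of runs relative to
# the full 2^3 design, at the cost of confounding each main effect with
# a two-factor interaction (A with BC, B with AC, C with AB).

def response(a, b, c):
    # stand-in for 'leak flow estimation error' at coded levels +/-1
    return 10.0 + 3.0 * a + 1.0 * b - 2.0 * c

runs = [(a, b, a * b) for a in (-1, 1) for b in (-1, 1)]   # generator C = A*B
y = [response(*r) for r in runs]

def main_effect(col):
    """Average response at the +1 level minus average at the -1 level."""
    hi = sum(yi for r, yi in zip(runs, y) if r[col] == 1) / 2
    lo = sum(yi for r, yi in zip(runs, y) if r[col] == -1) / 2
    return hi - lo

print([main_effect(i) for i in range(3)])   # → [6.0, 2.0, -4.0]
```

Four runs recover all three main effects (twice the coded coefficients, as expected), which is the efficiency argument for screening many leak-detection parameters this way.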
The objective of this paper is to describe a method for simulating an energy recovery system (ERS), which exploits the hydraulic power of the water to boost inlet flow pressure. The impact of pipeline pressure surge (water hammer) on water treatment units was investigated, and both the surge pressure and the pressure rise rate were calculated.
A novel methodology has been developed in this paper to simulate an energy recovery system and estimate the pressure rise rate. The method integrates an energy recovery system into an existing pipeline simulation model. The energy recovery system model was developed using basic hydraulic pump equations with actual system efficiency. Both the maximum surge pressure and the pressure rise rate are calculated at each model time step. The same method can be used for hydraulic analysis of other energy recovery systems.
In this study, a high-pressure feed pump with a discharge pressure of 630 psig was analyzed. The model was used to calculate the maximum surge pressure downstream of the ERS.
In this analysis, downstream of the ERS there is an RO (reverse osmosis) filtration system. The maximum pressure and rate of change of pressure must be controlled so as not to damage the filter membranes.
Different surge scenarios were investigated. For the cases analyzed it was possible to keep the maximum surge pressure to 1117 psig, which is below the maximum membrane design pressure. It was also possible to keep the maximum pressure rise rate below 5.2 psi/second for all cases simulated. The membrane warranty for the cases analyzed limited the pressure rise rate to 10 psi/second and stipulated a maximum pressure of 1200 psig. The simulation results also provide design parameters for sizing surge relief devices and designing the required control system.
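For orientation, the surge magnitudes that a full time-stepping model resolves in detail can be bounded with the classical Joukowsky relation. The sketch below uses invented numbers, not the paper's system data:

```python
# Joukowsky: dP = rho * a * dV gives the instantaneous pressure rise for
# a sudden velocity change; dividing by an event duration gives a crude
# pressure-rise-rate estimate. All parameter values here are assumed.

RHO = 1000.0      # water density, kg/m^3
A_WAVE = 1200.0   # pressure wave speed in the pipe, m/s (assumed)
PSI_PER_PA = 1.0 / 6894.76

def joukowsky_rise_psi(delta_v):
    """Surge pressure rise (psi) for a sudden velocity change (m/s)."""
    return RHO * A_WAVE * delta_v * PSI_PER_PA

def rise_rate_psi_per_s(delta_v, event_time_s):
    return joukowsky_rise_psi(delta_v) / event_time_s

dp = joukowsky_rise_psi(0.5)            # pressure rise for a 0.5 m/s change
rate = rise_rate_psi_per_s(0.5, 20.0)   # spread over a 20 s closure
print(round(dp, 1), "psi,", round(rate, 2), "psi/s")
```

Comparing such a bound against the membrane limits (1200 psig, 10 psi/second) shows why the slower, controlled transients found in the simulations stay within warranty.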
Traditional surge analysis tools can properly estimate surge pressure within a pipeline system; however, energy recovery system behavior in a surge scenario had not been simulated previously. The provided method can simulate energy recovery systems and calculate the maximum surge pressure and pressure rise rate. It sheds light on simulating energy recovery systems and can be adopted in different simulation tools.
Zlotnik, Anatoly (Los Alamos National Laboratory) | Rudkevich, Aleksandr M. (Newton Energy Group) | Goldis, Evgeniy (Newton Energy Group) | Ruiz, Pablo A. (Boston University) | Caramanis, Michael (Boston University) | Carter, Richard (DNV-GL) | Backhaus, Scott (Los Alamos National Laboratory) | Tabors, Richard (Tabors Caramanis Rudkevich) | Hornby, Richard (Tabors Caramanis Rudkevich) | Baldwin, Daniel (Kinder Morgan)
As dependence of the bulk electric power system on gas-fired generation grows, more economically efficient coordination between the wholesale natural gas and electricity markets is increasingly important. New tools are needed to achieve more efficient and reliable operation of both markets by providing participants more accurate price signals on which to base their investment and operating decisions.
Today’s electricity prices are consistent with the physical flow of electric energy in the power grid because of the economic optimization of power system operation in organized electricity markets administered by Regional Transmission Organizations (RTOs). A similar optimization approach that accounts for physical and engineering factors of pipeline hydraulics and compressor station operations would lead to location- and time-dependent intra-day prices of natural gas consistent with pipeline engineering factors, operations, and the physics of gas flow.
More economically efficient gas-electric coordination is envisioned as the timely exchange of both physical and pricing data between participants in each market, with price formation in both markets being fully consistent with the physics of energy flow. Physical data would be intra-day (e.g., hourly) gas schedules (burn and delivery) and pricing data would be bids and offers reflecting willingness to pay and to accept. Here, we describe the economic concepts related to this exchange, and discuss the regulatory and institutional issues that must be addressed. We then formulate an intra-day pipeline market clearing problem whose solution provides a flow schedule and hourly pricing, while ensuring that pipeline hydraulic limitations, compressor station constraints, operational factors, and pre-existing shipping contracts are satisfied. Furthermore, in order to support the practical application of these concepts, we provide a computational example of gas pipeline market clearing on a small interpretable model, and validate the results using a commercial pipeline simulator. Finally, we validate the modeling by cross-verifying simulations with SCADA data measured on a real pipeline system.
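The clearing logic can be illustrated in a deliberately simplified single-link form that ignores the pipeline hydraulics, compressor constraints, and pre-existing contracts the full formulation handles. All offers, bids, and the capacity figure below are invented:

```python
# Merit-order clearing on one capacity-limited link: match the highest
# willingness-to-pay bids against the cheapest supply offers until
# capacity binds or no welfare-improving trade remains; the marginal
# matched offer sets the clearing price.

def clear(offers, bids, capacity):
    """offers/bids: lists of (price, quantity). Returns (cleared_qty, price)."""
    s = sorted(offers)                       # cheapest offers first
    demand = sorted(bids, reverse=True)      # highest-value bids first
    cleared, price = 0.0, None
    for bprice, bqty in demand:
        while bqty > 0 and s and cleared < capacity:
            oprice, oqty = s[0]
            if oprice > bprice:
                return cleared, price        # no more welfare-improving trades
            t = min(bqty, oqty, capacity - cleared)
            cleared += t
            price = oprice                   # marginal offer sets price
            bqty -= t
            if t == oqty:
                s.pop(0)
            else:
                s[0] = (oprice, oqty - t)
    return cleared, price

# Two offers, one bid, 70-unit link capacity:
print(clear([(2.0, 50), (3.0, 50)], [(5.0, 80)], 70))   # → (70.0, 3.0)
```

Because capacity binds, the price rises to the marginal 3.0 offer rather than the cheapest one; in the full network formulation the same mechanism, constrained by hydraulics, produces location- and time-dependent prices.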
SGN is the second largest gas distribution company in the UK owning and operating two gas distribution networks made up of three Local Distribution Zones (LDZs) delivering natural and green gas through over 46,000 miles (74,000 km) of pipeline to 5.9 million homes and businesses across Scotland and the south of England. The distribution networks receive gas from the UK’s National Transmission System (NTS), multiple biomethane producers and an LNG terminal.
The SGN Gas Control Centre has a key role in ensuring there is a safe and reliable network that enables the daily demand to be met whilst operating the networks within designed limits. National Grid Gas UK Transmission (NGG UKT), which operates the National Transmission System (NTS), requires the provision of an hourly schedule of gas volumes (hourly profiles) taken through each of the NTS Offtakes into the LDZ, in the form of Offtake Profile Notifications (OPNs). Although these profiles can be altered throughout the day to reflect changes in demand, there are strict rules as to how much the profiles may change. Each OPN is scrutinized against the Uniform Network Code (UNC)(2) rules on submission to UKT.
To gain further insight and enable more efficient operation of its networks, SGN has deployed a real-time Gas Network Modelling System (GNMS) on the high pressure (100-1000 psig (7-69 barg)) parts of its networks. As well as providing a valuable stock monitoring tool, the GNMS utilizes data from the OPNs to predict the performance of the networks over the period of the current and next gas days.
This paper will discuss the OPN generation tool developed by SGN and Emerson and the implementation of the GNMS. Despite limitations in instrumentation in certain parts of the networks, early indications are that the GNMS is providing accurate results; this too will be discussed.
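The re-profiling aspect of OPN generation can be sketched as a simple per-hour change limit. The actual UNC rules governing how much an OPN may change are not reproduced here; the max_step value is purely hypothetical:

```python
# Move an hourly offtake profile toward a newly forecast target while
# limiting the change applied to any single hour (hypothetical rule).

def reprofile(current, target, max_step):
    """Return a new hourly profile, clamping each hour's change to
    +/- max_step relative to the currently notified profile."""
    out = []
    for cur, tgt in zip(current, target):
        delta = max(-max_step, min(max_step, tgt - cur))
        out.append(cur + delta)
    return out

current = [100, 100, 100, 100]   # notified hourly volumes
target  = [130,  95, 100,  60]   # volumes the new forecast would prefer
print(reprofile(current, target, 20))   # → [120, 95, 100, 80]
```

Hours whose desired change exceeds the limit are moved only part-way, so large demand swings must be absorbed over several successive resubmissions.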
In recent years, pipeline operators have faced reduced production environments caused by declining brownfield operations and capital constraints induced by oil prices, among other factors, which have led to pipelines operating well under their designed capacity and challenges such as congealing—the precipitation of wax solids in a crude oil pipeline. This paper discusses how models are built using scientific principles and how simulation may be used to predict where congealing is or may occur inside a pipeline. Finally, a case study from a major oil and gas company’s site demonstrates how these modeling and simulation techniques may be effectively applied in the field.
INTRODUCTION AND BACKGROUND
Pipeline operators are currently challenged with operating pipelines safely in reduced production environments, which have been caused by declining brownfield operations, capital constraints brought on by oil prices, and the lack of drilling rigs to keep pipelines full. These present conditions result in pipelines operating well under their designed capacity and challenges such as congealing.
Congealing refers to the precipitation and nucleation of wax solids in a crude oil pipeline. It is initiated by a temperature gradient between the pipe wall and the centerline flow, which leads to a high yield stress and causes changes in flow behavior.
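One simple way to reason about where temperature becomes critical is a Shukhov-style exponential cooling profile. The sketch below is far simpler than the multiphase models discussed here, and all values, including the wax appearance temperature (WAT), are illustrative:

```python
import math

def temperature_at(x_m, T_in, T_amb, U, D, m_dot, cp):
    """Axial fluid temperature: exponential decay toward ambient
    (Shukhov-type relation for steady single-phase flow)."""
    return T_amb + (T_in - T_amb) * math.exp(-U * math.pi * D * x_m / (m_dot * cp))

def congealing_risk_segments(length_m, step_m, wat, **kw):
    """Flag positions where predicted temperature falls below the WAT."""
    return [x for x in range(0, length_m + 1, step_m)
            if temperature_at(x, **kw) < wat]

# Illustrative pipeline: 60 C inlet, 10 C ambient, assumed heat transfer
kw = dict(T_in=60.0, T_amb=10.0, U=10.0, D=0.3, m_dot=40.0, cp=2000.0)
risky = congealing_risk_segments(150_000, 10_000, wat=30.0, **kw)
print(risky[:3])   # → [10000, 20000, 30000]
```

At low throughput the decay constant grows, pulling the first flagged segment toward the inlet, which is why reduced-production operation widens the congealing-prone region.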
This paper discusses the physical considerations that contribute to congealing and are necessary to detect it, followed by a series of modeling steps to accurately simulate when and where congealing occurs in a pipeline while accounting for multiphase flow of differing compositions from multiple producers. In turn, this information can automatically be displayed as a visual pipeline profile, allowing operators to understand their entire pipeline operation from remote locations and to view critical parameters and events such as congealing, leak detection, and slugging.
These modeling and congealing algorithms were implemented and validated at a major oil and gas company’s site on a 150-km (~93.2 mi) commercial pipeline network used to transport roughly 50,000 BOPD (7,949 m3/day) from 11 gathering stations to a distribution tank farm. The main transportation pipeline was designed to transport 500,000 BOPD (79,490 m3/day). Congealing events were detected and verified by comparing the simulated and assayed pipeline data. Prediction time averaged between three and six hours in advance of the congealing event, allowing the pipeline operator to take appropriate mitigation actions and reduce lost production opportunity (LPO).
Knowledge of natural gas quality in the short-term future (24 h) is expected by many end users, and European Union law requires Transmission System Operators to provide such information. In the case of a multiloop network supplied from many sources with different gas compositions, dynamic network simulation combined with forecasting of the behavior of all sources and offtakes is necessary.
The article describes a model of the full calculation chain. At the entries there are productions, storages and interconnectors with more or less stable gas compositions, and LNG terminals where the gas composition changes either smoothly as a function of time or significantly due to filling from the next vessel. At the exits, demand forecasting relates to nomination processes (industrial end users) or forecasting systems (city gates to household areas). Between them, the multiloop transmission network is dynamically simulated with a full quality tracking model.
The paper also contains our practical experience based on the Polish transmission system, which has many entries from production, interconnectors and storages, more than 900 exits and a new LNG terminal. The multiloop network also has several compression stations and reduction points. The simulation software makes it possible to determine the degree of gas mixing, i.e., to provide clients with such information, and such analyses are executed on a regular basis. Calculations are performed in three-minute cycles (reconstructing the network state) and future calculations are performed with a 15-minute step. For future calculations, city-gate exit point demand is obtained from the forecasting system (short-term forecasts, 10 days) or from nominations at supply points and industrial exits. Such values are compared up to date with the values obtained from reference chromatographs located in the transmission network.
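The basic rule behind such quality tracking is a flow-weighted average at each junction, applied along every pipe by the full model. A minimal sketch, with invented values:

```python
# Calorific value (CV) of the mixed outflow at a network node is the
# flow-weighted average of the inflows.

def mixed_quality(inflows):
    """inflows: list of (flow_rate, calorific_value). Returns mixed CV."""
    total = sum(q for q, _ in inflows)
    return sum(q * cv for q, cv in inflows) / total

# e.g. pipeline gas at 39.5 MJ/m3 mixing with regasified LNG at 41.0 MJ/m3
print(mixed_quality([(300.0, 39.5), (100.0, 41.0)]))   # → 39.875
```

Comparing such computed values against the reference chromatographs at measured nodes is what lets the tracking model be validated continuously in operation.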
Finally, the article presents a case study of the degree of stream mixing depending on the gas composition from different gas sources, exit point demand and the settings of the non-linear network elements. The analyses were performed for both static and dynamic scenarios, where one of the varied parameters is a dynamic change in the quantity and quality of supply from the LNG terminal to the network.