Mracka, Igor (Mathematical Institute, Slovak Academy of Sciences) | Somora, Peter (Altova GmbH) | Hajossy, Rudolf (Mathematical Institute, Slovak Academy of Sciences) | Žácik, Tibor (Mathematical Institute, Slovak Academy of Sciences)
The article deals with a rupture localization system based on massively parallelized simulations on GPGPU (General-Purpose computing on Graphics Processing Units). The presented method is suitable for localizing accidents both on a single pipeline and on a complex pipeline network. It can also be used in challenging situations where pressures at some inner network nodes are not available, or where emergency shutdown valves are closed and only one pressure sensor exists in the damaged pipeline section.
The main advantage of the suggested approach is the combination of highly precise simulation (to achieve localization accuracy) with massive parallelization (to obtain the result fast enough). The described concept has been tested on both simulated rupture scenarios and data from a real rupture.
Fast and precise localization of a rupture is important, as it can prevent human casualties and environmental disasters after an accident. Prompt closing of shut-off valves minimizes the inevitable gas losses by stopping the delivery of gas to the rupture from the connected pipeline network.
There are many software-based leak detection and location methods, but only a few for rupture location (the Time-of-Flight Method based on the speed of rarefaction waves, or a method based on comparing Real-Time Transient Model data with SCADA data). Mathematically, rupture location can be considered an inverse problem to a pipeline simulation.
Highly nonlinear gas flow equations can be solved analytically, but only approximately. Once the first fast processes in a damaged pipeline die out shortly after a rupture, the equations can be simplified to a heat transfer equation with known solutions. Using these solutions for the pressure along the damaged pipeline in the rupture aftermath, the inverse problem can be solved analytically, leading to four localization methods for specific cases. As was shown, only the two methods for shut-off sections of a damaged pipeline are sufficiently precise and fast.
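A hedged sketch of this simplification, in generic symbols that are not necessarily the authors' notation: neglecting gas inertia in the isothermal momentum balance (friction-dominated flow) and combining it with continuity yields, after linearization about a mean velocity, a heat-type equation for pressure.

```latex
% Quasi-steady momentum balance and continuity, with p = c^2 \rho:
\frac{\partial p}{\partial x} = -\frac{\lambda}{2D}\,\rho v\,\lvert v\rvert,
\qquad
\frac{1}{c^2}\frac{\partial p}{\partial t} + \frac{\partial(\rho v)}{\partial x} = 0 .
% Eliminating the mass flux \rho v and linearizing about a mean velocity \bar v
% gives a diffusion (heat-transfer) equation for the pressure:
\frac{\partial p}{\partial t} \approx K\,\frac{\partial^2 p}{\partial x^2},
\qquad
K = \frac{c^2 D}{\lambda\,\lvert\bar v\rvert},
```

where $\lambda$ is the friction factor, $D$ the pipe diameter, and $c$ the isothermal speed of sound.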
The aim of this article is to solve the inverse localization problem numerically. The presented approach is based on comparing real pressure measurements in the aftermath of a real pipeline rupture with the corresponding pressure outputs from simulations of ruptures at various positions. The location of the real rupture is determined by the best agreement between the measured data and the simulated data. The employed simulation has to be able to precisely describe the highly dynamic pressure changes caused by a pipeline rupture.
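The best-match search described above can be sketched as follows. `toy_sim` is a hypothetical stand-in for the GPGPU transient simulator, and the least-squares misfit is one natural choice of agreement measure; the paper does not fix a specific one.

```python
import numpy as np

def locate_rupture(p_measured, t, simulate, candidates):
    """Pick the candidate rupture position whose simulated pressure
    trace best matches the measurement (least-squares misfit).
    `simulate(x, t)` stands in for one parallel simulation run."""
    misfits = [np.sum((simulate(x, t) - p_measured) ** 2) for x in candidates]
    return candidates[int(np.argmin(misfits))]

# Toy demonstration: an exponential pressure decay whose rate depends on
# a (hypothetical) distance between sensor and rupture -- not a real gas model.
t = np.linspace(0.0, 10.0, 200)
true_x = 37.0  # km, assumed rupture position
toy_sim = lambda x, t: 50.0 * np.exp(-t / (1.0 + 0.1 * x))
p_meas = toy_sim(true_x, t)

candidates = np.arange(0.0, 100.0, 0.5)
print(locate_rupture(p_meas, t, toy_sim, candidates))  # → 37.0
```

In the real system the loop over candidates is what gets parallelized across the GPU, since each candidate simulation is independent.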
A performance test provides important information about whether a newly constructed pipeline can achieve its design capacity and transportation efficiency. The field performance test also provides mechanical performance correction factors for compressors as part of the commissioning procedure of natural gas transmission pipelines. The EPC contractors are usually asked to ensure that the pipeline can achieve the delivery pressures and flow rates under the agreed protocols. Pipelines are normally designed to achieve certain flow rates in different phases. However, it is not uncommon to see pipelines falling short on gas supply or user demand during the period for which the performance test is scheduled, so that the available pipeline inlet pressure and flow rate are lower than the guarantee conditions described in the performance test protocols. The quality of the gas supply may also create unexpected problems in the interpretation and evaluation of the performance test results. It is therefore necessary to conduct the performance test at lower pressures and lower flow rates. This paper presents the methodology, strategy, and procedure, as well as a real-life example, of a simulation-based performance test of a gas pipeline. The method presented offers a practical tool to validate pipeline performance by evaluating the fundamental physical parameters. It can also be used to assess the pipeline hydraulic conditions for potential corrosion issues (ID change), roughness changes, BSW deposition, or other issues influencing pressure drops in the pipeline.
INTRODUCTION AND BACKGROUND
The performance test is an important part of the commissioning procedure of natural gas transmission pipelines after their construction. The EPC contractors are responsible for ensuring that the pipeline can achieve the delivery pressures and flow rates under the agreed protocols. Pipelines are normally designed to achieve certain flow rates in different phases. However, it is not uncommon to see pipelines falling short on gas supply during the time in which the performance test is scheduled. During economic downturns, the pipeline users may also have difficulty accepting gas at the flow rate required to achieve the pipeline guarantees. In these circumstances, the available pipeline inlet pressure and flow rate are lower than the guarantee conditions described in the performance test protocols. The quality of the gas supply may also create unexpected problems in the interpretation and evaluation of the performance test, such as the presence of black powder. Therefore, it is necessary to conduct the performance test at lower-than-anticipated pressure and flow rate conditions. The question is how to assess whether the pipeline will achieve the guarantees with data collected at lower pressure and lower flow conditions. The answer lies in assessing the pipeline parameters using measurement data at lower pressures and flow rates, along with gas pipeline hydraulic simulations at guarantee conditions. This paper presents the methodology, strategy, and procedure, as well as a real-life example, of a simulation-based performance test of a gas pipeline.
Model calibration is the act (some might say "art") of adjusting model parameters so that the model's behavior matches as closely as possible the behavior of the real-world system it represents. To successfully calibrate a hydraulic model, certain hydraulic conditions must be known so that the calibration has a defined solution. Pipes that run parallel to each other (i.e., from the same upstream location to the same downstream location in roughly the same right-of-way) can pose serious difficulties to this requirement, especially when no inline flow measurement exists on any of the parallel lines. Because the exact flow distribution between the parallel lines is then unknown, the calibration problem either has no unique solution, or the solution is exceedingly difficult to determine.
A potential solution to this problem involves utilizing multiple data sets. Each data set admits a particular range of possible solutions, and by intersecting the solution ranges of multiple data sets, a single solution can be found. This paper describes this method and provides examples, with the intent of enabling the reader to apply the methodology to his or her own hydraulic calibration challenges.
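The interval-intersection idea can be sketched as follows; the calibrated quantity (a flow split fraction between the parallel lines) and the per-data-set ranges are hypothetical illustrations, not values from the paper.

```python
def intersect_ranges(ranges):
    """Intersect per-data-set solution ranges (lo, hi).
    Returns the common range, or None if the data sets are inconsistent."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo <= hi else None

# Hypothetical ranges of the flow fraction through line A that reproduce
# each data set's measured pressure drop within tolerance:
ranges = [(0.40, 0.70), (0.55, 0.80), (0.50, 0.62)]
print(intersect_ranges(ranges))  # → (0.55, 0.62)
```

Each additional data set, taken under different operating conditions, can only narrow the surviving interval; an empty intersection signals inconsistent measurements or a wrong model assumption.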
INTRODUCTION AND BACKGROUND
Most engineers involved with hydraulic simulation are probably quite familiar (too familiar?) with the Darcy-Weisbach flow equation that describes head loss in terms of flow, pipe length, and pipe diameter. A form of the equation is shown below, as understanding it is crucial to understanding the fundamental difficulty of calibrating parallel pipes.
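The form referred to can be written, with $h_f$ the head loss, $f$ the friction factor, $L$ the pipe length, $D$ the diameter, $v$ the mean velocity, and $Q$ the volumetric flow:

```latex
h_f = f\,\frac{L}{D}\,\frac{v^2}{2g}
\qquad\Longleftrightarrow\qquad
h_f = \frac{8\,f L\,Q^2}{\pi^2 g\,D^5},
\quad v = \frac{4Q}{\pi D^2}.
```

The $D^5$ dependence in the flow form is what makes the flow split between parallel lines so sensitive to each line's diameter and roughness, and hence what makes the calibration problem ill-posed without flow measurements.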
This paper will focus on the utilization of advanced pipeline simulation software to proactively identify and rank distribution regulator stations based on their criticality to meet customer demand in an interconnected distribution system. A particularly complex distribution network modeled as a single system containing multiple subsystems operating at various pressure levels is analyzed over a large number of scenarios using PG&E’s Batch Analysis Tool (BAT).
BAT can run numerous simulations in a time-efficient manner to discover critical regulator stations within an interconnected distribution system in one hydraulic model. A fundamental issue with performing regulator station failure analysis is that shutting in regulator stations can cause the system to "crash," that is, the model will not balance. If the model does not balance, it is not necessarily clear from the software's error log which portion of the system is affected. For example, if a model does not balance when an upstream regulator station is shut in, it may be difficult to conclude whether the upstream or the downstream subsystem is creating the hydraulic problem. With the help of BAT, system performance can be observed as temperatures decrease (demand increases), and the point at which the system crashes can be recorded. Furthermore, BAT can track the pressure at multiple locations to check at what point a certain area crashes.
Proactive gas system planning aims to understand operating risks before they occur on the system. It is a goal of PG&E's Gas System Planning department to use hydraulic simulation to gain broad system intelligence over a range of conditions and to identify the facilities that are most critical to system operations.
Hydraulic simulation requires as input a Heating Degree Day (HDD) value, a Peak Hour Factor (PHF), and a demands file containing customer usage loads. PG&E designs its gas hydraulic system to Abnormal Peak Day (APD) conditions, an extremely cold day recorded once in a 90-year time frame. A previous PG&E paper on the BAT showed that it can automate the entire loading process of an individual simulation. This paper looks at the use of BAT to identify critical distribution regulator stations.
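The crash-point search that BAT automates might be sketched like this. `toy_solve` is a hypothetical stand-in for one hydraulic simulation run with a given regulator shut in, not PG&E's actual tooling.

```python
def find_crash_point(solve, hdd_values):
    """Step through increasingly cold conditions (higher HDD => higher
    demand) and record the first condition at which the hydraulic model
    fails to balance. `solve(hdd)` returns tracked node pressures, or
    raises RuntimeError when the model does not balance ("crashes")."""
    last_good = None
    for hdd in hdd_values:
        try:
            pressures = solve(hdd)
        except RuntimeError:
            return hdd, last_good
        last_good = (hdd, pressures)
    return None, last_good  # no crash within the range tested

# Toy model: pressure at a tracked node falls linearly with demand and
# the "model" crashes once pressure would go negative.
def toy_solve(hdd):
    p = 60.0 - 1.5 * hdd  # psig at a hypothetical monitored node
    if p < 0:
        raise RuntimeError("model did not balance")
    return {"node_A": p}

crash_hdd, last_good = find_crash_point(toy_solve, range(0, 60, 5))
print(crash_hdd)  # → 45
```

Ranking regulator stations by criticality then amounts to repeating this search once per shut-in station and sorting by the HDD at which each scenario first crashes.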
As the global population moves into cities, gas distribution networks are becoming more crowded. In some cases pipelines reach a critical saturation of clients, and a single event such as a sudden pressure-decrease wave can start a chain reaction. Even a brief drop of pressure below the critical level can cause problems in the whole grid.
To mitigate the risk of such situations, users can model these events and prepare the grid for emergencies, for example with small household gas storage. Pipeline simulators traditionally do not model such high-speed dynamic cases. We have real measured flow, pressure, and temperature data with a sampling frequency of 20 Hz, and we compared these measurements with our modeled pressure undershoot wave.
We would like to show that the simulated data match the measured data to a high degree. This match depends especially on the space and time discretization; we tried time steps down to 0.1 millisecond. We encountered some problems, such as very high demands on calculation time and results-database size, and some oscillations due to numerical issues with very small numbers. We would also like to show a way to minimise these demands.
The article also contains tips, tricks, and recommended simplifications for simulating very dynamic events, to help gas distribution companies model transients in networks with linepack that is next to none.
In conclusion, we present a way to adjust a pipeline simulator so that it can calculate events on a millisecond scale, giving an accurate simulation of the pressure decrease in an overcrowded distribution network. We show that more general-purpose software can perform the desired calculation to a satisfying degree of precision, so that a specialized CFD package may not be needed.
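One reason millisecond-scale steps become necessary is the acoustic CFL restriction on explicit schemes; the time step cannot exceed the time a pressure wave needs to cross one grid cell. A minimal illustration with generic numbers, not the authors' simulator settings:

```python
def max_stable_dt(dx_m, sound_speed_mps, cfl=1.0):
    """Acoustic CFL limit for an explicit scheme: dt <= CFL * dx / c.
    Illustrative only; implicit simulators can exceed this limit, at the
    cost of smearing fast transients like a pressure undershoot wave."""
    return cfl * dx_m / sound_speed_mps

# For c ~ 400 m/s in natural gas and 1 m grid cells, the explicit limit
# is 2.5 ms; resolving a 0.1 ms step consistently needs cells of ~4 cm,
# which is where the calculation-time and database-size demands come from.
print(max_stable_dt(1.0, 400.0))  # → 0.0025
```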
Gas distribution networks are becoming more complex as the population of global cities rises and many countries implement gasification for most of their city dwellers. This increase in complexity raises specific challenges for the simulation of gas distribution networks.
Assessing the volume discharged during third-party digging damage with efficiency and acceptable accuracy is challenging for multiple reasons. It involves complex physical phenomena, requires the right assumptions, and has to be performed relatively quickly, because the number of cases to compute each year is large. This paper summarizes the application of a comprehensive methodology used to perform this task.
The first part deals with the field measurements that need to be obtained, the selection of an adequate physical equation as a function of the flow regime, and the linkage between an analytical equation and commercial CFD software to obtain a valid network pressure at the damage point.
In the second section, a validation between computed results and field measurements is attempted using two different sets of data. First, for some very specific incidents where the pipeline damage is close to a gate station with SCADA recording, an hourly flow profile at the break can be obtained. Second, simple pipe-rupture configurations have been replicated in the laboratory and tested with air. For most cases, the described methodology shows a good match with experimental data, with typical discharge coefficient values in the range of 0.61 to 0.92.
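For the sonic regime (upstream pressure well above roughly twice atmospheric), the analytical discharge equation takes the standard choked ideal-gas orifice form. A sketch with assumed natural-gas properties; the break size, network pressure, and Cd here are hypothetical, not the paper's cases:

```python
import math

def choked_mass_flow(cd, area_m2, p0_pa, t0_k, gamma=1.31, r_gas=518.3):
    """Choked (sonic) orifice mass flow [kg/s] for an ideal gas.
    gamma and the specific gas constant r_gas [J/(kg*K)] are set to
    typical natural-gas values; cd is the empirical discharge
    coefficient (0.61-0.92 in the validation above)."""
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area_m2 * p0_pa * math.sqrt(gamma / (r_gas * t0_k)) * crit

# Hypothetical example: full-bore break of a 25 mm plastic main at 400 kPa(a).
area = math.pi * (0.025 / 2) ** 2
print(round(choked_mass_flow(0.8, area, 400e3, 288.0), 3))  # kg/s
```

The discharged volume then follows by integrating this rate over the repair time, with the network pressure at the damage point supplied by the CFD/analytical linkage described above.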
INTRODUCTION AND BACKGROUND
Gaz Metro (GM) is the main natural gas distributor in the province of Quebec, in eastern Canada. The territory is connected to the TCPL Mainline (figure 1) at the very end of the transmission system.
GM distributes 97% of all natural gas in Quebec (figure 2) to over 200 000 customers located in more than 300 municipalities. This is achieved with 10 000 km of underground pipeline network, more than 90% of which consists of distribution mains and services, mostly small-diameter plastic pipes. Along this distribution network, third-party digging damage is by far the primary reason for unplanned emergency response and an important part of the total non-fugitive annual gas loss. Each year, 300 to 400 rupture cases are reported and analyzed.
When designing a piping system, the inherent control valve characteristics, e.g., linear or equal-percentage opening/closing curves, are normally considered. However, inherent valve curves consider the control valve only as a "bobble". The characteristics of the valve change once it is installed with piping connections, meters, equipment, or other valves and fittings. The additional friction loss introduced by piping connections or valve combinations is normally a function of the flow rate rather than a constant, which changes the overall opening and closing characteristics of the control valve. It is well known that surge pressure is directly related to valve characteristics. The combination of a control valve with other components may create undesirable surge scenarios in operation, which is commonly neglected in design.
This paper examines how the connections of the control valve with other piping components can influence the installed valve characteristics and the surge pressure level during valve closings. The focus is on two aspects: how other components, such as an ESD valve immediately upstream or downstream, influence the surge behavior of the control valve closing; and how an upstream or downstream control valve influences the surge behavior of the ESD or Mainline Block Valve closing. The paper presents how the installed valve characteristics differ from the inherent characteristics and how significantly the pressure surge increases.
The results and conclusions provided in this paper will serve as a general guideline for valve arrangement and piping design for reducing potential surge pressure in liquid systems.
INTRODUCTION AND BACKGROUND
In piping design, control valves have a unique influence on system hydraulic resistance. It is well known that once installed in the piping system, the control valve characteristics (the relationship between the valve flow coefficient and valve opening) will change (Sines, 2009; Headley, 2003). A so-called installed valve coefficient is introduced to describe this behavior. The valve coefficient is normally tested in the shop as a "bobble".
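A minimal sketch of how a fixed series resistance distorts an inherent equal-percentage curve, using the standard series-Cv combination; the Cv values and rangeability are hypothetical, not from the paper:

```python
import math

def installed_cv(cv_valve, cv_piping):
    """Effective Cv of a control valve in series with fixed piping and
    fittings, via the standard series combination
    1/Cv_eff^2 = 1/Cv_valve^2 + 1/Cv_pipe^2."""
    return 1.0 / math.sqrt(1.0 / cv_valve**2 + 1.0 / cv_piping**2)

def equal_percentage_cv(opening, cv_max, rangeability=50.0):
    """Inherent equal-percentage characteristic: Cv = Cv_max * R**(x - 1)."""
    return cv_max * rangeability ** (opening - 1.0)

# At large openings the fixed piping resistance (hypothetical Cv_pipe = 300)
# dominates, flattening the installed curve relative to the inherent one --
# which is why a fast final portion of travel can still change flow sharply
# and drive surge during closure.
for x in (0.25, 0.5, 0.75, 1.0):
    inherent = equal_percentage_cv(x, cv_max=400.0)
    print(x, round(inherent, 1), round(installed_cv(inherent, 300.0), 1))
```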
The objective of this paper is to describe a method that simulates an energy recovery system (ERS), which exploits water hydraulic power to boost inlet flow pressure. The impact of pipeline pressure surge (water hammer) on water treatment units was investigated. Surge pressure and pressure rise rate were calculated.
A novel methodology has been developed in this paper to simulate an energy recovery system and estimate the pressure rise rate. The method integrates an energy recovery system into an existing pipeline simulation model. The energy recovery system model was developed from basic hydraulic pump equations, using the actual system efficiency. Both the maximum surge pressure and the pressure rise rate are calculated at each model time step. The same method can be used for hydraulic analysis of other energy recovery systems.
In this study, a high-pressure feed pump with a discharge pressure of 630 psig was analyzed. The model was used to calculate the maximum surge pressure downstream of the ERS.
In this analysis, downstream of the ERS there is an RO (reverse osmosis) filtration system. The maximum pressure and rate of change of pressure must be controlled so as not to damage the filter membranes.
Different surge scenarios were investigated. For the cases analyzed, it was possible to keep the maximum surge pressure at 1117 psig, which is below the maximum membrane design pressure, and to keep the maximum pressure rise rate below 5.2 psi/second for all cases simulated. The membrane warranty for the cases analyzed limited the pressure rise rate to 10 psi/second and stipulated a maximum pressure of 1200 psig. The simulation results also provide design parameters for sizing surge relief devices and designing the required control system.
Traditional surge analysis tools can properly estimate surge pressure within a pipeline system; however, energy recovery system behavior in a surge scenario had not been simulated previously. The provided method can simulate energy recovery systems and calculate the maximum surge pressure and pressure rise rate. It sheds light on simulating energy recovery systems and can be adopted for different simulation tools.
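A screening check of the kind described, applied to a simulated pressure trace downstream of the ERS, might look like this. The trace is illustrative; the limits default to the warranty values quoted above.

```python
def check_surge_limits(pressures_psig, dt_s, p_max=1200.0, rate_max=10.0):
    """Screen a simulated pressure trace against membrane limits:
    maximum pressure [psig] and maximum rise rate [psi/s]."""
    peak = max(pressures_psig)
    rates = [(b - a) / dt_s for a, b in zip(pressures_psig, pressures_psig[1:])]
    peak_rate = max(rates) if rates else 0.0
    return peak <= p_max and peak_rate <= rate_max, peak, peak_rate

# Hypothetical 1 s time-step trace rising to the reported 1117 psig peak:
trace = [1100.0, 1105.0, 1110.0, 1114.0, 1117.0]
ok, peak, rate = check_surge_limits(trace, dt_s=1.0)
print(ok, peak, rate)  # → True 1117.0 5.0
```

In the actual method the same two quantities are evaluated at every model time step rather than on a finished trace.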
In recent years, pipeline operators have faced reduced production environments, caused by declining brownfield operations and capital constraints induced by oil prices, among other factors. These conditions have led to pipelines operating well under their designed capacity and to challenges such as congealing, the precipitation of wax solids in a crude oil pipeline. This paper discusses how models are built using scientific principles and how simulation may be used to predict where congealing is occurring or may occur inside a pipeline. Finally, a case study from a major oil and gas company's site demonstrates how these modeling and simulation techniques may be effectively applied in the field.
INTRODUCTION AND BACKGROUND
Pipeline operators are currently challenged with operating pipelines safely in reduced production environments, which have been caused by declining brownfield operations, capital constraints brought on by oil prices, and the lack of drilling rigs to keep pipelines full. These present conditions result in pipelines operating well under their designed capacity and challenges such as congealing.
Congealing refers to the precipitation and nucleation of wax solids in a crude oil pipeline. It is initiated by a temperature gradient between the pipe wall and the centerline flow, leading to a high yield stress and causing changes in flow behavior.
This paper discusses the physical considerations necessary to detect congealing, followed by a series of modeling steps to accurately simulate when and where congealing occurs in a pipeline while accounting for multiphase flow of differing compositions from multiple producers. In turn, this information can be displayed automatically as a visual pipeline profile, allowing operators to understand their entire pipeline operation from remote locations and to view critical parameters and events such as congealing, leak detection, and slugging.
These modeling and congealing algorithms were implemented and validated at a major oil and gas company's site on a 150-km (~93.2 mi) commercial pipeline network used to transport roughly 50,000 BOPD (7,949 m3/day) from 11 gathering stations to a distribution tank farm. The main transportation pipeline was designed to transport 500,000 BOPD (79,490 m3/day). Congealing events were detected and verified by comparing the simulated and assayed pipeline data. Prediction time averaged between three and six hours in advance of the congealing event, allowing the pipeline operator to take appropriate mitigation actions and reduce lost production opportunity (LPO).
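A much-simplified congealing screen, comparing a simulated fluid temperature profile against an assumed wax appearance temperature (WAT), can suggest where along the line wax precipitation may begin. The paper's algorithm accounts for far more (multiphase flow, mixed compositions, timing), so this is only illustrative; all numbers are hypothetical.

```python
def congealing_risk(temps_c, wat_c, margin_c=2.0):
    """Flag pipeline segments whose simulated fluid temperature falls
    within `margin_c` of (or below) the wax appearance temperature.
    Returns segment indices at risk, ordered along the line."""
    return [i for i, t in enumerate(temps_c) if t <= wat_c + margin_c]

# Hypothetical temperature profile along the line (deg C), cooling toward
# ambient, and an assumed WAT of 24 deg C for the blended crude:
profile = [45.0, 38.0, 31.0, 27.0, 25.5, 24.8]
print(congealing_risk(profile, wat_c=24.0))  # → [4, 5]
```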
Zlotnik, Anatoly (Los Alamos National Laboratory) | Rudkevich, Aleksandr M. (Newton Energy Group) | Goldis, Evgeniy (Newton Energy Group) | Ruiz, Pablo A. (Boston University) | Caramanis, Michael (Boston University) | Carter, Richard (DNV-GL) | Backhaus, Scott (Los Alamos National Laboratory) | Tabors, Richard (Tabors Caramanis Rudkevich) | Hornby, Richard (Tabors Caramanis Rudkevich) | Baldwin, Daniel (Kinder Morgan)
As dependence of the bulk electric power system on gas-fired generation grows, more economically efficient coordination between the wholesale natural gas and electricity markets is increasingly important. New tools are needed to achieve more efficient and reliable operation of both markets by providing participants more accurate price signals on which to base their investment and operating decisions.
Today's electricity energy prices are consistent with the physical flow of electric energy in the power grid because of the economic optimization of power system operation in organized electricity markets administered by Regional Transmission Organizations (RTOs). A similar optimization approach that accounts for the physical and engineering factors of pipeline hydraulics and compressor station operations would lead to location- and time-dependent intra-day prices of natural gas consistent with pipeline engineering factors, operations, and the physics of gas flow.
More economically efficient gas-electric coordination is envisioned as the timely exchange of both physical and pricing data between participants in each market, with price formation in both markets being fully consistent with the physics of energy flow. Physical data would be intra-day (e.g., hourly) gas schedules (burn and delivery) and pricing data would be bids and offers reflecting willingness to pay and to accept. Here, we describe the economic concepts related to this exchange, and discuss the regulatory and institutional issues that must be addressed. We then formulate an intra-day pipeline market clearing problem whose solution provides a flow schedule and hourly pricing, while ensuring that pipeline hydraulic limitations, compressor station constraints, operational factors, and pre-existing shipping contracts are satisfied. Furthermore, in order to support the practical application of these concepts, we provide a computational example of gas pipeline market clearing on a small interpretable model, and validate the results using a commercial pipeline simulator. Finally, we validate the modeling by cross-verifying simulations with SCADA data measured on a real pipeline system.
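Stripped of all pipeline hydraulics and contract constraints, the market-clearing idea reduces to merit-order dispatch with the marginal offer setting the price. A toy single-node, single-interval sketch, standing in for the paper's full pipeline-constrained optimization; the offers and demand are hypothetical:

```python
def clear_market(offers, demand):
    """Accept supply offers (quantity, price) cheapest-first until demand
    is met; the marginal accepted offer sets the clearing price.
    Returns (accepted schedule, clearing price)."""
    accepted, price = [], None
    remaining = demand
    for qty, p in sorted(offers, key=lambda o: o[1]):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((take, p))
        price = p
        remaining -= take
    if remaining > 0:
        raise ValueError("insufficient supply to meet demand")
    return accepted, price

# Hypothetical hourly offers (MMcf/h, $/MMcf) against 120 MMcf/h of demand:
offers = [(50, 2.0), (60, 3.5), (80, 5.0)]
schedule, price = clear_market(offers, demand=120)
print(schedule, price)  # → [(50, 2.0), (60, 3.5), (10, 5.0)] 5.0
```

In the paper's formulation, pipeline hydraulic and compressor constraints enter as constraints of an optimization problem, so clearing prices become location- and time-dependent (the dual variables of those constraints) rather than a single system price.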