Potential routes, pipeline sizes, pump spacing, and numerous other details are scrutinized when designing Greenfield and expansion pipeline projects. These details are analyzed to create an accurate cost estimate for determining the economics of building a new pipeline. Billions of dollars are spent constructing pipelines based on these estimates and detailed studies. However, one important pipeline parameter is generally kept at a default value when designing Greenfield and expansion pipeline projects: pipeline roughness.
An absolute pipeline roughness of 0.0018 inch (0.04572 mm) is often selected by default, based on published industry information for hydrocarbon liquids. In practice, when pipeline roughness is used as a tuning factor, the tuned value varies with the product being transported. Various friction factor equations can be selected to reduce the variation in tuned pipeline roughness. However, light hydrocarbon liquids are not well suited to existing friction factor equations. Pipeline pressure losses will be overestimated if the roughness value of 0.0018 inch (0.04572 mm) is used, and overestimating pressure losses results in overdesigning pipe and pump requirements for new pipelines. Proposed projects may fail to start due to excessive material costs, and projects that do get completed may have installed equipment that goes unused after construction. More research is needed to determine exact correlations between product type and friction factor equation results.
INTRODUCTION AND BACKGROUND
Greenfield pipelines and expansions of existing pipelines are constantly analyzed. These projects are essential for providing pipeline access to newly developed oilfields and increasing transportation out of existing oilfields. Project budgets became much more restricted during the oil price collapse in 2015. As a result, projects to develop new Greenfield pipelines and to expand existing pipelines came under intensified scrutiny.
One way to improve the chance of a project's success was to closely analyze the hydraulics of the projects. Close scrutiny of the hydraulics from various projects revealed that pipeline roughness, which is used as a tuning factor, has consistently been kept at the default value of 0.0018 inches (0.04572 mm), paired with the Colebrook-White friction factor equation, for new liquid pipeline projects. Evaluation of existing pipelines showed that the default pipeline roughness value of 0.0018 inches (0.04572 mm) worked well for heavier hydrocarbon liquids. However, Natural Gas Liquids (NGLs) had tuned pipeline roughness values that were lower than the default value by as much as a factor of 10 with Colebrook-White. This observation showed that product type needed to be considered when selecting pipeline roughness for new pipeline design.
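As a concrete illustration of the tuning described above, the Colebrook-White equation can be solved iteratively for the Darcy friction factor. The sketch below uses a simple fixed-point iteration; the Reynolds number and pipe diameter in the example are illustrative assumptions, chosen only to show how a tenfold reduction in roughness lowers the computed friction factor and hence the predicted pressure loss.

```python
import math

def colebrook_white(reynolds, roughness_in, diameter_in, tol=1e-10):
    """Iteratively solve the Colebrook-White equation,
    1/sqrt(f) = -2*log10(e/(3.7*D) + 2.51/(Re*sqrt(f))),
    for the Darcy friction factor f. Roughness and diameter share
    units (inches here) so their ratio is dimensionless."""
    rel = roughness_in / diameter_in      # relative roughness e/D
    f = 0.02                              # initial guess
    for _ in range(100):
        rhs = -2.0 * math.log10(rel / 3.7 + 2.51 / (reynolds * math.sqrt(f)))
        f_new = 1.0 / rhs ** 2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# Effect of reducing roughness tenfold (as observed when tuning for NGLs),
# for an assumed Re = 5e5 and a 12-inch inside diameter:
f_default = colebrook_white(5e5, 0.0018, 12.0)    # default roughness
f_tuned   = colebrook_white(5e5, 0.00018, 12.0)   # tuned, 10x smoother
```

Because pressure loss scales linearly with the friction factor, the gap between `f_default` and `f_tuned` translates directly into overestimated pump and pipe requirements when the default roughness is used for light products.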
The objective of this paper is to describe a method that simulates an energy recovery system (ERS), which exploits water hydraulic power to boost inlet flow pressure. The impact of pipeline pressure surge (water hammer) on water treatment units was investigated. Surge pressure and pressure rise rate were calculated.
A novel methodology has been developed in this paper to simulate an energy recovery system and estimate the pressure rise rate. This method integrated an energy recovery system into an existing pipeline simulation model. The energy recovery system model was developed using basic hydraulic pump equations, with actual system efficiency. Both the maximum surge pressure and the pressure rise rate are calculated at each model time step. The same method can be used for hydraulic analysis of other energy recovery systems.
In this study, a high-pressure feed pump with a discharge pressure of 630 psig was analyzed. The model was used to calculate the maximum surge pressure downstream of the ERS.
In this analysis, downstream of the ERS there is an RO (reverse osmosis) filtration system. The maximum pressure and rate of change of pressure must be controlled so as not to damage the filter membranes.
Different surge scenarios were investigated. For the cases analyzed, it was possible to keep the maximum surge pressure to 1117 psig, which is below the maximum membrane design pressure. It was also possible to keep the maximum pressure rise rate for all cases simulated below 5.2 psi/second. The membrane warranty for the cases analyzed limited the pressure rise rate to 10 psi/second and stipulated a maximum pressure of 1200 psig. The simulation results also provide design parameters for sizing surge relief devices and designing the required control system.
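The paper's time-stepped model is not reproduced here, but the classic Joukowsky relation gives a first-pass bound on surge pressure for an instantaneous velocity change, and a simple check against the membrane limits quoted above can be sketched as follows. The function names, fluid properties, and the example velocity step are illustrative assumptions, not values from the study.

```python
def joukowsky_surge_psi(density_kg_m3, wave_speed_m_s, delta_v_m_s):
    """Joukowsky pressure rise, dP = rho * a * dv, converted from
    pascals to psi, for an instantaneous velocity change dv."""
    delta_p_pa = density_kg_m3 * wave_speed_m_s * delta_v_m_s
    return delta_p_pa / 6894.76  # Pa -> psi

def within_membrane_limits(p_steady_psig, surge_psi, rise_rate_psi_s,
                           p_max_psig=1200.0, rate_max_psi_s=10.0):
    """Check the combined pressure and rise-rate limits quoted for the
    RO membrane warranty (1200 psig and 10 psi/s in this study)."""
    return (p_steady_psig + surge_psi) <= p_max_psig and \
           rise_rate_psi_s <= rate_max_psi_s

# Example: water (~1000 kg/m3), 1200 m/s wave speed, and an assumed
# 0.4 m/s velocity step; steady pressure taken at the 630 psig feed.
surge = joukowsky_surge_psi(1000.0, 1200.0, 0.4)
ok = within_membrane_limits(630.0, surge, 5.2)
```

A transient simulation like the one in the paper is still required in practice, since the Joukowsky bound ignores line friction, valve closure time, and the ERS response that the full model captures at each time step.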
Traditional surge analysis tools can properly estimate surge pressure within the pipeline system. However, energy recovery system behavior in a surge scenario had not been simulated previously. The method provided here can simulate energy recovery systems and calculate the maximum surge pressure and pressure rise rate. It sheds light on simulating energy recovery systems and can be adapted to different simulation tools.
Assessing the volume discharged during third-party digging damage with efficiency and acceptable accuracy is a challenge for multiple reasons. It involves complex physical phenomena, requires making the right assumptions, and must be performed relatively quickly because the number of cases to compute each year is large. This paper summarizes the application of a comprehensive methodology used to perform this task.
The first part deals with the field measurements that must be obtained, the selection of an adequate physics equation as a function of the flow regime, and the linkage between an analytical equation and commercial CFD software to obtain a valid network pressure at the damage point.
In the second section, a validation attempt between computed results and field measurements is made using two different sets of data. First, for some very specific incidents where the pipeline damage is close to a gate station with SCADA recording, it is possible to obtain an hourly flow profile at the break. Second, simple configurations of pipe rupture have been replicated in the laboratory and tested with air. For most of the cases, the described methodology shows a good match with experimental data, with typical discharge coefficient values in the range of 0.61 to 0.92.
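For the choked (sonic) flow regime, the discharge rate at the break can be estimated with the standard critical-flow orifice equation scaled by a discharge coefficient in the 0.61 to 0.92 range reported above. The sketch below is one possible form of that calculation; the default gas properties (approximating methane) and the example hole size are assumptions, and the subsonic branch and the network-pressure linkage described in the paper are omitted.

```python
import math

def choked_mass_flow(cd, area_m2, p0_pa, t0_k, gamma=1.31, r_specific=518.3):
    """Choked (sonic) discharge rate through a break, in kg/s:
    mdot = Cd * A * P0 * sqrt(gamma/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2*(gamma-1))).
    Defaults approximate methane (gamma ~1.31, R ~518 J/kg-K); P0 and
    T0 are the stagnation (upstream network) pressure and temperature."""
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area_m2 * p0_pa * math.sqrt(gamma / (r_specific * t0_k)) * crit

# Example (hypothetical): a 25 mm equivalent hole on a main at
# 400 kPa absolute and 10 C, with an assumed Cd of 0.75.
area = math.pi * (0.025 / 2.0) ** 2
mdot = choked_mass_flow(0.75, area, 400e3, 283.0)
```

The choked branch applies only while the upstream-to-atmospheric pressure ratio exceeds the critical ratio (about 1.8 for methane); below that, a subsonic orifice equation must be selected instead, which is the flow-regime switch the first part of the paper addresses.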
INTRODUCTION AND BACKGROUND
Gaz Metro ("GM") is the main natural gas distributor in the province of Quebec, in eastern Canada. The territory is connected to the TCPL Mainline (figure 1) at the very end of the transmission system.
GM distributes 97% of all natural gas in Quebec (figure 2) to over 200,000 customers located in more than 300 municipalities. This is achieved with 10,000 km of underground pipeline network, more than 90% of which consists of distribution mains and services, mostly small-diameter plastic pipes. Along this distribution network, third-party digging damage is by far the primary cause of unplanned emergency response and an important part of the total non-fugitive annual gas loss. Each year, 300 to 400 rupture cases are reported and analyzed.
Nicholas, Ed (Nicholas Simulation Services) | Carpenter, Philip (Great Sky River Enterprises LLC) | Henrie, Morgan (MH Consulting, Inc.) | Hung, Daniel (Enbridge Pipelines, Inc.) | Kundert, Kris (Enbridge Pipelines, Inc.)
Testing of pipeline leak detection systems can be challenging. It is also a critical activity that provides key information on the system's capabilities for communication to regulators and key stakeholders. The authors describe an API RP 1130 compliant test method that relies on the development of a limited number of realistic "leak signatures" that are superimposed on archived SCADA data in a way that preserves not only a faithful representation of the leak, but the real-world impacts of noise, calculation uncertainties, and measurement errors as well. In addition to maintaining high hydraulic fidelity, coverage, and flexibility, this procedure is performed at low cost while potentially providing a greater degree of insight into the detailed performance of the leak detection system than can be achieved with other methods.
INTRODUCTION AND BACKGROUND
The Need for Testing of Pipeline Leak Detection Systems
A leak detection system (LDS) is a safety- and integrity-critical component of an operating pipeline that is designed to help mitigate negative consequences following an unplanned commodity release. Its intended purpose is to reduce the potential negative impacts from a breach in pipeline hydraulic integrity (e.g., a leak with its resulting spill). Reducing these potential negative impacts is achieved by rapidly detecting the leak and determining its most probable location. Determination of these factors in as short a time frame as possible provides key information that is critical in enabling the pipeline operator to respond faster, more effectively, and with greater precision. Note that the most commonly applied method for leak detection is via Computational Pipeline Monitoring (CPM) systems, which are the explicit focus of this document.
As part of the operator's overall spill response plan, the organization should be able to quantify the leak detection system's predicted performance. This allows the operator to identify areas where further leak detection improvements are desirable and to refine location-specific response plans. It also provides a mechanism by which LDS performance can be monitored and tracked over time.
Quantifying the leak detection performance requires testing. As stated in the American Petroleum Institute recommended practice 1130 (API 1130), “[t]he primary purpose of testing [quantifying] is to assure that the CPM system will alarm if a commodity release occurs.” Note that while API 1130 is specific to Computational Pipeline Monitoring leak detection systems, the quantification of system testing is applicable to all leak detection systems.
Zlotnik, Anatoly (Los Alamos National Laboratory) | Rudkevich, Aleksandr M. (Newton Energy Group) | Goldis, Evgeniy (Newton Energy Group) | Ruiz, Pablo A. (Boston University) | Caramanis, Michael (Boston University) | Carter, Richard (DNV-GL) | Backhaus, Scott (Los Alamos National Laboratory) | Tabors, Richard (Tabors Caramanis Rudkevich) | Hornby, Richard (Tabors Caramanis Rudkevich) | Baldwin, Daniel (Kinder Morgan)
As dependence of the bulk electric power system on gas-fired generation grows, more economically efficient coordination between the wholesale natural gas and electricity markets is increasingly important. New tools are needed to achieve more efficient and reliable operation of both markets by providing participants more accurate price signals on which to base their investment and operating decisions.
Today’s electricity energy prices are consistent with the physical flow of electric energy in the power grid because of the economic optimization of power system operation in organized electricity markets administered by Regional Transmission Organizations (RTOs). A similar optimization approach that accounts for physical and engineering factors of pipeline hydraulics and compressor station operations would lead to location- and time-dependent intra-day prices of natural gas consistent with pipeline engineering factors, operations, and the physics of gas flow.
More economically efficient gas-electric coordination is envisioned as the timely exchange of both physical and pricing data between participants in each market, with price formation in both markets being fully consistent with the physics of energy flow. Physical data would be intra-day (e.g., hourly) gas schedules (burn and delivery) and pricing data would be bids and offers reflecting willingness to pay and to accept. Here, we describe the economic concepts related to this exchange, and discuss the regulatory and institutional issues that must be addressed. We then formulate an intra-day pipeline market clearing problem whose solution provides a flow schedule and hourly pricing, while ensuring that pipeline hydraulic limitations, compressor station constraints, operational factors, and pre-existing shipping contracts are satisfied. Furthermore, in order to support the practical application of these concepts, we provide a computational example of gas pipeline market clearing on a small interpretable model, and validate the results using a commercial pipeline simulator. Finally, we validate the modeling by cross-verifying simulations with SCADA data measured on a real pipeline system.
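The full market clearing problem formulated in the paper enforces pipeline hydraulic and compressor station constraints. As a much-reduced illustration of the price-formation idea only, the toy merit-order clearing below matches supply offers against demand bids subject to a single capacity limit, with the marginal accepted offer setting the clearing price; all function names and numbers are illustrative and do not come from the paper.

```python
def clear_market(offers, bids, pipeline_capacity):
    """Single-interval merit-order clearing sketch. offers/bids are
    (price, quantity) pairs. Returns (cleared quantity, marginal
    price). Pipeline hydraulics are reduced to one capacity limit;
    the paper's formulation enforces full physical constraints."""
    supply = sorted(offers)                 # cheapest offers first
    demand = sorted(bids, reverse=True)     # highest willingness first
    cleared, price = 0.0, None
    s_i = d_i = 0
    s_left, d_left = supply[0][1], demand[0][1]
    while s_i < len(supply) and d_i < len(demand):
        if supply[s_i][0] > demand[d_i][0]:
            break                           # no surplus-positive trade left
        q = min(s_left, d_left, pipeline_capacity - cleared)
        cleared += q
        price = supply[s_i][0]              # marginal accepted offer
        if cleared >= pipeline_capacity:
            break
        s_left -= q; d_left -= q
        if s_left == 0:
            s_i += 1
            s_left = supply[s_i][1] if s_i < len(supply) else 0.0
        if d_left == 0:
            d_i += 1
            d_left = demand[d_i][1] if d_i < len(demand) else 0.0
    return cleared, price

# Two offers, two bids, a 150-unit path: 120 units clear at the
# marginal offer price of 3.0.
qty, price = clear_market([(2.5, 100), (3.0, 80)], [(4.0, 120), (2.8, 60)], 150)
```

In the paper's setting the analogue of this marginal price becomes location- and time-dependent, because the binding constraints are hydraulic rather than a single capacity number.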
A performance test provides important information about whether a newly constructed pipeline can achieve its design capacity and transportation efficiency. The field performance test also provides mechanical performance correction factors for compressors as part of the commissioning procedure of natural gas transmission pipelines. The EPC contractors are usually asked to ensure that the pipeline can achieve the delivery pressures and flow rates under the protocols agreed upon. Pipelines are normally designed to achieve certain flow rates in different phases. However, it is not uncommon to see pipelines falling short on gas supply or user demand during the period for which the performance test is scheduled. The available pipeline inlet pressure and flow rate are then lower than the guarantee conditions described in the performance test protocols. The quality of the gas supply may also create unexpected problems in the interpretation and evaluation of the performance test results. Therefore, it is necessary to conduct the performance test at lower pressures and lower flow rate conditions. This paper presents the methodology, strategy, and procedure, as well as a real-life example, of a simulation-based performance test of a gas pipeline. The method presented offers a practical tool to validate pipeline performance by evaluating the fundamental physical parameters. It can also be used to assess pipeline hydraulic conditions for potential corrosion issues (ID change), roughness changes, BSW deposition, or other issues influencing pressure drops in the pipeline.
INTRODUCTION AND BACKGROUND
A performance test is an important part of the commissioning procedure of natural gas transmission pipelines after their construction. The EPC contractors are responsible for ensuring the pipeline can achieve the delivery pressures and flow rates under the protocols agreed upon. Pipelines are normally designed to achieve certain flow rates in different phases. However, it is not uncommon to see pipelines falling short on gas supply during the time in which the performance test is scheduled. During economic downturns, the pipeline users may also have difficulty accepting gas at the flow rate required to achieve the pipeline guarantees. In these circumstances, the available pipeline inlet pressure and flow rate are lower than the guarantee conditions described in the performance test protocols. The quality of the gas supply may also create unexpected problems, such as the presence of black powder, in the interpretation and evaluation of the performance test. Therefore, it is necessary to conduct the performance test at lower-than-anticipated pressure and flow rate conditions. The question is how to assess whether the pipeline will achieve its performance guarantees using data collected at lower pressure and flow conditions. The answer lies in the assessment of the pipeline parameters using measurement data at lower pressures and flow rates, along with gas pipeline hydraulic simulations at guarantee conditions. This paper presents the methodology, strategy, and procedure, as well as a real-life example, of a simulation-based performance test of a gas pipeline.
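One simple way to extrapolate a low-pressure, low-flow test point to the guarantee conditions is to back out an overall flow coefficient from the squared-pressure form of the general gas flow equation and reuse it at the guarantee pressures. The sketch below illustrates the idea with hypothetical numbers; it assumes the friction factor, gas gravity, temperature, and compressibility are unchanged between the two operating points, corrections that the full simulation-based method described in this paper handles explicitly.

```python
import math

def flow_coefficient(q_test, p_in_test, p_out_test):
    """Back out an overall flow coefficient C from a measured test
    point, using the squared-pressure form of the general gas flow
    equation, Q = C * sqrt(P1^2 - P2^2)."""
    return q_test / math.sqrt(p_in_test ** 2 - p_out_test ** 2)

def predicted_capacity(c, p_in_guar, p_out_guar):
    """Extrapolate the test point to the guarantee pressures, holding
    C (friction, gravity, temperature, compressibility) fixed."""
    return c * math.sqrt(p_in_guar ** 2 - p_out_guar ** 2)

# Hypothetical numbers: the test at 5.5 / 4.8 MPa(a) moved
# 6.0 MMSCMD; the guarantee point is 7.0 / 5.0 MPa(a).
c = flow_coefficient(6.0, 5.5, 4.8)
q_guar = predicted_capacity(c, 7.0, 5.0)
```

If `q_guar` meets the guaranteed flow rate, the low-rate data support the guarantee; in practice the coefficient is refined with a hydraulic simulator rather than held constant as in this sketch.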
This paper examines both the method and results of a leak detection sensitivity analysis for a liquid pipeline. A fractional factorial design is used to quantify both primary effects as well as confounding effects between parameters. The analysis examines the impact of uncertainty and bias in pressure and flow measurements, as well as spatial and temporal discretization on leak flow estimation. These are considered under conditions of transient pressures, the presence of a leak and with altered SCADA poll frequency. The results of the parametric study as well as the applicability of the general approach are discussed.
INTRODUCTION AND BACKGROUND
The ability of pipeline operators to swiftly detect pipeline leaks is critical to the safeguarding of public and environmental interests. One of the prevalent tools for achieving this ability within industry is the use of a real time transient model of the pipeline. A primary benefit of utilizing a real time transient model for pipeline leak detection is the ability to accurately represent the pressure profile of the pipeline under transient conditions (Learn, 2015). A more accurate representation of pipeline transients leads to a more accurate estimation of linepack and hence a lower error in the leak flow estimate. As a result, alarm threshold values can be lowered without increasing false alarm frequency, and a better leak detection sensitivity can be achieved.
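The linepack-compensated volume balance at the heart of this approach can be sketched in a few lines: the residual is flow in, minus flow out, minus the modeled rate of change of linepack, compared against an alarm threshold. The function names, flow values, and threshold below are illustrative assumptions.

```python
def leak_flow_estimate(q_in, q_out, linepack_rate):
    """Compensated volume-balance leak estimate. A real-time transient
    model supplies linepack_rate (rate of change of line inventory);
    its accuracy during transients determines how low the alarm
    threshold can be set without false alarms."""
    return q_in - q_out - linepack_rate

def alarms(residuals, threshold):
    """Flag samples whose leak-flow residual exceeds the threshold."""
    return [r > threshold for r in residuals]

# Steady line: inlet 500 m3/h, outlet 498 m3/h, and the model says
# linepack is growing at 2 m3/h, so the residual is ~0 and a 5 m3/h
# threshold does not alarm. A second sample with outlet 490 m3/h
# leaves an 8 m3/h residual, which does alarm.
r = leak_flow_estimate(500.0, 498.0, 2.0)
flags = alarms([r, leak_flow_estimate(500.0, 490.0, 2.0)], 5.0)
```

Errors in measurement, discretization, or the linepack model all appear directly in this residual, which is why the parametric study below examines their individual and combined effects.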
One of the more challenging roles for a leak detection engineer is to assess and understand the multitude of parameters affecting the error in leak flow estimation. The most widely applied standard, API 1149 (1993), provided an excellent theoretical framework for estimating leak flow uncertainty as a function of time averaging window and telemetry uncertainty. However, the most recent update to this standard recognizes that potentially many different parameters affect leak flow uncertainty and recommends a perturbation approach against a reference model (Salmatanis, 2015).
Given the number of parameters which may affect leak detection sensitivity, a more efficient method is needed to assess the impact of such parameters. Assessing all the potential effects of all parameters within a large quantity of scenarios can be time consuming. It can be onerous to perform this analysis on pipelines in the early stages of project development, during which certain other design assumptions are yet to be confirmed. In addition, many projects may never progress beyond the prospecting stage despite significant design and analysis.
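A two-level fractional factorial design of the kind used in this analysis can be generated directly: take all +/-1 combinations of a set of base factors and define each additional factor as a product of base-factor columns. The sketch below builds a 2^(4-1) half fraction with the common generator D = ABC; the specific generator choice is illustrative, not necessarily the one used in the paper.

```python
from itertools import product

def fractional_factorial(n_base, generators):
    """Two-level fractional factorial design. Base factors take all
    +/-1 combinations; each generated factor is the product of the
    base-factor columns whose indices are listed in its generator
    (e.g. (0, 1, 2) means D = A*B*C). Returns one tuple per run."""
    runs = []
    for base in product((-1, 1), repeat=n_base):
        gen = []
        for g in generators:
            v = 1
            for idx in g:
                v *= base[idx]
            gen.append(v)
        runs.append(tuple(base) + tuple(gen))
    return runs

# 2^(4-1) half fraction: factors A, B, C plus D = A*B*C, so four
# factors are screened in 8 runs instead of the full 16.
design = fractional_factorial(3, [(0, 1, 2)])
```

The price of the halved run count is confounding: with D = ABC, each main effect is aliased with a three-factor interaction, which is exactly the kind of confounding structure the analysis in this paper quantifies.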
SGN is the second largest gas distribution company in the UK owning and operating two gas distribution networks made up of three Local Distribution Zones (LDZs) delivering natural and green gas through over 46,000 miles (74,000 km) of pipeline to 5.9 million homes and businesses across Scotland and the south of England. The distribution networks receive gas from the UK’s National Transmission System (NTS), multiple biomethane producers and an LNG terminal.
The SGN Gas Control Centre has a key role in ensuring a safe and reliable network that can meet the daily demand whilst operating the networks within design limits. National Grid Gas UK Transmission (NGG UKT), which operates the National Transmission System (NTS), requires the provision of an hourly schedule of gas volumes (hourly profiles) taken through each of the NTS Offtakes into the LDZ in the form of Offtake Profile Notifications (OPNs). Although these profiles can be altered throughout the day to reflect changes in demand, there are strict rules as to how much the profiles may change. Each OPN is scrutinized against the Uniform Network Code (UNC)(2) rules on submission to UKT.
To gain further insight and enable more efficient operation of its networks, SGN has deployed a real-time Gas Network Modelling System (GNMS) on the high pressure (100-1000 psig (7-69 barg)) parts of its networks. As well as providing a valuable stock monitoring tool, the GNMS utilizes data from the OPNs to predict the performance of the networks over the period of the current and next gas days.
This paper will discuss the OPN generation tool developed by SGN and Emerson and the implementation of the GNMS. Despite limitations in instrumentation in certain parts of the networks, early indications are that the GNMS is providing accurate results; this too will be discussed.
In recent years, pipeline operators have faced reduced production environments caused by declining brownfield operations and capital constraints induced by oil prices, among other factors. These conditions have led to pipelines operating well under their designed capacity and to challenges such as congealing, the precipitation of wax solids in a crude oil pipeline. This paper discusses how models are built using scientific principles and how simulation may be used to predict where congealing is occurring or may occur inside a pipeline. Finally, a case study from a major oil and gas company's site demonstrates how these modeling and simulation techniques may be effectively applied in the field.
INTRODUCTION AND BACKGROUND
Pipeline operators are currently challenged with operating pipelines safely in reduced production environments, which have been caused by declining brownfield operations, capital constraints brought on by oil prices, and the lack of drilling rigs to keep pipelines full. These conditions result in pipelines operating well under their designed capacity and lead to challenges such as congealing.
Congealing refers to the precipitation and nucleation of wax solids in a crude oil pipeline. It is initiated by a temperature gradient between the pipe wall and the centerline flow, leading to high yield stress in the fluid and causing changes in flow behavior.
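A minimal screening calculation for where congealing risk begins can be built from a steady heat balance: the fluid temperature decays exponentially toward ground temperature along the line (a Sukhov-type profile), and the first location where it falls below the wax appearance temperature (WAT) is flagged. The sketch below is a single-phase simplification with hypothetical numbers, not the paper's multiphase, multi-producer model.

```python
import math

def temperature_profile(t_in_c, t_ground_c, u_w_m2k, d_m,
                        mdot_kg_s, cp_j_kgk, length_m, step_m=1000.0):
    """Axial fluid temperature along a buried pipeline from a steady
    heat balance: T(x) = Tg + (Tin - Tg) * exp(-U*pi*D*x / (mdot*cp)).
    Returns a list of (distance_m, temp_C) points."""
    k = u_w_m2k * math.pi * d_m / (mdot_kg_s * cp_j_kgk)  # 1/m decay
    pts = []
    x = 0.0
    while x <= length_m:
        pts.append((x, t_ground_c + (t_in_c - t_ground_c) * math.exp(-k * x)))
        x += step_m
    return pts

def first_congealing_risk(profile, wat_c):
    """Distance at which the oil first cools below its wax appearance
    temperature (WAT), or None if it stays warm end to end."""
    for x, t in profile:
        if t < wat_c:
            return x
    return None

# Hypothetical crude: 60 C inlet, 15 C ground, WAT of 35 C, assumed
# U = 2 W/m2-K, 0.5 m diameter, 70 kg/s flow, over a 150 km line.
prof = temperature_profile(60.0, 15.0, 2.0, 0.5, 70.0, 2000.0, 150e3)
risk_at = first_congealing_risk(prof, 35.0)
```

At the reduced rates discussed above, the lower mass flow increases the exponential decay constant, pulling the flagged location upstream, which is why under-capacity operation makes congealing more likely.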
This paper discusses the physical considerations necessary to detect congealing, followed by a series of modeling steps to accurately simulate when and where congealing occurs in a pipeline while accounting for multiphase flow of differing compositions from multiple producers. In turn, this information can automatically be displayed as a visual pipeline profile, allowing operators to understand their entire pipeline operation from remote locations and to view critical parameters and events such as congealing, leak detection, and slugging.
These modeling and congealing algorithms were implemented and validated at a major oil and gas company's site on a 150-km (~93.2 mi) commercial pipeline network used to transport roughly 50,000 BOPD (7,949 m3/day) from 11 gathering stations to a distribution tank farm. The main transportation pipeline was designed to transport 500,000 BOPD (79,490 m3/day). Congealing events were detected and verified by comparing the simulated and assayed pipeline data. Prediction time averaged between three and six hours in advance of the congealing event, allowing the pipeline operator to take appropriate mitigation actions and reduce lost production opportunity (LPO).
Knowledge of natural gas quality in the short-term future (24 h) is expected by many end users, and European Union law requires Transmission System Operators to provide such information. In the case of a multiloop network supplied from many sources with different gas compositions, dynamic network simulation combined with forecasting of the behavior of all sources and offtakes is necessary.
The article describes a model of the full calculation chain. At the entries there are production sites, storage facilities, and interconnectors with more or less stable gas compositions, as well as LNG terminals where the gas composition changes smoothly over time or changes significantly when filling from the next vessel. At the exits, demand forecasting relies on nomination processes (industrial end users) or forecasting systems (city gates to household areas). Between them, the multiloop transmission network is dynamically simulated with a full quality tracking model.
The paper also contains our practical experience with the Polish transmission system, which has many entries from production, interconnectors, and storage, more than 900 exits, and a new LNG terminal. The multiloop network also has several compressor stations and reduction points. Simulation software makes it possible to determine the degree of gas mixing and to provide clients with this information. Such analyses are executed on a regular basis. Calculations are performed in three-minute cycles (reconstructing the network state), and future calculations are performed with a 15-minute step. For future calculations, city gate exit point demand is obtained from the forecasting system (short-term forecasts, 10 days) or from nominations used at supply points or industrial exits. These values are continually compared with the values obtained from reference chromatographs located in the transmission system network.
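The quality tracking idea can be illustrated with a minimal plug-flow model: the pipe is discretized into segments, each carrying a calorific value (CV), and every simulation step pushes new inlet gas in while delivering the oldest segments at the exit. This sketch ignores mixing at segment boundaries, and the segment counts and CV numbers are illustrative; it simply shows how richer LNG send-out gas eventually breaks through at a city gate.

```python
from collections import deque

def step_quality(pipe, inlet_cv, n_segments_moved):
    """Advance a plug-flow quality-tracking pipe by one step: push
    n segments of the current inlet calorific value in at the entry
    (left) and pop the same number out at the exit (right). Returns
    the list of delivered (exit) CVs for this step."""
    delivered = []
    for _ in range(n_segments_moved):
        delivered.append(pipe.pop())
        pipe.appendleft(inlet_cv)
    return delivered

# Pipe initially full of 39.5 MJ/m3 gas; LNG send-out at 41.2 MJ/m3
# starts arriving at the entry.
pipe = deque([39.5] * 10)
out1 = step_quality(pipe, 41.2, 3)   # old gas still delivered
out2 = step_quality(pipe, 41.2, 8)   # richer LNG gas breaks through
```

A production quality tracking model adds numerical mixing, junction blending, and variable segment velocities from the hydraulic solution, which is what allows the computed exit CVs to be compared against the reference chromatographs mentioned above.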
Finally, the article presents a case study of the degree of stream mixing as a function of the gas composition from different sources, exit point demand, and the settings of the non-linear network elements. The analyses were performed for both static and dynamic scenarios, in which one of the parameters is a dynamic change in the quantity and quality of supply from the LNG terminal to the network.