Ibekwe, Achinike Uche (School of Marine Science & Technology, Newcastle University) | Pu, Yongchang (School of Marine Science & Technology, Newcastle University) | Dow, Robert S. (School of Marine Science & Technology, Newcastle University)
The internal surface of a gas pipeline typically undergoes degradation for a variety of reasons, such as erosion, corrosion, and deposits of objectionable materials that occasionally enter the gas stream at receipt points. Accurate monitoring of the internal surface condition can greatly benefit the planning of cleaning activities. Theoretically, the wall roughness of a given pipe segment can be extracted from measured flow data and other system parameters. The challenge lies in the fact that all measured data contain varying degrees of uncertainty, and the system becomes more complex to analyze when it contains different segments connected in series or in parallel, as in many typical gas gathering and lateral networks. This paper demonstrates the application of the Error-in-Variables Model (EVM) using the Markov Chain Monte Carlo (MCMC) solution method in analyzing a complex pipeline network on the TransCanada NGTL System. EVM, a well-established Bayesian parameter estimation technique, accounts for uncertainties in the measured variables, such as flow and pressure data, when determining the most probable estimates of unknown parameters such as the internal pipe wall surface roughness. In this work, the EVM problem is solved using the MCMC Metropolis-Hastings algorithm. The MCMC approach is shown to be robust, easy to implement, and capable of handling large quantities of data. It has the potential to analyze complex networks and monitor the pipe wall surface condition on-line with SCADA (Supervisory Control And Data Acquisition) data. Using this method, the internal wall surface roughness for the segments of interest in this network was extracted from measured data collected before and after the pigging operation. Results demonstrate the model's capability in estimating the degradation of the internal pipe wall surface and the effectiveness of pigging. Details of the implementation and the challenges of applying such a methodology to complex gas networks are discussed.
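The core idea of the abstract above can be illustrated with a minimal sketch: a random-walk Metropolis-Hastings sampler that recovers a pipe roughness from noisy flow measurements. The forward model, noise levels, and parameter values here are all illustrative assumptions, not the paper's actual formulation; a highly simplified fully-rough friction-factor relation stands in for the real gas flow equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model (an assumption for this sketch): squared
# flow from a simplified fully-rough flow relation, where the friction
# factor depends only on the relative roughness eps.
def flow_squared(eps, dp):
    f = (2.0 * np.log10(3.7 / eps)) ** -2   # Nikuradse-type friction factor
    return dp / f                            # flow^2 proportional to dp / f

# Synthetic "measured" data: true roughness 0.002, 2% noise on flow^2.
eps_true = 0.002
dp = rng.uniform(1.0, 2.0, size=50)
q2_obs = flow_squared(eps_true, dp) * (1 + 0.02 * rng.standard_normal(50))

def log_post(eps):
    if not (1e-5 < eps < 0.05):
        return -np.inf                       # flat prior on a physical range
    resid = q2_obs - flow_squared(eps, dp)
    return -0.5 * np.sum((resid / (0.02 * q2_obs)) ** 2)

# Random-walk Metropolis-Hastings: propose, then accept or reject.
samples, eps, lp = [], 0.005, log_post(0.005)
for _ in range(20000):
    prop = eps + 5e-5 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        eps, lp = prop, lp_prop
    samples.append(eps)

post = np.array(samples[5000:])              # discard burn-in
print(f"posterior roughness: {post.mean():.5f} +/- {post.std():.5f}")
```

The posterior mean should land close to the "true" roughness of 0.002, with the posterior spread reflecting the assumed 2% measurement noise; the full EVM treats the measured variables themselves as uncertain, which this sketch does not attempt.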
Introduction and Background
The internal surface of a gas pipeline typically undergoes degradation under normal operating conditions for a variety of reasons, such as erosion, corrosion, and deposits of objectionable materials that occasionally enter the gas stream at receipt points. Such degradation leads to higher energy consumption and can limit the capacity of the pipeline system. Deploying cleaning pigs periodically to restore the internal pipe wall surface is therefore common practice in industry. However, determining the frequency of pigging, which pipe segments require cleaning, and how effectively cleaning restores the internal wall surface to its original state can be a challenge.
Goodyear, Grant (Halliburton) | Sood, Avneet (Los Alamos National Laboratory) | Andrews, Madison (Los Alamos National Laboratory) | Solomon, Clell J. (Los Alamos National Laboratory) | Luycx, Mathilde (The University of Texas at Austin) | Torres-Verdín, Carlos (The University of Texas at Austin)
Nuclear borehole instruments are ubiquitous in well logging—from bare-bones logging-while-drilling (LWD) gamma to conventional triple-combo and more exotic neutron-gamma capture (Σ) and spectroscopy tools. Now, more than ever, nuclear modeling is being used instead of physical experiments to design new tools, to relate nuclear measurements to formation properties and petrophysical parameters, and even to generate “digital” calibration standards.
This paper discusses significant improvements in software and hardware used to model measurements acquired with borehole nuclear instruments. These advances make nuclear modeling more accurate, more efficient, and substantially less expensive to perform.
For qualitative interrogations of complex rock formations, The University of Texas at Austin developed UTNuPro to compute macroscopic nuclear parameters. It is a successor to the venerable SNUPAR program that supports more elements and yields better matches with Monte Carlo calculations. An inversion-based interpretation involves adjusting formation properties until a predicted nuclear log matches what was measured. Parameters computed with UTNuPro can be utilized by some inversion routines to provide first-order trends without requiring proprietary knowledge of the tool geometry.
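The inversion idea described above can be sketched in a few lines: adjust a formation property until the predicted logs match the measured ones. The forward model below is a deliberately idealized stand-in (linear density and neutron mixing laws for a water-filled sandstone), not UTNuPro itself, and the log readings and uncertainties are hypothetical.

```python
import numpy as np

# Toy forward model standing in for macroscopic nuclear parameters:
# bulk density and an idealized linear neutron-porosity response of a
# water-filled sandstone, both as functions of porosity phi.
RHO_MATRIX, RHO_FLUID = 2.65, 1.00            # g/cc, quartz and water

def predict_logs(phi):
    rho_b = (1 - phi) * RHO_MATRIX + phi * RHO_FLUID   # density log
    nphi = phi                                          # neutron porosity
    return np.array([rho_b, nphi])

measured = np.array([2.31, 0.21])             # hypothetical log readings

# Inversion: adjust porosity until predicted logs match measured ones
# (1-D grid search on a weighted least-squares misfit).
phis = np.linspace(0.0, 0.4, 4001)
w = np.array([1.0 / 0.02, 1.0 / 0.01])        # ~inverse log uncertainties
misfit = [np.sum((w * (predict_logs(p) - measured)) ** 2) for p in phis]
phi_hat = phis[int(np.argmin(misfit))]
print(f"inverted porosity: {phi_hat:.3f}")
```

In practice the forward model would be a table of UTNuPro-computed parameters or a full Monte Carlo simulation, and the search would run over several formation properties at once, but the match-the-log loop has the same shape.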
If the user possesses details of the tool geometry, then nuclear Monte Carlo simulations can provide quantitative results. Wondering whether a nuclear log makes sense in shale when drilling with an unusual mud? Model the shale (or bed boundaries or layers, etc.) before and after invasion. Designing a new tool? It is straightforward to model different nuclear sources and detector types.
This paper explores recent and forthcoming changes in the nuclear Monte Carlo n-particle code (MCNP) that are useful to the industry. Modeling the tool geometry is much easier because MCNP now supports an “unstructured mesh” geometry that can be rendered (almost) directly from a computer-aided design (CAD) file for the tool. An “importance map” of the geometry is constructed simultaneously to speed up simulations. Such maps can also be used for fast, quasilinear modeling to implement reliable inversion-based interpretations in real-time, akin to borehole resistivity, resulting in better rock compositional assessments. New helper tools improve detector processing and particle tracking (MCNPTools), compute more realistic post-processed detector spectra (DRiFT), and produce source descriptions for complicated decay chains, such as those for U and Th necessary for natural gamma modeling (ISC). New physics includes alpha particle tracking (vital for modeling 10B-based neutron detectors) and correlated fission event generators (CGMF and FREYA). Pulse-neutron spectral interpretation is simplified because MCNP can now “tag” each photon with its element of origin.
A comprehensive modeling program requires some type of computer cluster; accordingly, this paper provides best practices for choosing appropriate hardware as well as constructing and maintaining the cluster with minimum effort while nonetheless satisfying the strict US Department of Energy requirements for MCNP clusters.
This paper is based on the view that the calculation of error is an essential part of measurements and scientific calculations, and that it should be an integral part of petrophysical calculations. Error is, however, rarely presented with petrophysical results in formation evaluation and reservoir characterization. Calculation of petrophysical error is commonly executed as input to geostatistics, but often only after the petrophysical calculation is completed; sometimes it is excluded altogether. Petrophysical results should include traceable and quantifiable error.
The motivation of this paper is to show that quantification of error can easily be integrated into the petrophysical results by including the first-order error propagation (FOEP) method as part of the computer script that produces those results. Error in the petrophysical results is related to the models used and to error in the model input. Calculating petrophysical error involves understanding how the input error propagates through the functions to the end product. A commonly used method is Monte Carlo, while FOEP is used less often. Views differ on the needs, pros, and cons of the various methods, and there are also some doubts regarding limitations on the use of FOEP. FOEP is the method chosen in this paper because it is an analytical and more practical solution for implementation in the script that computes the petrophysical results.
This paper includes an introduction to the theory of FOEP in matrix form and contains examples that illustrate the application of petrophysical functions. The mathematics shows how dependencies between variables and asymmetrical distributions are included in calculation of error. The mathematics, graphical user interfaces (GUI), and plot functionalities are scripted with the use of Python. An example of a GUI for petrophysical input, and example plots including presentation of the error and error propagation, are also presented.
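The matrix form of FOEP described above can be sketched in Python. For a function y = f(x) of uncertain inputs x, the propagated covariance is Cov(y) ≈ J Cov(x) Jᵀ, where J is the Jacobian of f at the input values; off-diagonal terms in Cov(x) carry the dependencies between variables. The example function and input uncertainties below are illustrative assumptions (a standard density-porosity relation), not the paper's own workflow.

```python
import numpy as np

# First-order error propagation in matrix form: Cov(y) ~ J Cov(x) J^T.
# Example function: density porosity phi = (rho_ma - rho_b) / (rho_ma - rho_fl).

def porosity(x):
    rho_b, rho_ma, rho_fl = x
    return (rho_ma - rho_b) / (rho_ma - rho_fl)

x0 = np.array([2.35, 2.65, 1.00])   # bulk, matrix, fluid density (g/cc)

# Input covariance: variances on the diagonal; off-diagonal terms would
# express dependencies between variables (here assumed independent).
Cx = np.diag([0.02**2, 0.01**2, 0.03**2])

# Numerical Jacobian (one row, since the output is a scalar).
h = 1e-6
J = np.array([[(porosity(x0 + h * np.eye(3)[i]) - porosity(x0)) / h
               for i in range(3)]])

Cy = J @ Cx @ J.T                   # propagated variance, a 1x1 matrix
print(f"phi = {porosity(x0):.3f} +/- {np.sqrt(Cy[0, 0]):.3f}")
```

For vector-valued functions the same three lines apply unchanged with a multi-row Jacobian, which is what makes the matrix formulation compact to script and fast to evaluate.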
The purpose of this paper is to increase the focus on petrophysical error calculations, and to demonstrate the advantages of error propagation as a standard part of the petrophysical results. The mathematical formulation in matrix form, which makes the computer script simpler and the computation faster, and allows the implementation of asymmetrical distributions, is not observed in the petrophysical literature.
ABSTRACT: The 1150 m deep man shaft at Boulby Mine in Yorkshire, UK was sunk between 1968 and 1974. At the 1056 m level in the Upper Halite, shortly after the excavation and close behind the shaft wall, several 4.5 m long multipoint extensometers were installed at various orientations to monitor the long-term shaft wall convergence. Monitoring of the extensometers occurred sporadically both before and after the installation of the lining, over an approximately 2430-day period. This data set provides important insight into the in-situ behavior of large-scale creep deformations around the shaft over a long period of time. This paper reviews the long-term convergence data and previous studies of that data, and provides several new methods and results from back-calculation exercises using a coupled Python-FLAC3D interface. The coupled interface allowed analysis of thousands of parameter sets with minimal human interaction. Comparisons with analytical solutions and deterministic model runs are also provided to verify that automation is an economical tool that can improve project efficiency.
Back-analysis of observed or measured behavior of structures built in or on rock masses is a powerful tool to:
(i) define/validate design geotechnical properties;
(ii) test the appropriateness of constitutive models;
(iii) mimic realistic failure modes; and
(iv) capture ground-support interactions.
This paper presents a back-analysis undertaken to replicate the radial displacement of a deep shaft excavated as part of the Boulby Mine development and highlights the importance of measuring the long-term behavior of formations exhibiting time-dependent deformations.
The long-term response of the deep shaft was back-calculated using numerical modelling (2D and 3D) and analytical solutions, and a series of parametric runs was performed using a Python-FLAC3D interface to investigate the sensitivity of the predicted shaft closure beyond the monitoring period. The study also provided some insight into the influence of scale effects on creep properties by examining whether visco-elastic properties derived at laboratory scale can be used to predict the in-situ, large-scale shaft displacements.
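The automated parametric workflow described above can be sketched as a sweep over creep parameter sets, each evaluated against the measured convergence. Everything numerical below is a hypothetical stand-in: a textbook Kelvin visco-elastic closure solution replaces the FLAC3D forward model, and the stress, radius, parameter grids, and "measured" data are invented for illustration only.

```python
import numpy as np
from itertools import product

# Stand-in forward model (an assumption for this sketch): analytical
# radial closure of a circular shaft in a Kelvin visco-elastic medium
# under hydrostatic far-field stress p0, i.e. an instantaneous elastic
# step plus a delayed creep term. In the actual workflow this function
# would drive a FLAC3D run from Python instead.
def closure(t_days, G, Gk, eta_days, p0=25.0e6, a=2.75):
    tau = eta_days / Gk                       # creep relaxation time, days
    return p0 * a / 2 * (1 / G + (1 - np.exp(-t_days / tau)) / Gk)

# Hypothetical "measured" convergence over a ~2430-day monitoring period.
rng = np.random.default_rng(1)
t_obs = np.linspace(10, 2430, 30)
obs = closure(t_obs, G=6e9, Gk=2e9, eta_days=2e12)
obs = obs * (1 + 0.03 * rng.standard_normal(30))  # 3% measurement noise

# Parametric sweep: thousands of parameter sets, no human interaction.
best_rms, best_params = np.inf, None
for G, Gk, eta in product(np.linspace(3e9, 9e9, 13),     # shear moduli, Pa
                          np.linspace(1e9, 4e9, 13),
                          np.linspace(5e11, 5e12, 13)):  # viscosity, Pa*day
    rms = np.sqrt(np.mean((obs - closure(t_obs, G, Gk, eta)) ** 2))
    if rms < best_rms:
        best_rms, best_params = rms, (G, Gk, eta)
print("best-fit (G, G_k, eta_days):", best_params)
```

Replacing the analytical closure with a scripted FLAC3D run turns this loop into the kind of coupled interface the paper describes, at the cost of each evaluation taking minutes to hours rather than microseconds.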