In July 2017, the SPE Board approved the CO2 Storage Resources Management System (SRMS). The document, written by a subcommittee of the Carbon Dioxide Capture, Utilization and Storage (CCUS) Technical Section, establishes technically based capacity and resources evaluation standards.
Introduction

Plunger lift has become a widely accepted and economical artificial-lift alternative, especially in high-gas/liquid-ratio (GLR) gas and oil wells (Figure 1.1). Plunger lift uses a free piston that travels up and down in the well's tubing string. It minimizes liquid fallback and uses the well's energy more efficiently than slug or bubble flow does. As with other artificial-lift methods, the purpose of plunger lift is to remove liquids from the wellbore so that the well can be produced at the lowest possible bottomhole pressure.

Figure 1.1: Plunger installed in Canada.

Whether in a gas well, an oil well, or a gas lift well, the mechanics of a plunger-lift system are the same. The plunger, a length of steel, is dropped through the tubing to the bottom of the well and allowed to travel back to the surface. It provides a piston-like interface between liquids and gas in the wellbore and prevents liquid fallback, the part of the liquid load that is effectively lost because it is left behind. Because the plunger provides a "seal" between the liquid and the gas, a well's own energy can be used to lift liquids out of the wellbore efficiently.

A plunger changes the rules for liquid removal. In a well without a plunger, gas velocity must be high to remove liquids; with a plunger, gas velocity can be very low. The plunger system is therefore economical because it needs minimal equipment and uses the well's gas pressure as its energy source. Used with low line pressures or compression, plunger lift can produce many types of wells to depletion.

In recent years, the advent of microprocessors and electronic controllers, studies detailing the importance of plunger seal and velocity, and an increased focus on gas production have led to much wider use and broader application of plunger lift. Microprocessors and electronic controllers have increased the reliability of plunger lift. Earlier controllers were on/off timers or pressure switches that needed frequent adjustment to deal with operating-condition changes such as line pressures, plunger wear, variable production rates, and system upsets. This frustrated many operators, caused failures, and thus limited plunger use. New controllers contain computers that can sense plunger problems and make immediate adjustments. Telemetry, electronic data collection, and troubleshooting software continue to improve plunger-lift performance and ease of use.
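The "high gas velocity" threshold for a well without a plunger is commonly estimated with the Turner et al. critical-velocity correlation. The sketch below is a minimal illustration of that widely used correlation, not a formula taken from this text; the 1.92 coefficient is the commonly cited 20%-adjusted Turner form, and the example inputs are hypothetical.

```python
def turner_critical_velocity(sigma, rho_liq, rho_gas):
    """Critical gas velocity (ft/s) for continuous liquid removal,
    per the adjusted Turner et al. droplet model.

    sigma    -- gas/liquid interfacial tension, dyne/cm
    rho_liq  -- liquid density, lbm/ft^3
    rho_gas  -- in-situ gas density, lbm/ft^3
    """
    return 1.92 * (sigma * (rho_liq - rho_gas)) ** 0.25 / rho_gas ** 0.5

# Hypothetical example: water (sigma ~ 60 dyne/cm, ~67 lbm/ft^3)
# against gas at roughly 2 lbm/ft^3 in-situ density.
print(f"{turner_critical_velocity(60.0, 67.0, 2.0):.1f} ft/s")  # ~10.7 ft/s
```

Below this velocity, liquids accumulate in the wellbore; the plunger removes that constraint by providing a mechanical interface instead of relying on droplet transport.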
The analysis of gas production from fractured ultra-low-permeability (ULP) reservoirs is most often accomplished using numerical simulation, which requires large 3D grids, many inputs, and typically long execution times. We propose a new hybrid analytical-numerical method that reduces the 3D equation of gas flow to either a simple ordinary differential equation (ODE) in time or a 1D partial differential equation (PDE) in space and time without compromising the strong nonlinearity of the gas-flow relation, thus vastly decreasing the size of the simulation problem and the execution time. In the proposed hybrid Partial Transformational Decomposition Method (PTDM), successive finite cosine transforms (FCTs) are applied to the pseudo-pressure-based 3D diffusivity equation of gas flow, eliminating the corresponding physical dimensions. For production under a constant- or time-variable-rate (q) regime, three levels of FCTs yield a first-order ODE in time. For production under a constant- or time-variable-pressure (p_wf) regime, two levels of FCTs lead to a 1D second-order PDE in space and time. The fully implicit numerical solutions for the FCT-based equations in the multi-transformed spaces are inverted, providing solutions that are analytical in two or three space dimensions and account for the nonlinearity of gas flow. The PTDM solution was coded in a FORTRAN95 program that used (a) the analytical Laplace-transform solution for the q-problem and (b) a finite-difference method for the p_wf-problem in their respective multi-transformed spaces. Using a 3D stencil (the minimum repeatable element in the horizontal-well and hydraulically fractured system), solutions over an extended production time and a substantial pressure drop were obtained for (a) a range of isotropic and anisotropic matrix and fracture properties, (b) constant and time-variable q and p_wf production schemes, and (c) combinations of SRV and non-SRV subdomains. The results were compared to the numerical solutions from a widely used, fully implicit 3D simulator on a finely discretized (high-definition) 3D domain of 220,000 elements.
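To sketch the mechanism (a hedged illustration: the transform pair below is the standard finite cosine transform for no-flow boundaries, and the notation is ours rather than the paper's), let m be the pseudo-pressure on 0 <= x <= L_x. The FCT and its action on a second derivative are

$$
\bar{m}_n = \int_0^{L_x} m \,\cos\!\left(\frac{n\pi x}{L_x}\right) dx,
\qquad
\int_0^{L_x} \frac{\partial^2 m}{\partial x^2}\,\cos\!\left(\frac{n\pi x}{L_x}\right) dx
= -\left(\frac{n\pi}{L_x}\right)^{2} \bar{m}_n + (-1)^n\, m_x(L_x) - m_x(0),
$$

so each transform replaces a second derivative in one coordinate with an algebraic term plus known boundary fluxes. Applying FCTs in x, y, and z to a diffusivity equation of the form $\nabla^2 m = \frac{\phi \mu c_t}{k}\,\frac{\partial m}{\partial t}$ therefore leaves a first-order ODE in time for each transformed mode (the q-problem case above), while stopping after two transforms leaves a 1D second-order PDE (the p_wf case).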
Edge computing (in-field, device-level computing) is being driven by the need for data from individual wells to be analyzed and processed at the wellsite instead of in data centers, enabling early and accurate decision making. Enabled by a hybrid model of the industrial internet of things (IIoT), edge computing puts relatively small central processing units and disks into devices that sit "at the edge," where the data are generated (i.e., the field). The devices are connected to computers that communicate, either wirelessly or through wires, with an enterprise site elsewhere. Analyzing data at the edge enables significant decreases in the overall volume of data that must be moved (upward of 2 TB/day at some wellsites). Schlumberger is among the companies taking edge computing seriously.
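A minimal sketch of the pattern described above (the names, window size, and rates are hypothetical and not taken from the article): the edge device reduces a high-rate sensor stream to small summary records, and only those records leave the wellsite.

```python
import statistics
from typing import Iterable

def summarize_window(samples: Iterable[float]) -> dict:
    """Reduce a window of raw sensor readings to a small summary
    record; only this record is transmitted to the enterprise site."""
    data = list(samples)
    return {
        "count": len(data),
        "mean": statistics.fmean(data),
        "min": min(data),
        "max": max(data),
    }

# Example: 60 s of 1 kHz pressure data (60,000 floats) collapses to
# one four-field record before anything leaves the wellsite.
window = [2350.0 + 0.1 * (i % 100) for i in range(60_000)]
print(summarize_window(window))
```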
Fukuda, D. (Hokkaido University) | Mohammadnejad, M. (University of Tasmania) | Liu, H. (University of Tasmania) | Haoyu, H. (University of Tasmania) | Kodama, J. (Hokkaido University) | Fujii, Y. (Hokkaido University)
The combined finite-discrete element method (FDEM) is one of the most promising hybrid methods and has been widely used for various engineering applications, including rock engineering, because it can simulate intact/continuous rock deformation, the transition from continuum to discontinuum (i.e., rock fracturing), and discontinuous interactions between material surfaces (e.g., rock fragmentation). However, the method is notorious for its computational expense, especially when modeling rock fracture processes under quasi-static loading. The authors have recently developed a 2-dimensional (2D)/3-dimensional (3D) Y-HFDEM code through a research collaboration between the University of Tasmania and Hokkaido University. An important feature of this code is its general-purpose graphics-processing-unit (GPGPU) parallelization scheme, which achieves good speed-up of FDEM simulations of rock fracture processes; the 3D Y-HFDEM code can run up to 284 times faster than the sequential code on appropriate GPGPU accelerators. In addition, the code is free for non-commercial/non-military applications, which should be useful for many young researchers in the field of rock engineering.
This paper demonstrates our recent developments and achievements with the Y-HFDEM code, in particular 3D modeling of rock fracture processes in uniaxial and triaxial compression tests using the 3D Y-HFDEM code, since applications of 3D FDEM to this class of problems have been very limited. One important issue regarding the timing of contact activation is also discussed.
Over the past two and a half decades, the combined finite-discrete element method (FDEM), proposed by Munjiza et al. (1995) and Munjiza (2004), has been further developed and applied in the field of rock engineering because it combines the advantages of continuum- and discontinuum-based methods when simulating rock fracture and fragmentation (Mohammadnejad et al., 2018). The transition from an assembly of continuum finite elements to a discontinuum is facilitated by initially zero-thickness cohesive elements inserted along the boundaries of the continuum finite elements. One of the most representative FDEM codes is the open-source research code Y (Munjiza, 2004), and several attempts have been made to extend the original code into both open-access and commercial codes. We have also developed the Y-HFDEM IDE code (Liu et al., 2016) based on the Y code. Because FDEM is notorious for its computational expense, we recently introduced a parallel computation scheme into the Y-HFDEM IDE code using GPGPU accelerators controlled by CUDA C/C++ for both two-dimensional (2D) (Fukuda et al., 2019a) and three-dimensional (3D) (Fukuda et al., 2019b) applications.

Although many publications have applied 2D FDEM to simulate the fracturing process of rocks, especially under quasi-static loading, only a limited number have applied 3D FDEM to simulate rock failure under quasi-static and dynamic loading, owing to the intensive computational demands of 3D FDEM. Brazilian indirect tensile strength (BTS) and uniaxial compressive strength (UCS) tests have frequently been used to calibrate 2D/3D FDEM simulations. While running a UCS experiment is not time-consuming, a 3D FDEM simulation of such a test requires significant computational effort: the simulated process includes intact deformation in the pre-peak regime, the transition from continuum to discontinuum (the emergence of nonlinearity around the peak), and fully discontinuous deformation (a post-peak regime involving sudden stress release and extensive contact between newly created macro-fractures), and very slow loading must be applied to approximate quasi-static conditions. Because FDEM is based on the explicit finite element method (FEM), the simulation becomes even more expensive when 3D FDEM is applied to triaxial compression tests with high confining pressures: with the very slow loading rate maintained, more computational steps are needed to model the failure of rock with higher compressive strength. Meanwhile, 3D FDEM simulation of the triaxial compression test is necessary, alongside the UCS and BTS tests, to calibrate input parameters such as the cohesion and internal friction angle of the rock. Therefore, some remedies are needed to make 3D FDEM simulation of triaxial compression tests more affordable.
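To make the cost argument concrete (a hedged back-of-envelope estimate using the standard stability bound for explicit schemes, not a formula from the paper): the critical time step of an explicit FEM/FDEM run scales with the smallest element size and the P-wave speed,

$$
\Delta t_{cr} \lesssim \frac{h_{\min}}{c_p},
\qquad
c_p = \sqrt{\frac{E(1-\nu)}{\rho(1+\nu)(1-2\nu)}},
$$

so with, say, $h_{\min} = 1\,\mathrm{mm}$ and $c_p \approx 4700\,\mathrm{m/s}$ (granite-like $E = 50\,\mathrm{GPa}$, $\nu = 0.25$, $\rho = 2700\,\mathrm{kg/m^3}$), $\Delta t_{cr} \approx 2 \times 10^{-7}\,\mathrm{s}$. Reaching peak stress under a quasi-static loading rate then takes on the order of $10^5$ to $10^6$ steps, and a stronger (highly confined) rock that fails at a larger axial displacement increases the step count proportionally.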
Physicists have been talking about the power of quantum computing for more than 30 years, but the questions have always been: Will it ever do something useful, and is it worth investing in? For such large-scale endeavors, it is good engineering practice to formulate decisive short-term goals that demonstrate whether the designs are going in the right direction. So, scientists at Google devised an experiment as an important milestone to help answer these questions. This experiment, referred to as a quantum supremacy experiment, provided direction for the team to overcome the many technical challenges inherent in quantum systems engineering to make a computer that is both programmable and powerful. To test the total system performance, a sensitive computational benchmark was selected that fails if just a single component of the computer is not good enough. The results of this quantum supremacy experiment have been published in the Nature article “Quantum Supremacy Using a Programmable Superconducting Processor.”
Quantum computing uses a new way to store, process, and measure information in computer systems. This new way depends on the quantum-mechanical states of subatomic particles such as electrons, and those states are represented by quantum bits (qubits). In classical computing architecture, a bit represents information as either a 0 or a 1. A qubit, however, can exist in a superposition of both: measuring it might yield 1 with 60% probability and 0 with 40% probability, for example, or 85% and 15%, and so on.
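A minimal sketch of this idea (the state and seed are hypothetical examples of ours): a single-qubit state is a pair of complex amplitudes whose squared magnitudes give the measurement probabilities, so the "60% 1 and 40% 0" qubit looks like this.

```python
import numpy as np

# A hypothetical single-qubit state |psi> = a|0> + b|1>.
a = np.sqrt(0.40)                    # amplitude of |0>
b = np.sqrt(0.60)                    # amplitude of |1>
state = np.array([a, b], dtype=complex)

probs = np.abs(state) ** 2           # -> [0.4, 0.6]
assert np.isclose(probs.sum(), 1.0)  # amplitudes must be normalized

# Simulate repeated measurements of identically prepared qubits:
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(probs, outcomes.mean())        # mean of outcomes is ~0.6
```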
Quantum computers exploit the peculiar behavior of objects at the atomic scale and use the qubit as the basic unit of quantum computing. Because n qubits can hold a superposition over 2^n basis states, a quantum computer with only 100 qubits would, theoretically, be more powerful than all the supercomputers on the planet combined, and a few hundred qubits could perform more calculations instantaneously than there are atoms in the known universe. A computer with 79 qubits has already been built. In this article, after discussing one of the most popular methods of building qubits, PGS chief scientist and technology commentator Andrew Long addresses the critical phenomena of superposition, entanglement, and interference that make quantum circuits built from qubits so powerful. Consideration is given to one possible way of building qubits with superconducting circuits and to how such devices may be programmed.
Reverse time migration (RTM) is commonly regarded as a memory-intensive operation because of the need to access source and receiver wavefields synchronously, which in principle requires a large amount of storage. Many methods have been proposed to handle the source wavefield efficiently, such as storing the wavefield in several boundary layers around the computational domain for reconstruction. By using the integral solution of the representation theorem, we can reduce the boundary storage to a single layer without compromising accuracy. We propose such a reconstruction of the source wavefield for elastic imaging to reduce memory and computational costs. Numerical RTM examples show that our proposed single-layer wavefield reconstruction is comparable to full storage of the source wavefield. We verify our method for elastic RTM and least-squares RTM (LSRTM) on a distributed acoustic sensing (DAS) vertical seismic profile (VSP) dataset from the Eagle Ford shale formation.
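For intuition, here is a minimal 1D sketch of the conventional boundary-saving idea that this abstract improves upon (grid, wavelet, and sponge parameters are arbitrary choices of ours; a second-order stencil is used, for which a single saved layer per side happens to suffice, whereas the paper's contribution is achieving single-layer storage in the general elastic case via the representation theorem):

```python
import numpy as np

# 1D acoustic forward modeling with a sponge boundary, saving one
# boundary value per side per step, then exact time-reversed
# reconstruction of the interior source wavefield.
nx, nt = 301, 600
dx, dt, c, f0 = 5.0, 0.001, 1500.0, 20.0
r2 = (c * dt / dx) ** 2                      # squared CFL number (0.09)
src = nx // 2

t = np.arange(nt) * dt                       # Ricker source wavelet
tau = np.pi * f0 * (t - 1.2 / f0)
wav = (1.0 - 2.0 * tau**2) * np.exp(-tau**2)

ns = 40                                      # sponge (absorbing) width
g = np.ones(nx)
g[:ns] = np.exp(-0.0015 * (ns - np.arange(ns)) ** 2)
g[-ns:] = g[:ns][::-1]
ib0, ib1 = ns, nx - ns - 1                   # single saved layer per side

# Forward sweep (full history kept here only to verify the result).
w = np.zeros((nt, nx))
bnd = np.zeros((nt, 2))
for n in range(1, nt - 1):
    w[n + 1, 1:-1] = (2 * w[n, 1:-1] - w[n - 1, 1:-1]
                      + r2 * (w[n, 2:] - 2 * w[n, 1:-1] + w[n, :-2]))
    w[n + 1, src] += dt**2 * wav[n]
    w[n + 1] *= g                            # damping acts only at the edges
    bnd[n + 1] = w[n + 1, ib0], w[n + 1, ib1]

# Backward sweep: the last two snapshots plus the saved boundary
# values reconstruct the interior wavefield exactly (g = 1 inside).
v = np.zeros((nt, ib1 - ib0 + 1))
v[nt - 1] = w[nt - 1, ib0:ib1 + 1]
v[nt - 2] = w[nt - 2, ib0:ib1 + 1]
for n in range(nt - 2, 0, -1):
    v[n - 1, 1:-1] = (2 * v[n, 1:-1] - v[n + 1, 1:-1]
                      + r2 * (v[n, 2:] - 2 * v[n, 1:-1] + v[n, :-2]))
    v[n - 1, src - ib0] += dt**2 * wav[n]    # re-inject the source term
    v[n - 1, 0], v[n - 1, -1] = bnd[n - 1]   # impose saved boundary layer

print(np.abs(v - w[:, ib0:ib1 + 1]).max())   # ~ machine precision
```

In 3D elastic RTM the same pattern applies, but higher-order stencils normally force several saved layers per side; the representation-theorem integral is what lets the authors keep only one.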
Presentation Date: Monday, September 16, 2019
Session Start Time: 1:50 PM
Presentation Start Time: 4:20 PM
Presentation Type: Oral