Thiele, Christopher (Rice University) | Araya-Polo, Mauricio (Shell International Exploration & Production, Inc.) | Alpak, Faruk Omer (Shell International Exploration & Production, Inc.) | Riviere, Beatrice (Rice University)
Direct numerical simulation of multiphase pore-scale flow is a computationally demanding task with strong requirements on time-to-solution for the prediction of relative permeabilities. In this paper, we describe the hybrid-parallel implementation of a two-phase two-component incompressible flow simulator using MPI, OpenMP, and general-purpose graphics processing units (GPUs), and we analyze its computational performance. In particular, we evaluate the parallel performance of GPU-based iterative linear solvers for this application, and we compare them to CPU-based implementations of the same solver algorithms. Simulations on real-life Berea sandstone micro-CT images are used to assess the strong scalability and computational performance of the different solver implementations and their effect on time-to-solution. Additionally, we use a Poisson problem to further characterize achievable strong and weak scalability of the GPU-based solvers in reproducible experiments. Our experiments show that GPU-based iterative solvers can greatly reduce time-to-solution in complex pore-scale simulations. On the other hand, strong scalability is currently limited by the unbalanced computing capacities of the host and the GPUs. The experiments with the Poisson problem indicate that GPU-based iterative solvers are efficient when weak scalability is desired. Our findings show that proper utilization of GPUs can help to make our two-phase pore-scale flow simulation computationally feasible in existing workflows.
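The Poisson model problem used for the scalability study above is easy to reproduce. The following is a minimal sketch of the kind of iterative Krylov solver being benchmarked: an unpreconditioned conjugate-gradient solve of a 2D Poisson system, written here in plain NumPy as a CPU stand-in for the GPU kernels. All names are illustrative; this is not the paper's code.

```python
import numpy as np

def poisson_matvec(u, n):
    """5-point Laplacian applied to an n-by-n grid (zero Dirichlet boundaries)."""
    v = u.reshape(n, n)
    out = 4.0 * v.copy()
    out[1:, :] -= v[:-1, :]
    out[:-1, :] -= v[1:, :]
    out[:, 1:] -= v[:, :-1]
    out[:, :-1] -= v[:, 1:]
    return out.ravel()

def cg(matvec, b, tol=1e-8, maxit=500):
    """Conjugate gradients: matvec and vector updates are the GPU hot spots."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 32
b = np.ones(n * n)
x = cg(lambda u: poisson_matvec(u, n), b)
residual = np.linalg.norm(b - poisson_matvec(x, n)) / np.linalg.norm(b)
```

On a GPU, the matrix-vector product and the vector reductions above are each mapped to device kernels; the host-device imbalance noted in the abstract arises because the remaining scalar control flow stays on the CPU.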
In this work, the scalability of the Algebraic Multiscale Solver (AMS) (Wang et al. 2014) for the pressure equation arising from incompressible flow in heterogeneous porous media is investigated on the massively parallel GPU architecture. The solver's robustness and scalability are compared against its carefully optimized implementation on the shared-memory multi-core architecture (Manea et al. 2016), which this work directly extends. Although several components in the AMS algorithm are directly parallelizable, its scalability on GPUs depends heavily on the underlying algorithmic details and data-structure design of each step, where one needs to ensure favorable control- and data-flow on the GPU while extracting enough parallel work for a massively parallel environment. In addition, the type of algorithm chosen for each step greatly influences the overall robustness of the solver. Taking all these constraints into account, we have developed a GPU-based AMS that exploits the parallelism in every module of the solver, including both the setup phase and the solution phase. The performance of AMS, with our carefully optimized algorithmic choices on the GPU for both the setup and solution phases, is demonstrated using highly heterogeneous 3D problems derived from the SPE10 benchmark (Christie et al. 2001). Those problems range in size from millions to tens of millions of cells. The GPU implementation is benchmarked on a massively parallel architecture consisting of NVIDIA Kepler K80 GPUs, and its performance is compared to an optimized multi-core AMS implementation (Manea et al. 2016) running on a shared-memory multi-core architecture consisting of two packages of Intel's Haswell-EP Xeon(R) CPU E5-2667.
While the GPU-based AMS parallel implementation shows good scalability for the solution stage, its setup stage is less efficient than on the CPU, mainly due to its dependence on a QR-based basis-function solver.
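To illustrate the two phases discussed above, here is a generic two-level solver sketch in the same spirit as AMS, not the authors' implementation: the setup phase builds a coarse basis with a dense QR factorization (the kernel identified above as the setup bottleneck), and the solution phase applies coarse correction plus smoothing. The 1D Poisson operator, aggregate size, and damping factor are all illustrative choices.

```python
import numpy as np

n, agg = 64, 4                                          # fine cells, aggregate size
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Poisson stencil

# Setup phase: one piecewise-constant tentative basis vector per aggregate,
# orthonormalized with a dense QR (trivial for one column, dense in general).
nc = n // agg
P = np.zeros((n, nc))
for j in range(nc):
    q, _ = np.linalg.qr(np.ones((agg, 1)))
    P[j * agg:(j + 1) * agg, j:j + 1] = q
Ac = P.T @ A @ P                                        # coarse-grid operator

def two_level_step(x, b, omega=2.0 / 3.0):
    """Solution phase: coarse correction followed by one damped-Jacobi sweep."""
    r = b - A @ x
    x = x + P @ np.linalg.solve(Ac, P.T @ r)            # coarse correction
    r = b - A @ x
    return x + omega * r / np.diag(A)                   # smoothing

rng = np.random.default_rng(0)
b = rng.standard_normal(n)
x = np.zeros(n)
for _ in range(400):
    x = two_level_step(x, b)
err = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

The solution phase is dominated by sparse matrix-vector products, which map well to the GPU; the setup phase is dominated by many small dense factorizations, which explains the efficiency gap reported above.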
The Processing Problem: Can Computers Keep Up With Industry Demand? Big data is one of the big buzzwords in oil and gas operations today. Operators cannot get enough of it. Managing a successful venture requires the ability to extract valuable information from massive data sets and process that information in a quick and efficient manner. High-performance computing (HPC) is critical in making these things happen.
Ice-structure interaction (ISI) is a complex process that requires a thorough understanding of the underlying physics to ensure safe operations in ice-covered regions. Application of the discrete element method (DEM) to compute ice loads on structures is a widely accepted approach in which the equations of rigid-body motion are solved for all ice pieces in the computational domain. In most ISI simulations, the ice zone is assumed to be resting on a static water foundation, omitting the hydrodynamic effects (added mass, water drag, wave damping) of the interacting bodies. This assumption can introduce erroneous results into simulations of floating ice floe behavior, which in turn will incur uncertainties in planning ice management activities.
In this paper, a smoothed particle hydrodynamics (SPH) based computational fluid dynamics (CFD) code is coupled with a three-dimensional DEM model to take into account the hydrodynamic effects of the interacting bodies, including the ice pieces. The ice zone is modeled as discrete elements, which allows interaction forces to be computed from contact laws. The water foundation is modeled using smoothed particles governed by the Navier-Stokes equations.
Several applications of ship and offshore structures interacting with level ice and pack ice are simulated. A scenario of an offshore supply vessel operating in the marginal ice zone (MIZ) and subject to wave forces is also simulated to show how this approach can be used for modelling complex real-world problems. This scenario is unique in the sense that it yields a multi-physics solution, in which ice, structure, and waves are all included in a single CFD simulation as a fully coupled analysis. The cost of the simulation is significantly reduced by running the computations on a Graphics Processing Unit (GPU) instead of a typical CPU workstation. Some initial results of ice-structure interactions are presented in this paper, and reasonable agreement with reduced-scale model test results is found.
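The SPH side of such a coupling rests on a smoothing kernel and the standard density summation. The fragment below is a generic SPH sketch, not the authors' code: a 3D cubic-spline kernel evaluated over a small lattice of water particles, with illustrative particle spacing and reference density.

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 3D cubic-spline SPH kernel with support radius 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)                    # 3D normalization
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

# Uniform lattice of water particles (illustrative values).
dx, rho0 = 0.1, 1000.0            # spacing [m], reference density [kg/m^3]
h = 1.2 * dx                      # smoothing length
m = rho0 * dx ** 3                # particle mass
grid = np.arange(-3, 4) * dx
pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T

# Density summation at the lattice center: rho_i = sum_j m_j W(|r_i - r_j|, h).
center = np.zeros(3)
r = np.linalg.norm(pos - center, axis=1)
rho_center = np.sum(m * cubic_spline_W(r, h))
```

Each particle's summation is independent of the others, which is why SPH, like DEM contact evaluation, maps naturally onto a GPU thread per particle as exploited in the paper.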
Liu, H. Y. (University of Tasmania) | Fukuda, D. (University of Tasmania / Hokkaido University) | Mohammadnejad, M. (University of Tasmania) | Han, Haoyu (University of Tasmania) | Chan, Andrew H. C. (University of Tasmania)
The combined finite-discrete element method has become one of the most powerful numerical methods for modelling the rock failure process in recent decades. However, most studies focus on two-dimensional combined finite-discrete element modelling of the rock failure process. This paper further develops a hybrid finite-discrete element method proposed earlier by the authors for three-dimensional modelling of the rock failure processes in Brazilian tests and the uniaxial compression test. The further developed three-dimensional hybrid finite-discrete element method is then parallelized using a compute unified device architecture (CUDA) based general-purpose graphics processing unit (GPGPU) parallel method to conduct full-scale three-dimensional modelling of the rock spalling failure process in the single Hopkinson pressure bar test. It is concluded that the three-dimensional hybrid finite-discrete element method provides a valuable numerical tool for modelling rock fracture and fragmentation, and that the parallelization makes it applicable to large-scale rock mass instability problems in engineering.
The study of the rock failure process has been a challenging but active topic, since rock fracture has applications not only in breaking rock masses to extract valuable natural resources in the mining, geothermal, and oil & gas industries, but also in preventing geotechnical engineering structures such as tunnels, slopes, and dams from failure and collapse. In recent decades, numerical methods have been among the most powerful tools for studying the rock failure process, and the combined finite-discrete element method initially proposed by Munjiza (2004) has become one of the most powerful numerical methods for modelling it. Compared with the finite element method, the combined finite-discrete element method is more robust in modelling rock failure, especially fracture, fragmentation, and fragment movements resulting in tertiary fractures. Compared with the discrete element method, the combined finite-discrete element method is more versatile in dealing with irregularly shaped, deformable, and breakable particles. However, most studies in the literature focus on modelling the rock failure process using two-dimensional (2D) finite-discrete element methods (Mahabadi et al., 2010; Liu, 2013; Lisjak et al., 2014; Liu et al., 2015 and 2016; Mahabadi et al., 2016; An et al., 2017). Thanks to the rapid development of computing power, interactive computer graphics, and topological data structures, three-dimensional (3D) finite-discrete element modelling of the rock failure process has attracted the attention of more and more researchers. Rougier et al. (2014) simulated the dynamic rock failure process in the dynamic Brazilian test using a 3D combined finite-discrete element method, i.e. the so-called MUNROU (Munjiza-Rougier) code, running on a supercomputer with a few hundred CPUs at Los Alamos National Laboratory. Mahabadi et al.
(2014) implemented a 3D combined finite-discrete element method to investigate the rock failure process in the Brazilian disc test and the uniaxial compression test, although their 3D modelling of the uniaxial compression test is far from satisfactory. Hamdi et al. (2014) simulated the complete 3D fracture process during conventional laboratory testing, including Brazilian indirect tension and uniaxial and biaxial compression, using a combined finite-discrete element method called ELFEN developed by Rockfield Ltd. In this study, a hybrid finite-discrete element method proposed by Liu et al. (2015) on the basis of Munjiza's (2004) open-source combined finite-discrete element libraries is further developed for three-dimensional modelling of the rock failure processes in Brazilian tests and the uniaxial compression test, which extends a recent study on 3D hybrid finite-discrete element modelling conducted by the authors (Liu et al., 2018). Moreover, the further developed 3D hybrid finite-discrete element method is parallelized using the GPGPU (general-purpose graphics processing unit) parallel method initially implemented in the DFPA (dynamic failure process analysis) code (Fukuda et al., 2016) to conduct full-scale 3D modelling of the single Hopkinson pressure bar test on the rock spalling failure process. Unlike the modelling of Rougier et al. (2014), and probably that of Hamdi et al. (2014) (although this is unclear since it is not stated in their paper), which was completed on supercomputers with hundreds of CPUs, all of the 3D modelling reported in this paper is completed on PCs, although the rock spalling test is modelled using a PC with a powerful GPU.
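The reason a single GPU can substitute for hundreds of CPUs here is that the FEM/DEM time loop advances every node with an independent explicit central-difference update, one GPU thread per node. The sketch below shows that per-node pattern in vectorized NumPy; it is a generic illustration with made-up values, not the DFPA code, and uses only gravity loading so the result is easy to check.

```python
import numpy as np

n_nodes = 1000
mass = np.full(n_nodes, 2.0)                   # lumped nodal masses [kg]
vel = np.zeros((n_nodes, 3))                   # nodal velocities [m/s]
pos = np.random.default_rng(1).random((n_nodes, 3))
dt = 1e-6                                      # explicit time step [s]

def step(pos, vel, force):
    """One explicit central-difference step; each row (node) is independent."""
    vel = vel + dt * force / mass[:, None]
    pos = pos + dt * vel
    return pos, vel

# Gravity-only loading for this demo; in FEM/DEM the force would also gather
# element internal forces and contact forces before the nodal update.
gravity = np.tile([0.0, 0.0, -9.81], (n_nodes, 1)) * mass[:, None]
for _ in range(1000):
    pos, vel = step(pos, vel, gravity)
# After 1000 steps (1 ms) of free fall, every node has vz close to -9.81e-3 m/s.
```

In the GPU version, the rows of this update become independent CUDA threads; the expensive parts that remain are the force gather and contact detection, which require more careful data-structure design.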
Fukuda, D. (Hokkaido University / University of Tasmania) | Liu, H. (University of Tasmania) | Mohammadnejad, M. (University of Tasmania) | Chan, A. (University of Tasmania) | Cho, S. H. (Chonbuk National University) | Min, G. J. (Chonbuk National University) | Kodama, J. (Hokkaido University) | Yoshiaki, F. (Hokkaido University)
This paper introduces the Y-HFDEM code based on the two-dimensional combined finite-discrete element method (FEM/DEM) for numerical simulation of the fracturing process in brittle and semi-brittle materials, including rocks. The code has been successfully employed to simulate rock breakage under both quasi-static (e.g. uniaxial compression) and dynamic (e.g. rock blasting) loading conditions. However, the most challenging part in the application of the original Y-HFDEM code was the simulation time required to solve large-scale problems with massive numbers of nodes, elements, and contact interactions. To overcome this limitation, this paper demonstrates the application of GPGPU (general-purpose graphics processing unit) computing and CUDA (Compute Unified Device Architecture) C/C++ to parallelize the original sequential 2D Y-HFDEM code along with the related numerical algorithms. Results obtained from verification examples demonstrate the capability of the proposed Y-HFDEM code in modelling larger-scale problems in which massive computational effort is required.
Understanding the mechanism of the fracture process in rocks is important in the fields of civil and mining engineering. Numerical methods have increasingly been applied to analyze the fracture process of rocks. For a realistic simulation of the fracture process of rock, numerical techniques must be capable of capturing crack onset and arbitrary crack growth, the correct crack length within a given time interval, as well as the propagation directions. In recent years, increasing attention has been paid to techniques that bring together the advantages of continuum-based and discontinuum-based computational methods. The combined finite-discrete element method (FEM/DEM) proposed by Munjiza (2004) has been employed successfully to model problems dealing with the transition from continuum to discontinuum, such as rock fracturing and fragmentation (Mohammadnejad et al., 2018). ELFEN (2D/3D) (Rockfield, 2005) and the Y (2D/3D) code (Munjiza, 2004) are the two main implementations of the combined FEM/DEM. Several attempts have been made to extend the Y code, such as Y-GEO (Mahabadi et al., 2012), IRAZU (Mahabadi et al., 2016), SOLIDITY (Xiang et al., 2016), HOSS with MUNROU (Rougier et al., 2014), and the authors' Y-HFDEM (Liu et al., 2015). The principles of the combined FEM/DEM are based on continuum mechanics, cohesive zone modelling, and contact mechanics, which make it computationally expensive. Therefore, developing a capable parallel computation scheme is important in order to deal with larger-scale problems with massive numbers of nodes, elements, and contact interactions.
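Cohesive zone modelling, one of the cost drivers named above, evaluates a traction-separation law at every cohesive element on every step. A minimal Mode-I bilinear (linear-softening) law is sketched below; all parameter values are invented for illustration and unloading or contact behavior is not handled, so this is a generic textbook law rather than the Y-HFDEM implementation.

```python
import numpy as np

ft = 10e6                  # tensile strength [Pa]
Gf = 100.0                 # mode-I fracture energy [J/m^2]
delta_p = 1e-6             # opening at peak traction [m]
delta_c = 2.0 * Gf / ft    # opening at full separation; triangle area = Gf

def cohesive_traction(delta):
    """Normal traction vs. crack opening (monotonic opening only)."""
    delta = np.asarray(delta, dtype=float)
    rising = ft * delta / delta_p                          # pre-peak branch
    softening = ft * (delta_c - delta) / (delta_c - delta_p)  # post-peak branch
    return np.where(delta <= delta_p, rising,
                    np.where(delta < delta_c, softening, 0.0))
```

Because the law is evaluated element-by-element with no coupling between elements, it parallelizes the same way as the nodal update: one GPU thread per cohesive element.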
Choi, Youngsu (Ship & Offshore Research Institute, Samsung Heavy Industries Co., Ltd.) | Heo, Heeyoung (Ship & Offshore Research Institute, Samsung Heavy Industries Co., Ltd.) | Lee, Jeonghoon (Ship & Offshore Research Institute, Samsung Heavy Industries Co., Ltd.) | Park, Inha (Ship & Offshore Research Institute, Samsung Heavy Industries Co., Ltd.) | Park, Jungseo (Ship & Offshore Research Institute, Samsung Heavy Industries Co., Ltd.)
In this paper, we propose an interactive paperless solution for providing shipbuilding information using a mobile device. This solution not only retrieves product and fabrication information such as 3D models, drawings, and ERP data through a mobile device, but also digitizes stamping and production data created by field workers. The digitized information is stored in a central DB, so it can be referenced whenever and wherever it is needed. Furthermore, this solution overcomes the disadvantages of using printed drawings. The developed solution was applied to an actual fabrication shop, and productivity and quality were improved.
SUMMARY
Quantum computers hold the promise of being able to solve some challenging problems sometime in the next decade or two. Here we explore how to exploit their potential power in an unusual tomographic challenge: Can one estimate material percentages such as net-to-gross or sand-shale ratios using sparse offset-traveltime information? By restricting the constituent materials in an isotropic horizontally-stratified medium to a specific set with well-known acoustic (or elastic) properties, transmission tomography can, in principle, provide the relative fraction of each material within the set of layers over which the ray traverses. In this paper, we formulate an algorithm for this "super-resolution" calculation suitable for quantum computing.
INTRODUCTION
In a horizontally-stratified isotropic medium, one may arbitrarily permute the layers between source and receivers without changing the recorded traveltime of the direct arrival.
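A classical toy version of this problem makes the permutation-invariance point concrete: for equal-thickness layers drawn from a known two-material set, the vertical traveltime depends only on the material counts, so net-to-gross follows by enumerating candidate counts. All values below are invented for illustration; the quantum formulation in the paper targets the same combinatorial search at scales where enumeration becomes infeasible.

```python
n_layers = 10
dz = 10.0                                # layer thickness [m]
s_sand = 1.0 / 3000.0                    # sand slowness [s/m]
s_shale = 1.0 / 2200.0                   # shale slowness [s/m]

true_sand = 6                            # hidden ground truth
t_obs = dz * (true_sand * s_sand + (n_layers - true_sand) * s_shale)

# Traveltime is invariant under layer permutation, so only the sand count
# matters: enumerate candidate counts and pick the one matching t_obs.
best = min(range(n_layers + 1),
           key=lambda k: abs(dz * (k * s_sand + (n_layers - k) * s_shale) - t_obs))
net_to_gross = best / n_layers
```

With two materials and distinct slownesses the traveltime is strictly monotonic in the sand count, so the answer is unique; with many materials the search space grows combinatorially, which is the regime the paper aims at.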
Moradi, Shahpoor (University of Calgary, Department of Geoscience, Calgary, Canada) | Trad, Daniel (University of Calgary, Department of Geoscience, Calgary, Canada) | Innanen, Kristopher A. (University of Calgary, Department of Geoscience, Calgary, Canada)
Accurate modeling of seismic wave propagation in the subsurface of the earth is essential for understanding earthquake dynamics, characterizing seismic hazards on global scales, and hydrocarbon reservoir exploration and monitoring on local scales. These are among the most challenging computational problems in geoscience. Despite algorithmic advances and the increasingly powerful computational resources currently available, including fast CPUs, GPUs, and large volumes of computer memory, there are still daunting computational challenges in simulating 3D seismic wave propagation in complex earth environments. Recent advances in quantum computing suggest that geoscience may soon begin to benefit from this promising field. For example, finite-difference (FD) modeling is the most widely used method to simulate seismic wave propagation. In the frequency domain, FD methods reduce solutions of the wave equation to systems of linear equations; such systems are just the type that quantum algorithms may be capable of solving with exponential speedup in comparison with classical algorithms. For the computational geophysicist, to prepare to take advantage of these speed-ups, which could arrive in as few as 5-10 years, the tasks at hand are (1) to become familiar with the logic and concepts associated with quantum computing, and (2) to map our key computational algorithms (e.g., frequency-domain FD) to this domain.
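The linear system referred to above is easy to exhibit in one dimension: a frequency-domain FD discretization of the wave equation yields a Helmholtz system A(omega) u = f, which a classical solver factorizes directly and a quantum linear-system algorithm would target instead. The sketch below assembles and solves such a system with illustrative grid and model values; it is a generic textbook discretization, not the paper's scheme.

```python
import numpy as np

n, dx = 200, 5.0                   # grid points, spacing [m]
c = np.full(n, 2000.0)             # constant velocity model [m/s]
freq = 10.0                        # frequency [Hz]
omega = 2.0 * np.pi * freq
k2 = (omega / c) ** 2              # squared wavenumber per grid point

# Second-order FD Helmholtz operator u'' + k^2 u with Dirichlet ends:
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2 + np.diag(k2)
f = np.zeros(n)
f[n // 2] = 1.0                    # point source at mid-grid

u = np.linalg.solve(A, f)          # one linear solve per frequency
residual = np.linalg.norm(A @ u - f)
```

In 3D the same construction produces sparse systems with billions of unknowns, one per frequency, which is what makes an exponential quantum speedup for linear solves so attractive here.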
Presentation Date: Monday, October 15, 2018
Start Time: 1:50:00 PM
Location: 204B (Anaheim Convention Center)
Presentation Type: Oral
Duan, Peiran (School of Geosciences, China University of Petroleum-East China) | Gu, Bingluo (School of Geosciences, China University of Petroleum-East China) | Li, Zhenchun (School of Geosciences, China University of Petroleum-East China)
Based on the full two-way wave equation for wavefield extrapolation, reverse time migration (RTM) is considered a powerful imaging technique that avoids approximations to the wave equation and is not limited by dip or extreme lateral velocity variation. However, the algorithm suffers from very high computational costs and storage requirements. In this study, we introduce a curvilinear coordinate system in which the depth axis of the Cartesian coordinates is converted to a vertical time axis to overcome oversampling problems in high-velocity regions, and we derive the first-order velocity-stress seismic wave equation in the vertical time domain. This transformation has a particularly strong effect in areas with large velocity contrasts, especially in mid-to-deep high-velocity areas. In addition, we deduce a difference formula for the approximate source wavefield using an optimization operator, reconstruct the wavefield in the computational area by storing the optimization operator of each point around each time slice, and use this optimization-operator boundary-storage strategy to improve the 3D RTM algorithm in the vertical time domain. Our RTM algorithm succeeds in accurately imaging complex structures while reducing storage by 68.4% and computation time by 35%.
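The boundary-storage idea behind such savings can be shown in one dimension: propagate the source wavefield forward storing only the boundary values per time step, then rebuild the earlier time slices by running the same scheme in reverse from the last two snapshots. This is a generic illustration of the strategy with made-up values, not the authors' 3D vertical-time-domain implementation; the full history is kept here only to verify the reconstruction.

```python
import numpy as np

nx, nt, dx, dt, c = 200, 300, 5.0, 1e-3, 2000.0
r2 = (c * dt / dx) ** 2                     # Courant number squared (stable, 0.16)
src = np.exp(-(((np.arange(nt + 1) * dt) - 0.05) / 0.01) ** 2)  # boundary source

u_prev, u_cur = np.zeros(nx), np.zeros(nx)
bounds = np.zeros((nt + 1, 2))              # stored boundary pair per time step
history = [u_prev.copy(), u_cur.copy()]     # kept only for verification
for n in range(1, nt):                      # forward propagation
    u_next = np.zeros(nx)
    u_next[1:-1] = (2 * u_cur[1:-1] - u_prev[1:-1]
                    + r2 * (u_cur[2:] - 2 * u_cur[1:-1] + u_cur[:-2]))
    u_next[0] = src[n + 1]                  # time-varying left boundary
    bounds[n + 1] = u_next[0], u_next[-1]
    history.append(u_next.copy())
    u_prev, u_cur = u_cur, u_next

# Backward pass: reconstruct earlier slices from the final two plus boundaries.
u_next, u_cur = history[-1].copy(), history[-2].copy()
for n in range(nt - 1, 1, -1):
    u_prev = np.zeros(nx)
    u_prev[1:-1] = (2 * u_cur[1:-1] - u_next[1:-1]
                    + r2 * (u_cur[2:] - 2 * u_cur[1:-1] + u_cur[:-2]))
    u_prev[0], u_prev[-1] = bounds[n - 1]   # boundary values from storage
    u_next, u_cur = u_cur, u_prev

err = np.max(np.abs(u_cur - history[1]))    # reconstruction error at step 1
```

Because only the boundary strip is stored per step instead of the full wavefield, storage drops from O(nx * nt) to O(nt) in 1D, the same trade the paper makes in 3D with its optimized operator at the boundary of each time slice.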
Presentation Date: Tuesday, October 16, 2018
Start Time: 1:50:00 PM
Location: 207A (Anaheim Convention Center)
Presentation Type: Oral