Chemistry solutions and equipment technologies company ChampionX is combining its research and development capabilities with the materials and processes of nanotechnology company Modumetal in an exclusive collaboration agreement to drive production-related technology development. The collaboration will introduce ChampionX’s Norris rod couplings coated with Modumetal’s nanolaminate technology.
As the engine of seismic imaging algorithms, stencil kernels modeling wave propagation are both compute- and memory-intensive. This work targets improving the performance of wave-equation-based stencil code parallelized with OpenMP on a multi-core CPU. To achieve this goal, we explored two techniques: improving vectorization by using hardware SIMD technology, and reducing memory traffic to mitigate the bottleneck caused by limited memory bandwidth. We show that with loop interchange, memory alignment, and compiler hints, both the icc and gcc compilers can produce fully vectorized stencil code of any order with performance comparable to that of SIMD intrinsic code. To reduce cache misses, we present three methods in the context of OpenMP parallelization: rearranging the loop structure, blocking thread accesses, and temporal loop blocking. Our results demonstrate that fully vectorized high-order stencil code is about 2X faster when implemented with either of the first two methods, and fully vectorized low-order stencil code is about 1.2X faster when implemented with the combination of the last two methods. Our final best-performing code achieves 20%–30% of peak GFLOPs/sec, depending on stencil order and compiler.
Baskaran, Muthu (Reservoir Labs Inc.) | Vasilache, Nicolas (Reservoir Labs Inc.) | Meister, Benoit (Reservoir Labs Inc.) | Datta, Kaushik (Reservoir Labs Inc.) | Hartono, Albert (Reservoir Labs Inc.) | Lethin, Richard (Reservoir Labs Inc.)
The need to solve discretized partial differential equations over large volumes of data in seismic computing comes with the need to improve the efficiency of such computations through high-performance computing techniques. Recently, GPUs have emerged as suitable candidates for high-performance accelerators in seismic computing applications. In fact, there is tremendous focus in supercomputing research on efficiently using GPUs to speed up such applications. Most of this research focuses on writing hand-optimized GPU parallel codes. In this paper, we describe our technique to
We present several strategies to optimize the modeling of the acoustic wavefield with the time-domain finite-difference method. An efficient vectorization of the computations is achieved on Intel Xeon computing cores by modifying the implementation of the absorbing layers. We also propose to increase the computational speed by using high orders in space and by solving the second-order wave equation instead of the first-order formulation. Combined, these strategies reduce the computation time by a factor of 27 for modeling in the SEG SEAM II model. We show that the optimized algorithm has quasi-perfect scalability on one dual-socket node.
Geospatial change-detection techniques using temporal high-resolution satellite imagery and advanced GIS help in understanding and monitoring local, regional, and national landscape dynamics, including land uses and land covers such as socio-economic and natural characteristics of the earth's surface, which are mainly used to monitor economic activities and natural disasters over time. The paper first reviews how the current land-use change detection workflow and land management system in Saudi Aramco are designed and implemented with advanced geospatial technology, and how they have been operating for a decade. Some of the new operational challenges we are facing are also briefly discussed. The study presents an innovative pilot applying advanced big data analytics and machine learning techniques to tackle those operational challenges. The pilot aims to find a breakthrough technique and define a new workflow for processing very-high-resolution optical satellite images and intelligently detecting land-use changes through seamless integration of GPU-accelerated machine learning models, mainly deep neural networks (MXNet) and support vector machines (SVM) in R and Python, with a focus on automated land-use change detection and encroachment recognition. Finally, the paper reports promising prototype results in a purely GPU-accelerated computing environment, which demonstrate clear potential for handling time-series raster imagery for land-use change detection through deep machine learning.