Abstract In the S-Field operations office, a daily battle ensues in the quest to increase production and maximize profits from waterflooding. One of the main control mechanisms for optimizing waterflooded reservoirs is adjusting water injection rates and producer pumping rates to balance patterns, maximize sweep, and maintain reservoir pressure. The reservoir surveillance team had been using a simple spreadsheet-based analytical approach that became quite limiting as the number of injection patterns increased and the flood matured, leading to water breakthrough. There was a need for a more sophisticated approach that could leverage artificial intelligence (AI) technology, especially since the entire asset was undergoing significant digitalization of its operations. This paper presents various innovations in bringing real applications of AI to waterflood management, including innovations in business processes, application of design thinking methodology, agile development, and AI. The AI waterflood management solution combines cloud technologies, big data processing, data analytics, machine learning algorithms, robotics, sensors and monitoring systems, automation, edge gateways, and augmented and virtual reality (AR/VR). Design thinking principles and a human-centric approach within an agile innovation framework were utilized for rapid prototyping and deployment. A waterflood management framework that addressed the operational, tactical, and strategic aspects of the business created the backdrop for designing the solution architecture. New injector-producer modeling techniques that leveraged AI and were fit for purpose for reservoir surveillance and production engineers were prototyped. An interactive pattern flood management tool, adapted from streamline-simulation-based waterflood analysis methods, was developed for injection pattern analysis and an intelligent optimization workflow.
Field pilot testing for over a year proved that the prototype could reliably detect injector-producer interactions and recommend operating set points within a relevant timeframe. Reduced time to decision, improved analysis efficiency and reliability of short-term forecasts, reduced field visits and health-safety-environment (HSE) exposure, and improved ease of use have all been experienced. The learnings from this project are being leveraged to develop a deployable solution and move the needle toward autonomous waterflood operations.
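The abstract does not disclose the injector-producer models themselves. Purely as an illustration of the kind of interaction detection described, the sketch below scores injector-producer connectivity with a lagged cross-correlation on synthetic rate data; the delay, allocation factor, and all numbers are assumptions, not the authors' method.

```python
import numpy as np

def connectivity_score(inj_rate, prod_rate, max_lag=10):
    """Return (lag, correlation) with the strongest injector-producer link."""
    best_lag, best_r = 0, 0.0
    for lag in range(max_lag + 1):
        x = inj_rate[:len(inj_rate) - lag] if lag else inj_rate
        y = prod_rate[lag:]
        r = np.corrcoef(x, y)[0, 1]
        if abs(r) > abs(best_r):
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic example (assumed): producer responds to injection with a
# 3-step delay and a 0.6 allocation factor, plus measurement noise.
rng = np.random.default_rng(0)
inj = rng.normal(1000.0, 50.0, 200)
prod = 0.6 * np.roll(inj, 3) + rng.normal(0.0, 5.0, 200)
lag, r = connectivity_score(inj, prod)
```

A surveillance tool would run such a scan per injector-producer pair and flag strong, physically plausible lags for engineer review.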
- Asia > Middle East (0.46)
- Europe > United Kingdom > North Sea > Central North Sea (0.24)
- Energy > Oil & Gas > Upstream (1.00)
- Water & Waste Management > Water Management > Lifecycle > Disposal/Injection (0.49)
- Water & Waste Management > Water Management > Lifecycle > Treatment (0.46)
- Europe > Netherlands > German Basin (0.99)
- Europe > Germany > German Basin (0.99)
- Europe > Denmark > German Basin (0.99)
- Asia > China > Xinjiang Uyghur Autonomous Region > Junggar Basin > Huoshaoshan Field (0.99)
Coupling Numerical Dynamic Models of PTA and RTA with Detailed Geological Models Integrating Quantitative Analysis, Using Rock Physics, Seismic Inversion and Nonlinear Techniques in Siliciclastic Reservoirs
Alcantara, Ricardo (PEMEX E&P) | Santiago, Luis Humberto (PEMEX E&P) | Melquiades, Ilse Paola (PEMEX E&P) | Ruiz, Blanca Estefani (PEMEX E&P) | Ricardez, Jorge (PEMEX E&P) | Mendez, Cesar Israel (PEMEX E&P) | Mercado, Victor (SEG)
Abstract The study area comprises an oil play with numerous opportunities, identifying sandstone sequences with proven potential. The main sequence was deposited in the Upper Miocene within a transitional environment (outer neritic), resulting in the formation of bars in deltaic facies and channels, which represent excellent quality and lateral extension of the storage rock, but also complexity due to internal variability. The trap is structural with closure against faults, formed in an extensional tectonic regime that gave rise to normal faulting and increased the degree of complexity of reservoir characterization. Seismic data carry accurate information about these characteristics; however, they are insufficient to resolve the vertical-scale variability, so incorporating all the information obtained in the wells through seismic inversion is essential when characterizing highly heterogeneous, thin reservoirs. Furthermore, geostatistical inversion combines Bayesian inference with a sampling algorithm called Markov Chain Monte Carlo (MCMC), which makes it possible to incorporate all the information from well logs, geological information, geostatistical parameters, and seismic data, generating models that honor the input data (Hameed et al., 2011). Additionally, the method provides a solution to the problem of non-uniqueness of the results, based on a statistical distribution of the multiple realizations derived from the initial model. This work proposes a workflow that integrates quantitative analysis, establishing a direct link between seismic measurements and well logs, which, when combined with non-linear techniques such as geostatistical seismic inversion, can minimize the differences in scales, yielding models that are more predictive and include uncertainty quantification.
The static workflow consists of six main components: pre-stack gather conditioning; curve modeling by rock physics (Vp, Vs, and Rho); geostatistical seismic inversion (P-impedance, Vp/Vs ratio, density); determination of facies cubes (oil sand, brine sand, and shale) and petrophysical properties (Vcl, Phie, permeability) using a robust algorithm combining Bayesian inference and Markov Chain Monte Carlo (MCMC); quantification of uncertainty and volumetric estimation by ranking multiple realizations (P10, P50, P90); and transfer to a geological mesh (upscaling) ready for numerical simulation without the use of typical extrapolation algorithms such as kriging or Sequential Gaussian Simulation (SGS). This minimizes the scale differences, yielding models that are more predictive and capable of estimating uncertainty. With the results obtained, redefined geo-bodies were extracted, discriminating the sandstones with good rock quality from those that both have good rock quality and bear hydrocarbons, to achieve greater precision in the development of these fields. Subsequently, the dynamic information was coupled to analyze the existing Pressure Transient Analyses (PTA), which have identified pseudo-steady state, and the Rate Transient Analyses (RTA) to numerically model the response, checking the volumes obtained previously. Additionally, a benchmark against more than 590 oil-producing siliciclastic fields worldwide was considered, covering the main fluid properties, porosity, facies, depositional environments, and drive mechanisms, thus identifying new development opportunities with less uncertainty.
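The geostatistical inversion above relies on Markov Chain Monte Carlo sampling. As a minimal illustration of MCMC itself, not of the inversion workflow described in the paper, here is a random-walk Metropolis sampler on a toy one-dimensional posterior; the prior, likelihood, and all numbers are assumptions for demonstration only.

```python
import math, random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler for a 1-D posterior."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + random.gauss(0.0, step)       # propose a move
        lpp = log_post(xp)
        if math.log(random.random()) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior: Gaussian prior (mean 0, variance 4) times a Gaussian
# likelihood around an "observed" contrast of 2.0, purely illustrative.
log_post = lambda m: -0.5 * m**2 / 4.0 - 0.5 * (m - 2.0) ** 2 / 0.25
samples = metropolis_hastings(log_post, x0=0.0)
```

After burn-in, the sample histogram approximates the posterior, which is how the multiple realizations that feed the P10/P50/P90 ranking are generated in principle.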
- North America > United States > Texas (0.28)
- Asia > Middle East (0.28)
- Geology > Sedimentary Geology (1.00)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock > Sandstone (1.00)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock > Mudrock > Shale (0.55)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (1.00)
- Geophysics > Seismic Surveying > Seismic Interpretation (0.94)
- Geophysics > Seismic Surveying > Seismic Processing (0.93)
- North America > United States > Louisiana > Lula Field (0.99)
- Asia > Middle East > Kuwait > Ahmadi Governorate > Arabian Basin > Widyan Basin > Umm Gudair Field > Marrat Formation > Sargelu Formation (0.99)
- Asia > Middle East > Kuwait > Ahmadi Governorate > Arabian Basin > Widyan Basin > Umm Gudair Field > Marrat Formation > Najmah Formation (0.99)
Abstract Stress-induced borehole wall failure during oil and gas well drilling can lead to an over-gauged wellbore. Borehole instability leading to an excessively enlarged wellbore can not only increase the risk of drilling difficulties but can also cause completion integrity issues. For instance, there is a high probability of poor cementing or compromised zonal isolation across over-gauged wellbore sections in cased and cemented completions, or it can result in loss of packer sealing in open hole multi-stage (MSF) completions. Both of these inadequacies, poor cementing and insufficient packer sealing, can jeopardize well stimulation operations. Hence, knowing the accurate wellbore shape and diameter is critical for multistage well completion requirements. While wireline multi-arm mechanical caliper tools can be run in open hole after drilling to obtain direct measurements of hole size, such measurements are not always possible due to prevailing drilling difficulties. Under such circumstances, making completion decisions becomes even more difficult, with a higher risk of failure. Moreover, a dedicated wireline logging run to acquire caliper data incurs additional cost, including rig time. The objective of this paper is to predict a synthetic caliper from logging-while-drilling (LWD) measurements (notably azimuthal density and petrophysical properties) with the help of artificial intelligence (AI) and machine learning algorithms, so that it can be used in place of wireline caliper data, saving rig and logging costs and hence reducing overall field development expenditure. This paper describes a machine learning algorithm to predict mechanical caliper logs using LWD data. Available LWD data from different wells were used to build a robust machine learning model in Python. Different logging parameters from LWD were tested to quantify their sensitivity to the caliper data and then predict the maximum hole diameter (Cmax) for different wells.
The LWD data combine several parameters that have a direct effect on the caliper log; the machine learning model combines them across different wells for training and then predicts the caliper log from these parameters. The parameters used in the prediction are gamma ray, four azimuthal photoelectric factors, sonic data, porosity, formation bulk density, four azimuthal formation densities, and mineralogy. The methodology was tested on horizontal wells drilled with a 5-7/8" drill bit. The predicted caliper is compared with measured caliper data to verify that it satisfies completion requirements for both MSF and cased hole completions.
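The paper's Python model is not reproduced in the abstract. As a minimal sketch of the idea, the code below regresses a synthetic Cmax on LWD-like features with ordinary least squares; the data-generating rule, coefficients, and feature set are entirely assumed, and the real workflow would use measured logs and a richer ML model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
gr = rng.uniform(20, 150, n)       # gamma ray, gAPI
rhob = rng.uniform(2.2, 2.7, n)    # bulk density, g/cc
phi = rng.uniform(0.05, 0.30, n)   # porosity, v/v

# Assumed synthetic rule: washout grows with GR and porosity and shrinks
# with density, around the 5-7/8" bit size mentioned in the paper.
cmax = (5.875 + 0.004 * gr + 2.0 * phi
        - 0.5 * (rhob - 2.45) + rng.normal(0, 0.05, n))

# Fit a linear model with an intercept and score it with R^2.
X = np.column_stack([np.ones(n), gr, rhob, phi])
coef, *_ = np.linalg.lstsq(X, cmax, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((cmax - pred) ** 2) / np.sum((cmax - cmax.mean()) ** 2)
```

The same fit/predict/score loop, with the synthetic features replaced by real LWD channels and the linear model by the chosen ML algorithm, is the skeleton of a synthetic-caliper predictor.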
- Asia > Middle East (0.28)
- North America > United States > Texas > Coleman County (0.24)
- Well Drilling > Drilling Measurement, Data Acquisition and Automation > Logging while drilling (1.00)
- Well Completion (1.00)
- Reservoir Description and Dynamics > Formation Evaluation & Management > Open hole/cased hole log analysis (1.00)
- Data Science & Engineering Analytics > Information Management and Systems > Artificial intelligence (1.00)
Machine Learning Sweet Spot Identification and Performance Validation Utilising Reservoir and Completion Data from Unconventional Reservoir in British Columbia, Canada
Leem, Junghun (PETRONAS) | Musa, Ikhwanul Hafizi (PETRONAS) | Mazeli, Abd Hakim (PETRONAS) | Che Yusoff, M Fakharuddin (PETRONAS) | Jowett, David (PETRONAS Canada) | Redpath, Darcy (PETRONAS Canada) | Saltman, Peter (PETRONAS Canada)
Abstract Production in an unconventional reservoir varies widely depending on reservoir characteristics (e.g., thickness, permeability, brittleness, natural fracturing) and completion design (e.g., well spacing, frac spacing, proppant volume). A comprehensive method of data analytics and predictive machine learning (ML) modeling was developed and deployed in the Montney unconventional siltstone gas reservoir, British Columbia, Canada, to identify production "sweet spots" from reservoir quality data (i.e., geological, geophysical, and geomechanical) and completion quality data (e.g., frac spacing, fluid volume, and proppant intensity), which were utilized to enhance and optimize production performance of this unconventional reservoir. Typical data analytics and predictive ML modeling utilize all the reservoir quality data and completion quality data together. The completion quality data tend to dominate over the reservoir quality data because of a higher statistical correlation (i.e., weight) of the completion data with observed production. Hence, the resulting predictive ML models commonly underestimate the effects of reservoir quality on production and exaggerate the influence of the completion quality data. To overcome these shortcomings, the reservoir quality data and the completion quality data are separated and normalized independently. The normalized reservoir and completion quality data are utilized to identify sweet spots and optimize completion design, respectively, through predictive ML modeling. This novel methodology of predictive ML modeling has identified sweet spots from key controlling reservoir quality data, as well as prescribed optimal completion designs from key controlling completion quality data.
The trained predictive ML model was blind-tested (R = 79.0%) against one year of cumulative production from 6 Montney wells in the Town Pool, and was also validated against recent completions from 6 other Town Pool Montney wells (R = 78.7%).
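The key methodological step above is normalizing the reservoir quality and completion quality feature groups independently so that neither dominates by sheer scale. A minimal sketch of that step follows; the function name, interface, and example values are assumptions, not taken from the paper.

```python
import numpy as np

def normalize_groups(reservoir_q, completion_q):
    """Z-score each feature group independently, so completion features
    cannot dominate reservoir features through larger numeric ranges."""
    def z(a):
        return (a - a.mean(axis=0)) / a.std(axis=0)
    return z(reservoir_q), z(completion_q)

# Assumed example: 3 wells, 2 reservoir features (e.g., thickness,
# brittleness) and 2 completion features (e.g., fluid volume, spacing).
rq = np.array([[10.0, 0.2], [12.0, 0.3], [14.0, 0.4]])
cq = np.array([[5000.0, 40.0], [7000.0, 55.0], [9000.0, 70.0]])
rq_n, cq_n = normalize_groups(rq, cq)
```

After this step, each group can feed its own predictive model: the reservoir group for sweet-spot mapping, the completion group for design optimization.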
- North America > Canada > British Columbia (1.00)
- North America > Canada > Alberta (0.91)
- Geology > Geological Subdiscipline > Geomechanics (1.00)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock > Mudrock (0.36)
- North America > United States > West Virginia > Appalachian Basin > Marcellus Shale Formation (0.99)
- North America > United States > Virginia > Appalachian Basin > Marcellus Shale Formation (0.99)
- North America > United States > Pennsylvania > Appalachian Basin > Marcellus Shale Formation (0.99)
- Well Completion > Hydraulic Fracturing > Fracturing materials (fluids, proppant) (1.00)
- Reservoir Description and Dynamics > Unconventional and Complex Reservoirs (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization (1.00)
- Data Science & Engineering Analytics > Information Management and Systems (1.00)
Rapid Simulation and Optimization of Geological CO2 Sequestration Using Coarse Grid Network Model
Aslam, Billal (King Abdullah University of Science and Technology) | Yan, Bicheng (King Abdullah University of Science and Technology) | Tariq, Zeeshan (King Abdullah University of Science and Technology) | Krogstad, Stein (SINTEF Digital) | Lie, Knut-Andreas (SINTEF Digital)
Abstract Large-scale CO2 injection for geo-sequestration in deep saline aquifers can significantly increase reservoir pressure, which, if not appropriately managed, can lead to potential environmental risk. Brine extraction from the aquifer has been proposed as a method to control the reservoir pressure and increase storage capacity. However, iterative optimization of the well controls for this scenario using high-resolution dynamic simulation models can be computationally expensive. In this paper, we demonstrate the application of a so-called coarse-grid network model (CGNet) as a reduced-order model for efficient simulation and optimization of CO2 sequestration with brine extraction. As a proxy, CGNet is configured by aggressively coarsening the fine-scale grid and then tuning the parameters of the associated simulation graph (transmissibility, pore volumes, well indices, and relative permeability endpoints) by minimizing the mismatch of well-response data (rates, bottom-hole pressure) and saturation distribution from the fine-scale model. Calibration and optimization procedures are automated using gradient-based optimization methods that leverage automatic differentiation capabilities in the reservoir simulator, in the same way backpropagation is used in training neural networks. Once calibrated, CGNet is employed for well-control optimization. Validation with the fine-scale model shows that CGNet closely matches the optimized net present value (NPV). Numerical examples using the Johansen model, available as a public dataset, show that the optimization can be accelerated up to seven times using CGNet compared with a fine-scale model. (Using a compiled language will likely result in significantly larger speedups, as small models suffer from a disproportionately high computational overhead when executed in MATLAB.)
This study implies that a reduced-order model such as CGNet can be a powerful data-driven tool for faster evaluation of CO2 geo-sequestration simulations, combined with a proper reservoir monitoring program.
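CGNet calibration minimizes the mismatch to fine-scale data with gradient-based optimization. The toy one-parameter version below sketches that loop, using a finite-difference gradient in place of the simulator's automatic differentiation; the forward model, parameter, and values are assumptions for illustration only.

```python
def calibrate(obs, simulate, t0=1.0, lr=0.05, n_iter=200, h=1e-6):
    """Tune one proxy parameter t by gradient descent on the mismatch
    J(t) = (simulate(t) - obs)^2. A finite-difference gradient stands
    in for the simulator's automatic differentiation."""
    t = t0
    for _ in range(n_iter):
        J = (simulate(t) - obs) ** 2
        g = ((simulate(t + h) - obs) ** 2 - J) / h
        t -= lr * g
    return t

# Toy "fine-scale" well response: rate proportional to transmissibility.
simulate = lambda t: 3.0 * t   # assumed proxy forward model
obs = simulate(2.5)            # fine-scale reference response
t_cal = calibrate(obs, simulate)
```

The real CGNet loop does the same thing over hundreds of graph parameters at once (transmissibilities, pore volumes, well indices), with exact adjoint gradients instead of finite differences.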
A Surrogate Model for Numerical Reservoir Simulation of CO2 Flooding and Storage Based on Deep Learning
Dong, Peng (China University of Petroleum at Beijing) | Liao, Xinwei (China University of Petroleum at Beijing) | Zhang, Lingfeng (China University of Petroleum at Beijing) | Zhang, Heng (No.9 oil production plant, Changqing Oilfield, CNPC) | Zhao, Xurong (China University of Petroleum at Beijing) | Xue, Qishan (China University of Petroleum at Beijing)
Abstract Numerical simulation is an important tool for CO2 flooding and storage studies, making it possible to obtain global approximate solutions of the governing equations. However, the simulations often suffer from significant computational costs and convergence problems, especially when pseudo-components and CO2 storage mechanisms are considered, which makes scheme optimization tedious. Therefore, we propose a deep learning-based surrogate model to efficiently perform numerical simulation of CO2 flooding and storage. The proposed method consists of an auto-encoder and a prediction part. The auto-encoder is a VQ-VAE model, which projects the reservoir's 3D properties into a 2D space. The prediction part consists of ConvLSTM models, which accept reservoir variables. Finally, the surrogate model outputs the dynamic characteristics of production and of the different CO2 storage forms. The results show that the original reservoir properties can be restored with high fidelity after auto-encoder training: the correlation coefficient between the decoded attribute and the original attribute is greater than 0.98. For the prediction part, ConvLSTM can accurately predict the dynamic characteristics of production and the different CO2 storage forms; the average relative errors of the predictions in the training and validation sets were 4.37% and 8.91%, respectively. In addition, regarding computational efficiency, the surrogate model is two orders of magnitude faster than the numerical model. This proves that the surrogate model can effectively replace the numerical simulation model and greatly improve computational efficiency.
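The two acceptance metrics quoted for the surrogate, the correlation coefficient and the average relative error, can be computed as below. This is a generic sketch of the standard formulas, not the authors' code.

```python
import math

def validation_metrics(y_true, y_pred):
    """Pearson correlation coefficient and mean relative error between
    reference simulation output and surrogate prediction."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred))
    st = math.sqrt(sum((a - mt) ** 2 for a in y_true))
    sp = math.sqrt(sum((b - mp) ** 2 for b in y_pred))
    corr = cov / (st * sp)
    mre = sum(abs(b - a) / abs(a) for a, b in zip(y_true, y_pred)) / n
    return corr, mre

# Assumed example values, standing in for simulator vs. surrogate output.
corr, mre = validation_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 4.1])
```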
- Research Report > New Finding (0.34)
- Research Report > Experimental Study (0.34)
- Asia > China > Shanxi > Ordos Basin > Changqing Field (0.99)
- Asia > China > Shaanxi > Ordos Basin > Changqing Field (0.99)
- Asia > China > Ningxia > Ordos Basin > Changqing Field (0.99)
Abstract Unstructured data is considered complex due to the variety of formats, quality levels, and file sizes; correspondingly, it incurs a high latent and hidden storage cost and is commonly underutilized for pattern analysis of industry best practices. Traditionally, engineers and geoscientists extract historical well information manually, which consumes time and resources and limits the investigation to a small subset of wells. In addition, the success rate of extending benchmarking analysis to hundreds of wells within a short period of time is low, since the manual process is prone to human error and bias. To maximize the utilization of unstructured data, a machine learning (ML) based solution is proposed to develop new sand control best practices that enhance oil and gas production in new and mature fields. The 'multi-modal' solution utilizes an ML technology stack comprising a Deep Convolutional Neural Network (DCNN) for image processing and classification, and Natural Language Processing (NLP) and Optical Character Recognition (OCR) for text and entity processing and recognition, applied in an enterprise-scale setting. The automated end-to-end data pipeline is designed to transform unstructured data into structured data that is readily available for domain experts' evaluation, research, and timely decision-making. In this paper, the solution is applied to generate in-depth sand production and sanding pattern analysis across Norwegian basins and to develop large-scale industry best practices. Data were ingested for three hundred sixty-one (361) wells, and a total of sixteen (16) days was needed to analyse the wells presented in this study. Intelligent analytical tools, including a heat map, knowledge graph, deep search, lithology frequency count, and well information summary, contribute significantly to a comprehensive understanding of sand production trends and causation and to the development of sand control management best practices.
Maximizing unstructured data accessibility has proven effective in reducing sand management research time and effort, while providing a holistic analysis applicable to nearby offset wells planned to be drilled in the future.
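The enterprise NLP/OCR stack cannot be reproduced here. As a heavily simplified stand-in for the text-to-structured-record step alone, the sketch below runs a regex pass over already-OCR'd report text; the patterns, field names, and example line are illustrative assumptions, not the paper's pipeline.

```python
import re

# Illustrative entity patterns for sanding-related report text.
PATTERNS = {
    "well":      re.compile(r"\bwell\s+([A-Z]\d{1,3}[A-Z]?)\b", re.I),
    "sand_rate": re.compile(r"(\d+(?:\.\d+)?)\s*(?:g/s|pptb)\b", re.I),
    "lithology": re.compile(r"\b(sandstone|siltstone|shale)\b", re.I),
}

def extract_record(text):
    """Turn one free-text report line into a structured dict."""
    rec = {}
    for field, pat in PATTERNS.items():
        m = pat.search(text)
        if m:
            rec[field] = m.group(1).lower()
    return rec

line = "Well A12 produced sand at 3.5 g/s from a weak sandstone interval."
record = extract_record(line)
```

In the real solution this role is played by trained NLP and entity-recognition models, which generalize far beyond what fixed patterns can match; the point here is only the unstructured-to-structured transformation.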
- Geology > Geological Subdiscipline (0.69)
- Geology > Rock Type > Sedimentary Rock (0.47)
- Europe > Norway > Norwegian Sea > Vøring Basin (0.99)
- Europe > Norway > North Sea > Heimdal Formation (0.99)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Vision > Optical Character Recognition (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.54)
- Information Technology > Artificial Intelligence > Machine Learning > Pattern Recognition (0.34)
Uncovering the Actual Safety Integrity Level (SIL) of Critical Safeguards in Aging Hydrocarbon Facilities as the Basis for the Development of a Performance-Based Maintenance Strategy of Critical Safeguards; Lessons Learned from Rokan Block
Safrudin, Safrudin (PT Pertamina Hulu Rokan) | Yudhy, Muhammad Riandhy Anindika (PT Pertamina Hulu Rokan) | Joelianto, Endra (Institut Teknologi Bandung)
Abstract Equipment aging is one of the main challenges faced by companies operating oil and gas processing facilities with a brownfield engineering approach. As equipment ages, it breaks down more quickly, making it more susceptible to failure. The consequences of equipment failure can be disastrous, especially if the equipment is listed as a critical safeguard that protects the system from hazardous conditions. Failure of critical safeguards increases the likelihood of hazardous scenarios occurring and thus impacts the overall risk ranking of operating the facility. As a risk management effort, it is important to validate the safety integrity level (SIL) of the critical safeguards currently installed at the facilities and to design an appropriate maintenance strategy that maintains the SIL value as originally designed. However, given the vast amount of equipment scattered over a large area, this is a very challenging effort. This paper discusses a novel attempt to validate the actual SIL of critical safeguards in brownfield facilities by combining an extensive reliability-centered maintenance study with a regression-based machine learning model to obtain actual equipment failure rates from incomplete historical failure data stored in the Computerized Maintenance Management System (CMMS). Recalculating the current SIL and comparing the results with the original SIL design provides information on the level of degradation that has occurred and the appropriate solution. To illustrate the application of the technique, a case study is presented based on the experience of implementing the safety instrumented system (SIS) life cycle according to the IEC 61508/61511 guidance as a maintenance strategy to maintain the SIL of a critical safeguard.
Through these efforts, critical safeguard SILs are maintained at their designed levels and are capable of achieving the credited risk reduction for hazardous scenarios, while establishing the database for further improvement through the adoption of digital technology.
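For context on the SIL recalculation, IEC 61508 ties low-demand SIL bands to the average probability of failure on demand (PFDavg); for a simple 1oo1 architecture under periodic proof testing, a standard approximation is PFDavg ≈ λ_DU · TI / 2, where λ_DU is the dangerous undetected failure rate and TI the proof-test interval. The sketch below applies that textbook approximation, not the paper's regression-based workflow, and the example numbers are assumptions.

```python
def pfd_avg(lambda_du_per_hr, test_interval_hr):
    """1oo1 low-demand approximation: PFDavg = lambda_DU * TI / 2."""
    return lambda_du_per_hr * test_interval_hr / 2.0

def sil_from_pfd(pfd):
    """Map PFDavg to a low-demand SIL band per IEC 61508."""
    if 1e-5 <= pfd < 1e-4:
        return 4
    if 1e-4 <= pfd < 1e-3:
        return 3
    if 1e-3 <= pfd < 1e-2:
        return 2
    if 1e-2 <= pfd < 1e-1:
        return 1
    return 0  # outside the defined SIL bands

# Assumed example: lambda_DU = 2e-6 /hr, yearly proof test (8760 hr).
pfd = pfd_avg(2e-6, 8760)
sil = sil_from_pfd(pfd)
```

Feeding a degraded, data-driven λ_DU (rather than the vendor's design value) into this calculation is what reveals whether the installed safeguard still meets its designed SIL.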
- Asia > Indonesia > Sumatra > South Sumatra Basin > Rokan Block > Rokan Block (0.99)
- Asia > Indonesia > Sumatra > Central Sumatra Basin > Rokan Block > Rokan Block (0.99)
Artificial Intelligence and Data Analysis Approach to Deliver 5800 BOPD from Cyclic Steam Stimulation Job in Krakatau Field
Nainggolan, Selvyn Anggreini Stephanie (PT Pertamina Hulu Rokan) | Wilantara, Dedi (PT Pertamina Hulu Rokan) | Mutiaranti, Reza (PT Pertamina Hulu Rokan) | Pangastuti, Resiayu Kinasih (PT Pertamina Hulu Rokan)
Abstract The Krakatau field was discovered in 1941, with an active steam flood in operation since 1985. To maintain oil production, well intervention jobs are critical activities across 6,000 producers. Cyclic steam stimulation is one type of well intervention job that is a relatively low-cost and efficient method of improving well performance. The process involves injecting steam, which allows the oil heated by the steam to flow more easily. Starting in 2015, cyclic stimulation activity rose from 50 jobs/month to 350 jobs/month. The type of cyclic stimulation performed is called short cyclic stimulation, the objective of which is cleaning the wellbore of asphalt, resin, and paraffin deposits. Candidate review is done manually by petroleum engineers, which can lead to inefficiency and inconsistency among the engineers. Given this condition, the team searched for opportunities to shorten the review cycle time and improve the job success rate by developing an artificial intelligence tool using an Artificial Neural Network (ANN) algorithm. In 2022, cyclic jobs in the Krakatau field hit their highest execution rate at 525 jobs/month. The combination of the artificial intelligence tool and a data visualization tool in the form of Power BI has been very impactful in increasing productivity and improving the job success ratio. The team elaborates on the subsurface parameters used to develop the AI and the data analysis processes that delivered cyclic gain performance at the 5,800-BOPD level.
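The field's ANN screening model is not published in the abstract. Purely as an illustration of automated candidate ranking, the sketch below scores wells with a hand-set logistic model over two assumed inputs (production decline fraction and days since the last stimulation); the weights, inputs, and well names are hypothetical, not the team's model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def candidate_score(decline_frac, days_since_job, w=(4.0, 0.01), b=-3.0):
    """Higher score = stronger candidate for short cyclic stimulation.
    Weights w and bias b are hand-set for illustration only."""
    return sigmoid(w[0] * decline_frac + w[1] * days_since_job + b)

def rank_candidates(wells):
    """wells: list of (name, decline_frac, days_since_job) tuples."""
    return sorted(wells, key=lambda well: -candidate_score(well[1], well[2]))

# Hypothetical wells: heavy decline + long idle time ranks first.
wells = [("K-101", 0.6, 400), ("K-102", 0.1, 90), ("K-103", 0.8, 250)]
ranked = rank_candidates(wells)
```

A trained ANN replaces the hand-set weights with learned ones over many more subsurface parameters, but the screen-score-rank shape of the workflow is the same.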
- Asia > Indonesia > Sumatra > South Sumatra Basin > Rokan Block > Rokan Block (0.99)
- Asia > Indonesia > Sumatra > Central Sumatra Basin > Rokan Block > Rokan Block (0.99)
Abstract Advances in machine learning algorithms and the availability of various open-source codes in recent years have provided new insight into utilizing legacy data, extracting value despite poor data availability. Subsurface evaluation of reservoir parameters is in many ways affected by uncertainties in tool measurement, calibration accuracy, and the subjective narrative of the geological processes involved. Often, differences in measurement scale among log, core, and seismic acquisition and reservoir model construction aggravate the uncertainties in petrophysical evaluation. This paper highlights a novel approach of utilizing legacy core data from the operator's assets to predict and capture the uncertainty range of reservoir parameters through the application of a machine learning model. Several case studies demonstrating the accuracy of the prediction in multiple geological settings, such as clastics, carbonates, a CO2 storage site, and deep-water thin-bed deposits, are also presented. The overall methodology can be divided into three phases. Phase 1 involves data digitization into a structured format, followed by data quality checks based on industry standards and various rules applied to identify outliers. Phase 2 involves feature extraction from core image data utilizing a deep learning network to evaluate core quality condition, while phase 3 demonstrates the predictive capability of various machine learning and regression models to predict reservoir properties away from the wellbore and establish an acceptable uncertainty range for the input parameters to be used in reservoir modeling. Predicted properties are validated and blind-tested against newly acquired core data to further evaluate the accuracy and applicability of the machine learning approach.
In addition, benchmarking against producing reservoir analogs is also conducted to gauge the impact on hydrocarbon volume calculation and the economic viability of the project. Results show that the accuracy of property prediction is acceptable for various reservoir depositional environments, and comparison with producing reservoir analogs indicates a 10% to 20% difference in the property ranges, which is acceptable considering the scarce data availability and the overlap in the input parameters used in the machine learning model. Based on the established parameter ranges, hydrocarbon volume in place, reserves, and the economics of the field development project could be further sensitized and evaluated prior to the final investment decision (FID). In conclusion, the machine learning approach proves viable for predicting and quantifying reservoir parameter uncertainties for a more holistic subsurface evaluation. Application of legacy core data to generate valuable subsurface insights, particularly in limited-data areas, has helped delineate reservoir properties for specific basins, fields, and geological settings, serving the purpose of finding new exploration plays and monetizing development opportunities.
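Establishing an uncertainty range from many model realizations, as in phase 3, is commonly summarized with exceedance percentiles. The sketch below picks P90/P50/P10 under the oil-industry exceedance convention (P90 = low case with 90% chance of being exceeded, P10 = high case); the indexing rule and example volumes are simplifying assumptions.

```python
def pXX(values, exceed_prob):
    """Value with `exceed_prob` chance of being exceeded, picked by a
    simple nearest-rank rule on the sorted realizations."""
    v = sorted(values)
    idx = int(round((1.0 - exceed_prob) * (len(v) - 1)))
    return v[idx]

# Assumed volume realizations from multiple model runs (e.g., MMstb).
realizations = [80, 95, 100, 105, 110, 120, 130, 140, 150, 170]
p90, p50, p10 = (pXX(realizations, p) for p in (0.90, 0.50, 0.10))
```

The spread between P90 and P10 is the uncertainty range that then feeds volumetrics and economics ahead of the FID decision.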
- Asia > Malaysia (0.29)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.15)
- Geology > Sedimentary Geology > Depositional Environment (1.00)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock (1.00)
- Geology > Geological Subdiscipline (1.00)
- Geophysics > Borehole Geophysics (1.00)
- Geophysics > Seismic Surveying (0.88)