A recently launched joint industry project (JIP) is working to improve petrophysical analysis methods to reduce the time and expense of characterizing tight sandstone gas reservoirs for exploration, appraisal, and production. Failure to prioritize objectives and improper selection of candidate wells can have significant implications for both derived value and potential risk. Data mining for production optimization in unconventional reservoirs brings together data from multiple sources with varying levels of aggregation, detail, and quality. A data-driven approach to successfully analyze and evaluate production-fluid impact during facility system divert events is presented. The work flow effectively identifies opportunities for prompt event mitigation and system optimization.
This paper presents an interdisciplinary approach to the description of tectonic dislocations on the basis of interpretation of seismic data, petrophysical analysis of well-logging data in horizontal wells, and inversion of multifrequency propagation-tool measurements. This work presents a systematic geosteering work flow that automatically integrates a priori information and real-time measurements to update geomodels with uncertainties and uses the latest model predictions in a decision-support system (DSS). The use of intelligent software is on the rise in the industry, and it is changing how engineers approach problems. A series of articles explores the potential benefits and limitations of this emerging area of data science. A new multilayer boundary-detection service has been introduced to resolve the geological uncertainty associated with horizontal wells in Bohai Bay.
The contract is helping to solidify Europe’s offshore sector as the focal point for the rise of automated drilling technology. Drilling: What Can We Do To Thrive? Falling oil prices are the acid test of drilling efficiency. SPE Technical Director Jeff Moss of ExxonMobil talks about ways to build in lasting savings as part of this special report. The latest example of the offshore sector's march toward automated wellbore construction will take shape later this year in the North Sea.
A challenging problem in automated history-matching work flows is ensuring that, after updates are applied to previous models, the resulting history-matched models remain geologically consistent. This paper discusses a project with the objective of leveraging prestack and poststack seismic data to reconstruct 3D images of thin, discontinuous, oil-filled packstone pay facies of the Upper and Lower Wolfcamp formations. The Oklahoma City independent has a new-look portfolio and new operational and financial priorities. And now it has enlisted an energy research firm to leverage advanced analytics and machine learning to help get the most out of its assets. The objective of this case study is to describe a specific approach to establishing an exploration strategy at the initial stage, on the basis not only of uncertainty reduction but also of early business-case development and maximization of future economic value.
This year, as part of the Opening Ceremony, SPE brings you two panel sessions that will focus on the conference theme “Co-operating Towards a More Competitive Environment to Encourage Investment Projects.” The panels will represent two different perspectives—the investors and operators in the region. Digitalisation is emerging as a technological driver of change around the world and is transforming how companies in the oil and gas industry operate. A wave of digital technologies and initiatives is leading this new era of innovation and opportunity. Investments in programmes such as analytics, data science, artificial intelligence, cloud computing, and other emerging technologies are being pursued to improve safety, reliability, and efficiency, with the expectation of delivering significant value through improved processes and systems.
Low-Energy Processes for Unconventional Oil Recovery addresses reservoir engineering, low-energy recovery processes, and resource access challenges for abundant unconventional oil. This includes bitumen, shale, heavy, and viscous oil. View sample pages and order your copy today! Data-Driven Reservoir Modeling introduces new technology and intelligent systems for applying data analytics to solve real-world reservoir engineering problems. The book describes how to use machine-learning-based algorithmic protocols to reduce large quantities of data down to actionable, tractable quantities.
Our reservoirs’ health is paramount to the future of our industry. More knowledge, more representative physical models, smarter data acquisition, and higher computation capabilities are vital elements to maximising value. This suggests that opportunities for more effective, efficient, and profitable approaches are yet to come. At this event, the focus on introducing new digital concepts enables our organisation to reach target expectations faster, better, and more easily. Our industry has successfully acquired sophisticated data sets; however, the extraction of usable information from them is not yet at its optimum.
Themed “Reservoir Modelling Digital Technology Transformation”, the seventh edition of the SPE Reservoir Characterisation Conference and Exhibition will continue the momentum of the previous six editions and bring together reservoir experts to share expertise on the specific topics that most burden the industry. "Our reservoirs’ health is paramount to the future of our industry. More knowledge, more suitable representative physical models, smarter data acquisition, and higher computation capabilities are vital elements to maximising value." "The SPE Reservoir Characterisation and Simulation Conference and Exhibition represents a strategic industrial event in the region, focusing on innovative reservoir-characterisation and simulation processes addressed by senior industry experts. It provides an opportunity to look beyond the industry, focusing on collaboration among companies through the creation of industrial, out-of-discipline, and academic partnerships."
Multiple-point statistical (MPS) simulation is a modern pattern-based geostatistical approach for describing and stochastically simulating geologic formations with complex connectivity patterns. In MPS geostatistical simulation, a template containing data patterns around each simulation cell is used to extract and store the local conditional probabilities from a training image (TI). To generate a simulated sample, a random path is generated to sequentially visit all unsampled grid cells and draw conditional samples from the corresponding stored conditional probabilities. The grid-based implementation of MPS simulation offers several advantages for the integration of hard and soft data. In the Single Normal Equation SIMulation (SNESIM) implementation of MPS for facies simulation, it has been observed that the integration of soft data can result in many facies realizations whose patterns are not consistent with the incorporated probability map. This is partly explained by the Markov property, which considers only probabilities co-located with the simulation node and hence ignores spatial information from neighboring cells. In addition to this effect, we show that another important mechanism is at play in the SNESIM algorithm that explains the observed behavior. Specifically, at the early stage of the simulation, when the first few percent of the simulation nodes on the random path are visited, the local conditioning data are limited and the conditional probabilities obtained from the TI are not strictly constrained; hence, they span a wide range of values in [0, 1]. However, after this initial stage, as the simulated data populate more cells in the model grid, they tend to severely constrain the conditional probabilities to assume extreme values of 0 or 1.
With these extreme values at the later stages of the simulation, the probability values included in the soft data (as a secondary source of information) tend to be disregarded, and the facies types are predominantly determined by the TI. We demonstrate and discuss this behavior of the SNESIM algorithm through several examples and present strategies that can be adopted to compensate for this effect. The presented examples are related to the indirect integration of flow data by first inferring probabilistic information about facies types and then using the results as soft data for integration into the SNESIM algorithm.
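The probability-collapse mechanism described above can be illustrated with a minimal one-dimensional sketch. This is not the SNESIM implementation itself; the toy binary training image, the template offsets, and the `conditional_prob` helper are constructions for illustration only. The idea is that when few neighbors are informed, the TI-scanned conditional probability stays near the marginal facies proportion, whereas a fully informed neighborhood drives it toward 0 or 1, leaving a co-located soft probability little influence on the drawn facies.

```python
import random

random.seed(0)

# Toy binary training image (TI): rare facies switches produce long runs,
# i.e., strong spatial correlation, as in a channelized TI.
ti = []
state = 0
for _ in range(2000):
    if random.random() < 0.05:  # switch facies with 5% probability per cell
        state = 1 - state
    ti.append(state)

def conditional_prob(ti, neighbors):
    """P(facies = 1 at the centre cell | observed neighbor pattern),
    estimated by scanning the TI, SNESIM-style.

    `neighbors` maps relative offset -> known facies value; offsets that
    are absent are treated as uninformed (not yet simulated).
    """
    counts = [0, 0]
    half = max(abs(o) for o in neighbors) if neighbors else 1
    for c in range(half, len(ti) - half):
        if all(ti[c + off] == v for off, v in neighbors.items()):
            counts[ti[c]] += 1
    total = sum(counts)
    return counts[1] / total if total else 0.5

# Early stage of the random path: no conditioning data yet, so the
# TI-derived probability is close to the marginal facies proportion.
p_uncond = conditional_prob(ti, {})

# Late stage: all four template neighbors are already simulated (here,
# all facies 1), and the probability collapses toward an extreme value.
p_cond = conditional_prob(ti, {-2: 1, -1: 1, 1: 1, 2: 1})
```

In this sketch `p_uncond` lands near 0.5 while `p_cond` is close to 1; a soft-data probability combined with such a near-deterministic TI probability has almost no effect on the sampled facies, which mirrors the behavior reported for SNESIM at the later stages of the random path.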