Explaining Traditional Engineering Models

It is well known that models of physical phenomena built from mathematical equations can be explained. This is one of the main reasons engineers and scientists expect any candidate model of a physical phenomenon to be explainable. Explainability of traditional models is achieved through the solutions of the governing equations: analytical solutions for reasonably simple equations, or numerical solutions for complex ones. These solutions make it possible to answer almost any question that might be asked of the model, to explain why and how particular results are generated, and to examine the influence and effect of every parameter (variable) on the others and on the model's results (output parameters).
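The contrast between analytical and numerical solution, and the kind of explainability both provide, can be illustrated with a minimal sketch. The cooling-law model below is a hypothetical example chosen for simplicity, not one from the papers discussed here: the closed-form solution makes the role of every parameter explicit, while a simple explicit-Euler scheme stands in for the numerical solution that would be used when no closed form exists.

```python
import numpy as np

def analytical_T(t, T0, T_env, k):
    """Closed-form solution of dT/dt = -k (T - T_env):
    T(t) = T_env + (T0 - T_env) * exp(-k t).
    Every parameter's influence on the output is visible directly."""
    return T_env + (T0 - T_env) * np.exp(-k * t)

def euler_T(t_end, T0, T_env, k, dt=1e-3):
    """Explicit-Euler numerical solution of the same equation,
    standing in for the numerical methods needed on complex models."""
    T, t = T0, 0.0
    while t < t_end:
        T += dt * (-k * (T - T_env))
        t += dt
    return T

# Both routes answer the same "why" questions: e.g., how the rate
# constant k controls how fast T approaches T_env.
exact = analytical_T(2.0, T0=100.0, T_env=25.0, k=0.5)
approx = euler_T(2.0, T0=100.0, T_env=25.0, k=0.5)
```

Because the equation is actually solved, one can vary any input (here `k` or `T_env`) and explain the resulting change in output exactly — the property the text identifies as the basis of explainability in physics-based models.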
Artificial intelligence (AI) and machine learning (ML) can expect a flourishing future as the new generation of engineers and scientists is exposed to, and starts using, this technology in everyday life. The way to clarify and distinguish the application of this technology to physics-based disciplines, and to demonstrate its useful and game-changing applications in engineering and industry, is to develop a new generation of engineers and scientists who are well versed in it. In other words, the objective should be to train and develop engineers who understand data-driven analytics and can apply it efficiently to engineering problem solving. Engineering and nonengineering problems differ; to gain the expertise needed to address and solve engineering problems, people attend universities and earn engineering degrees.
Abstract

Managers, geologists, and reservoir and completion engineers face important challenges and questions when producing from and operating shale assets. Among the important questions that need to be answered are: What should the distance between wells (well spacing) be? How many clusters should be included in each stage? What is the optimum stage length? At what point should we stop adding stages to our wells (where is the point of diminishing returns)? At what rate and pressure should the fluid and proppant be pumped? What is the best proppant concentration? Should our completion strategy be modified when the quality of the shale (reservoir characteristics) and the produced hydrocarbon (dry gas vs. condensate-rich vs. oil) change in different parts of the field? What is the impact of soak time (starting production right after completion versus delaying it) on production? Shale Analytics is the collection of state-of-the-art data-driven techniques, including artificial intelligence, machine learning, and data mining, that addresses the above questions based on facts (field measurements) rather than human biases. It is the fusion of domain expertise (years of geology, reservoir, and production engineering knowledge) with data-driven analytics: the application of big data analytics, pattern recognition, machine learning, and artificial intelligence to any and all shale-related issues. Lessons learned from the application of Shale Analytics to more than 3,000 wells in the Marcellus, Utica, Niobrara, and Eagle Ford are presented in this paper, along with a detailed case study in the Marcellus Shale. The case study details the application of Shale Analytics to understand the impact of different reservoir and completion parameters on production, and the quality of predictions made by artificial intelligence technologies regarding the production of blind wells.
Furthermore, this paper presents the generation of type curves, the performance of "Look-Back" analysis, and the identification of best completion practices. The use of Shale Analytics for re-frac candidate selection and design was presented in a previous paper.
Frac-Hit refers to inter-well communication between a parent well (an offset well currently under production) and a child well (the focal well being completed, i.e., hydraulically fractured). Today, traditional numerical reservoir simulation and modeling is used to model Frac-Hit, while the overwhelming majority of operating companies simply use the distance between wells to address it. As the distance between the child well and the parent well increases, the probability of a Frac-Hit between them may decrease; however, by increasing that distance (well spacing and stacking), operating companies reduce hydrocarbon recovery from the shale formation. Conventional techniques (Rate Transient Analysis [RTA] and Numerical Reservoir Simulation [NRS]) have proven not to make any serious contribution to the modeling and analysis of unconventional resources [1]. The large number of assumptions and simplifications that form the foundation of these traditional technologies when applied to unconventional resources leaves them unable to make any reasonable contribution to the analysis and modeling of Frac-Hit.
Artificial intelligence and machine learning offer a purely fact-based predictive modeling technology that can address the negative impact of Frac-Hit, which has reduced hydrocarbon production and recovery from shale formations.
Because of the tightness of the shale rock, hydrocarbon production from shale plays is a function of the contact between the drilled wells and the shale reservoir. The main reason shale wells are highly productive is the inclusion of a large number of hydraulic fracturing clusters, which significantly increases this contact. Another shale characteristic that significantly contributes to this contact is the existence of networks of natural fractures in the formation, which greatly extend the expansion of the induced fractures into the rock. Nevertheless, the same natural fracture networks that have made large increases in hydrocarbon production from shale wells possible have lately been impacting production and recovery negatively: Frac-Hit is a function of the system of natural fracture networks in the shale formations.
Mishra, Srikanta (Battelle Memorial Institute) | Schuetter, Jared (Battelle Memorial Institute) | Datta-Gupta, Akhil (Texas A&M University) | Bromhal, Grant (National Energy Technology Laboratory, US Department of Energy)
Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, and sports. The focus of this article is to examine where things stand with regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy. It is useful to start with some definitions to establish a common vocabulary.

Data analytics (DA): sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets.

Machine learning (ML): building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data.

Artificial intelligence (AI): applying a predictive model to new data to make decisions without human intervention (and with the possibility of feedback for model updating).

Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how we can make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is basically a subset of DA and a core enabling element of the broader decision-making construct that is AI. In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources.
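The ML and AI steps defined above — inferring an input/output relationship from data, then applying the fitted model to new observations — can be sketched in a few lines. The data below are synthetic and purely illustrative (not drawn from any field study), and the model is the simplest possible one, a linear least-squares fit:

```python
import numpy as np

# Synthetic "training" data: two predictors and a known, noiseless response,
# so the fitted coefficients can be checked by eye.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(50, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 2.0

# The "ML" step: infer the input/output relationship from the data.
# Here: y = b1*x1 + b2*x2 + b0, solved via ordinary least squares.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The "AI" step: apply the learned model to a new, unseen observation.
x_new = np.array([4.0, 2.0, 1.0])   # x1=4, x2=2, plus the intercept column
y_pred = x_new @ coef
```

In practice the black-box algorithm would be far more complex (trees, neural networks, etc.) and the response noisy, but the workflow — fit on historical data, predict on new data — is exactly the distinction the definitions above draw between ML and AI.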
Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 (Fig. 1). These trends are also reflected in the number of technical sessions devoted to ML/AI topics in conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted at practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019). Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without any foundational understanding, whereas others may be holding off on using these methods because they lack formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data-analytics approaches to practical problems.