JJ De Paep is a graduate of Texas A&M University and the director of strategy and marketing at Astra Innovations, a company focused on delivering next-generation remote collaboration and data analytics tools for upstream oil and gas. Before joining Astra Innovations, he spent 9 years with National Oilwell Varco, then pivoted to the technology industry, managing software development, before a passion for innovation and collaborative tools prompted a return to the energy industry.
Big data analytics is a big deal right now in the oil and gas industry. This emerging trend is on track to become an industry best practice for good reason: It improves exploration and production efficiency. With the help of sensors, massive amounts of data already are being extracted from exploration, drilling, and production operations, as well as being leveraged to shed light on sophisticated engineering problems. So, why shouldn't a similar approach be applied when it comes to worker health and safety, especially when it is the norm across a wide variety of other industries? While the International Association of Oil and Gas Producers came out with a safety performance report that showed fatalities and injuries for the industry were down in 2019, the US Occupational Safety and Health Administration (OSHA) says that the oil and gas industry's fatality rate is seven times higher than that of all other US industries.
Abstract In multi-stage plug-and-perf horizontal well completions, there are a multitude of moving parts and variables to consider when evaluating performance drivers. Properly identifying performance drivers allows an operator to focus their efforts to maximize the rate of return of resource development. Typically, well-to-well comparisons are made to help identify performance drivers, but in many cases the differences are not clear. Identifying these drivers may require a better understanding of performance variability along a single lateral. Data analytics can help to identify performance drivers using existing data from development activities. In the case study below, multiple diagnostics are utilized to identify performance drivers. A combination of completion diagnostics including oil and water tracers, stimulation data, reservoir data, 3D seismic, and borehole image logs were collected on a set of wells in the early appraisal phase of a field. Using oil tracers as the best indication of stage-level performance along the laterals, data analytics is applied to uncover the relationships between the tracers and the numerous diagnostics. After smoothing was applied to the dataset, trends between oil tracer recovery and several independent variables, including features seen in image logs and 3D seismic, were identified. All the analyses pointed to decreasing tracer recovery, and likely decreased oil production, near faulted areas along each lateral. A random forest model showed moderate predictive power, where the model's predicted tracer recovery on blind stages was able to explain 54% of the variance seen in the tracer response (r=0.54). This analysis suggests the identification of certain faulted areas along the wellbore could lead to ways of improving individual well economics by adjusting completion design in these areas.
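The workflow described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the predictors (distance to fault, proppant mass, image-log fracture count) and all data are invented stand-ins for the paper's completion and geologic diagnostics, and the model is scored on held-out "blind" stages by correlating predicted and observed tracer recovery.

```python
# Sketch of a random forest predicting stage-level oil tracer recovery,
# scored on blind stages. All feature names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stages = 200
X = np.column_stack([
    rng.uniform(0, 500, n_stages),   # hypothetical distance to nearest fault, m
    rng.normal(100, 15, n_stages),   # hypothetical proppant per stage, tonnes
    rng.poisson(5, n_stages),        # hypothetical image-log fracture count
])
# Synthetic response: tracer recovery decreases toward faulted areas, plus noise
y = 0.004 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.5, n_stages)

X_train, X_blind, y_train, y_blind = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

# Correlation between predicted and observed recovery on the blind stages
r = np.corrcoef(model.predict(X_blind), y_blind)[0, 1]
print(f"blind-stage correlation r = {r:.2f}")
```

The blind hold-out mirrors how the study reports its r value: the model never sees the evaluation stages during training.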
Abstract Leaks and ruptures are the most significant risks for operational oil and gas pipelines. Because of their hazardous effects on people and the environment, much research has been conducted on preventing and detecting pipeline ruptures and on enhancing safe operation. Any improvement in leak detection technology that increases accuracy and sensitivity while eliminating false leak and rupture alerts will protect the environment and ensure hazard-free operation. Data mining algorithms are widely used in many industries, including the energy industry, and have already been implemented as computational leak detection methodologies. To increase confidence and improve accuracy and sensitivity, different algorithms may be introduced to detect ruptures. In our study, a 36-in. crude oil pipeline with two pump stations was configured in a pipeline simulator. The pipeline parameters of flow, pressure, and temperature were computed for several leak and rupture cases, and data science algorithms such as logistic regression, neural networks, and Multivariate Adaptive Regression Splines (MARS) were used as classifiers to detect the leaks and ruptures. MARS is an important statistical learning tool for both classification and regression. It is nonparametric, adaptive, and effective in high-dimensional problems, with a proven record for fitting nonlinear multivariate functions. The contributions from the basis functions, together with interaction effects between the predictors, are used to determine the response variable: MARS produces a resultant model as an explicit formula. MARS proves to be a classifier competitive with the established logistic regression and neural network methods and a new leak- and rupture-detection data science technique for pipeline operators.
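The classification setup can be sketched with the simplest of the three methods the study compares, logistic regression. This is a minimal sketch with synthetic readings, not the authors' simulator output: the assumed signals are a flow imbalance that grows during a leak and a line pressure that drops, with temperature left largely uninformative.

```python
# Sketch of leak/rupture classification from flow, pressure, and temperature.
# All readings are synthetic stand-ins for pipeline simulator output.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
leak = rng.integers(0, 2, n)                   # 1 = leak/rupture, 0 = normal
flow_delta = rng.normal(0, 1, n) + 3 * leak    # inlet-outlet imbalance grows with a leak
pressure = rng.normal(60, 2, n) - 4 * leak     # line pressure drops during a leak
temperature = rng.normal(25, 1, n)             # mostly uninformative here
X = np.column_stack([flow_delta, pressure, temperature])

X_tr, X_te, y_tr, y_te = train_test_split(X, leak, test_size=0.3, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```

Swapping in a neural network or a MARS implementation behind the same fit/score interface is how such classifiers are typically compared head to head.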
Alzahabi, A. (University of Texas-Permian Basin) | Alexandre Trindade, A. (Texas Tech University) | Kamel, A. A. (University of Texas-Permian Basin) | Harouaka, A. (University of Texas-Permian Basin) | Baustian, W. (Camino Natural Resources) | Campbell, C. (Camino Natural Resources)
Summary One of the enduring pieces of the jigsaw puzzle for all unconventional plays is drawdown (DD), a technique for attaining optimal return on investment. Assessment of the DD from producing wells in unconventional resources poses unique challenges to operators, among them the fact that many operators are reluctant to reveal the production, pressure, and completion data required. Various completion and spacing parameters, among other factors, add to the complexity of the problem. This work aims to determine the optimum DD strategy. Several DD trials were implemented within the Anadarko Basin in combination with various completion strategies. Privately obtained production and completion data were analyzed and combined with well log analysis in conjunction with data analytics tools. A case study is presented that explores a new strategy for DD producing wells within the Anadarko Basin to optimize a return on investment. We use scatter-plot smoothing to develop a predictive relationship between DD and two dependent variables—estimated ultimate recovery (EUR) and initial production (IP) for 180 days of oil—and introduce a model that evaluates horizontal well production variables based on DD. Key data were estimated using reservoir and production variables. The data analytics suggested the optimal DD value of 53 psi/D for different reservoirs within the Anadarko Basin. This result may give professionals additional insight into the Anadarko Basin. Through these optimal ranges, we hope to gain a more complete understanding of the best way to DD wells when they are drilled simultaneously. Our discoveries and workflow within the Woodford and Mayes Formations may be applied to various plays and formations across the unconventional play spectrum.
Optimal DD techniques in unconventional reservoirs could add billions of dollars in revenue to a company’s portfolio and dramatically increase the rate of return, as well as offer a new understanding of the respective producing reservoirs.
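The scatter-plot smoothing step can be illustrated with a LOWESS fit. This is a hedged sketch under invented data: the synthetic EUR curve is deliberately given a peak (placed near 55 psi/D here) so there is an optimum to locate, and it does not reproduce the paper's data or its reported 53 psi/D result.

```python
# Sketch of scatter-plot (LOWESS) smoothing of EUR versus drawdown rate,
# reading off the DD at the smoothed curve's peak. Data are synthetic.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
dd = rng.uniform(10, 120, 150)                                 # drawdown rate, psi/D
eur = -0.02 * (dd - 55.0) ** 2 + 100 + rng.normal(0, 5, 150)   # synthetic EUR, peak near 55

smoothed = lowess(eur, dd, frac=0.4)          # (dd, smoothed EUR) pairs, sorted by dd
optimal_dd = smoothed[np.argmax(smoothed[:, 1]), 0]
print(f"smoothed-curve optimum near {optimal_dd:.0f} psi/D")
```

The same curve-then-argmax reading applies to the IP response, or to any other noisy response variable plotted against DD.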
Abstract Leveraging publicly available data is a crucial step for decision making around investing in the development of any new unconventional asset. Published reports of production performance, along with accurate petrophysical and geological characterization of the area, help operators to evaluate the economics and risk profiles of new opportunities. A data-driven workflow can facilitate this process and make it less biased by enabling an agnostic analysis of the data as the first step. In this work, several machine learning algorithms are briefly explained and compared in terms of their application in the development of a production evaluation tool for a target reservoir. Random forest, selected after evaluating several models, is deployed as a predictive model that incorporates geological characterization and petrophysical data along with production metrics into the production performance assessment workflow. Considering the influence of the completion design parameters on well production performance, this workflow also facilitates evaluation of several completion strategies to improve decision making around the best-performing completion size. Data used in this study include petrophysical parameters collected from publicly available core data, completion and production metrics, and the geological characteristics of the Niobrara formation in the Powder River Basin. Historical periodic production data are used as indicators of the productivity of a certain area in the data-driven model. This model, after training and evaluation, is deployed to predict the productivity of non-producing regions within the area of interest to help with selecting the most prolific sections for drilling future wells. Tornado plots are provided to demonstrate the key performance drivers in each focused area. A supervised fuzzy clustering model is also utilized to automate the rock quality analyses for identifying the "sweet spots" in a reservoir.
The output of this model is a sweet-spot map generated by evaluating multiple reservoir rock properties spatially. This map combines the different reservoir rock properties into a single display that indicates the average "reservoir quality" of the formation in different areas. The Niobrara shale is used as a case study in this work to demonstrate how the proposed workflow is applied to a selected reservoir formation with enough historical production data available.
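The tornado-plot step above amounts to ranking predictors by their influence on the production response. A common way to obtain that ranking from a trained random forest is its feature importances; the sketch below assumes three placeholder predictors (porosity, TOC, proppant loading) and synthetic data, not the study's actual Niobrara inputs.

```python
# Sketch of ranking performance drivers via random forest feature importances,
# the ordering behind a tornado plot. Variables and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 300
porosity = rng.normal(0.08, 0.01, n)
toc = rng.normal(3.0, 0.5, n)                  # total organic carbon, wt%
proppant = rng.normal(1500, 200, n)            # proppant loading, lb/ft
production = (5000 * porosity + 80 * toc + 0.5 * proppant
              + rng.normal(0, 30, n))          # synthetic production response

X = np.column_stack([porosity, toc, proppant])
names = ["porosity", "TOC", "proppant_lb_per_ft"]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, production)

# Sort descending by importance and print the ranking
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>20s}: {imp:.2f}")
```

Plotting these importances as horizontal bars, largest on top, yields the familiar tornado chart for each focus area.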
Abstract Oil and gas operations are now being "datafied." Datafication in the oil industry refers to systematically extracting data from the various oilfield activities that are naturally occurring. Successful digital transformation hinges critically on an organization's ability to extract value from data. Extracting and analyzing data is getting harder as the volume, variety, and velocity of data continue to increase. Analytics can help us make better decisions, but only if we can trust the integrity of the data going into the system. As digital technology continues to play a pivotal role in the oil industry, the role of reliable data and analytics has never been more consequential. This paper is an empirical analysis of how artificial intelligence (AI), big data, and analytics have redefined oil and gas operations. It takes a deep dive into various AI and analytics technologies reshaping the industry, specifically as it relates to exploration and production operations, as well as other sectors of the industry. Several illustrative examples of transformative technologies reshaping the oil and gas value chain, along with their innovative applications in real-time decision making, are highlighted. It also describes the significant challenges that AI presents in the oil industry, including algorithmic bias, cybersecurity, and trust. With digital transformation poised to re-invent the oil and gas industry, the paper also discusses energy transition and makes some bold predictions about the oil industry of the future and the role of AI in that future. Big data lays the foundation for the broad adoption and application of artificial intelligence. Analytics and AI are going to be very powerful tools for making predictions with a precision that was previously impossible. Analysis of some of the AI and analytics tools studied shows that there is a huge gap between the people who use the data and the metadata. AI is as good as the ecosystem that supports it.
Trusting AI and feeling confident in its decisions starts with trustworthy data. The data needs to be clean, accurate, devoid of bias, and protected. As the relationship between man and machine continues to evolve, and organizations continue to rely on data analytics to provide decision support services, it is imperative that we safeguard against making important technical and management decisions based on invalid or biased data and algorithms. The variegated outcomes observed from some of the AI and analytics tools studied in this research show that, when it comes to adopting AI and analytics, the worm remains buried in the apple.
Mishra, Srikanta (Battelle Memorial Institute) | Schuetter, Jared (Battelle Memorial Institute) | Datta-Gupta, Akhil (Texas A&M University) | Bromhal, Grant (National Energy Technology Laboratory, US Department of Energy)
Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, sports, etc. The focus of this article is to examine where things stand in regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy. It is useful to start with some definitions to establish a common vocabulary.
Data analytics (DA): Sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets.
Machine learning (ML): Building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data.
Artificial intelligence (AI): Applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating).
Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how can we make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is basically a subset of DA and a core enabling element of the broader application for the decision-making construct that is AI. In recent years, there has been a proliferation in studies using ML for predictive analytics in the context of subsurface energy resources.
Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 (Fig. 1). These trends are also reflected in the number of technical sessions devoted to ML/AI topics in conferences organized by SPE, AAPG, and SEG among others, as well as books targeted to practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019). Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without any foundational understanding, whereas others may be holding off on using these methods because they do not have any formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems.
Oil industry executives surveyed last year ranked the potential positive impact of big data analytics at the top of the list of trends, higher than even changes in oil demand. That bold conclusion was from a survey by accounting firm Ernst & Young (EY), putting big data analytics among the top trends that could aid business growth in the next 3 years, even above the demand swings that move oil prices. The survey may have reflected the mood last summer when the outlook for oil consumption looked so weak that cost saving was the only path to better results. “The survey speaks to a high-level ambition across the operator community to use digital as a mechanism to drive down costs,” said Toby Summers, executive director for EY. The promise there is that digital can allow them to scale up operations with fewer hires in good times and scale back with fewer layoffs when the cycle turns down. These projects also cost less than other cost-cutting options. “Digitization is one of the cheapest ways to get the business more resilient,” said Patrick von Pattay, a vice president for Wintershall Dea, a Germany-based independent, and chairman of the Digital Transformation Committee of SPE’s Digital Energy Technical Section. Process changes supported by digital analysis can cost a couple hundred thousand dollars; that is not a lot of money in a business where a single offshore well often costs hundreds of millions. What is not obvious is who does the work. The rush to digital has scrambled traditional relationships with oilfield service companies and brought in new players, from Silicon Valley giants to a flurry of startups in the oil business, a few of which have become established players. As a result of the change in the technology, and the business models of the upstarts, oil company technical teams can and do play a more active role in digital technology development and use than in the past.
Changes began in 2014, when the sudden end of $100/bbl oil forced oil companies to drop their long-time reliance on owning their own computer systems. Oil companies finally joined the decade-old shift to buying data storage and processing as a service from giants such as Amazon and Google. That facilitated digital innovations by centralizing their data, eliminating splintered storage systems that hindered analysis. The giant looming over the service business now is Amazon Web Services (AWS). The cloud computing arm of the online retail and logistics giant has grown exponentially, doing everything from selling an array of digital tools to promoting a list of preferred energy providers. Digital newcomers disrupted relations with service companies that had built software solutions and sold equipment programmed using proprietary coding. Increasingly user-friendly tools for visualizing and analyzing data, plus the ability for smaller companies to buy data capacity, allowed engineers to do more and allowed midsized companies to act like big ones. “There has been a shift now; the independents have access to the same tech as the big guys,” Summers said.