Schlumberger and IBM announced the industry's first commercial hybrid cloud data management offering for the OSDU data platform. The hybrid cloud offering is designed to expand access to customers globally, including those in locations where data residency requirements and local regulations may affect the use of a global public cloud. "This collaboration is a game changer for energy operators to drive higher performance and greater efficiencies by now enabling integrated workflows and innovation using AI. The hybrid cloud solution allows clients to maintain the sovereignty of their data and also gives them options as to how they choose to leverage the solution, with the freedom to deploy on a range of infrastructures or a regional cloud provider," said Manish Chawla, global industry managing director for energy, resources, and manufacturing at IBM. The platform will provide energy operators with full interoperability, making their data accessible to any application within their exploration and production (E&P) environment through the OSDU common data standard to enable easy sharing of information between teams.
AlSaadi, Hamdan (ADNOC Offshore) | Rashid, Faisal (ADNOC Offshore) | Bimastianto, Paulinus (ADNOC Offshore) | Khambete, Shreepad (ADNOC Offshore) | Toader, Lucian (ADNOC Offshore) | Landaeta Rivas, Fernando (ADNOC Offshore) | Couzigou, Erwan (ADNOC Offshore) | Al-Marzouqi, Adel (ADNOC Offshore) | El-Masri, Hassan (Schlumberger) | Pausin, Wiliem (Schlumberger)
Abstract Big data analytics is the often complex process of examining large and varied data sets to uncover information. The aim of this paper is to describe how the Real Time Operation Center (RTOC) structures drilling data in an informative and systematic manner through a digital solution that can help organizations make informed business decisions and leverage business value to deliver wells efficiently and effectively. The RTOC process consists of collecting large chunks of structured and unstructured data, segregating and analyzing it, and discovering patterns and other useful business insights from it. The methods were based on structuring a detailed workflow, a RACI matrix, and a quality checklist for every single process in the provision of real-time drilling data, which is digitally transformed into valuable information through a robust, auditable process, quality standards, and sophisticated software. The paper explains the RTOC Data Management System and how it helped the organization determine which data is relevant and can be analyzed to drive better business decisions in the future. The big data platform, in-house software, and automated dashboards have helped the company build links between different assets, analyze technical gaps, create opportunities, and move away from manual data entry (e.g., Excel), which was causing data errors, disconnection between information, and wasted worker hours due to inefficiency. These solutions leverage analytics and unlock the value from data to enhance operational efficiency, drive performance, and maximize profitability. As a result, the company successfully delivered 160 wells in 2019 (6% above the 2019 Business Plan and 10% above the number of wells delivered in 2018) more efficiently, at 28.2 days per 10 kft for new wells (10% better than 2018), without compromising the well objectives or the quality of the wells. Moreover, despite increasing complexity, a high level of confidence in data analytics has permitted the company to go beyond its normal operating envelope and set a major record by drilling the world's fifth-longest well as a milestone in 2019.
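As an illustration of the kind of rule-based, automated quality control such a system substitutes for manual entry, the sketch below flags missing samples, out-of-range values, and frozen sensors in a real-time drilling data frame. The channel names and physical ranges are hypothetical; the paper's own checklist and software are not public, so this only illustrates the general idea.

```python
# A minimal QC sketch on real-time drilling channels held in a pandas DataFrame.
# EXPECTED_RANGES below is a hypothetical set of physical bounds per channel.
import numpy as np
import pandas as pd

EXPECTED_RANGES = {
    "bit_depth_ft": (0, 40000),
    "wob_klb": (0, 80),
    "rop_ft_hr": (0, 500),
}

def qc_report(df: pd.DataFrame) -> pd.DataFrame:
    """Flag missing samples, out-of-range values, and frozen sensors."""
    rows = []
    for col, (lo, hi) in EXPECTED_RANGES.items():
        s = df[col]
        run_ids = (s != s.shift()).cumsum()        # label runs of equal values
        rows.append({
            "channel": col,
            "pct_missing": s.isna().mean() * 100,
            "pct_out_of_range": ((s < lo) | (s > hi)).mean() * 100,
            # a long run of identical readings often means a frozen sensor
            "max_frozen_run": int(s.groupby(run_ids).transform("size").max()),
        })
    return pd.DataFrame(rows)

df = pd.DataFrame({
    "bit_depth_ft": [100, 101, np.nan, 103, 104, 105],
    "wob_klb":      [10, 12, 11, 90, 12, 11],      # 90 klb exceeds the bound
    "rop_ft_hr":    [50, 50, 50, 50, 50, 50],      # frozen channel
})
print(qc_report(df))
```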
Abstract Decades of subsurface exploration and characterisation have led to the collation and storage of large volumes of well related data. The amount of data gathered daily continues to grow rapidly as technology and recording methods improve. With the increasing adoption of machine learning techniques in the subsurface domain, it is essential that the quality of the input data is carefully considered when working with these tools. If the input data is of poor quality, the impact on the precision and accuracy of the prediction can be significant. Consequently, this can impact key decisions about the future of a well or a field. This study focuses on well log data, which can be highly multi-dimensional, diverse and stored in a variety of file formats. Well log data exhibits key characteristics of Big Data: Volume, Variety, Velocity, Veracity and Value. Well data can include numeric values, text values, waveform data, image arrays, maps, volumes, etc., all of which can be indexed by time or depth in a regular or irregular way. A significant portion of time can be spent gathering data and quality checking it prior to carrying out petrophysical interpretations and applying machine learning models. Well log data can be affected by numerous issues causing a degradation in data quality. These include missing data, ranging from single data points to entire curves; noisy data from tool-related issues; borehole washout; processing issues; incorrect environmental corrections; and mislabelled data. Having vast quantities of data does not mean it can all be passed into a machine learning algorithm with the expectation that the resultant prediction is fit for purpose. It is essential that the most important and relevant data is passed into the model through appropriate feature selection techniques. Not only does this improve the quality of the prediction, it also reduces computational time and can provide a better understanding of how the models reach their conclusion. This paper reviews data quality issues typically faced by petrophysicists when working with well log data and deploying machine learning models. First, an overview of machine learning and Big Data is covered in relation to petrophysical applications. Second, data quality issues commonly faced with well log data are discussed. Third, methods are suggested on how to deal with data issues prior to modelling. Finally, multiple case studies are discussed covering the impacts of data quality on predictive capability.
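As a concrete example of two of the pre-modelling checks discussed above, the following sketch summarizes missing samples per curve and masks density readings in likely washout intervals using caliper minus bit size. The curve mnemonics and the 2-inch threshold are illustrative assumptions, not values from the paper.

```python
# Illustrative pre-modelling checks on well-log data in a pandas DataFrame.
import numpy as np
import pandas as pd

def summarize_missing(df: pd.DataFrame, curves: list) -> pd.Series:
    """Percentage of missing samples per curve (LAS null values assumed NaN)."""
    return df[curves].isna().mean().mul(100).rename("pct_missing")

def washout_mask(df, caliper="CALI", bit_size="BS", threshold_in=2.0):
    """True where the hole is likely washed out (caliper far beyond bit size)."""
    return (df[caliper] - df[bit_size]) > threshold_in

df = pd.DataFrame({
    "DEPT": np.arange(1000.0, 1005.0, 0.5),
    "GR":   [55, 60, np.nan, 70, 65, 80, np.nan, 75, 72, 68],
    "RHOB": [2.45, 2.40, 2.10, 2.05, 2.42, 2.44, 2.41, 2.39, 2.38, 2.43],
    "CALI": [8.6, 8.7, 12.5, 12.8, 8.6, 8.5, 8.6, 8.7, 8.6, 8.5],
    "BS":   [8.5] * 10,
})
print(summarize_missing(df, ["GR", "RHOB"]))
# Density readings are unreliable in washed-out hole: mask them before modelling
df.loc[washout_mask(df), "RHOB"] = np.nan
```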
Alipour K, Mehdi (Halliburton) | Dai, Bin (Halliburton) | Price, Jimmy (Halliburton) | Jones, Christopher Michael (Halliburton) | Gascooke, Darren (Halliburton) | VanZuilekom, Anthony (Halliburton) | Tahani, Hoda (Halliburton) | Ahmed, Fahad (Halliburton) | Hamza, Farrukh (Halliburton) | Sungkorn, Radompon (Halliburton) | Rekully, Cameron (Halliburton)
Abstract Measuring formation pressure and collecting representative samples are the essential tasks of formation testing operations. Where, when, and how to measure pressure or collect samples are critical questions that must be addressed in order to complete any job successfully. Formation testing data plays a crucial role in reserve estimation, especially at the field exploration and appraisal stage, but can be time consuming and expensive to acquire. The choice of location has a major impact on both the time spent performing pressure testing and sampling and their success. Optimizing rig time paradoxically requires pre-job planning that is careful and extensive but also quick. Current practice for finding optimum testing locations relies heavily on expert knowledge. With the nearly complete digitization of data collection, the oil industry is now dealing with a massive data flow, raising questions about how the data should be applied and whether it is necessary to collect it. Some of it may be so-called "dark data," of which only a very tiny portion is used for decision making. For instance, a variety of petrophysical logs may be collected in a single well to provide measures of formation properties. The logs may include conventional gamma ray, neutron, density, caliper, and resistivity measurements, or more advanced tools such as high-resolution image logs, acoustic, or NMR. These data can be integrated to help decide where to pressure test and sample; however, this effort is almost exclusively driven by experts and is manpower intensive. In this paper we present a workflow to gather, process, and analyze conventional log data in order to optimize formation testing operations. The data come from an enormous geographic distribution of wells, and tremendous effort has been required to extract, transform, and load (ETL) the data into a usable format. Stored files contain millions to billions of rows of data, creating technology challenges in reading, processing, and analyzing them in a timely manner for pre-job planning. We address these challenges by deploying cutting-edge data technology. Upon completion of the workflow we have been able to build a scalable petrophysical log interpretation platform that can easily be used for machine learning and application deployment. This type of database is an invaluable asset, especially where knowledge of analogous wells is needed. Exploratory data analysis of worldwide data on mobility and some key features influencing pressure test and sampling quality is performed and presented. We further show how these data are integrated and analyzed in order to automate the selection of formation testing locations.
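The abstract does not name the data technology used, so the following is only a generic sketch of a chunked extract-transform-load pass: streaming a very large delimited log export and landing it as well-partitioned Parquet so that per-well reads stay small. The file, column, and partition names are assumptions, and the pyarrow engine is assumed to be installed.

```python
# A generic chunked ETL sketch: CSV export in, partitioned Parquet out.
import pandas as pd

def etl(source: str, target: str, chunksize: int = 1_000_000) -> None:
    for chunk in pd.read_csv(source, chunksize=chunksize):
        # Transform: drop samples with no well or depth index, enforce dtypes
        chunk = chunk.dropna(subset=["well_id", "depth_ft"])
        chunk["depth_ft"] = chunk["depth_ft"].astype("float32")
        # Load: partition by well so later reads touch only the wells needed
        chunk.to_parquet(target, partition_cols=["well_id"], engine="pyarrow")

# Tiny synthetic export so the sketch runs end to end
pd.DataFrame({
    "well_id": ["W1", "W1", "W2", "W2"],
    "depth_ft": [1000.0, 1000.5, 2000.0, None],
    "gr_api": [55.0, 60.0, 80.0, 85.0],
}).to_csv("logs_export.csv", index=False)

etl("logs_export.csv", "logs_parquet", chunksize=2)
print(pd.read_parquet("logs_parquet").head())
```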
Big data analytics is a big deal right now in the oil and gas industry. This emerging trend is on track to become an industry best practice for good reason: it improves exploration and production efficiency. With the help of sensors, massive amounts of data already are being extracted from exploration, drilling, and production operations, as well as being leveraged to shed light on sophisticated engineering problems. So, why shouldn't a similar approach be applied to worker health and safety, especially when it's the norm across a wide variety of other industries? While the International Association of Oil and Gas Producers published a safety performance report showing that industry fatalities and injuries were down in 2019, the US Occupational Safety and Health Administration (OSHA) says that the oil and gas industry's fatality rate is seven times higher than that of all other US industries.
Soares, Ricardo Vasconcellos (NORCE Norwegian Research Centre and University of Bergen, corresponding author) | Luo, Xiaodong (NORCE Norwegian Research Centre) | Evensen, Geir (NORCE Norwegian Research Centre) | Bhakta, Tuhin (NORCE Norwegian Research Centre and Nansen Environmental and Remote Sensing Center (NERSC))
Summary In applications of ensemble-based history matching, it is common to apply Kalman gain or covariance localization to mitigate spurious correlations and excessive variability reduction resulting from the use of relatively small ensembles. An alternative strategy, not very well explored in reservoir applications, is to apply a local analysis scheme, which consists of defining smaller groups of local model variables and observed data (observations) and performing history matching within each group individually. This work aims to demonstrate the practical advantages of a new local analysis scheme over Kalman gain localization in a 4D seismic history-matching problem that involves big seismic data sets. In the proposed local analysis scheme, we use a correlation-based adaptive data-selection strategy to choose observations for the update of each group of local model variables. Compared to the Kalman gain localization scheme, the proposed local analysis scheme has an improved capacity for handling big models and big data sets, especially in terms of the computer memory required to store the relevant matrices involved in ensemble-based history-matching algorithms. In addition, we show that despite the higher computational cost of the model update per iteration step, the proposed local analysis scheme makes the ensemble-based history-matching algorithm converge faster, reaching the same level of data mismatch sooner. Meanwhile, with the same number of iteration steps, the ensemble-based history-matching algorithm equipped with the proposed local analysis scheme tends to yield estimated reservoir models of better quality than one with a Kalman gain localization scheme. As such, the proposed adaptive local analysis scheme has the potential to facilitate wider applications of ensemble-based algorithms to practical large-scale history-matching problems.
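A toy sketch of the local analysis idea is given below: for each group of model variables, only observations whose ensemble correlation with the group passes a cutoff are used in a plain ensemble-smoother update. This is not the authors' code; in particular, the fixed correlation cutoff stands in for their correlation-based adaptive data-selection criterion, and the linear forward model is synthetic.

```python
# Schematic local analysis for an ensemble smoother on a synthetic problem.
import numpy as np

rng = np.random.default_rng(0)
Ne, Nm, Nd = 50, 20, 200                      # ensemble size, model vars, obs
M = rng.normal(size=(Nm, Ne))                 # prior model ensemble
G = rng.normal(size=(Nd, Nm)) / np.sqrt(Nm)   # synthetic forward operator
D = G @ M + 0.1 * rng.normal(size=(Nd, Ne))   # predicted-data ensemble
d_obs = rng.normal(size=Nd)                   # observed data
Cd = 0.01 * np.eye(Nd)                        # observation-error covariance

def es_update(Mg, Dl, dl, Cdl):
    """Plain ensemble-smoother update restricted to the selected local obs."""
    Ma = Mg - Mg.mean(axis=1, keepdims=True)
    Da = Dl - Dl.mean(axis=1, keepdims=True)
    K = Ma @ Da.T @ np.linalg.inv(Da @ Da.T + (Mg.shape[1] - 1) * Cdl)
    E = np.linalg.cholesky(Cdl) @ rng.normal(size=(Dl.shape[0], Mg.shape[1]))
    return Mg + K @ (dl[:, None] + E - Dl)    # perturbed-observation form

M_post = M.copy()
for g in np.array_split(np.arange(Nm), 5):    # 5 local groups of variables
    # correlation of each observation with the group-mean state variable
    corr = np.corrcoef(np.vstack([M[g].mean(axis=0), D]))[0, 1:]
    sel = np.abs(corr) > 0.3                  # fixed cutoff (an assumption)
    if sel.any():                             # only |sel| x |sel| matrices stored
        M_post[g] = es_update(M[g], D[sel], d_obs[sel], Cd[np.ix_(sel, sel)])
```

Note how each local update only ever forms matrices of the size of the selected observation subset, which is the memory advantage the summary highlights over a global Kalman gain localization.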
We are living through a continuous and rapid technological evolution; perhaps this evolution moves faster than our capacity to assimilate what we can do with it, but the potential is clear, and the future will belong to those who match the right technology to the right application. In the information era, we are literally swimming in an ocean of structured and unstructured data, and thanks to the evolution of telecommunications technologies, all that information can be used from anywhere. However, information means nothing without the capability to analyze it, extract conclusions, and learn from it, which is why technologies such as Big Data processing and Artificial Intelligence are crucial. Imagine how these technologies could support the design of a part, or any concept, by applying rules that facilitate the design significantly, or how the validation of structural models by the Classification Societies could be integrated directly through cloud applications. Imagine all the benefits of these two simple examples, which can be implemented thanks to the potential of these technologies. The way we work with shipbuilding CAD tools is also changing thanks to ubiquitous access to information and the variety of hardware available to exploit it: AR, VR, MR, smartphones, tablets, etc. Not only the way we work, but also the way we interact with shipbuilding CAD tools is changing, with technologies like natural language processing that allow a direct conversation with the applications. The concepts that are absolutely clear from now into the future of shipbuilding are the use of a data-centric model and the concept of the Digital Twin: a real and effective synchronization between what we design and what we construct, covering the complete life cycle of the product thanks to technologies like IoT and RFID. This paper tries to explain how the new generations of naval architects and marine engineers are immersed in a technological world in constant and rapid evolution. The way they interact with this ecosystem will determine how we should define the new rules of shipbuilding CAD systems.
Ma, Kuiqian (Tianjin Branch of CNOOC (China) Co., Ltd) | Chen, Cunliang (Tianjin Branch of CNOOC (China) Co., Ltd) | Zhang, Wei (Tianjin Branch of CNOOC (China) Co., Ltd) | Liu, Bin (Tianjin Branch of CNOOC (China) Co., Ltd) | Han, Xiaodong (CNOOC Ltd and China University of Petroleum, Beijing)
Abstract Performance prediction is one of the key tasks of oilfield development and an important factor in investment decision-making, especially for offshore oilfields with large investments. At present, most oilfields in China have entered the high water cut stage or even the extra-high water cut stage, which requires higher prediction accuracy. The water drive curve is an important method for predicting performance. Traditional methods are based on exponential formulas, but they adapt poorly in the high water cut period because they deviate from a straight line there. In this paper, a robust method based on big data and artificial intelligence is proposed for predicting the performance of offshore oilfields in the high water cut period. First, the reasons for the "upward warping" phenomenon, in which traditional methods deviate from the straight line, are analyzed. The main reason for the deviation is that the relationship between the oil-water relative permeability ratio and water saturation no longer follows an exponential relationship. A new percolation characterization equation with stronger adaptability is therefore proposed, which focuses on the limit of high water flooding development. On this basis, the equation of the new water drive characteristic curve is derived theoretically, and a dynamic prediction method is established. Moreover, the method is solved using big data and an AI algorithm. The method has been applied to many high water flooding relative permeability curves, with a match rate of more than 95.6%. The new water drive characteristic curve better reflects the percolation characteristics of high water cut reservoirs. At the same time, the performance of adjustment wells and measures on the development performance curve is analyzed: upward warping of the curve indicates that adjustment wells or measures are effective. Field application shows that the prediction error of the new method is less than 6%, which better meets the needs of oilfield development. Because an artificial intelligence algorithm is applied, the method is more convenient to use and saves a considerable amount of time and money. It is a process of self-learning and self-improvement: as the oilfield continues to produce over time, each new actual data point is added to the database, the fit is corrected, and the solution is learned again. The method has been applied to several oilfields in Bohai with remarkable effect, providing a good reference for the development of other oilfields.
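The paper's improved characterization equation is not given in the abstract, but the classical relation it replaces can be sketched. The snippet below fits the traditional exponential form kro/krw = a·exp(b·Sw) by least squares in semilog space on illustrative data whose high-saturation points flatten off the exponential trend, reproducing the kind of misfit the paper attributes to the "upward warping."

```python
# Fit the classical exponential relative-permeability-ratio relation.
import numpy as np

# illustrative relative-permeability-ratio data (not from the paper)
Sw      = np.array([0.35, 0.45, 0.55, 0.65, 0.75, 0.85])
kro_krw = np.array([20.0, 6.0, 1.8, 0.55, 0.20, 0.12])

# ln(kro/krw) = ln(a) + b*Sw  ->  ordinary least squares on (Sw, ln ratio)
b, ln_a = np.polyfit(Sw, np.log(kro_krw), 1)
a = np.exp(ln_a)
print(f"fit: kro/krw ~ {a:.1f} * exp({b:.2f} * Sw)")

# the largest misfit sits at the highest-Sw point, echoing the deviation
# that the paper attributes to the exponential assumption breaking down
pred = a * np.exp(b * Sw)
print("relative error:", np.round((pred - kro_krw) / kro_krw, 2))
```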
Abstract Oil and Gas operations are now being "datafied." Datafication in the oil industry refers to systematically extracting data from the various oilfield activities that are naturally occurring. Successful digital transformation hinges critically on an organization's ability to extract value from data, yet extracting and analyzing data is getting harder as the volume, variety, and velocity of data continue to increase. Analytics can help us make better decisions only if we can trust the integrity of the data going into the system. As digital technology continues to play a pivotal role in the oil industry, the role of reliable data and analytics has never been more consequential. This paper is an empirical analysis of how Artificial Intelligence (AI), big data, and analytics have redefined oil and gas operations. It takes a deep dive into the AI and analytics technologies reshaping the industry, specifically as they relate to exploration and production operations, as well as other sectors of the industry. Several illustrative examples of transformative technologies reshaping the oil and gas value chain, along with their innovative applications in real-time decision making, are highlighted. It also describes the significant challenges that AI presents in the oil industry, including algorithmic bias, cybersecurity, and trust. With digital transformation poised to re-invent the oil and gas industry, the paper also discusses the energy transition and makes some bold predictions about the oil industry of the future and the role of AI in that future. Big data lays the foundation for the broad adoption and application of artificial intelligence. Analytics and AI are going to be very powerful tools for making predictions with a precision that was previously impossible. Analysis of some of the AI and analytics tools studied shows that there is a huge gap between the people who use the data and the metadata. AI is only as good as the ecosystem that supports it. Trusting AI and feeling confident in its decisions starts with trustworthy data: the data needs to be clean, accurate, devoid of bias, and protected. As the relationship between man and machine continues to evolve, and organizations continue to rely on data analytics for decision support, it is imperative that we safeguard against making important technical and management decisions based on invalid or biased data and algorithms. The variegated outcomes observed from some of the AI and analytics tools studied in this research show that, when it comes to adopting AI and analytics, the worm remains buried in the apple.
Yue, Baolin (Tianjin Branch of CNOOC (China) Co., Ltd) | Liu, Bin (Tianjin Branch of CNOOC (China) Co., Ltd) | Shi, Hongfu (Tianjin Branch of CNOOC (China) Co., Ltd) | Shi, Fei (Tianjin Branch of CNOOC (China) Co., Ltd) | Zhang, Wei (Tianjin Branch of CNOOC (China) Co., Ltd)
Abstract The prediction of the reservoir fluid production law plays a key role in offshore oilfield development plan design. It determines the selection of parameters such as pump displacement, subsea pipeline capacity, platform fluid handling capacity, and power generation equipment. If the liquid production forecast is too low, capacity must be expanded later, while if the forecast is too high, investment is wasted; either way, the forecast directly affects the fixed investment in oilfield development. Based on statistical analysis of big data, this paper applies the dynamic data of all single wells over the full life cycle of the oilfield to analyze the behavior of the dimensionless liquid production index (DLPI), and on this basis establishes a liquid production index prediction formula. A classification of the different Bohai reservoir types and a statistical table of their DLPI characteristics are thus completed. The results show the following for Bohai Sea heavy oil reservoirs: at water cut < 60%, the DLPI trend is flat; at water cut between 60% and 80%, it grows slowly (2.5 to 3 times the initial value at 80% water cut); and at water cut > 80%, it grows rapidly (5.5 to 6 times at 95% water cut). For conventional Bohai Sea oil reservoirs: at water cut < 60%, the DLPI first drops, reaching its lowest point (0.7 to 0.9 times) at a water cut of about 30%, and returns to 1 when the water cut rises to 60%; at water cut between 60% and 80%, it grows slowly (1.5 to 2 times); and at water cut > 80%, it grows rapidly (2 to 3 times at 95% water cut). The study provides guidance for predicting liquid production in offshore oilfields and a basis for designing new oilfield development schemes and increasing production from mature oilfields.
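The abstract reports DLPI values as multiples of an initial value. A minimal sketch of that normalization follows, assuming DLPI is the liquid productivity index q_l/(p_res − p_wf) divided by its value at low water cut; the authors' exact definition may differ, and the single-well history below is synthetic, shaped only to mimic the reported heavy-oil trend (flat below 60% water cut, roughly 2.5 to 3 times at 80%, 5.5 to 6 times at 95%).

```python
# Compute a dimensionless liquid production index from a well history.
import numpy as np

def dlpi(q_liquid, p_res, p_wf):
    """Liquid productivity index normalized by its initial value."""
    j = q_liquid / (p_res - p_wf)             # m3/d per MPa of drawdown
    return j / j[0]

water_cut = np.array([0.10, 0.40, 0.60, 0.80, 0.95])
q_liquid  = np.array([300., 270., 250., 525., 1000.])  # m3/d
p_res     = np.array([15.0, 14.5, 14.0, 13.5, 13.0])   # MPa
p_wf      = np.array([10.0, 10.0, 10.0, 10.0, 10.0])   # MPa

for wc, v in zip(water_cut, dlpi(q_liquid, p_res, p_wf)):
    print(f"water cut {wc:.0%}: DLPI = {v:.2f}")
```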