Abstract Standardization of drilling procedures is a key element of success in developing unconventional oil and gas fields today. We propose approximate string matching as a data-analytic technique for measuring standardization across wells in a field. We describe the implementation of approximate string matching, detailing how it can be contextualized for oil and gas drilling using operation codes, and how results can be normalized to allow comparison between sequences of operations of different lengths. We use this technique to develop a measure of operational variability: the first objective metric for evaluating and comparing the rate and consistency of standardization across different wells, sections of wells, and rigs, informing decision-making based on the large streams of feedback already available from field operations. We provide examples of operational variability using analysis of data from about two hundred horizontal wells drilled by a single operator in a major North American unconventional field, and from this develop the concept of the "standardization curve" to demonstrate the importance of this metric in understanding learning, and movement along the "learning curve," during a drilling campaign. Additionally, we outline specific ways these approaches can be used to automate the data-driven comparison, disaggregation, and assimilation of field learning to better manage improvement within drilling campaigns. This tool provides a foundation upon which future data-analytic and machine-learning techniques can build, developing learning programs for "smart" oil and gas fields that make better use of available data to enable rapid adaptation and greater overall drilling efficiency.
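The core idea above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual implementation: it computes an edit (Levenshtein) distance between two wells' operation-code sequences and normalizes by the longer sequence's length, so wells with different numbers of operations can be compared. The operation codes ("DRL", "SVY", etc.) are hypothetical placeholders, not the operator's real code set.

```python
def levenshtein(a, b):
    """Edit distance between two sequences of operation codes,
    computed row by row with a standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def operational_variability(a, b):
    """Edit distance normalized to [0, 1]:
    0.0 = identical sequences, 1.0 = nothing in common."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

# Two hypothetical wells' operation sequences (drill, survey, trip, case).
well_1 = ["DRL", "SVY", "DRL", "TRP", "CSG"]
well_2 = ["DRL", "DRL", "SVY", "TRP", "CSG"]
print(operational_variability(well_1, well_2))  # → 0.4
```

A lower score across successive wells would indicate convergence toward a standard procedure; plotting the score well-by-well is one simple way to trace the "standardization curve" described above.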
In 2018, BP acquired an ocean-bottom-node (OBN) dataset at Mad Dog field in the Gulf of Mexico. We used these data to rebuild the salt and sediment velocity model, employing a recently developed full-waveform inversion (FWI) algorithm. This resulted in a step-change improvement in image quality, as well as significant time savings compared to traditional salt model building workflows. We conclude that FWI applied to OBN data is the preferred method for salt model building.
The field is subsalt and complexly faulted. Since the Phase 2 development is heavily dependent on water injection, identifying fault compartments is critical. Finite difference modelling suggested that full azimuth and longer offset data were necessary to improve the image, and OBN acquisition was the best way to achieve those goals. Also, critically, it had previously been shown that application of FWI on OBN data can produce a significantly improved salt velocity model (Shen et al., 2017; Michell et al., 2017), pointing to
Abstract The Hibiscus field, situated off the north coast of Trinidad, is a large, stratigraphically isolated and well-connected gas field with 14 years of production history. Notwithstanding this extensive production history and overall recovery, a number of key subsurface uncertainties have been identified. The scope of this study was to better understand reservoir complexity and define subsurface risk and opportunity. An integrated and iterative multidisciplinary approach to reservoir modelling was applied in an effort to meet these objectives. A modern suite of workflows, such as Monte Carlo petrophysical analysis, conditioning of models to seismic attributes, experimental-design-based uncertainty analysis, and assisted history matching, was used to generate new static and dynamic reservoir models. The key aspect of this workflow was 20+ major static-to-dynamic model iterations and a large number of deterministic simulations to rank and assess the validity of static concepts. The learnings were subsequently applied to create a robust reference case; static and dynamic uncertainties were framed, and a static probabilistic uncertainty workflow was developed to QC the deterministic case. A full probabilistic assisted history matching exercise on the dynamic model enabled a refinement of the volumetric ranges and provided critical insight through analysis of posterior uncertainty distributions. The iterative workflow allowed concepts to be validated dynamically, and it was demonstrated that high-quality history matches could be achieved even after the removal of almost all dynamic multipliers – a common issue in simulation models. Significant improvements to pore volume distribution, the use of geologically derived dynamic baffles, and improved permeability distributions were amongst the key learnings on the static side.
The probabilistic dynamic modelling was characterized by strong GIIP convergence with a reduction of history match error, resulting in a refined volumetric range and better characterization of uncertainty ranges through posterior analysis. The application of modern integrated and iterative workflows to a mature field has better defined uncertainty ranges, improved understanding of reservoir behavior, and overall resulted in a more robust suite of models. Key learnings were highlighted to support future reservoir model rebuilds. Ultimately this process has demonstrated the value of revisiting existing datasets in late-life assets by generating higher confidence in remaining reserve estimates and business plan forecasts.
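The probabilistic volumetric side of such a workflow can be illustrated with a toy Monte Carlo sketch. This is purely schematic: the input distributions, their parameter values, and the gas formation volume factor below are invented for illustration and are not the Hibiscus study's actual inputs. It shows only the general shape of the exercise, which is sampling uncertain reservoir properties to produce a GIIP distribution summarized as P90/P50/P10.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def giip_sample():
    """One Monte Carlo draw of gas initially in place (GIIP), using the
    standard volumetric formula GIIP = GRV * NTG * phi * (1 - Sw) / Bg.
    All distributions and values below are illustrative assumptions."""
    grv = random.uniform(4.0e8, 6.0e8)  # gross rock volume, m^3 (assumed)
    ntg = random.uniform(0.60, 0.80)    # net-to-gross ratio (assumed)
    phi = random.gauss(0.22, 0.02)      # porosity (assumed)
    sw  = random.uniform(0.20, 0.40)    # water saturation (assumed)
    bg  = 0.004                         # gas formation volume factor (assumed)
    return grv * ntg * phi * (1.0 - sw) / bg

samples = sorted(giip_sample() for _ in range(10000))
p90, p50, p10 = (samples[int(q * len(samples))] for q in (0.10, 0.50, 0.90))
print(f"GIIP P90/P50/P10 (m^3): {p90:.3e} / {p50:.3e} / {p10:.3e}")
```

In a full study, posterior distributions from assisted history matching would replace these prior draws, which is how the GIIP range is narrowed as the text describes.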