Scale inhibitor squeeze treatments are among the most common techniques to prevent oilfield mineral scale deposition in oil producers. The effectiveness and lifespan of a squeeze treatment design are determined by scale inhibitor (SI) retention, which can be described using an adsorption pseudo-isotherm, commonly derived from coreflooding experiments. In certain circumstances, however, a new isotherm must be re-derived to match the field return concentration profile once the treatment is deployed and samples are collected to measure the SI return concentration. This new isotherm is used to design the next treatment. The objective of this manuscript is to quantify the associated uncertainty, which depends on the number of samples analyzed. As in any inverse problem, there may not be a unique solution, which in our context is a pseudo-isotherm matching the return concentration profile. As a consequence, there will be a certain level of uncertainty in predicting the next squeeze treatment lifetime. By solving this inverse problem in a Bayesian formulation, incorporating the prior information and a likelihood involving the return concentration profile, it is possible to quantify the posterior distribution and therefore calculate the uncertainty range, commonly known as P90/P50/P10, based on the Randomized Maximum Likelihood (RML) approach. The P90/P50/P10 was calculated as a function of the number of samples available, differentiating between early production and late production.
The results suggest a correlation between the P90/P50/P10 interval and the number of samples: the difference between the P10 and P90 forecast squeeze lifetimes widens as the number of samples decreases. The proposed methodology may be used to determine the number of samples required to reduce the uncertainty in predicting the lifetime of the next squeeze treatment. Although taking more samples may increase the cost per barrel for a treatment, the ability to predict treatment lifetime accurately will be more cost effective in the long term, as production might not be affected.
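The RML recipe summarized in this abstract lends itself to a compact sketch. The following is an illustrative toy example, not the authors' implementation: for a scalar linear-Gaussian inverse problem (where RML sampling is exact), each sample perturbs both the prior mean and the observed data and then solves the resulting least-squares problem; quantiles of the samples give P90/P50/P10. All names, the forward model and the numbers are invented for illustration.

```python
import numpy as np

def rml_samples(g, m_prior, sigma_m, d_obs, sigma_d, n_samples=2000, rng=None):
    """Randomized Maximum Likelihood for a scalar linear forward model d = g*m.

    Each sample perturbs the prior mean and the observed data, then solves
    the resulting least-squares problem analytically (exact for linear g).
    """
    rng = np.random.default_rng(rng)
    m_pert = m_prior + sigma_m * rng.standard_normal(n_samples)
    d_pert = d_obs + sigma_d * rng.standard_normal(n_samples)
    # Minimizer of (m - m_pert)^2/sigma_m^2 + (g*m - d_pert)^2/sigma_d^2
    w_m, w_d = 1.0 / sigma_m**2, g / sigma_d**2
    return (w_m * m_pert + w_d * d_pert) / (w_m + g * w_d)

samples = rml_samples(g=2.0, m_prior=1.0, sigma_m=0.5, d_obs=2.4, sigma_d=0.2,
                      n_samples=5000, rng=42)
# Oilfield convention: P90 is the low (90%-probability-of-exceedance) outcome
p90, p50, p10 = np.quantile(samples, [0.1, 0.5, 0.9])
```

In a real squeeze application the forward model would be the transport simulation mapping a candidate pseudo-isotherm to a return concentration profile, and each perturbed problem would be solved numerically rather than in closed form.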
I have processed a total of nine thousand microseismic events to investigate the stochastic process of the stress mechanism as well as the frequency-magnitude relation between microseismic, volcanic and intraplate events. The processed data comprised all available three-component data with a magnitude range of 0.4 ≤ M ≤ 6.5, extracted from the Incorporated Research Institutions for Seismology (IRIS), and data from a producing reservoir in the Middle East with a magnitude range of -2.3 ≤ M ≤ -0.5. Seismic b-values are estimated using the maximum likelihood method (Aki, 1965), while the stress drops are derived from Brune's source model (Brune, 1970). Volcanic and intraplate regions were selected in order to understand the differences in fracture mechanism between the two regions, with the hope of further correlating them with the recorded microseismic events in the hydrocarbon reservoir. The results show that b-values and stress drops for the volcanic region are lower than those of the intraplate region, while b-values for microseismic events are the highest. The results also show a strong dependency on the seismic moment and corner frequency values from each region. Overall, this study provides a better understanding of, and some possible features to distinguish, microseismic events during hydraulic stimulation.
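The Aki (1965) maximum-likelihood b-value estimator cited above has a simple closed form, b = log10(e) / (mean(M) - Mc), where Mc is the completeness magnitude. A minimal sketch on a synthetic catalogue (not the study's data; under Gutenberg-Richter, magnitudes above Mc are exponentially distributed with mean log10(e)/b):

```python
import numpy as np

def b_value_mle(magnitudes, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value.

    b = log10(e) / (mean(M) - Mc), using only events at or above the
    completeness magnitude Mc.
    """
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalogue with a true b-value of 1.0 (exponential excess magnitudes)
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=9000)
b = b_value_mle(mags, m_c=2.0)
```

Note that for magnitudes reported in binned form, a common refinement replaces Mc with Mc - dM/2, where dM is the bin width; the sketch omits this correction.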
Nair, Aravind (DNV GL) | Jaiswal, Vivek (DNV GL) | Fyrileiv, Olav (DNV GL) | Vedeld, Knut (DNV GL) | Zheng, Haining (ExxonMobil) | Huang, Jerry (ExxonMobil) | Tognarelli, Michael (BP) | Goes, Rafael (Petrobras) | Bruschi, Roberto (Saipem) | Bartolini, Lorenzo (Saipem) | Vitali, Luigio (Saipem)
To date, there are no publicly available, validated tools or industry-accepted guidelines for the assessment of Vortex-Induced Vibration (VIV) fatigue of rigid jumper (spool) systems. The existing state of practice has been to treat rigid jumper systems as free spanning pipelines and apply the associated design principles in DNV GL recommended practice DNV-RP-F105/DNVGL-RP-F105 (Free Spanning Pipelines). However, widely used rigid jumper systems such as M-shape jumper systems are subjected to complex flow fields around their legs and bends and fall outside the test data used to generate the free-span response model in the DNV GL Recommended Practice (RP). A Joint Industry Project (JIP), the ‘Jumper VIV JIP’, which included BP, ExxonMobil, Petrobras, Saipem and DNV GL, was conducted from December 2014 to 2016 to collectively tackle the technical issues related to the VIV design of rigid jumper systems.
Through the JIP study, measured responses from ExxonMobil's jumper tow test data were used to develop new response curves for jumper systems in pure-current conditions. Curves for in-line and cross-flow responses were initially developed by classifying the measured responses into in-line or cross-flow directions and compared against the existing DNVGL-RP-F105 response curves. Due to potential ambiguity in classification and application to jumper design, a more general curve that does not rely on directional classification has also been generated. Due to the differences in behavior between rigid jumper systems and free spanning pipelines, a new VIV guidance report was developed as part of the JIP deliverables. The principles and philosophies of DNV-RP-F105 were followed in the development, but with the intent of identifying the unique behavior of jumper systems for a subsequent update of the RP.
This paper presents the guidance notes from the JIP and forms the first release of a jumper VIV fatigue assessment approach to the industry. ExxonMobil's model test data, the only known test data available in the industry, were used in the development of a unique response model and the new design guidance. The paper includes the new response model along with VIV screening, safety factors and the unique considerations required for fatigue assessment of jumper systems.
Multiphase flow meters are often built on one or more single-phase flow metering technologies. Following this trend, Coriolis meters are increasingly used in upstream applications in conjunction with an independent water cut meter to measure multiphase flow. Coriolis meters are well known for fiscal metering applications, as they offer unparalleled accuracy without requiring detailed information on the fluid being metered. They offer two distinct measurements, density and mass flow rate, which is often not possible with other metering technologies. However, under multiphase flow, the biggest problem with liquid Coriolis meters is their tendency to stall when large amounts of gas flow through them. Over the last 10 years, many manufacturers have developed techniques to adjust the drive gain to enhance the ability of these meters to handle increasing amounts of gas. There have also been several developments in using advanced signal processing and machine learning methods to help the meters self-calibrate and correct for the presence of gas. These methods range from a simple error analysis on certain raw measurements to more sophisticated "Digital Twin" based concepts that simulate the behavior of the Coriolis meter internally. The paper describes the concept of the "Digital Twin" in detail and outlines the reasons for the superiority of such an approach.
De-manning of offshore processing facilities presents an opportunity for significant CAPEX and OPEX savings for operators, is inherently safer and may be an enabler for marginal oil and gas developments.
Unmanned wellhead platforms are common; however, de-manning of significant oil and gas processing facilities has not been achieved to date.
This paper presents a conceptual design for a Normally Unmanned Installation (NUI) with full processing of well fluids to export specifications. By using novel and innovative solutions, a design with reduced equipment count, platform weight and attendance is proposed. The design is technically ready for implementation whilst meeting the functional requirements of an offshore processing facility.
A functional specification was developed for the platform; it was important that the design be transferrable, so this functional specification was kept broad. The platform was expected to process well fluid to typical pipeline export specifications. To test the design, its commerciality was compared against a "Reference Case" traditional manned platform. Full subsea processing was investigated but discounted due to a lack of technical readiness.
Traditional approaches to platform design were challenged at every level, from platform access methods to the requirement for platform power generation. The key drivers for manning were investigated, identifying the target areas for simplification. An optimal platform design was then identified integrating the latest techniques and technology.
The NUI CAPEX was compared against the Reference Case. Ground-up OPEX estimates were prepared to quantify the reduction in attendance and the achievable savings.
The resulting NUI design achieves an approximate 60% reduction in equipment weight compared with a traditional manned platform, leading to a CAPEX reduction of around 30%. Significant reduction is achieved by removal of living quarters, and therefore life support systems from the facility. Power import allows maintenance-heavy power generation to be removed.
The optimised NUI delivers a reduction in OPEX of around 50% when compared to a traditional manned facility. Digital strategies such as predictive maintenance, robotic inspection and remote monitoring reduce the reactive maintenance hours offshore by up to 85% while maintaining platform availability. Walk-to-work platform access is a mature technology and offers cost savings when compared with helicopter access, and there is the potential for sharing walk-to-work systems between assets and industries. Walk-to-work access has the advantage that no permanent living quarters are required. A significant reduction in fabric maintenance manhours is achieved by carrying out inspection remotely.
The truly innovative and disruptive solution for unmanned facilities is the subsea factory on the seafloor. At the time of writing, the technology is not of the required maturity and does not offer a viable solution. As this technology matures, the potential exists to create a truly differentiated facility of the future.
The investigation of failure data typically involves manual interpretation of free text maintenance system records. Even if failure classification codes exist within an organization's system, they often do not include enough detail or accuracy to group or identify trends. The objective of this study is to develop an automated method to identify functional failures from maintenance record data. The result is a reduction in workload for manual analysis, as well as improved identification of failures and trends.
This study's methodology involves the sequential arrangement of multiple text-mining techniques. The techniques include: term frequency-inverse document frequency (TF-IDF), clustering, association rules, term matrix creation, and lexicon development for pre-processing text. In isolation, these techniques have been shown to be effective in non-industrial pursuits, such as marketing and retail sales. This study serves to apply them in the domain of equipment reliability. They are iteratively implemented and refined on maintenance system records, including work orders (which may or may not represent failures), as well as failure report records. The ability to identify failure modes, failed components, and trends is then evaluated.
The techniques were successfully implemented, and the effectiveness of each was evaluated when applied to the science of equipment reliability. Text mining was shown to be partially effective in identifying failure modes from maintenance record free text. Certain sub-techniques were shown to be quite effective, in particular the clustering technique's ability to group failed components and failure modes. Hierarchical clustering is a promising technique for technical and industrial themed free text. It was also shown that the outputs of clustering can yield different and valuable insights depending on the types of text records implemented and the types of pre-processing available to the organization. The association rules method was somewhat effective relative to clustering, as it was able to identify certain failure modes; however, this method still requires a degree of manual intervention and interpretation at this time. The overall results are promising. There is great opportunity for continued study along multiple fronts, including additional techniques such as sentiment analysis and topic modelling.
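To make the TF-IDF-plus-clustering idea concrete, here is a minimal, dependency-free sketch; it is not the study's implementation. Records are weighted by TF-IDF and then greedily grouped by cosine similarity, a simplified stand-in for the hierarchical clustering used in the study. The maintenance records and the similarity threshold are invented for illustration.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weight dictionaries for a list of tokenised documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    num = sum(w * b.get(t, 0.0) for t, w in a.items())
    den = (math.sqrt(sum(w * w for w in a.values()))
           * math.sqrt(sum(w * w for w in b.values())))
    return num / den if den else 0.0

def cluster(vecs, threshold=0.2):
    """Greedy single-pass grouping: join a record to the first cluster whose
    representative is similar enough, else start a new cluster."""
    clusters = []
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(vecs[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

records = [
    "pump seal leak replaced seal",
    "seal leak on pump repaired",
    "motor bearing vibration high",
    "replaced motor bearing vibration",
]
vecs = tfidf([r.split() for r in records])
groups = cluster(vecs, threshold=0.2)  # groups seal/pump records apart from motor/bearing records
```

A production workflow would typically use library implementations (e.g. a TF-IDF vectoriser and agglomerative clustering) plus the lexicon-based pre-processing described in the abstract, but the grouping principle is the same.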
Well integrity is very important for well operations in the oil and gas industry, and advanced technologies and analyses have been developed to ensure it. One of the most important analyses is the evaluation of the cement bond, which measures the presence and bonding of cement between the casing and the formation at a particular depth or interval. This evaluation is critical for the hydraulic isolation needed to withstand subsequent completion and production operations.
Usually, cement bond evaluation is done by wireline logging, with a tool that transmits acoustic waves and computes the acoustic energy propagating through the casing, the cement and the formation. The amplitude and attenuation of the returned wave can be correlated with the Bond Index (BI), owing to the difference in acoustic impedance between casing and cement.
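One common definition of the Bond Index normalises the measured attenuation between the free-pipe (0% bond) and fully bonded (100% bond) responses. A hedged sketch of that normalisation, with hypothetical attenuation values in dB/ft:

```python
def bond_index(att_measured, att_free_pipe, att_fully_bonded):
    """Bond Index as the measured attenuation normalised linearly between
    the free-pipe (BI = 0) and fully bonded (BI = 1) attenuations."""
    return (att_measured - att_free_pipe) / (att_fully_bonded - att_free_pipe)

# Hypothetical example: free pipe attenuates 0.5 dB/ft, a fully bonded
# interval 9.5 dB/ft, and the logged interval reads 5.0 dB/ft.
bi = bond_index(5.0, 0.5, 9.5)
```

In practice the free-pipe and fully bonded reference attenuations are taken from calibration intervals or tool charts, and a minimum BI over a minimum interval length is typically required to declare hydraulic isolation.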
Over the years, different wireline technologies have been developed and matured for cement bond evaluation. Nowadays, however, there are scenarios where wireline operation is not feasible, such as in highly deviated wells or when a quick turnaround is needed.
Acoustic technology in Logging While Drilling (LWD) has been developed to measure the compressional and shear slowness of the formation. These devices are able to transmit acoustic waves at different frequencies and receive them after propagation through a medium, and they are applicable in cased-hole conditions. This paper presents a case study in which LWD acoustic technology was successfully used to evaluate the cement bond in a highly deviated well in Brazil, and the results demonstrate the capability of the technology to provide qualitative cement bond data. These results were successfully validated using sonic and ultrasonic wireline technologies. The comparison between the LWD acoustic and wireline results is shown in this paper, including details of the methodology applied to the test, the parameters considered and the data processing.
Long slugs arriving in separators/slug catchers are a major flow assurance concern in the offshore oil production industry, potentially causing flooding and/or severe separation problems. The sizing of the receiving facilities is determined by the longest slugs, so the economic implications of slug length predictions can be substantial. Slugs may also over time cause serious fatigue issues in free-span pipe sections, as large load variations can drastically reduce the lifetime of the flange connections. In most laboratory experiments reported in the literature, slugs rarely become longer than around 30-40 pipe diameters, while in many oil production fields, slugs can be considerably longer. Consequently, there is a clear need to better understand how and why such long slugs appear in production systems, and in this paper we present results that shed some light on this matter.
We present a unique set of two- and three-phase slug flow experiments conducted in a 766 meter long 8" pipe at 45 bara pressure. The first half of the pipe was horizontal, while the second half was inclined by 0.5 degrees. A total of ten narrow-beam gamma densitometers were mounted on the pipe to study flow evolution, and in particular slug length development. In addition, the average phase fractions were measured using two traversing gamma densitometers, and one 160 meter long section with shut-in valves. The pressure drop was also measured along the loop using a total of twelve pressure transmitters.
The results show that the mean slug length initially increases with the distance from the inlet, but this increase slows down and the mean slug length typically reaches a value between 20 and 50 diameters at the outlet. At low flow rates, the slug length distributions tend to be extremely wide, sometimes with standard deviations approaching 100%. The longest slugs that we observed were over 250 pipe diameters (50 meters). At higher flow rates, the slug length distributions are generally narrower. The effect of the water cut on the slug length distribution is significant, but complex, and it is difficult to establish any general trends regarding this relationship. Finally, it was observed that slug flow often requires a very long distance to develop. Specifically, in most of the slug flow experiments, the flow regime 50 meters downstream of the inlet was not slug flow.
The reported experiments are the first three-phase slug flow experiments ever conducted in a large-scale setup. By using a long, heavily instrumented pipe, we were able to study the evolution of slug length distributions over a long distance. We believe that these experiments can be of considerable value for developing tools for predicting slug lengths in multiphase transport systems, which is a critical matter for oil field operators.
The objective of this study was to accurately measure the degradation of equipment indirectly (without the use of sensor or machine data). This was done in a semi-autonomous manner by leveraging the bulk data in the organization's maintenance system and its underlying metadata relationships. The study combined, in a novel way, two traditional asset management techniques: the P-F curve from Condition Based Maintenance (CBM), and the Crow-AMSAA method (Crow – United States Army Materiel Systems Analysis Activity). The P-F curve is a widely accepted tool to represent the degradation and eventual failure of assets. It is used to approximate the lead time to a specific failure occurring, where "P" represents the point where a failure can first be detected, and "F" represents the point where the actual functional failure occurs. Traditionally, this curve is derived in a CBM system using knowledge-based or data-driven approaches that depend heavily on field sensor data. This study instead uses a statistics-based method (Crow-AMSAA) to approximate the curve from properties of maintenance record data and the intensity of data generation within the maintenance system. As theorized in previous studies, we can indeed observe changes in maintenance record properties prior to rig stoppage events (equipment failures) and approximate the P-F curve. The underlying input variable from the maintenance system was shown to be the Mean Time Between Corrective Maintenance (MTBCM), which is derived from qualitative and quantitative properties of maintenance records (work orders or jobs). When this value is plotted over time using the Crow-AMSAA method, a representation of the P-F curve can be easily identified. It was also shown to be possible to determine a threshold value for taking action prior to failure. To represent this, the Crow-AMSAA output graphics were augmented with additional metadata and features that underlie the maintenance records.
With proper visual aids and regression techniques implemented, it is possible to identify threshold values in units of MTBCM, as well as in days, for specific assets. This allows actions to be implemented before imminent failures caused by degradation growth.
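The Crow-AMSAA model referred to above describes cumulative events as N(t) = λt^β, and its parameters have a closed-form maximum-likelihood fit. The sketch below is a generic illustration with invented corrective-maintenance times, not the study's data; β > 1 signals an accelerating event intensity (i.e. falling MTBCM), which is the behavior the study uses to trace the P-F curve ahead of a functional failure.

```python
import math

def crow_amsaa_mle(times, t_end):
    """Crow-AMSAA (NHPP power-law) MLE for cumulative events N(t) = lam * t**beta.

    times: event times (e.g. corrective-maintenance work orders),
    t_end: end of the observation window.
    beta < 1 indicates improving reliability; beta > 1 indicates deterioration.
    """
    n = len(times)
    beta = n / sum(math.log(t_end / t) for t in times)
    lam = n / t_end**beta
    return lam, beta

def instantaneous_mtbf(lam, beta, t):
    """Instantaneous MTBCM: reciprocal of the intensity lam * beta * t**(beta - 1)."""
    return 1.0 / (lam * beta * t ** (beta - 1))

# Invented event times (days): corrective maintenance bunching up late in the
# window, as expected for an asset approaching functional failure.
times = [100, 300, 450, 550, 620, 660, 690]
lam, beta = crow_amsaa_mle(times, t_end=700)
```

Plotting `instantaneous_mtbf` over time for such a fit gives the falling MTBCM trace against which a threshold for intervention can be read off.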
Model-based prediction of vessel response is valuable for planning and execution of marine operations. Response-based operation criteria are expected to give less downtime and cheaper and safer operations than criteria based on wave height and wave period, at least if the response model has sufficient accuracy. The accuracy can in principle be improved by tuning the model using measured inputs and outputs. It is envisioned that an advisory system for planning and execution of marine operations will contain a module for continuous model improvement based on on-site measurements of excitation and response. A premise for this is that the measurements be of sufficiently high quality. To test the potential of automatic model tuning, an established numerical vessel model is subjected to tuning with high-precision data from a model test with wave disturbance only. This indicates how well the model can perform when tuned under favourable conditions and serves as a benchmark for tuning under noisy conditions. In addition, the results may suggest improvements to the mathematical model. A prototype tuning software is written in Matlab. The tuning principle is to minimize the output error, i.e. the difference between measured and simulated response, by adjusting the model's parameters. A quasi-Newton method is used for the minimization. The tuning software is tested with data from the model test and found to function as intended. Examples of tuning are given.
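The output-error tuning principle described above can be illustrated in a few lines. The sketch below is not the Matlab prototype: it substitutes a toy first-order step response for the vessel model and uses BFGS (a standard quasi-Newton method) to minimize the squared difference between measured and simulated response. All model names, parameters and numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(params, t):
    """Toy response model standing in for the vessel model: a first-order
    step response with gain a and rate b."""
    a, b = params
    return a * (1.0 - np.exp(-b * t))

def output_error(params, t, measured):
    """Output error: sum of squared differences between simulated and
    measured response, the quantity minimized during tuning."""
    return np.sum((simulate(params, t) - measured) ** 2)

t = np.linspace(0.0, 10.0, 200)
measured = simulate((2.0, 0.5), t)              # noise-free "model test" data
result = minimize(output_error, x0=(1.0, 1.0),  # deliberately wrong initial model
                  args=(t, measured), method="BFGS")
a_tuned, b_tuned = result.x
```

With noise-free data the tuned parameters recover the data-generating values; adding measurement noise to `measured` turns the same setup into a benchmark for tuning under less favourable conditions, as discussed in the abstract.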