Abstract In previous frac designs, proppant tracer logs revealed poor proppant distribution between clusters. In this study, various technologies were used to improve cluster efficiency, primarily by selecting perforations in like rock, adjusting perforation designs, and using diverters. The effectiveness of the changes was analyzed using proppant tracer logs. This study consisted of a group of four wells completed sequentially. Sections of each well were divided into completion design groups characterized by different perforating methodologies. Perforation placement was primarily driven by RockMSE (mechanical specific energy), a calculation derived from drilling data that relates to a rock's compressive strength. Additionally, the RockMSE values were compared against three other datasets: gamma ray collected while drilling, a calculation of stresses from accelerometer data recorded at the bit, and pulsed neutron cross-dipole sonic log data. The results of this study strongly indicate that fluid flow is greatly affected by rock strength as mapped with the RockMSE, with fluid preferentially entering areas of low RockMSE. Placing clusters in similar rock types yielded improved fluid distribution. Further improvement in fluid distribution was observed by adjusting hole diameter, number of perforations, and pump rate.
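The abstract does not spell out the RockMSE formula, but MSE calculations from drilling data commonly start from Teale's mechanical specific energy. The Python sketch below is an illustrative assumption of such a calculation, not the paper's proprietary RockMSE; the function and parameter names are invented.

```python
import math

def mechanical_specific_energy(wob_lbf, torque_ftlbf, rpm, rop_ft_hr, bit_od_in):
    """Teale's MSE in psi: axial (thrust) term plus rotary term.

    Illustrative only; the paper's RockMSE calculation may apply
    additional efficiency factors or calibrations.
    """
    area_in2 = math.pi * (bit_od_in / 2.0) ** 2
    rop_in_min = rop_ft_hr * 12.0 / 60.0           # ft/hr -> in/min
    axial = wob_lbf / area_in2
    # Rotary work per minute (in-lbf) over volume removed per minute.
    rotary = 2.0 * math.pi * rpm * torque_ftlbf * 12.0 / (area_in2 * rop_in_min)
    return axial + rotary
```

Zones of low MSE computed this way would then be interpreted as weaker rock, where, per the study's findings, fluid preferentially enters.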
Reboul, Sébastien (Varel Energy Solutions) | Al-Mulaifi, Mohammad (Kuwait Oil Co.) | Al-Othman, Ahmad (Kuwait Oil Co.) | Shata, Atef (Kuwait Oil Co.) | Mohamed, Ibrahim (Kuwait Oil Co.) | Pelfrene, Gilles (Varel Energy Solutions) | El-Genidy, Ehab (Varel Energy Solutions) | Gamal, Mohamed (Varel Energy Solutions) | Cuillier, Bruno (Varel Energy Solutions) | Amran, Fatheeni A. (Varel Energy Solutions) | Maïga, Abiboulaye (Varel Energy Solutions)
Abstract Torsional vibrations are a very common phenomenon affecting drilling operations, limiting efficiency, increasing the risk of downhole equipment failure, and generating additional costs, particularly when their most severe form, stick-slip, is encountered. It is less well known that torsional vibrations also strongly affect directional drilling operations, reducing directional stability and toolface control. In this paper, the highly variable loading induced by torsional vibrations is addressed with a statistical approach. This approach, used successfully in Kuwait applications, resulted in operational savings of 30% in cost per foot over a panel of more than 15 analyzed runs. Steerability and directional stability are critical in directional wells, especially when using push-the-bit systems with PDC bits, because the side force is distributed unevenly over one bit revolution. Most of today's bit-design comparisons are made with an average steerability factor computed over one full revolution of the bit. The method described in this paper goes further and examines the evolution of directional performance indicators within one bit revolution. With the help of a state-of-the-art 3D bit-rock interaction model, which simulates the drilling environment by considering the drive-system mechanism and representing both the drill bit and the hole being drilled as sets of 3D meshed surfaces, an accurate picture of the directional stability of the bit design is obtained. This approach is complemented by a statistical analysis that simulates a multitude of input-parameter combinations and maps the directional response of a bit design in a more robust way. Based on the results of the statistical analysis, an optimized design was selected and manufactured for a 12¼-in. rotary steerable system (RSS) directional application known for torsional vibration limitations.
As revealed by the simulation results, this design was expected to exhibit better directional stability than previous design iterations. The optimized design was run on RSS and positive displacement motor (PDM) assemblies and successfully drilled several wells in different fields of Kuwait. It delivered smooth and stable directional control while reducing the risk of torsional vibrations, and it achieved a substantial reduction in overall cost per foot. PDC bit selection and design processes have evolved considerably in the last decade with the use of increasingly accurate simulation models. This paper presents the next step in that evolution, dedicated to delivering the best-adapted solution for any given scenario by examining the directional response of a drill bit in greater detail.
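The statistical mapping step described above can be sketched as a simple Monte Carlo wrapper. Everything below is a toy: `steerability_proxy` is an invented closed-form response standing in for the 3D meshed bit-rock interaction simulator, and the parameter ranges are assumptions.

```python
import random
import statistics

def steerability_proxy(wob_lbf, rpm, ucs_psi):
    """Invented placeholder for the directional response of a candidate
    bit design; a real study would evaluate this with the 3D bit-rock
    interaction model instead."""
    return (wob_lbf / ucs_psi) * (1.0 + 0.1 * 120.0 / rpm)

random.seed(0)
# Sample a multitude of input-parameter combinations over an assumed
# operating envelope and collect the design's directional response.
samples = [steerability_proxy(random.uniform(10e3, 40e3),
                              random.uniform(80.0, 180.0),
                              random.uniform(5e3, 25e3))
           for _ in range(5000)]
mean_resp = statistics.mean(samples)
spread = statistics.stdev(samples)
# A design whose response varies little (small spread) across the
# sampled envelope maps as directionally stable in a robust sense.
```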
Summary This paper presents the application of reliability-based approaches to the survival design of critical wells, in particular deepwater and high-pressure/high-temperature (HPHT) wells. First, the concept of survival design is discussed. As in other structural design disciplines, a distinction is made between operating (service) loads and survival loads. In essence, survival loads are extreme-magnitude loads with a low probability of occurrence but potentially severe consequences if failure occurs. Survival scenarios falling into this category in critical wells are presented. It is shown that the current practice of using standard working stress design (WSD) approaches for survival scenarios, even with reduced design factors, fails to quantify the risk of failure and can lead to design practices and outcomes that are neither risk consistent nor optimal. Reliability-based design (RBD) explicitly quantifies the risk of failure of a given design. This paper describes RBD and the prevalence of its use in other structural design codes and shows how it can be used for survival design in critical wells. It is argued that a probabilistic approach, in which a deterministic load at its extreme survival magnitude is compared with a stochastic strength distribution built from data on strength parameters, is a rational approach to survival-load design. Regardless of how low the probability of occurrence of the load at its survival magnitude is, well integrity is demonstrated by assuming that such a load occurs. The method can be implemented by constructing resistance distributions using limit-state equations such as the Klever-Stewart rupture and Klever-Tamano collapse equations (API TR 5C3/ISO TR 10400). Statistical strength-parameter data can be obtained from API TR 5C3 (ISO/TR 10400), manufacturer reports, or direct material and dimensional measurements. Statistical approaches to constructing such distributions are presented.
The deterministic survival load is then compared with this resistance distribution, and a probability of failure is calculated. This probability of failure then becomes the basis for design. The goal in survival design is to demonstrate survival rather than continued operability. On this basis, acceptable probabilities of failure for typical survival loads are recommended and contextualized against other design codes. Particular attention is given to worst-case discharge (WCD) and well-containment loads, which have become design-dictating survival loads in many deepwater well designs and are driving design choices for tubulars and connections. The applicability of this approach to connection selection and brittle failure is also demonstrated. A deepwater well example is presented to illustrate the approach. It is shown that designing to an acceptable probability of failure leads to more robust and risk-consistent designs in critical wells. Furthermore, such an approach allows designers to focus on the specific design or well-construction changes that enhance survival. The approach is applicable in its entirety to HPHT wells, where similar challenges are present. The approach described in this paper provides a quantitative basis for examining the design adequacy of wells under survival scenarios. It is in keeping with the traditional practice of allowing the use of all available strength when designing for survival loads. Using stochastic strength data rather than deterministic strength estimates provides a probabilistic basis for design, thus quantifying risk. The authors believe that this is a needed rational and quantitative approach to optimizing the design of critical wells under increasingly demanding loads.
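The core comparison, a deterministic survival load checked against a stochastic resistance distribution, can be sketched in a few lines. The numbers and the normal-resistance assumption below are illustrative; the paper builds resistance distributions from the Klever-Stewart and Klever-Tamano limit-state equations and measured strength statistics.

```python
from statistics import NormalDist

def probability_of_failure(survival_load, resistance_mean, resistance_cov):
    """P(resistance < load) for a normally distributed resistance.

    resistance_cov is the coefficient of variation (std dev / mean).
    """
    sigma = resistance_cov * resistance_mean
    return NormalDist(resistance_mean, sigma).cdf(survival_load)

# Hypothetical check: a 12,000-psi survival burst load against a rupture
# resistance with 15,000-psi mean and 5% COV.
pf = probability_of_failure(12_000.0, 15_000.0, 0.05)
# Design acceptance then reduces to comparing pf against a target value.
```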
Abstract With ever-increasing pressure to drill wells efficiently at lower cost, the use of downhole sensors in the bottomhole assembly (BHA) that reveal true downhole dynamics has become scarce. Surface sensors are notoriously inaccurate in representing downhole dynamics, and this lack of a one-to-one mapping between surface and downhole measurements affects all sensors, fostering inefficient drilling practices and decision making. Intelligent mapping of downhole dynamics (IMoDD) is an analytical suite designed to address these inefficiencies and maximize the use of surface sensors, thus doing more with less. IMoDD introduces a new zeroing procedure that goes beyond the traditional workflow of zeroing the surface weight and torque sensors at the connection. A new method, the Second-order Identifier of Maximum Standpipe Pressure (SIMS), is introduced. The method examines changes in standpipe pressure and, using a set of conditions, identifies the point just before bit-wellbore contact. The resulting calculations of weight and torque are verified against measured downhole weight and torque over multiple stands of vertical, curve, and lateral drilling. After the new zero is applied, deviations in the torque-weight correlation are further examined to reveal downhole weight changes, which are also confirmed by the downhole sensor data. It is demonstrated that an intelligent mapping system that improves downhole characterization would improve decision making, facilitate smoother energy transfer, reduce nonproductive time (NPT), and increase BHA life span.
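The abstract does not disclose the exact SIMS conditions, but its core idea, detecting the point just before bit-wellbore contact from second-order changes in standpipe pressure, can be illustrated schematically. The detector below, including its threshold, is an invented simplification.

```python
def tag_index(spp_psi, threshold=1.0):
    """Return the index of the last sample before the second difference
    of standpipe pressure first exceeds a threshold, or None.

    Schematic only: the actual SIMS method applies a set of conditions
    that the abstract does not enumerate.
    """
    for i in range(2, len(spp_psi)):
        second_diff = spp_psi[i] - 2.0 * spp_psi[i - 1] + spp_psi[i - 2]
        if second_diff > threshold:
            return i - 1
    return None

# Flat off-bottom pressure, then an accelerating rise as the bit tags in:
series = [500.0] * 5 + [503.0, 510.0, 522.0]
```

The weight and torque readings at the returned index would then serve as the new zero references.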
Abstract It is well documented that magnetic models and measurement-while-drilling (MWD) directional sensors are not free from error. For this reason, directional surveys are accompanied by an error model that is used to generate an ellipse of uncertainty (EOU). The directional surveys represent the most probable position of the wellbore, and the EOU is meant to encompass all possible wellbore positions to a defined uncertainty level. The wellbore position, along with the individual errors, is typically presumed to follow a normal (Gaussian) distribution. For this assumption to be accurate, 68.3% of magnetic model and directional sensor errors should fall within plus or minus one standard deviation (1σ), 95.5% within two standard deviations (2σ), and 99.7% within three standard deviations (3σ) of the limits defined in the error model. The purpose of this study is to evaluate the validity of these assumptions. The Industry Steering Committee on Wellbore Survey Accuracy (ISCWSA) provides a set of MWD error models that are widely accepted as the industry standard for wellbore surveying. The error models comprise the known magnetic model and MWD directional sensor error sources and their associated limits. The purpose of this paper is to determine whether the limits defined in the ISCWSA MWD error models are representative of the magnitude of errors observed in practice. In addition to the ISCWSA-defined error model terms, this research also includes an analysis of the sensor twist error term and the associated limits defined in the Fault Detection, Isolation, and Recovery (FDIR) error model. This study comprises 138 MWD runs that were selected on the criterion that they were processed using FDIR with overlapping gyro surveying to ensure highly accurate and consistent estimated values.
The error magnitudes and uncertainties estimated by FDIR were compiled and analyzed against the expected limits outlined in the error models. The results show that the limits defined in the ISCWSA error models are not always representative of what is observed in practice. For instance, on U.S. land the assumed magnitudes of several error sources are overly optimistic compared with the values observed in this study. This means that the EOUs with which wells are planned may not be large enough in some scenarios, which could cause the operator to assume unanticipated additional risk. The final portion of this analysis tested the hypothesis that preventive measures, such as additional nonmagnetic spacing, are generally taken by operators and directional service providers to minimize additional injected error when survey corrections are not implemented while drilling. This hypothesis was tested by dividing the 138 MWD runs into Historical (survey corrections not utilized in real time) and Real-Time (survey corrections utilized in real time) categories. The results indicate no significant differences in the error estimates between the Historical and Real-Time categories. This result, combined with the determination that the majority of the error-model terms should be categorized as fat-tailed, indicates that proper well spacing and economics calculated using separation factor alone are insufficient without the use of real-time survey corrections.
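The Gaussian-coverage check at the heart of this analysis can be sketched directly. The helper below is a generic illustration (not the FDIR tooling): it reports the fraction of error observations falling within 1, 2, and 3 sample standard deviations.

```python
from statistics import mean, stdev

def sigma_coverage(errors):
    """Fractions of observations within 1, 2, and 3 sample standard
    deviations of the sample mean.

    Under the Gaussian assumption these approach 0.683, 0.955, and
    0.997; a persistent shortfall at 2-3 sigma is the signature of a
    fat-tailed error term.
    """
    m, s = mean(errors), stdev(errors)
    return tuple(sum(abs(e - m) <= k * s for e in errors) / len(errors)
                 for k in (1, 2, 3))
```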
Decision tree analysis and Monte Carlo simulation are the most commonly used tools in decision and risk analysis, but other tools, such as optimization, options analysis, and combinations of these, can also be useful. This article examines the importance of data analysis and the nature and application of these other tools. Regardless of the principal tool used in risk analysis, whether Monte Carlo simulation or decision trees, empirical data may play an important role. Estimating the probabilities and values for a decision tree is often done by examining historical data. Similarly, the input distributions selected for a Monte Carlo model are easier to justify when analogous data are available to support the choices of distribution type and the values of defining parameters, such as the mean and standard deviation.
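The point about grounding Monte Carlo inputs in analogous data can be made concrete with a short sketch. The cost figures, the normal fit, and the quantity range below are invented for illustration.

```python
import random
import statistics

# Historical analog data (illustrative): fit an input distribution by
# matching the sample mean and standard deviation.
historical_unit_costs = [4.1, 4.8, 5.2, 5.9, 6.3, 7.0]
mu = statistics.mean(historical_unit_costs)
sigma = statistics.stdev(historical_unit_costs)

random.seed(42)
# Monte Carlo: propagate the fitted unit-cost distribution and an
# assumed quantity range through a trivial project-cost model.
outcomes = [random.gauss(mu, sigma) * random.uniform(900.0, 1100.0)
            for _ in range(10_000)]
p50_cost = statistics.median(outcomes)
```

The fitted `mu` and `sigma` are far easier to defend in review than distribution parameters chosen by judgment alone.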
Silva Neto, Gilson Moura (Petrobras, University of Campinas, and NORCE Norwegian Research Centre) | Soares, Ricardo Vasconcellos (NORCE Norwegian Research Centre, University of Bergen) | Evensen, Geir (NORCE Norwegian Research Centre and Nansen Environmental and Remote Sensing Center) | Davolio, Alessandra (University of Campinas) | Schiozer, Denis José (University of Campinas)
Summary Time-lapse seismic data assimilation has been drawing the reservoir engineering community's attention over the past few years. One of the advantages of including this kind of data in the calibration of reservoir-flow models is that it provides information complementary to the wells' production data. Ensemble-based methods are among the standard tools used to calibrate reservoir models with time-lapse seismic data. One drawback of assimilating time-lapse seismic data is the size of the data sets, especially for large reservoir models. This situation leads to high-dimensional problems that demand significant computational resources to process and store the matrices when conventional, straightforward methods are used. Another known issue with ensemble-based methods is the limited ensemble size, which causes spurious correlations between the data and the parameters and limits the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble size. We test two local analysis approaches: one distance-based and one correlation-based. We apply these implementations to two synthetic time-lapse seismic data-assimilation cases: a 2D example and a field-scale application that mimics some real-field challenges. We compare the results with reference solutions and with the widely used ensemble smoother with multiple data assimilation (ES-MDA) using Kalman-gain distance-based localization.
The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models comparable with those obtained by conventional methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
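For readers unfamiliar with ensemble-based assimilation, the analysis step can be reduced to a scalar toy. The sketch below is a generic ensemble-smoother update with perturbed observations, not the SEnRML implementation, which applies the same idea in a subspace formulation to very large data and parameter vectors.

```python
import random
import statistics

def es_update(params, preds, d_obs, obs_err_var, seed=0):
    """One ensemble-smoother analysis step: scalar parameter, scalar
    observation. Toy stand-in for SEnRML/ES-MDA."""
    n = len(params)
    m_bar, d_bar = statistics.mean(params), statistics.mean(preds)
    # Sample cross-covariance between parameter and predicted data.
    cmd = sum((m - m_bar) * (d - d_bar)
              for m, d in zip(params, preds)) / (n - 1)
    gain = cmd / (statistics.variance(preds) + obs_err_var)
    rng = random.Random(seed)
    # Perturb the observation per member, then shift each member.
    return [m + gain * (d_obs + rng.gauss(0.0, obs_err_var ** 0.5) - d)
            for m, d in zip(params, preds)]
```

Local analysis then amounts to performing such updates region by region (or per significant data-parameter correlation) so that full covariance matrices never have to be formed.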
Zhang, Kai (China University of Petroleum, corresponding author) | Zhang, Jinding (email: email@example.com) | Ma, Xiaopeng (China University of Petroleum) | Yao, Chuanjin (China University of Petroleum) | Zhang, Liming (China University of Petroleum) | Yang, Yongfei (China University of Petroleum) | Wang, Jian (China University of Petroleum) | Yao, Jun (China University of Petroleum) | Zhao, Hui (China University of Petroleum)
Summary Although researchers have applied many methods to history matching, such as Monte Carlo methods, ensemble-based methods, and optimization algorithms, history matching fractured reservoirs remains challenging. The key challenges are representing the fracture network effectively and coping with the large number of reservoir-model parameters. As the number of fractures increases, the dimension grows, resulting in heavy computational work in the inversion of fractures. This paper proposes a new characterization method for multiscale fracture networks and a powerful dimensionality-reduction method for the model parameters based on an autoencoder. The characterization method describes the fracture network through the length, orientation, and position of the fractures, including both large-scale and small-scale fractures. To significantly reduce the dimension of the parameters, a deep sparse autoencoder (DSAE) transforms the input into low-dimensional latent variables through encoding and decoding. Integrated with the greedy layer-wise algorithm, we set up a DSAE and then take the latent variables as optimization variables. The DSAE performs well with fewer active nodes because it reduces redundant information in the input and avoids overfitting. We then adopt the ensemble smoother (ES) with multiple data assimilation (ES-MDA) to solve this minimization problem. We test the proposed method on three synthetic reservoir history-matching problems and compare it with a no-dimensionality-reduction approach and with principal-component analysis (PCA). The numerical results show that the characterization method integrated with the DSAE can simplify the fracture network, preserve the distribution of fractures during the update, and improve the quality of history matching for naturally fractured reservoirs.
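The encode/decode pattern that makes the latent variables usable as optimization variables can be shown with a minimal linear sketch. The DSAE itself is a trained deep network with a sparsity penalty; the fixed weights, 4-parameter "field", and 2D latent space below are invented purely to illustrate the mechanics.

```python
def encode(x, w):
    """Project an n-dim parameter vector onto k latent variables
    (one row of w per latent variable)."""
    return [sum(wi[j] * x[j] for j in range(len(x))) for wi in w]

def decode(z, w):
    """Map k latent variables back to the n-dim parameter space."""
    return [sum(w[i][j] * z[i] for i in range(len(z)))
            for j in range(len(w[0]))]

# Orthonormal rows reconstruct vectors in their span exactly; a trained
# (nonlinear, sparse) autoencoder instead learns a mapping that captures
# the dominant structure of the fracture parameterization.
w = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0]]
z = encode([3.0, 4.0, 0.0, 0.0], w)
x_hat = decode(z, w)
```

History matching then perturbs `z` (here, 2 numbers) rather than the full parameter vector, decoding before each simulation run.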
Recent casing failures in the Granite Wash play in the western Anadarko Basin have raised serious concerns among operators in North Texas and Oklahoma. Hydrostatic tests conducted in the field show that current API standards do not assure adequate joint and burst strength to meet deep-well requirements. This paper is part of an ongoing effort to minimize the likelihood of failure using data-mining and machine-learning algorithms. Casing failure has long presented a challenge to the industry: the combined effects of design, dynamic borehole conditions, metallurgy, and handling have been difficult to quantify and predict accurately.
In this second article of a series covering water management in hydraulic fracturing (HF) in unconventional resources, the properties and characteristics of the flowback fluids are discussed, together with the general categories of technologies best suited to treat them. As discussed in the June column, the water quality required to make up the HF fluid is a key criterion for selecting the recycle water treatment technology. In some cases, fresh water is required; if the HF flowback water has high salinity, some form of desalination must then be applied. In other cases, where salinity can be tolerated, removal of suspended material is sufficient for recycling. In addition to the recycle water quality, the properties and characteristics of the flowback fluids are important in the selection of water-treating equipment. While this may seem like an obvious statement, it requires some justification. There is a desire in the oil and gas industry to find a single, flexible, multipurpose water treatment technology capable of handling most flowback fluid types, which would simplify the selection, purchase, deployment, and operation of equipment in the field. From an operations standpoint, the search for a multipurpose technology is justified. However, no such technology has yet been identified, although some technologies come close to meeting the need.