The global economy continues to evolve, driven primarily by industrial growth. With such a fast pace of development, and recovery from several recessions over the years, dependency on energy sources has become inevitable to satisfy rising demand. This paper presents a proposed global energy price model, the ANM (Alternate Novel Model), which has the flexibility to model energy prices using data from specific regions of the world as well as the global energy pricing equation.
The model focuses mainly on oil price modeling, since oil accounts for more than 84% of the current world energy supply. The model covers 50 years, from 1980 to 2030: the matching period runs from 1980 to 2011, and the prediction period from 2012 to 2030.
The modeling approach used in ANM adopts weighted averaging of individual factors and relies on the linear regression technique. Future trends are therefore predicted from the cyclic nature of the market and from historical data: "the future is a reflection of the past". ANM can then predict future oil prices, depending on the factors and variables placed in the process for the output results.
The paper aims to propose a reliable model that accounts for most governing factors in the global energy pricing equation. All steps followed and assumptions made are discussed in detail to clarify the working mechanism of the model and pave the road for future modifications.
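The weighted-averaging-plus-linear-regression idea behind ANM can be sketched in a few lines. Everything below is illustrative: the factor series, the weights, and the composite signal are invented for the example and are not the paper's data or its actual pricing equation.

```python
import numpy as np

# Illustrative annual factor series (NOT the paper's data): each row is one
# pricing factor, e.g. a supply index and a demand index, observed 1980-2011.
years = np.arange(1980, 2012)
rng = np.random.default_rng(0)
factors = np.vstack([
    0.5 * (years - 1980) + rng.normal(0, 2, years.size),   # hypothetical factor 1
    1.2 * (years - 1980) + rng.normal(0, 3, years.size),   # hypothetical factor 2
])
weights = np.array([0.6, 0.4])       # assumed relative importance of the factors

# Weighted averaging of the individual factors gives a composite price signal.
composite = weights @ factors

# Fit a linear trend over the matching period and extrapolate it forward.
slope, intercept = np.polyfit(years, composite, deg=1)
future = np.arange(2012, 2031)
forecast = slope * future + intercept
```

The matching period (1980-2011) fits the trend; the same line is then evaluated over 2012-2030, mirroring the paper's split between matching and prediction periods.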
This paper presents a novel implementation for evolutionary algorithms in oil and gas reservoirs history matching problems. The reservoir history is divided into time segments. In each time segment, a penalty function is constructed that quantifies the mismatch between the measurements and the simulated measurements, using only the measurements available up to the current time segment. An evolutionary optimization algorithm is used, in each time segment, to search for the optimal reservoir permeability and porosity parameters. The penalty function varies between segments; yet the optimal reservoir characterization is common among all the constructed penalty functions. A population of reservoir characterizations evolves across subsequent time segments through minimizing the different penalty functions. The advantage of this implementation is twofold. First, the computational cost of the history matching process is significantly reduced. Second, problem constraints can be included in the penalty function to produce more realistic solutions. The proposed concept of a dynamic penalty function is applicable to any evolutionary algorithm. In this paper, the implementation is carried out using genetic algorithms. Two case studies are presented: a synthetic case study and the PUNQ-S3 field case study. A computational cost analysis that demonstrates the computational advantage of the proposed method is presented.
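A minimal sketch of the dynamic-penalty idea, using a toy forward model in place of a reservoir simulator. The model function, parameter ranges, and GA settings below are assumptions made for the sketch, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" reservoir parameters (permeability, porosity multipliers).
true_params = np.array([2.0, 0.25])

def simulate(params, t):
    """Toy forward model standing in for the reservoir simulator."""
    k, phi = params
    return k * np.exp(-phi * t)

times = np.arange(1, 13)                             # 12 measurement times
observed = simulate(true_params, times)              # synthetic history
segments = np.array_split(np.arange(times.size), 4)  # 4 time segments

def penalty(params, upto_segment):
    """Dynamic penalty: mismatch using only the data available up to the
    current time segment -- the key idea of the paper's implementation."""
    idx = np.concatenate(segments[:upto_segment + 1])
    resid = simulate(params, times[idx]) - observed[idx]
    return np.sum(resid ** 2)

# Minimal generational GA: tournament selection, blend crossover, mutation.
pop = rng.uniform([0.5, 0.05], [4.0, 0.5], size=(40, 2))
for seg in range(len(segments)):          # the population carries over segments
    for _ in range(30):                   # generations per segment
        fit = np.array([penalty(p, seg) for p in pop])
        i, j = rng.integers(0, len(pop), (2, len(pop)))        # tournaments
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        alpha = rng.uniform(0, 1, (len(pop), 1))               # blend crossover
        children = alpha * parents + (1 - alpha) * rng.permutation(parents)
        children += rng.normal(0, 0.02, children.shape)        # mutation
        children[0] = pop[np.argmin(fit)]                      # elitism
        pop = children

best = pop[np.argmin([penalty(p, len(segments) - 1) for p in pop])]
print(best)  # should approach the true parameters
```

Only the penalty function changes between segments; the population itself is shared, which is what saves simulation runs in the full-scale version.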
Determining the optimum location of wells during waterflooding contributes significantly to efficient reservoir management. Often, Voidage Replacement Ratio (VRR) and Net Present Value (NPV) are used as indicators of performance of waterflood projects. In addition, VRR is used by regulatory and environmental agencies as a means of monitoring the impact of field development activities on the environment while NPV is used by investors as a measure of profitability of oil and gas projects. Over the years, well placement optimization has been done mainly to increase the NPV. However, regulatory measures call for operators to maintain a VRR of one (or close to one) during waterflooding.
A multiobjective approach incorporating NPV and VRR is proposed for solving the well placement optimization problem. We present the use of both NPV and VRR as objective functions in the determination of optimal well locations. The combination of the two in a multiobjective optimization framework proves useful in identifying the trade-offs between the quest for high profitability of investment in oil and gas projects and the desire to satisfy regulatory and environmental requirements. We conducted the search for optimal well locations in three phases. In the first phase, only the NPV was used as the objective function. In the second phase, the VRR was the sole objective function. In the third phase, the objective function was a weighted sum of the NPV and the VRR. A set of four weights was used in the third phase to describe the relative importance of the NPV and the VRR, and a comparison of how these weights affect the optimized NPV and VRR values is provided.
We applied the method to determine the optimum placement of wells using two sample reservoirs: one with a distributed permeability field and the other a channel reservoir with four facies. Two evolutionary algorithms, the covariance matrix adaptation evolution strategy (CMA-ES) and differential evolution (DE), were used to solve the optimization problem. Significantly, the method illustrates the trade-off between maximizing the NPV and optimizing the VRR. It calls the attention of both investors and regulatory agencies to the need to consider the financial aspect (NPV) and the environmental aspect (VRR) of waterflooding during secondary oil recovery projects. The multiobjective optimization approach meets the economic needs of investors and the regulatory requirements of government and environmental agencies. It gives a realistic NPV estimation for companies operating in jurisdictions with a requirement for maintaining a VRR of one.
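The phase-three weighted-sum formulation can be illustrated with toy stand-ins for NPV and VRR. The two analytic functions and the single-well decision variable below are assumptions made for the sketch; in the paper the objectives come from full reservoir simulation. SciPy's `differential_evolution` stands in for the DE used in the study:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-ins (assumed, not the paper's simulator): NPV and VRR as smooth
# functions of a single injector location (x, y) on a unit-square reservoir.
def npv(xy):
    x, y = xy
    return np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2))   # NPV peaks at (0.7, 0.3)

def vrr(xy):
    x, y = xy
    return 1.0 + 0.8 * (x - 0.3) + 0.5 * (y - 0.6)      # VRR = 1 near (0.3, 0.6)

bounds = [(0, 1), (0, 1)]
results = []
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:   # weight sweep, as in the third phase
    # minimize negative NPV plus a penalty on the deviation of VRR from one
    obj = lambda xy, w=w: -w * npv(xy) + (1 - w) * abs(vrr(xy) - 1.0)
    sol = differential_evolution(obj, bounds, seed=0, tol=1e-8)
    results.append((w, npv(sol.x), vrr(sol.x)))

for w, n, v in results:
    print(f"w={w:.2f}  NPV={n:.3f}  |VRR-1|={abs(v - 1):.3f}")
```

Sweeping the weight traces out the trade-off: high weights favor NPV at the cost of VRR drifting from one, and vice versa, which is the tension between investors and regulators the abstract describes.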
Facies modeling forms an integral part of geological numerical modeling. Over the last two decades, different facies modeling methods have been developed using geostatistical algorithms. Most of these methods rely on the assumption of discrete or binary modeling during which each model cell is assigned a single facies. In this study, the size of the cells is on average 100 meters by 100 meters laterally by one meter thick. Based on comparisons to outcrops and subsurface data, such cells should, in fact, include a mixture of facies.
The discrete-facies approach assumes a single facies per cell. The distribution of the facies between wells is described using classical categorical geostatistical algorithms. Reservoir properties are then populated by facies within mapped environments of deposition. This process is well-established and straightforward, especially with regard to tying well data, handling property trends, and applying net rock cut-offs.
A mixed-facies approach can be performed using effective property modeling in which multiple small, fine-scale models are built for each environment of deposition. These models are re-sampled to the full-field cell volume using static and flow-based upscaling methods. The resulting statistics are then used with geostatistics, conditioned to the proportion of each facies present, to populate the full-field model. Such models allow the incorporation of core-scale heterogeneity potentially important in improved oil recovery projects, and may reduce modeling cycle times, especially when multiple iterations are required, such as during history-matching or uncertainty analysis.
This paper compares the impact on simulated fluid flow of modeling facies using discrete modeling versus a mix of facies per cell. Shoreface and subordinate fluvial environments of deposition facies, and five reservoir lithofacies, were modeled.
Simulated fluid flow in the mixed-facies model, under both primary depletion and pressure maintenance conditions, was smooth and uniform, with a highly conformable flood front. Flow in the discrete model was more stratified, with faster and less conformable water movement.
The assignment of discrete facies to large model cells (a few hundred meters laterally and a few meters vertically) takes less time than a mixed-facies approach and does a better job of preserving organized extremes of permeability important at the production timescale. In the early stages of field development, when there is much uncertainty and a rapid, scenario-based modeling approach is desirable, the discrete approach can be used to flag heterogeneity-related risks more quickly and confidently than the mixed-facies technique. Inaccuracies in performance parameters resulting from the assignment of unscaled discrete values can be corrected using fine-scale sector models tailored to the highest-risk cases.
Vibrations are caused by bit and drill string interaction with formations under certain drilling conditions. They are affected by parameters such as weight on bit, rotary speed, mud properties, BHA and bit design, as well as by the mechanical properties of the formations. During the actual drilling process the bit interacts with different formation layers, each of which usually has different mechanical properties. Vibrations are also indirectly affected by the formations, since weight on bit and rotary speed are usually optimized against changing formations (the drilling optimization process). Reducing vibrations is therefore one of the key challenges of optimized drilling.
A fully automated laboratory-scale drilling rig, the CDC miniRig, was used to conduct experimental tests. A three-component vibration sensor sub attached to the drill string records drill string vibrations, and an additional sensor system records the drilling parameters. Uniform concrete cubes with different mechanical properties were built. These cubes, as well as a homogeneous sandstone cube, were drilled over different ranges of weight on bit and bit rotary speed. The mechanical properties of all cubes were measured prior to the experiments. During all experiments, drilling parameters and vibration data were recorded. Based on analyses of the data in the time and frequency domains, linear and non-linear models were built. For this purpose, the interrelations of sandstone and concrete mechanical properties, drilling parameters, and vibration data were modeled by neural networks. Application of sophisticated attribute selection methods showed that vibration data, in both the time and frequency domains, have a major impact on modeling the rate of penetration.
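As a rough sketch of the kind of non-linear model described above, the following trains a small one-hidden-layer neural network (plain NumPy, full-batch gradient descent) to map drilling parameters and a vibration feature to rate of penetration. The data are synthetic stand-ins, not the miniRig measurements, and the network size and target function are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data (NOT the miniRig measurements): inputs are
# normalized weight on bit, rotary speed, and one vibration feature;
# the target is rate of penetration (ROP) with an assumed non-linearity.
X = rng.uniform(0, 1, (200, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * np.sin(3 * X[:, 2])

# One-hidden-layer regression network trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    pred = (h @ W2 + b2).ravel()             # linear output
    err = pred - y
    # backpropagation of the mean squared error
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse = np.mean((pred - y) ** 2)
print(f"training MSE: {mse:.4f}")
```

Attribute selection, as in the study, would amount to comparing such fits with and without the vibration column and measuring the change in error.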
Viscosity and density are important physical parameters of crude oil, closely related to the whole process of production and transportation, and essential properties for process design and simulation in the petroleum industry. As viscosity increases, conventional measurement becomes progressively less accurate and more difficult. According to the literature survey, most published correlations used to predict the density and viscosity of heavy crude oil are limited to certain ranges of temperature, API gravity, and viscosity. The objective of the present work is to propose accurate models that can successfully predict these two important fluid properties over a wide range of temperatures, API gravities, and viscosities. The viscosity and density of more than 30 heavy oil samples of different API gravities, collected from different oilfields, were measured over the temperature range 15°C to 160°C (60°F to 320°F), and the results were used to assess the capability of the proposed and published correlations to reproduce the experimental data. The proposed correlation proceeds in two stages. The first step predicts heavy oil density from API gravity and temperature for different crudes. The predicted densities are then used in the second step to develop the viscosity correlation model. A comparison of predicted and measured viscosities showed that the proposed model successfully predicts all data, with average relative errors of less than 12% and correlation coefficients R2 of 0.97 and 0.92 at normal and high temperatures, respectively. Meanwhile, most of the available models have average relative errors above 40%, with R2 values between 0.19 and 0.95. These comparisons were made as a quality control to confirm the reliability of the proposed model for predicting the density and viscosity of heavy crudes.
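The two-stage structure (density first, then viscosity from the *predicted* density) can be sketched with linear least squares on synthetic data. The functional forms, coefficients, and sample values below are invented for illustration and are not the paper's correlations or measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic heavy-oil samples (illustrative, NOT the measured dataset).
api = rng.uniform(8, 22, 60)               # API gravity
T = rng.uniform(15, 160, 60)               # temperature, deg C
# assumed "true" relations with measurement noise
rho = 1.07 - 0.004 * api - 0.0007 * T + rng.normal(0, 0.002, 60)  # g/cm3
ln_mu = -40 + 45 * rho + 900 / (T + 273.15) + rng.normal(0, 0.05, 60)

# Stage 1: fit density from API gravity and temperature by least squares.
A1 = np.column_stack([np.ones(60), api, T])
c1, *_ = np.linalg.lstsq(A1, rho, rcond=None)
rho_pred = A1 @ c1

# Stage 2: fit ln(viscosity) from the *predicted* density and temperature.
A2 = np.column_stack([np.ones(60), rho_pred, 1.0 / (T + 273.15)])
c2, *_ = np.linalg.lstsq(A2, ln_mu, rcond=None)
mu_pred = np.exp(A2 @ c2)

rel_err = np.abs(mu_pred - np.exp(ln_mu)) / np.exp(ln_mu)
print(f"mean relative error: {rel_err.mean():.3f}")
```

Feeding the stage-one prediction (rather than the measured density) into stage two mirrors how the final correlation would be used in practice, where only API gravity and temperature are known.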
Both oil and gold are commodities priced in US dollars, but their price trends follow different paths. While gold has shown great stability over the years, the oil price keeps changing. Oil price movements have distorted the measurement of economic variables expressed in dollar values. In the economic evaluation of oil and gas field development projects longer than one year, the oil price is one of the most critical assumptions.
This paper examines whether:
• gold is more stable than the US dollar or other currencies
• gold equivalency is a more reliable way to project future costs and prices
• a gold-based oil price can be applied in the current economic evaluation template to support the approval process for a field development plan
Considering that crude oil prices have moved dynamically over the last decade, this paper exercises the model to determine a realistic oil price assumption by using a more stable "currency", so that it can provide a more reliable and accurate economic evaluation. It shows that a gold-based, inflation-adjusted crude oil price is preferable in dampening or mitigating:
• the effect of the dynamic nature of oil prices
• the impact of inflation
• the risks of paper-currency fluctuation
• the discount rate requirement
Using a case study of Indonesian Production Sharing Contract (PSC) fiscal terms, the gold-based oil price provides a simpler economic evaluation, yielding the real net cash flow of the field development plan. The paper concludes by demonstrating that using gold equivalency instead of paper-based currencies provides more consistent and reliable nominal revenue from the perspectives of both the PSC Contractor and the Government.
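The gold-equivalency conversion itself is simple: divide the nominal dollar oil price by the dollar gold price to express oil in ounces of gold per barrel. The figures below are rough illustrative values, not quoted market data or the paper's inputs:

```python
# Rough illustrative figures only (not quoted market data): converting a
# nominal USD oil price into its gold equivalent strips out paper-currency
# effects such as inflation and exchange-rate drift.
oil_usd_per_bbl = {2000: 28.0, 2010: 79.0}     # assumed nominal oil prices
gold_usd_per_oz = {2000: 279.0, 2010: 1225.0}  # assumed gold prices

for year in sorted(oil_usd_per_bbl):
    oz_per_bbl = oil_usd_per_bbl[year] / gold_usd_per_oz[year]
    print(f"{year}: {oz_per_bbl:.4f} oz of gold per barrel")
```

With these assumed figures the nominal dollar price nearly triples while the gold-based price actually falls, which is the kind of distortion the paper argues gold equivalency removes.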
Arnaout, Arghad (TDE Thonhauser Data Engineering GmbH) | Thonhauser, Gerhard (Montanuniversitat Leoben) | Esmael, Bilal (Montanuniversitat Leoben) | Fruhwirth, Rudolf Konrad (TDE Thonhauser Data Engineering GmbH)
Detection of oilwell drilling operations is an important step in drilling process optimization. If drilling operations are classified accurately, detailed performance reports on both drilling crews and drilling rigs can be produced. Using such reports, management can evaluate drilling work more precisely from a performance point of view.
The mud-logging systems of modern drilling rigs provide numerous sensor data. These sensor measurements are considered indicators for monitoring the different states of the drilling process. The following surface measurements are usually available in real time: hookload, block position, flow rates, pump pressure, borehole and bit depth, RPM, torque, rate of penetration, and weight on bit.
In this work, sensor measurements collected from mud-logging systems are used to detect different drilling operations. Detailed data analysis shows that the surface sensor measurements can be considered a main source of information about drilling operations. For this purpose, a mathematical model based on polynomial approximation is constructed to interpolate the sensor measurements.
Discrete polynomial moments are used as a tool to extract specific features (moments) from the drilling sensor data. These moments are then used as pattern descriptors for each drilling operation to classify similar operations in drilling time series. The extracted polynomial moments describe trends in the sensor data and the behavior of the rig's sub-systems (rotation, circulation, and hoisting). Furthermore, this paper suggests a method for building a pattern base and for recognizing and classifying drilling operations as sensor data are received from the mud-logging system. Drilling experts compared the results to manually classified operations, and the results show high accuracy.
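A minimal sketch of discrete polynomial moments as feature extractors: project a sensor trace onto an orthonormal polynomial basis built over the sample grid and keep the coefficients as the pattern descriptor. The hookload trace below is synthetic, and the basis construction (QR on a Legendre Vandermonde matrix) is one possible choice, not necessarily the authors':

```python
import numpy as np

def polynomial_moments(signal, degree):
    """Project a sensor time series onto an orthonormal discrete polynomial
    basis; the coefficients (moments) describe its trend compactly."""
    n = len(signal)
    t = np.linspace(-1, 1, n)
    V = np.polynomial.legendre.legvander(t, degree)  # polynomial Vandermonde
    Q, _ = np.linalg.qr(V)              # orthonormal basis over the sample grid
    return Q.T @ signal                 # moments = projections onto the basis

rng = np.random.default_rng(4)
t = np.linspace(-1, 1, 200)
# Hypothetical hookload trace: a linear ramp plus measurement noise.
hookload = 50 + 20 * t + rng.normal(0, 1, t.size)

m = polynomial_moments(hookload, degree=4)
print(np.round(m, 2))
```

For this ramp-like trace the constant and linear moments dominate while the higher-order moments stay near the noise level, so a handful of numbers summarizes the trend; different drilling operations would yield distinct moment patterns to match against a pattern base.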
The purpose of history matching is to obtain geological realizations calibrated to the historical performance of the reservoir. For complex geological structures it is usually intractable to run tens of thousands of full reservoir simulations to trace the most probable geological model. Hence, inadequate history-matching results frequently lead to poor estimation of the true model and high uncertainty in production forecasting. Reduced-order modeling procedures, which have been applied in many areas including reservoir simulation, represent a promising means of constructing efficient surrogate models. Nonlinear dimensionality reduction techniques encapsulate the high-resolution, complex geological description of the reservoir in a low-dimensional subspace, which significantly reduces the number of unknowns and provides an efficient way to construct a proxy model based on the reduced-dimension parameters.
Polynomial Chaos Expansion (PCE) is a powerful tool for quantifying uncertainty in dynamical systems when there is probabilistic uncertainty in the system parameters. In reservoir simulation it has been shown to be more accurate and efficient than traditional experimental design (ED). PCEs have a significant advantage over other response surfaces because convergence to the true probability distribution is proven as the order of the PCE increases. Accordingly, a PCE proxy can be used as a pseudo-simulator to represent the surface responses of the uncertain variables. When the objectives and constraints of a reservoir model are described by multivariate polynomial functions, very efficient algorithms exist to compute the global solutions. We have developed a workflow that incorporates PCE to find the global minimum of the misfit surface and assess the uncertainty associated with it. The accuracy of the PCE proxy increases with additional trial runs of the reservoir simulator.
We conducted a two-dimensional synthetic case study of a fluvial channel, as well as a real field example, to demonstrate the effectiveness of this approach. Kernel Principal Component Analysis (KPCA) is used to parameterize the complex geological structure. The study revealed useful reservoir information and delivered more reliable production forecasts.
PCE-based history matching enhances the quality and efficiency of estimating the most probable geological model and improves the confidence intervals of production forecasts.
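The PCE-proxy idea can be sketched in one dimension: fit a Hermite (probabilists') chaos expansion to a handful of "simulator" runs by regression, then use the cheap polynomial in place of the simulator for uncertainty propagation. The response function below is an invented stand-in, not a reservoir model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in "simulator" response: a smooth function of one uncertain
# parameter xi ~ N(0, 1), e.g. a normalized permeability multiplier.
def simulator(xi):
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

# Fit a degree-4 Hermite (probabilists') PCE by regression on trial runs.
xi_train = rng.normal(0, 1, 50)
y_train = simulator(xi_train)
V = np.polynomial.hermite_e.hermevander(xi_train, 4)
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# The PCE proxy now replaces the simulator for Monte Carlo propagation.
xi_mc = rng.normal(0, 1, 100_000)
proxy = np.polynomial.hermite_e.hermevander(xi_mc, 4) @ coef
print(f"proxy mean={proxy.mean():.3f}, sampled true mean={simulator(xi_mc).mean():.3f}")
```

Because the Hermite basis is orthogonal with respect to the standard normal weight, statistics of the response are recovered from the coefficients almost for free, and each additional simulator run tightens the regression fit, matching the accuracy behavior described above.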
Stuck pipe has been recognized as one of the most challenging and costly problems in the oil and gas industry. However, this problem can be treated proactively by predicting it before it occurs.
The purpose of this study is to apply two of the most powerful machine learning methods, Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs), to predict stuck pipe occurrences. Two models, one for ANNs and one for SVMs, were implemented under different scenarios for prediction purposes. The models were designed and constructed in MATLAB. The MATLAB built-in ANN and SVM functions, and the MATLAB interface to a support vector machine library, were applied to compare the results. Furthermore, a database including mud properties, directional characteristics, and drilling parameters was assembled for the training and testing processes. The study classified stuck pipe incidents into two groups, stuck and non-stuck, and also into three subgroups: differentially stuck, mechanically stuck, and non-stuck. The research also included an optimization process, which is vital in machine learning techniques, to construct the most practical models. This study demonstrated that both ANNs and SVMs are able to predict stuck pipe occurrences with reasonable accuracy, over 85%.
The competitive SVM technique generates generally reliable stuck pipe predictions. Moreover, SVMs are more convenient than ANNs, since they require fewer parameters to be optimized. The constructed models generally perform very well in the areas for which they were built, but may not work in other areas. They are nevertheless valuable, especially when probability measures are required; thus, they can be used with real-time data, with the results displayed in a log viewer.
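As a sketch of the SVM side of such a classifier, the following trains a linear SVM by sub-gradient descent on the hinge loss. The features, label rule, and hyper-parameters are synthetic assumptions, not the study's MATLAB setup or field data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in features (NOT field data): e.g. normalized mud weight,
# hole inclination, and drag; label 1 = stuck, 0 = non-stuck.
X = rng.normal(0, 1, (300, 3))
y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(0, 0.5, 300) > 0).astype(float)

# Linear SVM trained by sub-gradient descent on the regularized hinge loss.
w = np.zeros(3); b = 0.0
lam, lr = 0.01, 0.01
ysvm = 2 * y - 1                          # labels in {-1, +1}
for _ in range(2000):
    margin = ysvm * (X @ w + b)
    active = margin < 1                   # points inside or violating the margin
    if active.any():
        gw = lam * w - (ysvm[active, None] * X[active]).mean(axis=0)
        gb = -ysvm[active].mean()
    else:
        gw, gb = lam * w, 0.0
    w -= lr * gw
    b -= lr * gb

accuracy = ((X @ w + b > 0) == y.astype(bool)).mean()
print(f"training accuracy: {accuracy:.2%}")
```

Only three hyper-parameters appear here (regularization, learning rate, iteration count), which loosely illustrates the abstract's point that SVMs leave fewer knobs to tune than a neural network with its layer sizes, activations, and training schedule.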