The objective of this paper is to identify ways to accelerate the uptake of Intelligent Energy (IE) and fulfill its value potential. The paper is coauthored by a cross-industry group drawn from operators, service providers, and product vendors, all of whom have been involved in the IE arena for 10 years or more. We have analyzed past experiences to identify both the ways in which IE has been successful and the improvements that could deliver value at a broader scale amid the challenges of today’s commercial environment. We assess IE implementations to identify practical ways to expand deployment and deliver results more quickly, including the role of collaboration and competition in the IE domain and the ways in which longer-term business models and new organizational ideas could improve the industry’s uptake of IE. We have identified two areas in which we believe changes to our approach could deliver significant benefits: the expanded use of integrated work flows and of shared subject-matter-expert (SME) services. We discuss the benefits and challenges of this integrated approach to solution design, work processes, technology, skills, and competencies. Field cases from two major operators are given as best-practice examples of advanced use of IE in the oil-and-gas industry.
Catastrophic events such as hurricanes and oil spills have enormous impacts on local and regional economies and labor markets. In 2010, the US Gulf Coast experienced the largest marine oil spill, the largest mobilization of spill-response resources, and the first drilling moratorium in the history of deepwater operations. In 2005, another regional disaster, Hurricane Katrina, struck Louisiana, Mississippi, and Alabama as it tore across the core of the Gulf of Mexico (GOM) producing region, one of the most important oil-and-gas production areas in the world. The disruption of oil-and-gas production and fisheries caused by an oil spill or a drilling moratorium can be modeled as a negative shock to local labor markets. Analyzing the damage caused by storms to offshore oil-and-gas drilling and production facilities therefore offers a valuable opportunity to learn how to prepare for hurricanes, with the aim of avoiding future damage. The objective of this paper is to quantify the impact of such shocks on employment numbers and wages in the US Gulf Coast region. This research uses econometric tools to provide quantitative estimates of the response and correlation between past and current activity in Louisiana employment and other relevant regional economies. Using the vector autoregressive (VAR) method, we estimate the likely magnitude of the net economic impact of a major oil spill such as the Deepwater Horizon oil spill on selected sectors. We also discuss the potential effects of post-disaster changes in employment on the economy.
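The VAR approach named above can be sketched in a few lines. The example below is a minimal illustration on synthetic, hypothetical two-sector employment series (not the paper's data): a VAR(1) is fitted equation by equation with ordinary least squares, and the estimated lag matrix is used to trace how a shock propagates over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly log-employment deviations for two hypothetical sectors
# (e.g., oil and gas vs. fisheries); illustrative data, not the paper's.
T = 120
A_TRUE = np.array([[0.7, 0.1],
                   [0.2, 0.6]])                 # true VAR(1) coefficient matrix
y = np.zeros((T, 2))
for t in range(1, T):
    shock = np.array([-1.0, -0.5]) if t == 60 else 0.0   # one-off disaster shock
    y[t] = A_TRUE @ y[t - 1] + shock + rng.normal(0.0, 0.05, 2)

# Fit VAR(1) equation by equation via OLS: y_t = c + B y_{t-1} + e_t.
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
B_hat = coef[1:].T                              # estimated lag-1 matrix

# Impulse response: effect on sector 1 of a unit shock to itself, 6 months on.
irf = np.linalg.matrix_power(B_hat, 6)[0, 0]
print(f"estimated 6-month own-sector response: {irf:.3f}")
```

In practice a library such as `statsmodels` would handle lag selection and inference; the hand-rolled OLS here only shows the mechanics of the method.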
We conducted a practical case study that aims to estimate the shapes and parameters of probability distributions for key cost, time, and activity-performance inputs required by a risk, resource, and value simulator to conduct a stochastic valuation of a new exploration asset.
We analyzed a sample of 73 shallow offshore fields in Australia retrieved from a global field-by-field database that includes reserves, production profiles, financials, valuation, breakeven prices, ownership, and other key metrics for global oil and gas fields, discoveries, and exploration licenses.
The reviewed facilities concepts include 40 steel platforms, 2 concrete gravity-based developments, 10 projects with extended-reach drilling, 4 floating production, storage, and offloading (FPSO) vessels, and 17 subsea tiebacks. The aggregate capital expenditure (Capex) of projects in the sample during 1965-2015 is USD 99.1 billion (in nominal terms), which is commensurate with the total asset size of the Australian offshore petroleum industry.
We estimate probability distributions for all full-cycle parameters required to generate Monte Carlo Capex, operating expenditure (Opex), and production profiles. In particular, these include facility Capex per unit of peak production, development-phase duration and scheduling rules, drilling expenditure per barrel of oil equivalent (BOE), cost of exploration and appraisal wells, Opex/Capex ratio, abandonment-cost ratio, fraction of hydrocarbons produced yearly at plateau, fraction of hydrocarbons remaining at end of plateau, terminal production rate, and fraction of predrilled wells. Most of these parameters were found to be log-normally distributed.
The paper illustrates a practical application of a simple, yet robust, work flow relying on real industry data to assess the value, risks, and uncertainties of an exploration prospect. It can be seamlessly extended to other basins because of a rich coverage of the online field-analogs database.
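To illustrate how such fitted distributions feed a stochastic valuation, the sketch below draws log-normal samples for two of the full-cycle parameters and reports percentiles of full-cycle Capex. All distribution parameters, the peak rate, and the resource size are hypothetical placeholders, not the values fitted from the 73-field Australian sample.

```python
import math
import random

random.seed(7)

# Hypothetical log-normal parameters (mu and sigma of the ln-values);
# illustrative placeholders, not the paper's fitted distributions.
FACILITY_CAPEX_PER_PEAK = (math.log(25_000.0), 0.45)  # USD per bbl/d of peak rate
DRILLEX_PER_BOE = (math.log(4.0), 0.40)               # USD per BOE developed

PEAK_RATE = 30_000.0   # bbl/d, held deterministic for simplicity
RESERVES_BOE = 60e6    # recoverable resources, BOE

def one_trial() -> float:
    """Draw one Monte Carlo realization of full-cycle Capex (USD)."""
    facility = random.lognormvariate(*FACILITY_CAPEX_PER_PEAK)
    drillex = random.lognormvariate(*DRILLEX_PER_BOE)
    return facility * PEAK_RATE + drillex * RESERVES_BOE

trials = sorted(one_trial() for _ in range(10_000))
p10, p50, p90 = (trials[int(f * len(trials))] for f in (0.10, 0.50, 0.90))
print(f"Capex P10/P50/P90: {p10/1e6:,.0f} / {p50/1e6:,.0f} / {p90/1e6:,.0f} MM USD")
```

The same pattern extends to Opex ratios, schedule durations, and production-profile fractions, each sampled from its fitted distribution.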
Polymer flooding of oil fields has not reached the same maturity as waterflooding. Hence, implementing polymer projects at field scale requires a workflow comprising several steps. The workflow starts with screening of the portfolio of an organization for oil fields potentially amenable for this enhanced-oil-recovery (EOR) method. Next, laboratory and field testing is required, followed by sector and field implementation and finally rollout in the portfolio.
As an organization works through this workflow, not only is subsurface uncertainty reduced, but knowledge of the organization's cost structure and operating capabilities also improves.
Analyzing the economics of polymer-injection projects shows that costs can be split into those that depend on the number of injector/producer patterns (pattern-dependent costs) and those that do not. Knowing these costs, a minimum economic number of patterns (MENP) is defined as the number of patterns required to achieve a net present value (NPV) of zero. This number is used to determine a minimum economic field size (MEFS) for polymer injection, which is taken into account in the screening of the portfolio.
A robustness criterion for economic-evaluation purposes is defined as the minimum number of patterns required for economic polymer injection. By use of this criterion, a diagram is derived allowing for screening of fields for polymer economics by use of pattern-dependent and pattern-independent costs and the utility factor (UF).
The cost structure reveals how the NPV of polymer projects changes with the number of patterns, incremental oil, and injectivity. Injectivity is particularly important because it determines the chemical-affected reservoir volume (CARV) or speed of production.
A sensitivity analysis of the NPV showed that for the cost structure used here, in addition to the polymer costs, the well costs are important for the economics of a full-field polymer-injection project.
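The MENP logic above reduces to a break-even calculation: the margin each pattern contributes must recover the pattern-independent costs. The sketch below uses an entirely hypothetical, undiscounted cost structure (none of these figures come from the paper), so the "NPV" here is only a back-of-envelope proxy.

```python
import math

# Illustrative cost structure for a full-field polymer project; every
# number here is a hypothetical placeholder.
PATTERN_INDEP_COST = 250e6    # USD: facilities, polymer plant, logistics
WELL_COST_PER_PATTERN = 6e6   # USD: pattern-dependent well cost
INCR_OIL_PER_PATTERN = 0.4e6  # bbl of incremental oil per pattern
OIL_PRICE = 55.0              # USD/bbl
UF = 2.0                      # utility factor, kg polymer per incremental bbl
POLYMER_PRICE = 3.0           # USD/kg

# Margin each pattern contributes after polymer and well costs.
MARGIN = INCR_OIL_PER_PATTERN * (OIL_PRICE - UF * POLYMER_PRICE) - WELL_COST_PER_PATTERN

def npv(n_patterns: int) -> float:
    """Simplified project value as a function of the number of polymer patterns."""
    return n_patterns * MARGIN - PATTERN_INDEP_COST

# MENP: smallest pattern count with non-negative value. Fields too small
# to host this many patterns fall below the minimum economic field size.
menp = math.ceil(PATTERN_INDEP_COST / MARGIN)
print(f"margin/pattern = {MARGIN/1e6:.1f} MM USD, MENP = {menp}")
```

Note how the utility factor enters the per-pattern margin directly, which is why injectivity and polymer cost dominate the screening diagram.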
The standard method to evaluate an oil- or gas-production-decline curve estimated with an exponential function—taking the logarithms of both sides of the equation, estimating the parameters of the transformed function through linear regression, and exponentiating—leads to biased estimates of future production. The bias arises in the process of exponentiation.
The direction and magnitude of exponentiation bias depend on three driver variables: the variance of the post-peak-production history, the number of post-peak observations of production, and the estimated rates of production during the forecast period. A correction factor, dependent on the confluent-hypergeometric-limit function, applied to the biased estimators produces unbiased estimates of future production. The correction factor can be evaluated quickly and introduced into the work flow for evaluating exponential-decline curves.
The net bias in estimates of future production is more likely to be negative than positive. Negative bias understates remaining resources and reserves. The probability of negative, rather than positive, net bias is an increasing function of the maturity of production at the point of evaluation. The absolute magnitude of the bias is a direct function of both the variance of the empirical post-peak-production history and the forecasted rates of future production, and an inverse function of the length of the post-peak-production history.
A data set of 54,254 completion-level monthly production histories from the Gulf of Mexico (GOM) was used to quantify the bias and show the characteristics of production that determine its direction and magnitude. In this data set, exponentiation bias in estimates of remaining resources usually results in small absolute errors. Holding out varying fractions of the production histories of the completions analyzed, the interquartile range for errors in the estimated remaining resources (relative to unbiased estimates) extends from an underestimate of 886 to an overestimate of 2,105 BOE.
However, at the extreme ends of the distribution of errors, maximum underestimates of 8.3 million BOE and overestimates as large as 22.5 million BOE were found. More than 14% of the completions analyzed had forecast errors of more than 30%. Extreme biases are predictably associated with specific ranges and combinations of values of the three driver variables. Therefore, exponentiation bias can have very large and predictable effects on the economic value of estimated remaining resources, but they can be reliably corrected.
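The essence of the bias can be reproduced in a few lines. The sketch below simulates noisy post-peak histories, fits the log-linear regression, and compares naive exponentiation against a first-order log-normal correction, exp(s²/2). This correction is a simplified stand-in for the paper's exact confluent-hypergeometric-limit correction, and all decline parameters are hypothetical.

```python
import math
import random

random.seed(3)
Q0, D, SIGMA = 1000.0, 0.05, 0.30  # hypothetical trend and log-space noise

def fit_and_forecast(n_hist: int = 48, t_fc: int = 60):
    """Simulate one noisy post-peak history, fit ln q = a + b*t by OLS,
    and forecast the rate at t_fc both by naive exponentiation and with
    the first-order log-normal correction exp(s^2/2)."""
    ts = range(n_hist)
    ys = [math.log(Q0) - D * t + random.gauss(0.0, SIGMA) for t in ts]
    tbar = sum(ts) / n_hist
    ybar = sum(ys) / n_hist
    sxx = sum((t - tbar) ** 2 for t in ts)
    b = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / sxx
    a = ybar - b * tbar
    s2 = sum((y - a - b * t) ** 2 for t, y in zip(ts, ys)) / (n_hist - 2)
    naive = math.exp(a + b * t_fc)
    return naive, naive * math.exp(s2 / 2.0)

true_mean = Q0 * math.exp(-D * 60) * math.exp(SIGMA**2 / 2)  # E[q(60)]
reps = [fit_and_forecast() for _ in range(3000)]
naive_avg = sum(r[0] for r in reps) / len(reps)
corr_avg = sum(r[1] for r in reps) / len(reps)
print(f"naive/true = {naive_avg/true_mean:.3f}, corrected/true = {corr_avg/true_mean:.3f}")
```

On average the naive forecast understates the expected rate, and the correction removes most of the shortfall, consistent with the negative net bias described above.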
An oilfield services company that recognizes the value of embedding sustainability into internal business processes has elevated environmental performance by expanding its existing Continuous Improvement (CI) program. This paper describes the program that has been successfully implemented globally during a 5-year period. It is illustrated with actual case studies that exhibit substantial savings in energy, water, and waste, alongside business cost savings.
The program covers major facilities of the company’s engineering and manufacturing division. Each year, participating sites have an objective to complete a minimum of one improvement project that actively targets elimination or reduction of environmental wastes. Each project executes the Define, Measure, Analyze, Improve, Control (DMAIC) process. The steps in DMAIC are incremental and data-driven, and use applicable statistical and analytical tools to enable solutions that deliver step-change impact. The Control phase specifically facilitates sustainability because it requires implementing models for preserving short- and long-term improvements. These projects are tracked in a centralized repository and undergo a standard validation process.
The program entered its fifth year in 2014, with 28 participating locations worldwide. Project initiatives have generated significant financial benefits, exclusive of environmental savings. Annually, the program has saved an average of 2,378,000 kWh of electricity and 19,704,000 L of water, with 521 t of waste eliminated, reused, or recycled.
The successes of the program are communicated both internally and externally, including contributions to sustainability filings. Internally, the program has provided a vehicle to positively engage employees across disciplines and to share innovations, technologies, and best practices for the environment. Projects resulting in facilities-related enhancements have demanded initial capital expenditure, but the return on investment is projected to continue beyond the annual timescale captured by the program. The program has stimulated forward-thinking management decisions on the future and sustainability within the organization.
DMAIC is a recognized CI process that uses Lean and Six Sigma techniques. By leveraging this approach to focus on the environment, the probability of overlooking environmental opportunities is substantially reduced. Lean’s systematic elimination of waste is implemented in the very literal sense. Checkpoints in each DMAIC step reduce the risk of project failure, and Six Sigma’s data-analysis methods enable measurable, visible results that allow the organization to track CI actions toward sustainability.
Although a severe drop in commodity prices was expected to adversely affect a financially leveraged producer, the variability of this effect across the universe of exploration and production (E&P) companies during the current downturn, which began in Autumn 2014, surprised industry participants. What factors caused the effect to be magnified for certain companies and muted for others? In examining 71 public E&P companies, we found a moderate correlation between a company’s financial leverage and the loss of its equity value. Our study confirms that leveraged producers are exposed to the risk of debt-induced value loss in a downturn. Two other factors were studied for their influence on equity performance: the economics of a company’s resource portfolio and the extent of its commodity-price hedging.
To examine the effect of resource economics, we analyzed the finding-and-development (F&D) cost of producers and noticed statistically significant differences by hydrocarbon-producing region. When controlled for the regional effect, the correlation between a company’s financial leverage and the loss of its equity value substantially improved. The offsetting effect of superior resource economics on the debt-induced value loss was evident. For example, the producers operating in the Permian Basin outperformed their similarly leveraged peers operating in the Williston Basin, a less-profitable region.
A subset of financially leveraged companies significantly outperformed their similarly leveraged peers. These outperformers, termed here “Leaders,” showcase a strong alignment between their financing and hedging activities. We observed evidence that the Leaders used a continuous-hedging program through the price cycle. They responded promptly to changes in their hedge positions, such as hedge roll-offs, rather than hedging only when prices appeared advantageous. A key benefit of such a continuous-hedging program is the dollar-cost averaging of hedged prices, which appears to be an implicit goal of the Leaders.
We contend that a leveraged producer must coordinate its hedging and financing policies to maintain alignment between the hedged volume and the debt load. Endogeneity arises in the relationship between debt and hedges, with each influencing the other. Alignment can be achieved through the implementation of a continuous-hedging program factoring in annual production, operating cash flows, financial leverage, and hedged volumes. This paper takes the hedging debate for a leveraged producer beyond the realm of “to hedge or not to hedge” and addresses the question of “how much to hedge.”
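The dollar-cost-averaging argument can be made concrete with a toy strip of quarterly forward prices. All numbers below are hypothetical and serve only to contrast the two hedging styles.

```python
# Toy comparison of a continuous-hedging program vs. opportunistic hedging
# across a price downcycle; forward prices are hypothetical (USD/bbl).
forward_strip = [95, 88, 70, 52, 45, 48, 55, 60]   # eight quarters
QUARTERLY_VOLUME = 1.0e6                            # bbl hedged per quarter

# Continuous program: as each hedge rolls off, the same volume is re-hedged
# at the prevailing forward, so the achieved price dollar-cost averages the strip.
continuous_price = sum(forward_strip) / len(forward_strip)

# Opportunistic program: hedge only when prices look "advantageous" (say,
# above 80 USD/bbl), leaving most volumes exposed through the trough.
hedged_quarters = [p for p in forward_strip if p > 80]
coverage = len(hedged_quarters) / len(forward_strip)

print(f"continuous avg hedge price: {continuous_price:.2f} USD/bbl")
print(f"opportunistic coverage: {coverage:.0%} of quarters")
```

The continuous program locks in the cycle average, whereas the opportunistic program leaves three-quarters of the strip unhedged exactly when leverage makes downside protection most valuable.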
Myrseth, Velaug (SINTEF Petroleum Research) | Perez-Valdes, Gerardo A. (SINTEF Technology and Society) | Bakker, Steffen J. (Norwegian University of Science and Technology) | Midthun, Kjetil T. (SINTEF Technology and Society) | Torsæter, Malin (SINTEF Petroleum Research)
An estimated 3,000 oil wells need to be plugged and abandoned on the Norwegian Continental Shelf (NCS), with approximately 150 new wells being drilled each year. The petroleum industry estimates the total plugging costs to be almost 900 billion Norwegian kroner (NOK), and that the work will take up to 40 years to complete. Because of the current tax regulations in Norway, the state indirectly pays 78% of the costs (approximately 700 billion NOK). It is therefore vital to reduce expenses by targeted research and development (R&D) of new technology, and to ensure better planning of plug-and-abandonment (P&A) operations in and between licenses.
The current study aims to gather data relevant to Norwegian P&A operations in an open-source database, and to develop and use P&A planning software that serves as a decision-support tool for various problems operators face. The software can be used to generate planning schedules, identify bottlenecks in P&A operations, and analyze potential efficiency gains from technology improvements and cooperative plugging campaigns. We will use a cross-disciplinary approach that combines operations research with technological expertise.
In this paper, we present the outline of the database and the current status of available data for P&A operations on the NCS, as well as a short literature review. Available data are categorized according to the different choices that need to be made within a single P&A operation, with regard to both technological aspects and the existing regulatory framework. We also discuss the type of analysis the P&A planning software is envisioned to perform.
Industry, government, and ordinary taxpayers will all benefit from knowledge sharing, optimized planning, and more targeted R&D efforts on this topic. The results from this study will be used to identify important cost drivers and to draw up a roadmap for future P&A-related R&D.
Stochastic modeling provides a mechanism for incorporating risk and uncertainty considerations into portfolio production forecasts. Through this process, insight is gained into the likelihood of production targets being missed, met, or exceeded. This insight enables organizations to better manage operational, positioning, and strategic planning activities around stakeholders’ production expectations. Inherent in all capital programs are numerous uncontrollable, but definable, factors that affect overall corporate production performance. These factors can be categorized into four groups: (1) timing uncertainties, (2) performance uncertainties, (3) sequencing uncertainties, and (4) risk. Timing uncertainty considers spud scheduling, spud-to-first-production cycle timing, and production-ramp-up cycle timing. Performance uncertainty considers the historical or modeled distribution of period-specific production rates within the constituent plays (e.g., What is the unavoidable range of variability within a play as depicted in a peak-normalized composite-production plot of wells within the analog population?). Sequencing uncertainty considers performance-percentile clustering or sequencing within the program (e.g., the number of top-quartile wells that are, by chance, drilled early in the year vs. later in the year). Finally, risk addresses commercial failure within a program attributed to either geology or execution, or both. By integrating historical operational data with a standardized set of play-assessment deliverables, the building blocks of a stochastic capital-program forecast and analysis are readily available. Ultimately, the use of stochastic modeling in portfolio production forecasting provides an organization’s decision makers with the information necessary to examine investment and strategic decisions in the context of corporate-risk tolerance.
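The uncertainty groups above can be combined in a compact Monte Carlo sketch. Everything here (type curve, delay range, performance spread, failure probability) is hypothetical, and sequencing uncertainty is left out for brevity.

```python
import random

random.seed(11)

# Hypothetical single-play capital program; none of these figures come
# from real operational data.
N_WELLS, MONTHS = 20, 12
P_FAIL = 0.15                                           # risk of commercial failure per well
TYPE_CURVE = [500.0 * 0.92**m for m in range(MONTHS)]   # bbl/d, peak-normalized decline

def simulate_program() -> float:
    """One stochastic realization of the program's average year-one rate."""
    monthly = [0.0] * MONTHS
    for _ in range(N_WELLS):
        if random.random() < P_FAIL:
            continue                              # risked out: no production
        delay = random.randint(0, 5)              # timing: spud-to-first-oil, months
        perf = random.lognormvariate(0.0, 0.35)   # performance: rate multiplier
        for m in range(delay, MONTHS):
            monthly[m] += perf * TYPE_CURVE[m - delay]
    return sum(monthly) / MONTHS                  # program-average rate, bbl/d

rates = sorted(simulate_program() for _ in range(5_000))
p10, p90 = rates[int(0.10 * len(rates))], rates[int(0.90 * len(rates))]
print(f"P10 = {p10:,.0f} bbl/d, P90 = {p90:,.0f} bbl/d")
```

Comparing a production target against the simulated P10-P90 band is what lets decision makers judge the likelihood of the target being missed, met, or exceeded.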
Surveillance data can be critically important in managing producing oil and gas fields. To maximize value, it is necessary to identify those surveillance opportunities that are not only informative, but materially value-adding. For surveillance decisions in producing fields, applying conventional value-of-information (VoI) methods to every individual surveillance opportunity can be too time-consuming to be practical, considering the large number of relatively small data decisions to be evaluated, each possibly addressing multiple reservoir uncertainties and supporting numerous business decisions. This paper outlines risk-based surveillance planning (RBSP), a simple approach that is based on the observation that the vast majority of surveillance opportunities in producing fields are designed to manage risks. RBSP then evaluates surveillance opportunities with VoI principles, but exploits the fact that much relevant value data already exist as byproducts of risk-management processes. RBSP has been used successfully in dozens of oil and gas fields; two case studies are described herein.
Many companies use a risk-management framework that documents risk events and their causes and consequences, assesses risk probability and impact, and identifies prevention measures to reduce the probability of the risk events occurring and mitigation measures to reduce the impact of risk events should they occur. If such a framework has been used, the risk assessment provides an estimate of the expected monetary value (EMV) lost because of a risk event (= probability × impact). Risk reassessment taking into account the planned prevention and mitigation measures reveals the EMV increment attributable to the risk-management measures. RBSP assumes this to be the theoretical value of “perfect” information (VoIperfect) for the package of surveillance that supports these measures.
RBSP links each potential surveillance opportunity to the risk-management plan by determining how the surveillance would (a) help create or strengthen prevention measures; (b) detect the impending occurrence of a risk event, thus enabling mitigation measures to be triggered; or (c) help monitor both prevention and mitigation measures to detect any weaknesses that require improvement. The VoIperfect of a surveillance opportunity with respect to a given risk can then be estimated by attributing to it a proportion of the total VoIperfect of the surveillance package by use of a simple criticality score to estimate how dependent the risk-management measures are on each item of surveillance. A reliability score then discounts this VoIperfect on the basis of how likely the surveillance is to deliver the necessary information in reality, yielding the value of imperfect information. Where an item of surveillance affects several risk-management measures and multiple risks, the values are summed, providing a VoI for each item of surveillance.
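The RBSP bookkeeping described above (EMV delta from the risk-management measures, criticality apportionment, reliability discount) can be sketched as follows; the risk probabilities, impacts, item names, and scores are all hypothetical.

```python
# Hypothetical RBSP bookkeeping: apportion a surveillance package's
# VoI_perfect across items by criticality, then discount by reliability.

RISKS = {
    # risk: (probability, impact in USD) before vs. after planned measures
    "water_influx": {"before": (0.40, 200e6), "after": (0.15, 120e6)},
}

# Criticality of each surveillance item to the risk's measures (relative
# weights) and reliability of delivering the needed information (0-1).
ITEMS = {
    "downhole_gauge": {"criticality": 3, "reliability": 0.9},
    "4d_seismic":     {"criticality": 1, "reliability": 0.6},
}

def voi_by_item(risk: str) -> dict:
    """Value of (imperfect) information per surveillance item for one risk."""
    p0, i0 = RISKS[risk]["before"]
    p1, i1 = RISKS[risk]["after"]
    voi_perfect = p0 * i0 - p1 * i1          # EMV gained from the measures
    total_crit = sum(it["criticality"] for it in ITEMS.values())
    return {
        name: voi_perfect * it["criticality"] / total_crit * it["reliability"]
        for name, it in ITEMS.items()
    }

print(voi_by_item("water_influx"))
```

Where an item supports several risks, these per-risk values would simply be summed to give its total VoI.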
In case studies of an asphaltene-precipitation risk in a deepwater oil field and a water-influx risk in an offshore gas field, RBSP helped to create a logical value-based rationale for surveillance decisions, and the surveillance, when implemented, did facilitate effective management of the risks and added value.