To estimate the areas of fields yet to be discovered in a mature play, we use a model that conceptually divides the play (or a portion of it) into hexagons of "unit area," loosely defined as the minimum area an oil field must have to be commercial. Individual hexagons may contain oil and are assumed to be spatially independent. The distribution of field sizes is defined by a single parameter: the ratio of oil-bearing cells to the total number of cells in the study area. Samples taken from this discrete distribution look as if they had been drawn from continuous log-normal distributions. The paper considers the hypothesis that all samples drawn from a distribution of field sizes are biased, overestimating the true areas of the fields existing in the play.
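The abstract does not spell out the sampling procedure. As a minimal sketch under assumptions of our own (in particular, that contiguous oil-bearing hexagons are grouped into "fields"), such a single-parameter cell model could be simulated as follows; all function and variable names here are hypothetical:

```python
import random
from collections import defaultdict

# Axial-coordinate offsets of the six neighbors of a hexagonal cell.
HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def simulate_field_sizes(width, height, p_oil, seed=0):
    """Fill a hexagonal grid with oil-bearing cells, each independently
    with probability p_oil (the model's single parameter), and return the
    sizes (in unit areas) of the connected clusters, read as "fields"."""
    rng = random.Random(seed)
    oil = {(q, r) for q in range(width) for r in range(height)
           if rng.random() < p_oil}
    # Union-find over oil-bearing cells to group contiguous hexagons.
    parent = {c: c for c in oil}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c
    for (q, r) in oil:
        for dq, dr in HEX_NEIGHBORS:
            n = (q + dq, r + dr)
            if n in oil:
                ra, rb = find((q, r)), find(n)
                if ra != rb:
                    parent[ra] = rb
    sizes = defaultdict(int)
    for c in oil:
        sizes[find(c)] += 1
    return sorted(sizes.values(), reverse=True)

field_sizes = simulate_field_sizes(60, 60, p_oil=0.2, seed=42)
```

A histogram of `field_sizes` over many such runs could then be compared against a fitted log-normal curve, mirroring the observation in the abstract.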
The Nigerian bitumen belt is currently receiving a great deal of attention from Nigerian economists. This is because the federal government of Nigeria is placing a greater emphasis on the diversification of the country’s economy. The economic policies are focusing less on the dominant conventional oil in the Niger Delta and more on agriculture, tourism, entertainment, infrastructure development, and industrial and solid-mineral production. Development of the latter is currently governed by the Nigerian Minerals and Mining Act of 2007 (NMMA 2007). For development purposes, bitumen is classified as one of the solid minerals under the act (NMMA 2007). In this regard, we proposed and performed an economic study of a more-manageable small-scale 1,000-B/D in-situ bitumen-extraction project assuming modularized, “cookie-cutter” steam-assisted-gravity-drainage (SAGD) development technology by use of NMMA (2007) fiscal terms. Under the current deflationary conditions in the upstream sector of the oil and gas industry, such a small-scale in-situ project could be started with an initial capital expenditure of less than USD 25 million. In this paper, we discuss the Nigerian bitumen belt and the resource potential within the belt. Previous work on the economics of Nigerian bitumen development, the market opportunity for bitumen in road construction, and a discussion of environmental footprints are also presented. These are followed by an economic evaluation and analysis of a small-scale 1,000-B/D in-situ SAGD project. The findings in this paper provide economic data that can be used for economic screening when considering in-situ bitumen-development investment in the Nigerian bitumen belt.
The exploration and production (E&P) industry is facing a net-present-value (NPV) paradox. Although the NPV method is widely criticized by practitioners and academics alike, it remains the cornerstone of E&P project valuation. We posit that this contradiction, which we label the NPV paradox, is likely caused by a combination of limitations of the method, a lack of theoretical understanding, and ambiguity regarding its implementation.
Even though the NPV method has been described in numerous papers and textbooks, rigorous and succinct guidance on how to determine risk premiums for systematic risk is not available. We demonstrate that risk-adjusted discount rates are very sensitive to the length of the periods over which returns are determined (daily, weekly, or monthly), the length of the time horizons considered (such as 10 or 25 years), and the start date (such as 1965 or 1990). We discuss the fundamental implications and rationale of these choices and their effects on the variables that underpin the risk-adjusted discount rate: the risk-free rate, company beta (β), the market-risk premium, and the cost of debt.
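For reference, the CAPM relation that underpins the risk-adjusted discount rate can be written down directly. The sketch below combines it with an after-tax weighted-average cost of capital; every number is purely illustrative and none comes from the paper:

```python
def capm_cost_of_equity(risk_free_rate, beta, market_risk_premium):
    """CAPM: expected return on equity = rf + beta * market-risk premium."""
    return risk_free_rate + beta * market_risk_premium

def wacc(cost_of_equity, cost_of_debt, equity_weight, tax_rate):
    """After-tax weighted-average cost of capital (the discount rate)."""
    debt_weight = 1.0 - equity_weight
    return (equity_weight * cost_of_equity
            + debt_weight * cost_of_debt * (1.0 - tax_rate))

# Illustrative inputs only: rf = 3%, beta = 1.2, MRP = 6%, kd = 5%, tax 30%.
r_e = capm_cost_of_equity(0.03, 1.2, 0.06)                 # -> 0.102
rate = wacc(r_e, 0.05, equity_weight=0.7, tax_rate=0.30)   # -> 0.0819
```

The abstract's sensitivity point enters through the inputs: the estimates of beta, the market-risk premium, and the risk-free rate all depend on the return period, horizon, and start date chosen.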
Although not an entirely satisfactory solution, we argue for a moderate downward revision of discount rates for projects with timelines exceeding 20 years. This recommendation rests on recent advances in public finance and on the observation that exposure to systematic risk in the long run is significantly less in many real-life E&P projects than the capital-asset-pricing model (CAPM) implies. The inflated discount rate currently in use, combined with the extended investment horizons common in the upstream sector, will, for example, result in an underweighting of decommissioning and future legacy costs.
In addition to a set of widely recognized shortcomings of the NPV model, there are also less-well-known issues. For example, the failure of CAPM to capture bankruptcy risk has a bearing on the project-risk premium. Also, applying the NPV model implies a set of probabilistic assumptions around market risks that are likely to be invalidated when evaluating a set of market scenarios or using a series of probability-weighted market scenarios.
The objective of this paper is to identify ways to accelerate the uptake and fulfill the value potential of Intelligent Energy (IE). The paper is coauthored by a cross-industry group drawn from operators, service providers, and product vendors, all of whom have been involved in the IE arena for 10 years or more. We have analyzed past experiences to identify both ways in which IE has been successful and the improvements that could be made to add value across a broader scale amid the challenges of today’s commercial environment. In this paper, assessments are given on IE implementations to identify practical ways in which we can expand deployment and deliver results more quickly, including the importance of collaboration and competition in the IE domain, and how longer-term business models and new organizational ideas could improve the industry’s uptake of IE. We have identified two areas in which we believe changes to our approach could deliver significant benefits: the expanded use of integrated work flows and shared subject-matter-expert (SME) services. We discuss the benefits and challenges of this integrated approach to solution design, work processes, technology, skills, and competencies. Field cases from two major operators are given as best-practice examples of advanced use of IE in the oil-and-gas industry.
Catastrophic events such as hurricanes and oil spills have enormous impacts on local and regional economies and labor markets. During 2010, the US Gulf Coast experienced the largest marine oil spill, the highest mobilization of spill-response resources, and the first drilling moratorium in the history of deepwater operations. Another regional disaster, Hurricane Katrina, struck Louisiana, Mississippi, and Alabama in 2005 as it ripped through the core of the Gulf of Mexico (GOM) producing region, one of the most-important oil-and-gas-production areas of the world. The disruption of oil-and-gas production and fisheries caused by an oil spill or a drilling moratorium can be modeled as a negative shock to the local labor markets. Analysis of the damage caused by storms to offshore oil-and-gas drilling and production facilities therefore provides a valuable opportunity to learn how to prepare for hurricanes, with the aim of avoiding future damage. The objective of this paper is to quantify the impact of such shocks on employment numbers and wages in the US Gulf Coast region. This research uses econometric tools to provide quantitative estimates of the response of, and correlation between, past and current Louisiana employment and other relevant regional economies. In this study, we determined the likely magnitude of the net economic impact of a major oil spill such as the Deepwater Horizon oil spill on certain sectors with the vector-autoregressive (VAR) method. The potential effects on the economy of future changes in employment after a disaster are also discussed.
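The paper's VAR specification is not reproduced here. As a rough sketch of the technique, a first-order VAR can be fit by equation-wise ordinary least squares; the code below does this on synthetic two-series data (all coefficients, series, and names are illustrative assumptions, not the paper's data):

```python
import random

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_var1(series):
    """Fit a VAR(1) model y_t = c + A y_{t-1} + e by equation-wise OLS.
    `series` is a list of [y1, y2, ...] observations; returns one row of
    coefficients [intercept, coef on y1_{t-1}, coef on y2_{t-1}, ...]
    per variable, each obtained from the OLS normal equations."""
    k = len(series[0])
    X = [[1.0] + list(series[t - 1]) for t in range(1, len(series))]
    coeffs = []
    for i in range(k):
        y = [series[t][i] for t in range(1, len(series))]
        XtX = [[sum(X[r][a] * X[r][b] for r in range(len(X)))
                for b in range(k + 1)] for a in range(k + 1)]
        Xty = [sum(X[r][a] * y[r] for r in range(len(X))) for a in range(k + 1)]
        coeffs.append(solve_linear(XtX, Xty))
    return coeffs

# Synthetic stationary two-variable system (e.g., two employment indices).
rng = random.Random(1)
y1, y2 = 1.0, 0.0
series = [[y1, y2]]
for _ in range(500):
    y1, y2 = (1.0 + 0.5 * y1 + 0.1 * y2 + rng.gauss(0.0, 0.1),
              0.5 + 0.2 * y1 + 0.4 * y2 + rng.gauss(0.0, 0.1))
    series.append([y1, y2])
coeffs = fit_var1(series)
```

In practice a library estimator would be used; the point of the sketch is only that a VAR expresses each current series as a linear function of the lagged values of all series, which is how a shock to one sector propagates to the others.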
We conducted a practical case study that aims at estimating the shapes and parameters of probability distributions for key cost, time, and activity performance inputs required by a risk, resource, and value simulator to conduct a stochastic valuation of a new exploration asset.
We analyzed a sample of 73 shallow offshore fields in Australia retrieved from a global field-by-field database that includes reserves, production profiles, financials, valuation, breakeven prices, ownership, and other key metrics for global oil and gas fields, discoveries, and exploration licenses.
The reviewed facilities concepts include 40 steel platforms, 2 concrete gravity-based developments, 10 projects with extended-reach drilling, 4 floating production, storage, and offloading (FPSO) vessels, and 17 subsea tiebacks. The aggregate capital expenditure (Capex) of projects in the sample during 1965-2015 is USD 99.1 billion (in nominal terms), which is commensurate with the total asset size of the Australian offshore petroleum industry.
We estimate probability distributions for all full-cycle parameters required to generate Monte Carlo Capex, operating-expenditure (Opex), and production profiles. In particular, these include facility Capex per unit of peak production, development-phase duration and scheduling rules, drilling expenditures per barrel of oil equivalent (BOE), cost of exploration and appraisal wells, Opex/Capex ratio, abandonment-cost ratio, fraction of hydrocarbons produced yearly at plateau, fraction of hydrocarbons remaining at end of plateau, terminal production rate, and fraction of predrilled wells. Most of these parameters were found to be log-normally distributed.
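A log-normal distribution of this kind is typically fit in log space. The sketch below illustrates the method on a synthetic sample standing in for real cost data; the sample parameters (mu = 2.0, sigma = 0.5) are assumptions, not values from the study:

```python
import math
import random

def fit_lognormal(samples):
    """Fit a log-normal distribution by taking logs: if X ~ LogNormal(mu,
    sigma), then log(X) ~ Normal(mu, sigma), so mu and sigma are simply the
    sample mean and standard deviation of the log-transformed data."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    return mu, sigma

# Synthetic stand-in for a cost parameter sample (illustrative only).
rng = random.Random(7)
sample = [math.exp(rng.gauss(2.0, 0.5)) for _ in range(5000)]
mu_hat, sigma_hat = fit_lognormal(sample)
```

The fitted (mu, sigma) pair is then what a Monte Carlo simulator would draw from when generating Capex, Opex, and production profiles.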
The paper illustrates a practical application of a simple, yet robust, work flow relying on real industry data to assess the value, risks, and uncertainties of an exploration prospect. It can be seamlessly extended to other basins because of the rich coverage of the online field-analog database.
Polymer flooding of oil fields has not reached the same maturity as waterflooding. Hence, implementing polymer projects at field scale requires a workflow comprising several steps. The workflow starts with screening the organization's portfolio for oil fields potentially amenable to this enhanced-oil-recovery (EOR) method. Next, laboratory and field testing is required, followed by sector and field implementation and, finally, rollout across the portfolio.
As the workflow progresses, not only is subsurface uncertainty reduced, but knowledge of the organization's cost structure and operating capabilities also improves.
Analyzing the economics of polymer-injection projects shows that costs can be split into costs that depend on the polymer injector/producer (polymer pattern) and costs that are independent of it. Knowing these costs, a minimum economic number of patterns (MENP) is defined as the number of patterns required to achieve a net present value (NPV) of zero. This number is used to determine a minimum economic field size (MEFS) for polymer injection, which is taken into account in the screening of the portfolio.
A robustness criterion for economic-evaluation purposes is defined as the minimum number of patterns required for economic polymer injection. From this criterion, a diagram is derived that allows screening of fields for polymer economics on the basis of pattern-dependent and pattern-independent costs and the utility factor (UF).
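Under the simple cost split described above, the MENP calculation reduces to one line: the smallest whole number of patterns whose combined pattern-level NPV covers the pattern-independent costs. The sketch below uses hypothetical figures, not numbers from the paper:

```python
import math

def minimum_economic_patterns(pattern_independent_cost, npv_per_pattern):
    """Smallest integer number of polymer patterns n such that
    n * npv_per_pattern - pattern_independent_cost >= 0 (i.e., NPV >= 0)."""
    if npv_per_pattern <= 0:
        return None  # no number of patterns can make the project economic
    return math.ceil(pattern_independent_cost / npv_per_pattern)

# Illustrative numbers only (USD million): fixed costs of 50, and each
# pattern contributing an incremental NPV of 4.
menp = minimum_economic_patterns(pattern_independent_cost=50.0,
                                 npv_per_pattern=4.0)  # -> 13
```

A field that cannot physically accommodate `menp` patterns then falls below the minimum economic field size and is screened out.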
The cost structure reveals how the NPV of polymer projects changes with the number of patterns, incremental oil, and injectivity. Injectivity is particularly important because it determines the chemical-affected reservoir volume (CARV) or speed of production.
A sensitivity analysis of the NPV showed that for the cost structure used here, in addition to the polymer costs, the well costs are important for the economics of a full-field polymer-injection project.
The standard method to evaluate an oil- or gas-production-decline curve estimated with an exponential function—taking the logarithms of both sides of the equation, estimating the parameters of the transformed function through linear regression, and exponentiating—leads to biased estimates of future production. The bias arises in the process of exponentiation.
The direction and magnitude of exponentiation bias depend on three driver variables: the variance of the post-peak-production history; the number of post-peak observations on production; and the estimated rates of production during the forecast period. A correction factor, dependent on the confluent-hypergeometric-limit function, applied to the biased estimators produces unbiased estimates of future production. The correction factor can be quickly evaluated and introduced into the work flow for use in evaluating exponential-decline curves.
The net bias in estimates of future production is more likely to be negative than positive. Negative bias understates remaining resources and reserves. The probability of negative, rather than positive, net bias is an increasing function of the maturity of production at the point of evaluation. The absolute magnitude of the bias is a direct function of both the variance of the empirical post-peak-production history and the forecasted rates of future production. It is an inverse function of the length of the post-peak-production history.
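The bias mechanism can be demonstrated with a short simulation. Note that the paper's correction factor relies on the confluent-hypergeometric-limit function; the sketch below instead applies the simpler normal-theory (log-normal smearing) factor exp(s²/2), purely to illustrate the direction of the bias, and all decline parameters are made-up:

```python
import math
import random

def fit_exponential_decline(times, rates):
    """OLS fit of log(q) = log(q0) - D*t; returns (q0_hat, D_hat, s2),
    where s2 is the residual variance of the log-linear fit."""
    logs = [math.log(q) for q in rates]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    sxx = sum((t - tbar) ** 2 for t in times)
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
    slope = sxy / sxx               # estimates -D
    intercept = ybar - slope * tbar  # estimates log(q0)
    resid = [y - (intercept + slope * t) for t, y in zip(times, logs)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return math.exp(intercept), -slope, s2

# Simulate a noisy decline: q(t) = 1000 * exp(-0.1 t) * exp(eps).
rng = random.Random(3)
times = list(range(60))
rates = [1000.0 * math.exp(-0.1 * t + rng.gauss(0.0, 0.3)) for t in times]
q0_hat, d_hat, s2 = fit_exponential_decline(times, rates)

t_fc = 80.0
naive = q0_hat * math.exp(-d_hat * t_fc)   # exponentiated fit: biased low
corrected = naive * math.exp(s2 / 2.0)     # normal-theory smearing correction
```

Because exponentiating the fitted log-rate recovers (approximately) the conditional median rather than the mean, the naive forecast understates expected production by roughly the factor exp(s²/2), which is why the net bias tends to be negative and grows with the variance of the log residuals.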
A data set of 54,254 completion-level monthly production histories from the Gulf of Mexico (GOM) was used to quantify the bias and show the characteristics of production that determine its direction and magnitude. In this data set, exponentiation bias in estimates of remaining resources usually results in small absolute errors. Holding out varying fractions of the production histories of the completions analyzed, the interquartile range for errors in the estimated remaining resources (relative to unbiased estimates) extends from an underestimate of 886 to an overestimate of 2,105 BOE.
However, at the extreme ends of the distribution of errors, maximum underestimates of 8.3 million BOE and overestimates as large as 22.5 million BOE were found. More than 14% of the completions analyzed had forecast errors of more than 30%. Extreme biases are predictably associated with specific ranges and combinations of values of the three driver variables. Therefore, exponentiation bias can have very large and predictable effects on the economic value of estimated remaining resources, but it can be reliably corrected.
An oilfield services company that recognizes the value of embedding sustainability into internal business processes has elevated environmental performance by expanding its existing Continuous Improvement (CI) program. This paper describes the program that has been successfully implemented globally during a 5-year period. It is illustrated with actual case studies that exhibit substantial savings in energy, water, and waste, alongside business cost savings.
The program covers major facilities of the company’s engineering and manufacturing division. Each year, participating sites have an objective to complete a minimum of one improvement project that actively targets elimination or reduction of environmental wastes. Each project executes the Define, Measure, Analyze, Improve, Control (DMAIC) process. The steps in DMAIC are incremental and data-driven, and use applicable statistical and analytical tools to enable solutions that deliver step-change impact. The Control phase specifically facilitates sustainability because it requires implementing models for preserving short- and long-term improvements. These projects are tracked in a centralized repository and undergo a standard validation process.
The program entered its fifth year in 2014, with 28 participating locations worldwide. Project initiatives have generated significant financial benefits, exclusive of environmental savings. Annually, the program has saved an average of 2,378,000 kW-hr of electricity and 19,704,000 L of water, with 521 t of waste eliminated, reused, or recycled.
The successes of the program are communicated both internally and externally, including contributions to sustainability filings. Internally, the program has provided a vehicle to positively engage employees across disciplines and to share innovations, technologies, and best practices for the environment. Projects resulting in facilities-related enhancements have demanded initial capital expenditure, but the return on investment is projected to continue beyond the annual timescale captured by the program. The program has stimulated forward-thinking management decisions on the future and sustainability within the organization.
DMAIC is a recognized CI process that uses Lean and Six Sigma techniques. By leveraging this approach to focus on the environment, the probability of overlooking environmental opportunities is substantially reduced. Lean’s systematic elimination of waste is implemented in the very literal sense. Checkpoints in each DMAIC step reduce the risk of project failure, and Six Sigma’s data-analysis methods enable measurable, visible results that allow the organization to track CI actions toward sustainability.
Surveillance data can be critically important in managing producing oil and gas fields. To maximize value, it is necessary to identify those surveillance opportunities that are not only informative, but materially value-adding. For surveillance decisions in producing fields, applying conventional value-of-information (VoI) methods to every individual surveillance opportunity can be too time-consuming to be practical, considering the large number of relatively small data decisions to be evaluated, each possibly addressing multiple reservoir uncertainties and supporting numerous business decisions. This paper outlines risk-based surveillance planning (RBSP), a simple approach that is based on the observation that the vast majority of surveillance opportunities in producing fields are designed to manage risks. RBSP then evaluates surveillance opportunities with VoI principles, but exploits the fact that much relevant value data already exist as byproducts of risk-management processes. RBSP has been used successfully in dozens of oil and gas fields; two case studies are described herein.
Many companies use a risk-management framework that documents risk events and their causes and consequences, assesses risk probability and impact, and identifies prevention measures to reduce the probability of the risk events occurring and mitigation measures to reduce the impact of risk events should they occur. If such a framework has been used, the risk assessment provides an estimate of the expected monetary value (EMV) lost because of a risk event (= probability × impact). Risk reassessment taking into account the planned prevention and mitigation measures reveals the EMV increment attributable to the risk-management measures. RBSP assumes this to be the theoretical value of “perfect” information (VoIperfect) for the package of surveillance that supports these measures.
RBSP links each potential surveillance opportunity to the risk-management plan by determining how the surveillance would (a) help create or strengthen prevention measures; (b) detect the impending occurrence of a risk event, thus enabling mitigation measures to be triggered; or (c) help monitor both prevention and mitigation measures to detect any weaknesses that require improvement. The VoIperfect of a surveillance opportunity with respect to a given risk can then be estimated by attributing to it a proportion of the total VoIperfect of the surveillance package by use of a simple criticality score to estimate how dependent the risk-management measures are on each item of surveillance. A reliability score then discounts this VoIperfect on the basis of how likely the surveillance is to deliver the necessary information in reality, yielding the value of imperfect information. Where an item of surveillance affects several risk-management measures and multiple risks, the values are summed, providing a VoI for each item of surveillance.
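The EMV and allocation arithmetic described in the two preceding paragraphs can be sketched in a few lines. The criticality and reliability weights and all monetary figures below are hypothetical, and the exact scoring scales are not specified in the abstract:

```python
def emv_lost(probability, impact):
    """Expected monetary value lost to a risk event (= probability x impact)."""
    return probability * impact

def voi_perfect_of_package(p_before, p_after, impact):
    """EMV increment attributable to the risk-management measures, taken by
    RBSP as the value of perfect information for the surveillance package."""
    return emv_lost(p_before, impact) - emv_lost(p_after, impact)

def voi_of_item(package_voi_perfect, criticality, reliability):
    """Allocate a share of the package VoI to one surveillance item via a
    criticality weight (0-1), then discount by a reliability score (0-1) to
    yield the value of imperfect information for that item."""
    return package_voi_perfect * criticality * reliability

# Illustrative numbers only (USD million): measures cut the risk probability
# from 30% to 10% against a 100 impact; one item carries half the package.
pkg = voi_perfect_of_package(p_before=0.30, p_after=0.10, impact=100.0)  # 20.0
item_voi = voi_of_item(pkg, criticality=0.5, reliability=0.8)            # 8.0
```

Where one item of surveillance supports several risks, the per-risk `item_voi` values would simply be summed, as the text describes.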
In case studies of an asphaltene-precipitation risk in a deepwater oil field and a water-influx risk in an offshore gas field, RBSP helped to create a logical value-based rationale for surveillance decisions, and the surveillance, when implemented, did facilitate effective management of the risks and added value.