Accelerated History Matching Through Process Independent Scale-up Techniques in a Giant Carbonate Reservoir
Suzuki, K. (Japan Oil Development Co. Ltd.) | Asada, J. (Japan Oil Development Co. Ltd.) | Yoshida, K. (Japan Oil Development Co. Ltd.) | Nomura, M. (Technology Research Center, Japan National Oil Corporation)
Abstract Although upscaling has been extensively investigated, the quantitative relationship between a multi-million-block geological model and the upscaled flow model is likely to be lost during the history-match process, primarily because parameters are modified on the upscaled model. In this paper, we propose a new history-match procedure combined with an innovative, faster upscaling technique. In the proposed procedure, after each history run, parameter modification for the next run is done on the detailed geological model in close collaboration with the geologist. This loop was made possible by a new, faster upscaling technique, distinct from existing methods, that adopts empirical correlations to compute upscaled absolute and relative permeabilities. A capillary-limit upscaling technique was applied in the case study, since the target reservoir is capillary dominated. In the case study, a history match was conducted on a sector of an oil-wet carbonate reservoir under waterflooding for 20 years. In each modification, the revised geological model of three million blocks was upscaled to a flow model of 60 thousand blocks in a process-independent manner with a scale-up factor of 50. The CPU speed-up factor was approximately 200: 10 days for the detailed model with massively parallel computing versus an hour for the upscaled model on a personal computer. Even with such a significant reduction in CPU time, it was confirmed that the upscaled model maintained the simulation quality of the detailed geological model. Introduction Recently, geological models have grown very large, e.g., multi-million-gridblock models, in order to describe heterogeneity in as much detail as possible. Direct transfer of such a huge geological model (the detailed geological model) to a flow model would result in prohibitively long CPU times in a flow simulation and is impossible even for today's computers in terms of the required memory.
In general, the objective of upscaling is to reduce the number of gridblocks in a flow model so that a flow simulation can be run in a reasonable amount of computation time. The requirement for upscaling is that the important characteristics of heterogeneity and multiphase flow be retained in the resulting upscaled flow model, which has far fewer gridblocks than the detailed geological model. In other words, upscaling is a technique that assigns representative properties to each simulation block of a coarse-grid model, given that a finely gridded model is available. In the literature, there are two major upscaling approaches: dynamic approaches and effective-properties approaches (Ekrann and Dale 1992). Dynamic approaches adopt pseudofunctions based on the simulation results of an entire reservoir model or a representative portion of a fine-grid model. Thus, these approaches give accurate results only for the imposed flow condition from which the pseudofunctions were derived. Moreover, the derivation is too time-consuming for very large geological models, and pseudofunctions have many other deficiencies, as criticized by Barker and Thibeau (1997). Effective-properties approaches, on the other hand, average flow properties from information on sub-grid-scale heterogeneity alone, with the local flow regime assumed to be an extreme: viscous, gravity, or capillary dominated. These approaches, which work on an isolated coarse-grid block, compute effective properties much faster than dynamic approaches and give accurate results if the assumption is appropriate for the actual flow field.
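The effective-properties idea can be sketched numerically. As an illustration only (this is a textbook layered-flow approximation, not the authors' empirical correlations), the effective x-direction permeability of a coarse block can be taken as the harmonic mean, along the flow axis, of the arithmetic means over transverse slices:

```python
import numpy as np

def upscale_perm_x(k_fine):
    """Approximate effective x-permeability of one coarse block.

    k_fine: 3-D array (nx, ny, nz) of fine-cell permeabilities.
    Arithmetic average over each transverse (y, z) slice, then a
    harmonic average along x -- exact for flow through perfectly
    layered media, an approximation otherwise.
    """
    slice_means = k_fine.mean(axis=(1, 2))          # arithmetic in y-z
    return len(slice_means) / np.sum(1.0 / slice_means)  # harmonic in x

# A uniform block upscales to its own value; a block layered along
# x upscales to the harmonic mean of the layer permeabilities.
```

Because each coarse block is treated in isolation, upscaling a multi-million-cell model reduces to a loop of cheap local averages, which is what makes the fast history-match loop described above feasible.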
Abstract Effective management of oil and gas fields over the field life, with the objective of maximising asset value, can be a complex affair. Application and integration of various skills, from the subsurface disciplines (geology, geophysics, petrophysics, reservoir engineering, production technology and drilling) to surface engineering, production operations/surveillance and economics, are required to manage technical, commercial and political challenges and uncertainties. In large organizations, there are the additional challenges of ensuring consistent work quality and process across multiple projects and technical personnel with different levels of skill and experience. Subsurface and economics workflows over the field life, from acquisition of exploration or development areas through exploration, development, production and abandonment activities, were mapped, improved and integrated to ensure work is executed more efficiently and correctly. This paper describes the techniques and activities involved in developing this workflow, which incorporates data and information management, technology mastery, and the promotion of a collaborative and integrated multidiscipline approach. It was developed and deployed in two phases over 3 years and became the first web-enabled workflow in the organisation. Pilot implementation was carried out on a number of domestic, international and outsourced Field Development and Field Review projects. Case histories of the application, the benefits gained and the lessons learnt are discussed in this paper. Introduction Upstream assets in the Exploration and Production business are managed over the different phases of the life of the assets or fields. These typical phases are acquisition, exploration, development, production and abandonment, as shown in Figure 1. During these phases, different combinations of skills and technical disciplines are required to manage and maximise value from the assets.
The necessary skills and disciplines reside in either the competency departments or multidisciplinary project teams, as shown in Figure 2. Challenges in managing and maximising the value of these oil and gas fields include:
- Quality of work done by each discipline, as the overall result is only as good as the lowest-quality or weakest link in the chain of activities
- Integration and collaboration between disciplines, to ensure discrepancies between disciplines are avoided
- Subsurface uncertainties, with limited information from seismic data and exploration and appraisal wells
- Political and commercial uncertainties, e.g. oil prices, war, civil unrest and political changes
- Health, safety and environment concerns due to the hazardous nature of oil and gas, from fires to spillage into the environment
- Capital intensity and relatively higher risk compared to other industries, due to the nature of the uncertainties
- Dealing with countries and communities with different political, business and cultural systems and beliefs
- Maturing assets requiring intensive work
- Declining production and decreasing success rates in making major discoveries with large commercial quantities of oil or gas
- Increasing and more costly technical challenges, e.g. deepwater, tight formations and high sour-gas content
- The need to train and motivate younger professionals to replace an aging workforce
- Sustaining business growth to increase the value expected by shareholders and other stakeholders
Seismic and Geologic Evidence of Complex "Positive Flower" Structures Along Confining Bends in Strike Slip Faults, Lake Maracaibo, Venezuela
Kramer, Darrell (International Reservoir Technologies, Inc.) | List, David (International Reservoir Technologies, Inc.) | Tillman, Laura (International Reservoir Technologies, Inc.) | Jagiello, Keith (International Reservoir Technologies, Inc.) | Gould, Thomas (International Reservoir Technologies, Inc.)
Abstract Since 1996, International Reservoir Technologies (IRT) and Petroleos de Venezuela, SA (PDVSA) have completed four (4) integrated studies within the Block V Lamar, Block VI Lamar, Block V Centro, and Centro Sur Lago areas of Lake Maracaibo, Venezuela. These studies have identified four major and numerous minor "positive flower" structures along confining bends of three major strike-slip faults. These flower features provide the structural trap that has accounted for some 623 MMstb of oil production from the Eocene Misoa B and C sands. Positive flower structures are often dismissed as non-existent and are often interpreted as reactivated normal faults with reverse movement. Although normal displacement did occur in the examples shown in this paper, we will show several examples of bounding arcuate reverse faults that coalesce into a single fault at depth, commonly referred to as "Positive Flower" structures. 3D seismic, in conjunction with dipmeters, logs, fluid contacts, engineering tests and production data, provided the necessary information to build a detailed, comprehensive and consistent 3D geologic model of these structures. The integration and use of all possible data sources was key to accurately mapping the associated reservoirs. The studies have led to improved knowledge of the timing and development of flower structures with respect to hydrocarbon migration and trapping. Production continues in these structures, and development drilling continues both within the flower structures and to drain oil that has been trapped on the underside of bounding reverse faults. The integrated study approach has led to optimized development of these complex "Positive Flower" structures, and it is also applicable to other structural and stratigraphic regimes.
Introduction Positive flower structures develop along restraining or convergent bends in a strike-slip fault (Harding, 1985), where the change in the direction of the trace of the fault is opposed to the sense of the offset. The result is an en echelon series of uplifted blocks and folded structures that are separated by reverse faults that steepen and root into a main strike-slip fault. The first mention of these faults is reported to be by Rod (1956) at outcrops along the Oca Fault, Venezuela. Since that time numerous others have reported the existence of positive flower structures (Harding, et al, 1983), (Schlische and Ackermann, 1995), (Robb, et al, 2002), (Lindenbeck, et al, 1997), (Sylvester and Smith, 1976), (Edwards and Kidd, 1996), (Holdsworth, 1989), and (Stea, 1995). Pull-apart structures along releasing bends in strike-slip faults generate negative flower structures (Litak, et al, 1998), (Aksu, et al, 2000), (Nemčok and Nemčok, 1998). In the Maracaibo Basin area of northern Venezuela a series of strike-slip faults developed near the end of the Eocene as the South American plate continued its north and west migration and interacted with the Caribbean plate. The major strike-slip faults that developed are shown in Figure 1 and are, from west to east, the Tigre, Icotea, Pueblo Viejo and Valera faults (Lugo, 1991). These faults are believed to have originated as normal faults during the Jurassic extensional phase and were periodically reactivated from Jurassic through Early Miocene time. Displacement along some of these major faults, including the Icotea Fault that passes just west of Block V Centro and Block VI Lamar, changed from extensional to left-lateral strike-slip sometime during Middle to Late Eocene time as oblique compression occurred between the Caribbean and South American plates (Pindell et al, 1998). Several of these strike-slip faults have formed "pop-up" features along the location where the rooted fault changes direction.
The following discussion details the integrated methodology used to map these features.
Abstract A software tool is described that enables optimization for maximizing asset value, both with and without uncertainty. Modular architecture allows examination of different objective functions, optimization schemes and financial models. The method generates an efficient frontier that can be used for risk and decision analysis. The tool is demonstrated in the context of a field example optimizing for Net Present Value (NPV). Valuation of advanced completions is explored along with the returns gained from expanding surface gas handling facilities. The results clearly demonstrate the utility of such a tool for value-maximization in planning both near and long-term time horizons as well as providing the necessary foundation for maximizing asset value. Introduction In this article we demonstrate an application of asset value maximization through optimization of an existing infill program for a mature real onshore oil and gas field. We optimize a history-matched reservoir model and provide confidence levels under uncertainty by generating efficient frontiers. Application of search or optimization algorithms has been the subject of numerous studies and articles both inside and outside the petroleum industry. Following in particular the work of Raghuraman et al. this article considers a real reservoir and attempts to maximize its value by analyzing various exploitation scenarios. This article first describes the main features of the software tool: the overall methodology and different optimization schemes. It then applies the optimization process to the field example. The main objective of the study is to maximize asset value, with and without the presence of uncertainty. The efficient frontier is discussed and its use for risk management and decision-making demonstrated. Methodology The process of optimizing a reservoir, under the assumption that everything is deterministically known, is relatively straightforward. 
One may want to extract the maximum fraction of oil, minimize water production, or maximize the net present value (NPV) of the oil produced by optimally controlling various operational variables (e.g., individual completion flow rates), all the while accounting for physical constraints (e.g., single-well production or pump/valve limitations) and economic constraints (e.g., drilling, logging or stimulation costs). However, the presence of physical and/or financial uncertainties elevates the optimization problem to a risk-management problem. A software tool has been developed with an easy-to-use, dialog-driven interface that encompasses the necessary elements to perform reservoir optimization under uncertainty and to provide the risk analysis necessary for decision-making. A detailed description of the process, with an example on reservoir monitoring and control, is given in Raghuraman et al. Figure 1 shows a schematic of the algorithm for a problem with uncertainty. The control (or operational) variables can, essentially, be chosen to be anything contained in the reservoir simulator input "deck". We have considered liquid flow rates at various completions, flowing wellhead pressure, and multiple "action" parameters, which act, for example, as triggers of a shut-in when too much water is recorded in a given completion. These control variables, which the optimizer modifies in its iterative process to achieve an optimal objective function, need to be distinguished from the uncertainty parameters. The latter, contained in the reservoir, are effectively uncontrollable and most of the time not known precisely or even to good accuracy. They may include the pore volume, fault transmissibilities, the permeability or porosity (regionally or field-wide), the productivity index of the aquifer supporting the water displacement for oil production, etc.
Similarly, there might be many financial parameters that are uncontrollable and known to vary randomly in time, oil price being the most obvious.
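The split between a deterministic NPV objective and its behavior under an uncertain parameter can be sketched as follows. This is a minimal illustration, not the paper's tool: the production profile, cost figures and the normally distributed oil price are all assumed for the example.

```python
import numpy as np

def npv(cashflows, rate):
    """NPV of cashflows received at the ends of years 1..n."""
    cashflows = np.asarray(cashflows, dtype=float)
    t = np.arange(1, len(cashflows) + 1)
    return float(np.sum(cashflows / (1.0 + rate) ** t))

def npv_distribution(yearly_oil_stb, capex, opex_per_yr,
                     price_mean=50.0, price_sd=10.0,
                     disc_rate=0.10, n_trials=5000, seed=0):
    """Monte Carlo NPV for a fixed yearly production profile (stb/yr)
    under an uncertain, normally distributed oil price ($/stb).
    All figures are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    prices = rng.normal(price_mean, price_sd, n_trials)
    yearly_oil_stb = np.asarray(yearly_oil_stb, dtype=float)
    samples = np.array([
        npv(yearly_oil_stb * p - opex_per_yr, disc_rate) - capex
        for p in prices
    ])
    # (mean, std) of NPV: one candidate point on an efficient frontier.
    return samples.mean(), samples.std()
```

Repeating the evaluation for each development scenario and plotting expected NPV against its standard deviation yields the efficient frontier used for the risk and decision analysis described above.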
Abstract The purpose of this study was to develop and apply a knowledge value chain (KVC) model. The model is based on Drucker's next society, Porter's value chain, Nonaka's spiral of knowledge, Kaplan and Norton's balanced scorecard, Gardner's multiple intelligences, and Pareto's 80/20 rule. An optimization technique is included in the model to identify the nodes (individual employees or groups of employees) of the critical value chain (CVC) and the optimum value chain (OVC). The KVC model includes a business phase and a numerical phase composed of input knowledge, knowledge activities, and output goals. The input knowledge can come from intranet, extranet, and Internet channels and can be tacit or explicit. Whatever the source, input knowledge converges toward an enterprise information portal (EIP). The value-added knowledge activities are based on socialization, externalization, combination, and internalization of knowledge. The knowledge value chain can be determined from output goals based on selected multiple values. We applied the KVC model when analyzing employee value contributions at K Company in Taiwan. We selected five values (goals): customer, financial, learning and growth, internal business process, and innovation. The results of our application of the KVC model offered three key findings. First, the strengths and weaknesses of each employee's core competence were examined. Second, the critical nodes (winners' nodes, or those of high-performance employees) for each goal were detected, and the selected critical nodes were gathered together to share their best practices. Third, optimum goals to reach the maximum value contribution to K Company were calculated. Introduction In 1996, the OECD reported that the knowledge-based economy would change the activities of the global economy. Knowledge has become the main driving force for production and economic growth.
The World Bank also indicated that economic growth in the Knowledge Economy Era is based not only on human capital accumulation and core capability applications, but also on the acquisition and utilization of information and knowledge. In recent years, much interdisciplinary research, in fields such as biochemical science, biotechnology, nanotechnology, information technology, and wireless communication technology, has resulted in a great revolution in human life. Peter Drucker, the most influential and widely read thinker and writer on future organizations, indicated that single-discipline technology will die out in the next society. He also pointed out that everybody has the same opportunity to acquire knowledge as a new production tool in the knowledge society. The purpose of this study was to develop an interdisciplinary model, a Knowledge Value Chain (KVC) model, to monitor and evaluate the contributions of knowledge activities. We applied the KVC model to examine the strengths and weaknesses of individual employees' core competences, to identify the critical knowledge workers, and to calculate their maximum value contribution. Literature Review The KVC model is based on Drucker's next society, Porter's value chain, Nonaka's spiral of knowledge, Kaplan and Norton's balanced scorecard, Gardner's multiple intelligences, and Pareto's 80/20 rule. In addition, an optimization technique is included to identify the nodes of the critical value chain and the optimum value chain.
Abstract A remote start-up facility has been initiated and implemented at some Petronas Carigali Sdn Bhd (PCSB) offshore installations as one of the initiatives to reduce production deferment and optimize resources. Conventionally, operations personnel are required to visit the remote platform to perform a local/manual start-up whenever the platform has shut down due to a process or other abnormality. The process upset may originate from the platform itself or from a cascading effect after the receiving platform has shut down. During bad weather, travel by boat to the remote platform is difficult and normally takes longer. There are also cases where the start-up has to wait on weather, when travel by boat is not possible due to rough seas. The delay due to traveling time, and the further delay during bad weather, postpone the restart of the platform, and this start-up delay results in production deferment. The implementation of the remote start-up facility enables operations personnel to start up the drilling platform remotely; it is then no longer necessary to travel to the platform for a local/manual start-up. Based on the study conducted, restart delay due to traveling time contributes significantly to production deferment. Therefore, if the start-up can be done remotely, i.e. from the mother platform or FPSO, production deferment can be reduced significantly. With a remote start-up facility, the total downtime, and thus production deferment, can be minimized not only in bad weather but also in normal weather, as travel to the platform is no longer required. This production deferment would accumulate from year to year if not addressed. The facility has been implemented on the MASA platforms, where the existing control and safety shutdown systems were modified and additional instrumentation was installed.
Only minor modification at relatively marginal cost was required, yet it is capable of generating a relatively significant reduction in deferment. At the MASA platforms, the remote start-up is initiated from the FPSO by opening the respective surface safety valves (SSV), shutdown valves (SDV) and choke valves (CV). Introduction The remote start-up facility is a step towards fully remotely operated platforms, whereby some of the manual functions required for starting up the platform, such as production-well opening and choke-valve opening, are automated for remote control. Existing remote drilling platforms are normally equipped with minimal facilities and do not have comprehensive and sophisticated remote monitoring and control capability. Therefore, a study needs to be conducted to identify the scope of the modification, and it has to be economically justified before proceeding with engineering design and implementation. It is also necessary to ensure the modified facility is safe enough for remote operation. For new installations, this function can be incorporated during the design stage as part of the overall platform control philosophy. Normally, remote start-up is permitted only after a Process Shutdown (PSD) or Unit Shutdown (USD). It is not provided for an Emergency Shutdown (ESD), which is normally due to a serious abnormality, such as fire and gas detection or a leak, that requires rectification or isolation prior to any safe start-up. The process upset needs to be normalized prior to start-up, either manually controlled from remote or on auto-reset.
Abstract In a finite-difference scheme, continuous multiphase flow variables that appear in the conservation equations, such as saturation, are spatially discretized on grid blocks. When coarse grid blocks are used for reservoir simulation, the numerical solution tends to magnify the discretization error, causing numerical errors known as the coarse-grid effect. In a coarse-grid reservoir simulation, the injection pressure differed considerably from that of the fine-grid model. This is because total mobility at the injection well is defined by the finite-difference-based average saturation in the injection-well grid block; in other words, the saturation gradient that would appear in the near-well region is ignored in such a coarse-grid system. In this study, the areal average saturation in the injection-well grid block was calculated analytically by a newly derived radial-displacement approximation, extended from the Buckley-Leverett linear displacement problem, taking into account the pressure difference between the injection well and the injection-well grid block. Consequently, a corrected total mobility was provided by defining new values of the total well index and transmissibility. The injection pressure computed by this technique for the coarse-grid model agrees reasonably with the numerical solution of the fine-grid model. The practical application of the developed well-pseudo is validated through an actual reservoir simulation study. Introduction Upscaling is one of the reservoir simulation techniques used to reduce the number of cells in a geological model to a level appropriate for flow simulation. The upscaling process defines a coarse-grid system, in which each coarse grid block contains several fine grid blocks, and assigns effective flow properties to each coarse grid block, accounting for the flow fields simulated in a fine-grid model.
In the near-well region, pressure changes drastically in the radial direction, so the upscaling approach for the linear-pressure region cannot be applied. Therefore, another approach, i.e. a well-pseudo that accounts for such pressure change in the near-well region, is required. Historically, well-pseudos were first introduced to describe coning problems. In the literature, only a few papers treat upscaling in the near-well region. Durlofsky et al. showed an approach to calculate transmissibility and well index for single-phase flow. This method is based on the solution of local well-driven flow problems subject to generic boundary conditions. Simulation results with their method indicate improved predictions compared to those obtained by conventional techniques. However, they did not take frontal advance into consideration. We developed a new well-pseudo to account for the flow fields representing the near-well region and frontal advance. The new analytical well-pseudo is derived by discretizing a continuous solution to a Buckley-Leverett-type radial displacement. The approach is an extension of Hewett et al., which analytically calculates the pseudofunctions required for discretization on a coarse grid. In this paper, we first describe the details of the new well-pseudo. Secondly, several characteristics of coarse-grid effects are revealed through calculations of well-pseudos with changing reference relative permeabilities. Finally, the practical approach of the developed well-pseudo is validated through application to an actual reservoir model. Derivation of the Analytical Well-pseudo In this section, continuous analytical solutions for single-phase and two-phase flow problems are described. Assumptions are given for a radial flow system of incompressible fluid with no gravity and no capillary forces. Single-Phase Property Under a single-phase radial steady-state flow condition, the pressure distribution around an injector is given by the following equation.
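The equation itself did not survive extraction; the standard single-phase Darcy solution for steady radial flow, presumably the intended expression, is

```latex
p(r) = p_{wf} - \frac{q\,\mu}{2\pi k h}\,\ln\!\left(\frac{r}{r_w}\right)
```

where $p_{wf}$ is the flowing pressure at the injector of rate $q$, $\mu$ the fluid viscosity, $k$ and $h$ the layer permeability and thickness, and $r_w$ the wellbore radius. The logarithmic decline of pressure away from the wellbore is exactly the radial variation that a single coarse-block average cannot represent, which motivates the well-pseudo correction described above. (This is the textbook result, not necessarily the exact form or notation used in the paper.)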
Abstract This paper describes the results of a petrophysical study aimed at improving the reservoir description of a complex oil reservoir located in Western Siberia, Russia. The K-field was put on production in 1998. The reservoir rocks were deposited in a sand-rich deltaic environment, which implies reasonably good lateral continuity of sand bodies. Additionally, no evident continuous faults were seen in the 3D seismic survey. However, recent field development has revealed far more complex reservoir compartmentalization and heterogeneity than was supposed at the exploration stage. A considerable variation of the oil-water contact was encountered, and a large discrepancy was observed between log-predicted and observed well productivity. Additionally, core experiments demonstrated a poor correlation between porosity and permeability, with two orders of magnitude of permeability variation at similar porosity values. The hydraulic flow unit (HFU) approach is used in the present study as an integrating tool for petrophysical description of the reservoir. First, we start from a rock-type classification based on core data to pick out the major HFUs constituting the reservoir. We employ a combination of routine and special core analysis data to work out a resulting rock-type classification for the studied field. Along with a description of the HFU classification, we present the underlying geological reasons which control FZI variation for the K-field reservoir. Based on geological and physical background, we select relevant log types related to FZI and propose a simple regression approach for prediction of FZI using gamma-ray and effective-porosity log readings. The predictive capability of the proposed regression approach is compared with more complex statistical techniques, and we show that the regression approach has reasonable accuracy of predicted FZI, comparable to Bayesian inference and artificial neural network methods.
Then we use HFU distributions obtained for each logged well along with classified capillary pressure data for calculation of synthetic water saturation logs and estimation of the free water level (FWL) depth. The estimated FWL data supplemented with seismic information about possible discontinuities and faulting zones are used to update reservoir compartmentalization scheme and to delineate pay zones. The 3D spatial distribution of the HFU is obtained using geostatistical modeling techniques and integrated into the geological reservoir model to define the porosity-permeability relationships. Additionally, HFU are used in the dynamic reservoir model to assign regions of relative permeability and capillary pressure functions. The above described simple engineering application of the hydraulic flow units approach allowed us to construct an improved reservoir model and delineate a new probable pay zones, what lead to an increase of estimated oil in place by 70 %. Introduction The HFU approach is a methodology for classification of rock types and prediction of flow properties, based on sensible geological parameters and the physics of flow at pore scale. The theory of the method was originally suggested by Amaefule et al [1] and further developed by other researchers [2]. Development and application of the HFU approach is stimulated by the common problem of permeability prediction in uncored but logged wells. Classical approaches for estimation of permeability are based either on simple logarithmic regressions evaluating permeability from log-derived porosity (Eq. 1) or on empirical correlations which relate permeability to various log responses.Equation 1 Both traditional approaches are empirical and have no or little physical and geological background. The regression methods deliberately ignore experimental data scatter and predict smoothed permeability distributions, which usually do not reproduce observed variability of permeability. 
The other empirical correlations, based on various log responses, usually have limited local applicability, since they were derived for particular geological settings.
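The FZI arithmetic underlying the HFU classification is standard (Amaefule et al [1]); a minimal Python sketch of the core relations follows. The function names and the example HFU mean values are illustrative, not taken from the paper:

```python
import math

def fzi(k_md, phi):
    """Flow zone indicator (Amaefule et al.):
    RQI   = 0.0314 * sqrt(k / phi)   (k in mD, phi as a fraction; RQI in um)
    phi_z = phi / (1 - phi)          (pore-to-matrix volume ratio)
    FZI   = RQI / phi_z
    """
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z

def permeability_from_fzi(fzi_val, phi):
    """Invert the FZI definition: once a sample is assigned to an HFU
    (i.e. given a representative FZI), permeability follows from porosity."""
    phi_z = phi / (1.0 - phi)
    return phi * (fzi_val * phi_z / 0.0314) ** 2

def assign_hfu(fzi_val, hfu_mean_fzis):
    """Assign a sample to the HFU whose mean FZI is nearest in log space
    (FZI populations are roughly log-normal within a unit)."""
    return min(range(len(hfu_mean_fzis)),
               key=lambda i: abs(math.log(fzi_val) - math.log(hfu_mean_fzis[i])))
```

For example, a core plug with k = 100 mD and φ = 0.20 has FZI ≈ 2.8 μm; this is the quantity the paper predicts from gamma-ray and effective-porosity logs before mapping it back to permeability per HFU.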
- Research Report > Experimental Study (0.34)
- Research Report > New Finding (0.34)
- Geology > Mineral > Silicate (0.75)
- Geology > Sedimentary Geology > Depositional Environment > Transitional Environment > Deltaic Environment (0.54)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock (0.47)
- Geophysics > Seismic Surveying (1.00)
- Geophysics > Borehole Geophysics (1.00)
Abstract The proposed revitalization of the Bokor field is affected by many surface-facility constraints as well as the subsurface complexity of laminated stacked reservoirs. Such complex problems require a truly integrated multidisciplinary methodology, using a decision-and-risk-assessment approach to screen development options. This paper presents the process adopted to screen and rank the various options. Any field redevelopment needs to begin with an identification of system bottlenecks and proposals to remove them, but often this is considered only at the facilities level. In proposing a revitalization plan for the Bokor field, the joint Petronas Carigali Sdn. Bhd (PCSB)-Schlumberger team had to develop a complex decision-making process to select an optimal development plan that would resolve the constraints while minimizing unit technical operating cost (UTOC) and maximizing production. Often, brownfield redevelopments are limited to maximizing existing facilities (slots, processing plant). In the case of Bokor, however, essentially no additional oil can be produced without an impact on existing production (lack of gas lift, lack of export compression). Thus, an approach was required that not only solved the existing bottlenecks but also allowed for sufficient additional capacity to sustain a cost-effective revitalization plan. The process had to be iterative, assessing the risks and uncertainties of each approach and its economic impact. The solution was an approach that scored each reservoir target by discounted cumulative oil, producing a ranked list of opportunities that could be added together to form any development scenario. Hence, rather than running a reservoir simulation on each development case, this approach allowed multiple facility development options to be tried quickly. Each development scenario was then analysed using a tornado diagram, from which a decision tree was prepared that gave the most likely outcome.
This approach allowed options as diverse as subsea tiebacks and fully integrated platforms with a high number of electrical submersible pumps (ESPs) to be risked and ranked equitably within a reasonable timeframe. Introduction Field optimization, in brownfield environments in particular, is an area where the subsurface engineering, production technology, and drilling and facility engineering disciplines frequently conflict in selecting the most technically appropriate field redevelopment strategies and plans, despite their apparently aligned interests in maximizing and optimizing reservoir drainage. The conflict arises because of the constraints imposed by any existing infrastructure, such as existing locations versus new well target locations, existing facilities capacity versus newly required capacity, or the mechanical integrity of existing infrastructure, relative to a greenfield development where no such predetermined constraints exist. Consequently, the most effective and economic solutions may not be easy to establish without a powerful holistic modelling approach. In addition, the paradigms of deterministic and probabilistic thinking, prevalent to varying degrees within these disciplines, introduce a further dynamic that lends itself to resolution or conciliation through the use of an integration process and tool.
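The ranking step the abstract describes can be sketched in a few lines of Python: each target is scored by discounted cumulative oil and the list is sorted best-first, so scenarios can be composed by summing targets from the top without rerunning a simulator per case. The names, forecasts, and discount rate below are illustrative assumptions, not figures from the Bokor study:

```python
def discounted_cum_oil(annual_oil, discount_rate=0.10):
    """Discounted cumulative oil: each year's produced volume is discounted
    back to time zero, so early barrels weigh more than late ones."""
    return sum(q / (1.0 + discount_rate) ** (t + 1)
               for t, q in enumerate(annual_oil))

def rank_targets(targets, discount_rate=0.10):
    """Rank reservoir targets (name -> yearly production forecast, e.g. in
    MMstb/yr) by discounted cumulative oil, best first."""
    return sorted(targets,
                  key=lambda name: discounted_cum_oil(targets[name],
                                                      discount_rate),
                  reverse=True)

# Illustrative use: a front-loaded target outranks a slower, equal-volume one.
targets = {"A": [1.0, 1.0, 1.0], "B": [3.0]}
ranking = rank_targets(targets)
```

Scoring by discounted volume rather than NPV keeps the ranking independent of price assumptions, which is what lets many facility options be screened quickly against one fixed list.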
- North America > United States > Texas (1.00)
- Asia > Malaysia > Sarawak > South China Sea (1.00)
- Asia > Malaysia > Sarawak > South China Sea > Sarawak Basin > Baram Delta Province > Bokor Field (0.99)
- Asia > Malaysia > Sarawak > South China Sea > Sarawak Basin > Baram Delta Province > Block SK307 > Betty Field (0.99)
- Asia > Malaysia > Sarawak > South China Sea > Sarawak Basin > Baram Delta Province > Block SK307 > Baronia Field (0.99)
Abstract Uncertainty resolution in recovery driving factors is a crucial issue in oil and gas exploration and exploitation activities. One such application is selecting field candidates for development in the midst of several stochastic variables, e.g. recoverable reserves, production rates, gas/oil prices, and discount rates. Unfortunately, the cash flows from investments tend to be stretched out over several years. Traditionally, to measure such inherent potential risks, engineering analysis is performed along with project economics. Engineering analysis builds confidence around recovery factors, whereas the economic analysis justifies the returns through NPV and IRR computation. One implicit yet critical behavior, however, is often neglected: investments tend to have embedded flexibilities, i.e. they are not always of the now-or-never type, and these flexibilities can be valued as flexible decisions via the mathematics of financial options. This paper investigates the application of financial mathematics to determine the viability of a tight gas reservoir as an investment candidate and presents some interesting results dictated by the intrinsic optionality. Real options (RO) have gained increasing popularity in the oil and gas community in the recent past. These applications employ analytic solutions of the Black-Scholes-type partial differential equation (PDE) available in the literature, which are usually limited to cases with relatively simple payoff structures, exercise strategies, and low dimensionality. However, most realistic cases will, at the very least, have complex payoff structures, e.g. the cascading cash flows in the example considered in this paper. Moreover, these options can be exercised at any time during the life of the project, i.e. they are American-type options. To compute option values accurately and rigorously in such cases, one has to resort to a suitable numerical approach that solves the PDE along with the appropriate, complex boundary conditions.
In this paper, we prescribe one such technique, namely Implicit Multigrid Finite Difference (IMFD), to assess a two-well pilot development program. Based on a previous engineering study, stochastic driving factors for the investment are determined. The investment is then cast in an RO framework. The representative PDE is solved with the imposed boundary conditions. Two different scenarios are considered and the results discussed. Finally, exercise boundaries are forecast, above which, at a given time in the course of the project life, it is beneficial to exercise the option (in this example, drill the development well) instead of waiting for a future exercise. Background and Motivation The traditional Discounted Cashflow (DCF) approach suffers from one significant drawback: it assumes a single decision pathway with fixed outcomes. In practice, however, decisions are actuated as uncertainty is resolved and as management gains knowledge. This high uncertainty, coupled with management's flexibility to make midcourse strategy corrections as more information becomes available, is the essence captured by the real options approach. Management always acts rationally and with judgment, whereas DCF-based economic valuation never accounts for this way of acting. Occasionally, DCF results are further refined by scenario analysis, tornado diagrams, and Monte Carlo simulations of sensitive variables. Nevertheless, none of these can capture the value of flexibility and knowledge. The knowledge gained ultimately resolves uncertainty around key variables, which in turn aids rational decision making. This age-old disconnect between theory and practice has motivated researchers to borrow ideas from financial options and construct a new framework for valuing projects through real options [1, 2].
Although slow to gain widespread popularity, the literature suggests that there are numerous business instances where real options analysis can deduce a much higher value than static NPV calculations. It must be mentioned, however, that projects which are deep in the money (strongly positive NPV) or deep out of the money (highly negative NPV) are easy to decide on and should not be taken to further real options analysis. It is the near-the-money (swing) projects that stand to benefit from the insights gained through real options analysis.
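The paper's multidimensional IMFD solver is beyond a short sketch, but the early-exercise (American) logic it implements can be illustrated with a standard Cox-Ross-Rubinstein binomial lattice: the underlying is the present value of the developed project, the strike is the development cost, and a cash-flow yield delta (production foregone while waiting) makes early exercise potentially optimal. All names and parameter values below are illustrative, not from the study:

```python
import math

def american_call_crr(S0, K, r, sigma, T, delta=0.0, n=500):
    """American call via a Cox-Ross-Rubinstein binomial tree.
    Real-options reading: S0 = PV of the developed project, K = development
    cost, delta = cash-flow yield lost while waiting, sigma = volatility of
    project value, r = risk-free rate, T = life of the option in years."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp((r - delta) * dt) - d) / (u - d)  # risk-neutral up prob.
    disc = math.exp(-r * dt)
    # Option values at expiry: intrinsic payoff at each terminal node.
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # Roll back through the tree, checking early exercise at every node.
    for i in range(n - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),  # wait
                S0 * u**j * d**(i - j) - K)                        # exercise
            for j in range(i + 1)
        ]
    return values[0]
```

With delta = 0 the early-exercise check never binds and the value collapses to the European (Black-Scholes) one; a positive delta is what creates the exercise boundary the paper forecasts, above which drilling now beats waiting.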