A Preconditioning Scheme For Full Waveform Inversion
Guitton, Antoine (GeoImaging Solutions Inc) | Ayeni, Gboyega (Stanford University) | Gonzales, Gladys (Repsol)
SUMMARY The waveform inversion problem is inherently ill-posed. Traditionally, regularization terms are used to address this issue. For waveform inversion, where the model is expected to have many details reflecting the physical properties of the Earth, regularization and data fitting can work in opposite directions: the former smoothing and the latter adding details to the model. In this paper, we constrain the velocity model with a model-space preconditioning scheme based on directional Laplacian filters. This preconditioning strategy preserves the details of the velocity model while smoothing the solution along known geological dips. The Laplacian filters have the property of smoothing or killing local planar events according to a local dip field. By construction, these filters can be inverted and used in a preconditioned waveform-inversion scheme to yield geologically meaningful models. We illustrate on a 2-D synthetic example how preconditioning with non-stationary directional Laplacian filters outperforms traditional waveform inversion when sparse data are inverted. We think that preconditioning could benefit waveform inversion of real data, where irregular geometry, coherent noise, and lack of low frequencies are present.

INTRODUCTION The goal of waveform inversion is to derive physical properties of the Earth, such as P-wave velocity, S-wave velocity, or density. These properties can be related to the presence of hydrocarbons in the subsurface, and their estimation is one of the most important goals in seismic processing. Traditionally, ill-posed problems are solved by adding a regularization term to the objective function. Very often, a regularization term that penalizes differences between neighboring points is selected. However, whereas waveform inversion tends to add details to a velocity model, regularization tends to smooth them out, thus working against our primary goal: fitting the data.
One way to address these somewhat conflicting goals is to use preconditioning. Here, we show how we can geologically constrain the velocity model by using a non-stationary preconditioning approach. This method requires two ingredients: a dip estimation method and a local dip filtering technique. We use the method of Fomel (2002) for the former and that of Hale (2007) for the latter. In this paper we first introduce the waveform inversion approach, with and without preconditioning. We show that preconditioning amounts to a simple change of variable which, in effect, changes the gradient direction. Then, we present our method of local dip filtering, which follows Hale's. Finally, we present synthetic results on a modified version of the Marmousi model. These results demonstrate that non-stationary, preconditioned inversion yields geologically plausible models.

DEFINING THE PRECONDITIONING OPERATOR S Preconditioning amounts to a change of the gradient direction. For waveform inversion, a gradient that embeds some geological information could help yield more meaningful velocity models. To this end, we follow the approach of Hale (2007) for the construction of the operator S. In this construction, the operator becomes a non-stationary deconvolution with directional Laplacian filters. Directional Laplacian filters are built from small wavekill filters A, similar to those of Claerbout (1995). Wavekill filters have the ability to annihilate local planar events with a given dip.
- Research Report > New Finding (0.48)
- Research Report > Experimental Study (0.34)
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (0.64)
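As a toy illustration of the wavekill idea in the abstract above (a minimal first-order sketch with an integer dip, not Hale's or Claerbout's actual filters), the filter subtracts from each trace its neighbor shifted along the assumed dip, so plane waves at that dip are annihilated while other dips leave a residual:

```python
import numpy as np

def wavekill(img, dip):
    """First-order wavekill filter: annihilates plane waves of slope
    `dip` (in samples per trace). Hypothetical sketch, integer dips only."""
    out = np.zeros_like(img)
    nz, nx = img.shape
    for ix in range(1, nx):
        # subtract the previous trace, shifted down by `dip` samples
        out[:, ix] = img[:, ix] - np.roll(img[:, ix - 1], dip)
    return out

# a plane wave dipping 2 samples per trace
nz, nx = 50, 20
img = np.zeros((nz, nx))
for ix in range(nx):
    img[10 + 2 * ix, ix] = 1.0

res = wavekill(img, 2)      # matched dip: event annihilated
res_bad = wavekill(img, 0)  # mismatched dip: residual remains
```

Stacking such a filter with its adjoint along a locally varying dip field gives the directional Laplacian used as preconditioner.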
Efficient Seismic Monitoring Of Hydrocarbon Reservoirs Using Multiple Shooting Vessels
Ayeni, Gboyega Olaoye (Stanford University) | Tang, Yaxun (Stanford University) | Biondi, Biondo (Stanford University)
Abstract We propose an efficient reservoir monitoring method that utilizes data from multiple seismic sources. Because this technique reduces acquisition time and cost, seismic data sets can be recorded cheaply at short, regular intervals. Although, in many cases, the recorded multiplexed data can be separated into independent records, we choose to leverage the efficiency of direct imaging of such data sets. However, direct imaging with a migration algorithm introduces cross-talk artifacts and does not account for differences in acquisition geometry and relative shot-timing between surveys. We propose a joint least-squares migration/inversion approach that attenuates artifacts caused by imaging cross-talk and by acquisition discrepancies in the recorded data sets. By incorporating spatial and temporal regularization in our inversion algorithm, we ensure that the resulting time-lapse images are geologically plausible. Using a 2D numerical model, we show that our method can give results of comparable quality to migrated single-source data sets.

Introduction Conventional seismic data acquisition involves a single seismic source and a recording array of receivers. Although not a new idea (Womack et al. 1990), recent advances in acquisition technology enable seismic acquisition with multiple sources (e.g., Hampson et al. (2008); Beasley (2008)). This acquisition approach, also called simultaneous shooting (or multi-shooting, or blended acquisition), can be used to achieve longer offsets, better shot-sampling, and improved time and cost efficiency (van Mastrigt et al. 2002; Berkhout et al. 2008; Howe et al. 2009). The recorded data can be separated into independent shot records and then imaged with conventional methods (e.g. Hampson et al. (2008); Spitz et al. (2008)), or they can be imaged directly (e.g., Berkhout et al. (2008); Tang and Biondi (2009)).
Although time-lapse (4D) seismic is an established technology for monitoring hydrocarbon reservoirs (Rickett and Lumley 2001; Whitcombe et al. 2004; Zou et al. 2006; Ebaid et al. 2009), it still has several limitations. First, because of the high cost of conventional (single-source) acquisition, it is impractical to acquire seismic data sets at short time intervals. Therefore, typical monitoring survey intervals may be too large to measure production-related, short-period variations in reservoir properties. Because of the large time intervals between seismic surveys, it may be difficult to match time-lapse seismic signatures to reservoir property changes derived from well-sampled sources (e.g. production history matching). Secondly, in many time-lapse seismic applications, inaccuracies in the replication of acquisition geometries for different surveys (nonrepeatability) are a recurring problem. Although modern acquisition techniques can improve repeatability of shot-receiver geometries, field conditions usually prevent perfect repetition. In order to isolate differences caused by changes in reservoir properties, non-repeatability effects must be removed from time-lapse data sets. Furthermore, because of operational, climatic, and other limitations, the acquisition time-window may be too small for conventional seismic data acquisition. In such cases, it would be difficult to acquire conventional seismic data sets at desirable intervals.
- Geophysics > Time-Lapse Surveying > Time-Lapse Seismic Surveying (1.00)
- Geophysics > Seismic Surveying (1.00)
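A minimal sketch of what blending means on the data side, assuming a shared receiver array and a known relative firing delay (all sizes and names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr = 200, 30              # time samples, receivers
shot1 = rng.standard_normal((nt, nr))   # stand-in shot records
shot2 = rng.standard_normal((nt, nr))

delay = 17                    # relative firing delay in samples (illustrative)
blended = shot1.copy()
blended[delay:] += shot2[:nt - delay]   # shot2 fires `delay` samples later

# one multiplexed record the size of a single shot is what gets recorded
assert blended.shape == (nt, nr)
```

Direct imaging works on `blended` as-is; deblending would try to recover `shot1` and `shot2` from it first.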
Joint Preconditioned Least-squares Inversion of Simultaneous Source Time-lapse Seismic Data Sets
Ayeni, Gboyega (Stanford University) | Tang, Yaxun (Stanford University) | Biondi, Biondo (Stanford University)
ABSTRACT We present a joint least-squares inversion method for imaging simultaneous-source (or blended) time-lapse seismic data sets. Non-repeatable shot and receiver positions introduce undesirable artifacts into time-lapse seismic images. We conjecture that more artifacts will result from relative shot-timing non-repeatability when data sets are acquired with several simultaneously shooting sources. We show that these artifacts can be attenuated by joint inversion of such data sets without the need for initial separation. Preconditioning with non-stationary dip filters and with temporal smoothness constraints ensures stability and geologically consistent time-lapse images. Results from a modified Marmousi 2D model show that this method yields reliable time-lapse images.
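A tiny matrix sketch of the joint idea, fitting two surveys while penalizing their difference; the imaging operators here are random stand-ins, not the wave-equation operators or dip-filter preconditioners of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
nm = 30                               # image points (tiny stand-in)
A1 = rng.standard_normal((40, nm))    # per-survey imaging operators
A2 = rng.standard_normal((40, nm))    # (random stand-ins, not wave equations)

m1 = rng.standard_normal(nm)          # baseline reflectivity
m2 = m1.copy()
m2[10:13] += 0.5                      # localized time-lapse change
d1, d2 = A1 @ m1, A2 @ m2             # two noise-free "surveys"

eps = 0.1                             # temporal-smoothness weight
# joint system: fit both data sets while penalizing m1 - m2
G = np.block([[A1,                 np.zeros((40, nm))],
              [np.zeros((40, nm)), A2               ],
              [eps * np.eye(nm),   -eps * np.eye(nm)]])
d = np.concatenate([d1, d2, np.zeros(nm)])
m = np.linalg.lstsq(G, d, rcond=None)[0]
m1_est, m2_est = m[:nm], m[nm:]       # joint estimates of both images
```

The difference `m2_est - m1_est` is the time-lapse image; the `eps` rows keep it small except where the data demand a change.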
An Approach For Quasi-continuous Time-lapse Seismic Monitoring With Sparse Data
Arogunmati, Adeyemi (Stanford University) | Harris, Jerry M. (Stanford University)
ABSTRACT An approach for quasi-continuous, geophysical time-lapse monitoring with sparse seismic data is proposed. This approach takes advantage of the small changes in the seismic properties of a geological reservoir that are expected to occur over a small time interval. The goal of this approach is to obtain high temporal and spatial resolution in reconstructed, time-lapse geophysical images using resources comparable to those that would provide high spatial but low temporal resolution images with conventional approaches. This is done by acquiring spatially sparse data at small time intervals. In this case, a spatially sparse dataset refers to a dataset that is a small fraction (as little as 5%) of what would be acquired to reconstruct a high spatial resolution tomographic image of the subsurface. The high spatial resolution of the proposed approach is achieved because unrecorded data are predicted from future and past data. With high temporal and spatial resolution, early detection of important reservoir changes is more likely.
- Geophysics > Time-Lapse Surveying > Time-Lapse Seismic Surveying (1.00)
- Geophysics > Seismic Surveying (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Reservoir Description and Dynamics > Formation Evaluation & Management > Seismic (four dimensional) monitoring (1.00)
- Data Science & Engineering Analytics > Information Management and Systems (1.00)
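A toy sketch of the prediction idea, assuming for simplicity that the neighboring surveys are complete and the property varies slowly in time (the paper predicts from sparse past and future data as well; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
ntr = 100                                    # traces in a full survey
# three snapshots of a slowly varying property (synthetic stand-in)
surveys = [np.sin(0.1 * np.arange(ntr)) * (1 + 0.01 * k) for k in range(3)]

# the middle survey records only a random subset of traces
# (the paper uses as little as 5%; 20% here for the toy)
mask = rng.random(ntr) < 0.2

# fill unrecorded traces by interpolating the past and future surveys,
# exploiting the assumption that the property changes slowly
pred = np.where(mask, surveys[1], 0.5 * (surveys[0] + surveys[2]))
err = np.abs(pred - surveys[1]).max()
```

Because the change here is linear in time, the interpolation is nearly exact; real reservoir changes make the prediction approximate, which is why the approach targets small time intervals.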
Wave-equation Tomography Using Image-space Phase Encoded Data
Guerra, Claudio (Stanford University) | Tang, Yaxun (Stanford University) | Biondi, Biondo (Stanford University)
ABSTRACT We extend image-space wave-equation tomography to the generalized source domain, where a smaller number of synthesized shot gathers is generated. Specifically, we generate synthesized shot gathers by image-space phase encoding. Comparing the gradients of the tomography objective functional obtained using image-space encoded gathers with those obtained using the original shot gathers shows that the encoded shot gathers can be used in wave-equation tomography problems. The advantages of using image-space encoded data are the decreased computational cost compared to that of the original shot gathers and the ability to perform velocity analysis in a target-oriented way. We illustrate our method on the Marmousi model.
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (0.70)
- Geophysics > Seismic Surveying > Seismic Processing > Seismic Migration (0.49)
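A minimal sketch of shot encoding with random polarities, one of the simplest phase-encoding choices; note the paper encodes in the image space, which this data-space toy does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(2)
ns, nt, nr = 8, 128, 16                     # shots, time samples, receivers
shots = rng.standard_normal((ns, nt, nr))   # stand-in shot gathers

# one random encoding weight per shot (random polarity)
weights = rng.choice([-1.0, 1.0], ns)

# the synthesized gather: one encoded sum replaces ns separate gathers
encoded = (weights[:, None, None] * shots).sum(axis=0)
```

Processing one `encoded` gather instead of `ns` gathers is the source of the computational savings; the random weights make cross terms average toward zero over realizations.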
Joint Target-oriented Wave-equation Inversion of Multiple Time-lapse Seismic Data Sets
Ayeni, Gboyega (Stanford University) | Biondi, Biondo (Stanford University)
ABSTRACT We propose a joint target-oriented wave-equation inversion method for multiple time-lapse seismic data sets. Complex reservoir overburden or differences in acquisition geometry degrade the quality of time-lapse seismic images. This degradation occurs because the imaging operator does not account for overburden and geometry artifacts. Under such conditions, time-lapse images are poor indicators of production-related changes in reservoir properties. To solve this problem, we pose time-lapse imaging as a joint linear inverse problem that utilizes concatenations of target-oriented approximations to the least-squares imaging Hessian. The proposed method outputs a baseline image and time-lapse images from multiple seismic data sets. Using a 2D synthetic sub-salt model, we show that this method attenuates overburden and geometry artifacts and that it gives reliable time-lapse seismic images.
- Africa (0.46)
- North America (0.28)
- North America > Canada > Saskatchewan > Western Canada Sedimentary Basin > Alberta Basin > Pikes Peak Field > Waseca Formation (0.99)
- Africa > Angola > South Atlantic Ocean > Lower Congo Basin > Block 17 > Girassol Field (0.99)
Least-squares Migration/inversion of Blended Data
Tang, Yaxun (Stanford University) | Biondi, Biondo (Stanford University)
SUMMARY We present a method based on wave-equation least-squares migration/inversion to directly image data collected with recently developed wide-azimuth acquisition geometries, such as simultaneous shooting and continuous shooting, where two or more shot records are often blended together. We show that by using least-squares migration/inversion, we not only enhance the resolution of the image, but more importantly, we also suppress the crosstalk or acquisition footprint, without any pre-separation of the blended data. We demonstrate the concept and methodology in 2-D and apply the data-space inversion scheme to the Marmousi model, where an optimally reconstructed image, free from crosstalk artifacts, is obtained.

INTRODUCTION High-quality seismic images are extremely important for subsalt exploration, but data collected with conventional narrow-azimuth towed streamers (NATS) often produce poor subsalt images due to insufficient azimuth coverage. Recently developed wide-azimuth towed streamer (WATS) (Michell et al., 2006) and multi-azimuth towed streamer (MATS) (Keggin et al., 2006; Howard and Moldoveanu, 2006) acquisition technologies have greatly improved subsalt illumination, and hence better subsalt images are obtained. However, acquiring WATS or MATS data is expensive. One main reason is the inefficiency of the conventional way of acquiring data, which allows sufficient time sharing between shots to prevent interference (Beasley et al., 1998; Beasley, 2008; Berkhout, 2008). As a consequence, the source domain is often poorly sampled to save survey time. To gain efficiency, simultaneous shooting (Beasley et al., 1998; Beasley, 2008; Hampson et al., 2008) and continuous shooting, or more generally, blended acquisition geometry (Berkhout, 2008), have been proposed to replace the conventional shooting strategy.
In the blended acquisition geometry, we try to shoot and record continuously; consequently, the time sharing between shots is minimized and a denser source sampling can be obtained. However, this shooting and recording strategy results in two or more shot records blending together and brings processing challenges. A common practice for processing these blended data sets is to first separate the blended shot gathers into individual ones (Spitz et al., 2008; Akerberg et al., 2008), a process termed "deblending" by Berkhout (2008). Then conventional processing flows are applied to these deblended shot gathers. The main issue with this strategy is that it might be extremely difficult to separate the blended gathers when the shot spacing is close and many shots are blended together. In this paper, we present an alternative method of processing these blended data sets. Instead of deblending the data prior to the imaging step, we propose to image them directly without any pre-separation. The simplest way to image directly would be migration; however, migration of blended data generates images contaminated by crosstalk. The crosstalk is due to the introduction of the blending operator (Berkhout, 2008), which makes the corresponding combined Born modeling operator far from unitary; thus its adjoint, also known as migration, gives a poor reconstruction of the reflectivity. A possible solution is to go beyond migration by formulating the imaging problem as a least-squares migration/inversion (LSI) problem, which uses the pseudo-inverse of the combined Born modeling operator to reconstruct the reflectivity of the subsurface.
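A tiny matrix analogue of this argument, with random stand-ins for the per-shot Born operators: summing the shots (the simplest blending) makes the adjoint image crosstalk-contaminated, while a least-squares solve recovers the reflectivity:

```python
import numpy as np

rng = np.random.default_rng(3)
nm, nd, ns = 40, 60, 4                 # model size, data size, shots

# hypothetical per-shot Born modeling operators (random stand-ins
# for the wave-equation operators used in practice)
L = [rng.standard_normal((nd, nm)) for _ in range(ns)]

m_true = np.zeros(nm)
m_true[[5, 20, 33]] = 1.0              # sparse reflectivity

# simplest blending: all shots fire together, records sum into one
G = sum(L)                             # combined (blended) operator B·L
d_blend = G @ m_true

m_mig = G.T @ d_blend                  # migration (adjoint): crosstalk remains
m_lsi = np.linalg.lstsq(G, d_blend, rcond=None)[0]   # least-squares inversion
```

Here `np.linalg.lstsq` stands in for the iterative LSI solver used at field scale, where the operators are never formed as explicit matrices.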
Development and Application of a New Well Pattern Optimization Algorithm for Optimizing Large Scale Field Development
Onwunalu, Jerome Emeka (Stanford University) | Durlofsky, Louis J. (Stanford University)
Abstract The optimization of large-scale field development is challenging because the number of optimization variables can become excessive. A way to circumvent this difficulty is to constrain wells to exist within patterns and to then optimize parameters associated with the pattern type and geometry. In this paper, we introduce a general framework for accomplishing this type of optimization. The overall procedure, which we refer to as well pattern optimization (WPO), entails a new well pattern description (WPD) incorporated into an underlying optimization method. The WPD encodes potential solutions in terms of pattern types (e.g., five-spot, nine-spot) and pattern operators. The operators define geometric transformations (e.g., stretching, rotating) quantified by appropriate sets of parameters. It is the parameters that specify the well patterns and the pattern operators, along with additional variables that define the sequence of operations, that are optimized by WPO. The well pattern description developed here could be used with a variety of underlying optimization methods. Here we combine it with a particle swarm optimization (PSO) technique, as PSO methods have recently been shown to provide robust and efficient optimizations for well placement problems. Detailed optimization results are presented for three different example cases using several variants of the WPO algorithm. In one case, multiple reservoir models are considered to account for geological uncertainty. For all examples, significant improvement in the objective function is observed as the algorithm proceeds, particularly at early iterations. A two-stage optimization procedure, in which the first-stage optimization considers multiple well pattern types while the second stage focuses on the most promising pattern, is applied and shown to be effective.
Limited comparisons with results using standard well patterns of various sizes demonstrate that the net present values achieved by the WPO algorithm are considerably greater. Taken in total, the optimization results highlight the potential of the WPO procedure for use in practical field development.

Introduction Field development optimization entails the determination of the number, type, location, trajectory and drilling schedule for new wells such that an objective function is maximized. Examples of relevant objective functions include net present value for the project and cumulative oil produced. Computational optimization is commonly employed to address this problem, with recent applications involving hundreds of wells (Volz et al., 2008). A straightforward and common approach for representing the solution parameters in field development optimization is to consider a series of wells and to concatenate the well-by-well optimization parameters. For problems with many wells, however, the number of optimization variables can become large, thereby increasing the complexity of the optimization problem. Furthermore, the performance of the algorithm may degrade for very large numbers of variables. For example, if the binary genetic algorithm is employed for the optimization of hundreds of wells, very long chromosomes will result. Large population sizes will then be required to achieve acceptable algorithm performance, which will lead to high computational expense.
- North America > United States > Texas (0.46)
- Europe > United Kingdom > England (0.28)
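A generic particle swarm optimizer of the kind WPO builds on, maximizing a stand-in "NPV" surface over two well coordinates; this sketches the PSO component only, not the authors' well pattern description or operators:

```python
import numpy as np

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (maximization). Generic sketch,
    not the authors' implementation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pval = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pval.argmax()].copy()                   # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        # inertia + attraction to personal best + attraction to global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        better = val > pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmax()].copy()
    return g, pval.max()

# stand-in "NPV" surface with a peak at (30, 70) on a 100x100 field
npv = lambda p: -((p[0] - 30.0) ** 2 + (p[1] - 70.0) ** 2)
best, val = pso(npv, (np.zeros(2), np.full(2, 100.0)))
```

In WPO the decision vector would encode pattern type and operator parameters rather than raw coordinates, and `f` would be a reservoir-simulation-based NPV.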
Multiscale Finite Volume Formulation for the Saturation Equations
Tchelepi, Hamdi A. (Stanford University) | Lee, Seong Hee (Chevron ETC) | Zhou, Hui
Abstract Recent advances in multiscale methods have shown great promise in modeling multiphase flow in highly detailed heterogeneous domains. Existing multiscale methods, however, solve for the flow field (pressure and total-velocity) only. Once the fine-scale flow field is reconstructed, the saturation equations are solved on the fine scale. With the efficiency in dealing with the flow equations greatly improved by multiscale formulations, solving the saturation equations on the fine scale becomes the relatively more expensive part. In this paper, we describe an adaptive multiscale finite-volume (MSFV) formulation for the nonlinear transport (saturation) equations. A general algebraic multiscale formulation consistent with the operator based framework proposed by Zhou and Tchelepi (SPEJ 13:267–273) is presented. Thus, the flow and transport equations are solved in a unified multiscale framework. Two types of multiscale operators, namely restriction and prolongation, are used to construct the multiscale saturation solution. The restriction operator is defined according to the local sum of the fine-scale transport equations in a coarse gridblock. Three adaptive prolongation operators are defined according to the local saturation history at a particular coarse block. The three operators have different computational complexity, and they are used adaptively in the course of a simulation run. When properly used, they yield excellent computational efficiency while preserving accuracy. This adaptive multiscale formulation has been tested using several challenging problems with strong heterogeneity, large buoyancy effects, and changes in the well operating conditions (e.g., switching injectors and producers during simulation). The results demonstrate that adaptive multiscale transport calculations are in excellent agreement with fine-scale reference solutions, but with a much lower computational cost.
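The restriction operator described above (a local sum over each coarse gridblock) can be written as a matrix; a 1-D toy with the simplest constant prolongation as a stand-in for the adaptive operators:

```python
import numpy as np

nf, cb = 16, 4            # fine cells, and fine cells per coarse block (1-D toy)

# restriction R sums fine-scale values within each coarse block,
# matching the local-sum definition in the abstract (sketch, 1-D only)
R = np.kron(np.eye(nf // cb), np.ones((1, cb)))

fine = np.arange(nf, dtype=float)      # a stand-in fine-scale quantity
coarse = R @ fine                      # one summed value per coarse block

# the simplest (non-adaptive) prolongation: constant interpolation back
back = R.T @ coarse / cb
```

The paper's adaptive scheme swaps in one of three prolongation operators per coarse block depending on its local saturation history; the restriction stays the local sum.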
Dynamic Upscaling of Multiphase Flow in Porous Media via Adaptive Reconstruction of Fine Scale Variables
Lee, Seong Hee (Chevron ETC) | Wang, Xiaochen (Stanford University) | Zhou, Hui | Tchelepi, Hamdi A. (Stanford University)
Abstract We propose an upscaling method that is based on dynamic simulation of a given model in which the accuracy of the upscaled model is continuously monitored via indirect error measures. If the indirect measures are bigger than a specified tolerance, the upscaled model is dynamically updated with approximate fine-scale information that is reconstructed by a multi-scale finite volume method (Jenny et al., JCP 217:627–641, 2006). Upscaling of multi-phase flow requires detailed flow information from the underlying fine scale. We apply adaptive prolongation and restriction operators for the flow and transport equations in constructing an approximate fine-scale solution. This new method eliminates the inaccuracy associated with the traditional upscaling method, which relies on prescribed, inaccurate boundary conditions in computing upscaled variables. The new upscaling algorithm is validated for two-phase, incompressible flow in two-dimensional porous media with heterogeneous permeabilities. It is demonstrated that the dynamically upscaled model achieves higher numerical efficiency than the fine-scale models and also provides excellent agreement with the reference solution computed from fine-scale simulation.

Introduction The displacement process of multi-phase flow in porous media shows a strong dependency on process and boundary conditions. This process and boundary-condition dependency has, as a result, hampered efforts to construct a general coarse-grid model that can be applied to multi-phase flow under various operational conditions. In addition, the conventional process of developing coarse-grid models lacks, in general, an a priori error estimate to guide the homogenization or upscaling process. Upscaling of single-phase and multiphase flow in porous media is reviewed by Farmer (2002), Christie (2001) and Barker and Thibeau (1997).
Upscaling of multiphase flow in porous media is much more complex than that of single-phase flow because it is difficult to delineate the effects of the heterogeneous permeability distribution from those of the multi-phase flow parameters and variables. To alleviate this difficulty, Efendiev and Durlofsky (2002, 2004) derived a generalized convection-diffusion equation to describe multi-phase flow, in place of the usual multi-phase extension of Darcy's equation with coarse-grid (volume-averaged) parameters and variables. Chen and Durlofsky (2006) combined local-global upscaling and the generalized convection-diffusion equation to obtain upscaling of two-phase flow. This combined approach consistently provided reasonably accurate solutions for test cases.