2008 SEG Annual Meeting
SUMMARY Shaping regularization is a general method for imposing constraints on the estimated model in the process of solving an inverse problem. In this paper, I extend the concept of shaping regularization to the case of nonlinear operators and show its connection to the nonlinear Landweber iteration and related iterative inversion methods. An example application is 1-D seismic inversion that extracts an interval velocity model from plane-wave (tau-p) moveout analysis. I develop a nonlinear inversion scheme that utilizes local seismic event slopes and apply it to a synthetic data example to demonstrate an application of nonlinear shaping regularization. Different regularization strategies produce smooth or blocky (layered) models. INTRODUCTION Regularization is an essential part of inversion methods that operate with incomplete or insufficient data. Regularization makes estimation problems well-posed by adding indirect constraints on the estimated model (Engl et al., 1996; Zhdanov, 2002). Developed originally by Tikhonov (1963) and others, the method of regularization has become an indispensable part of inverse-problem theory and has found many applications in geophysical problems: traveltime tomography (Osypov and Scales, 1996; Bube and Langan, 1999), migration velocity analysis (Woodward et al., 1998; Zhou et al., 2003), high-resolution Radon transforms (Trad et al., 2003), spectral decomposition (Portniaguine and Castagna, 2004), etc. In an earlier work (Fomel, 2007c), I introduced the method of shaping regularization in the context of linear least-squares estimation. A shaping operator is an explicit mapping of the estimated model to the space of acceptable models. Shaping is embedded in each step of an iterative estimation algorithm and thus provides the required regularization of the solution.
Shaping regularization has a number of advantages compared with traditional Tikhonov's regularization, including easier control over the properties of the estimated model and, in some cases, significantly faster iterative convergence. It has been applied to the definition of local seismic attributes (Fomel, 2007b; Fomel and Jin, 2007; Fomel et al., 2007) and to nonstationary filtering (Fomel, 2007a). In this paper, I extend the idea of shaping regularization to nonlinear inversion and show its connection to the nonlinear Landweber iteration and its variants, in particular the R algorithm described by Goldin (1986) and the sparseness-constrained inversion scheme of Daubechies et al. (2004). As an example application, I consider a 1-D prestack seismic inverse problem for interval velocity estimation using local seismic event slopes. I use experiments with synthetic data to demonstrate the effectiveness of nonlinear shaping regularization. SHAPING REGULARIZATION A sufficient condition for convergence is that the operator on the right side of equation (2) is contractive, i.e., its spectral radius is less than one (Collatz, 1966). When B is taken as the adjoint of F (in the linear case) or the adjoint of the Fréchet derivative of F (in the nonlinear case), iteration (2) is known as the Landweber iteration (Landweber, 1951) and has been studied extensively in the inverse problems literature (Hanke, 1991; Hanke et al., 1995; Engl et al., 1996; Bertero and Boccacci, 1998). The Landweber iteration solves the system of normal equations and converges to the least-squares estimate of m.
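In the linear case, the iteration described above can be written as m_{k+1} = S[m_k + B(d - F m_k)], where S is the shaping operator and B the adjoint of F. Below is a minimal sketch of this scheme; the function names, the step-size choice, and the use of a generic callable for the shaping operator are my own illustrative assumptions, not code from the paper.

```python
import numpy as np

def landweber(F, d, niter=100, shape=None):
    """Landweber iteration with an optional shaping step:
    m_{k+1} = S[m_k + step * F.T (d - F m_k)].

    F     -- linear forward operator as a 2-D array
    d     -- observed data
    shape -- optional shaping operator (a callable such as a smoother);
             passing None recovers the plain Landweber iteration
    """
    m = np.zeros(F.shape[1])
    # scale the step by the squared spectral norm so the
    # iteration operator is contractive (spectral radius < 1)
    step = 1.0 / np.linalg.norm(F, 2) ** 2
    for _ in range(niter):
        m = m + step * F.T @ (d - F @ m)
        if shape is not None:
            # map the update onto the space of acceptable models
            m = shape(m)
    return m
```

Without shaping, the iteration converges to the least-squares estimate, consistent with the statement above about the normal equations.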
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (0.90)
SUMMARY The imaging condition used in reverse-time migration requires that the source wavefield (computed via a forward recursion) and the receiver wavefield (computed via a backwards recursion) be made available at the same time in an implementation of the algorithm. Several strategies to organize the calculation can be employed, differing in the balance between memory and computation. This paper describes and compares these different approaches, and argues that strategies favoring computational complexity over memory (to the point where disk I/O can be avoided) are attractive for 3D prestack migrations. An example of 3D reverse-time migration applied to wide-azimuth data from the Gulf of Mexico is presented to support the claim. INTRODUCTION Reverse-Time Migration (RTM) was introduced in the late 1970s (Hemon, 1978) but, despite showing promising imaging capabilities (Baysal et al., 1983; Whitmore, 1983; McMechan, 1983; Loewenthal and Mufti, 1983), it was not used in practice due to its stringent requirements, both in terms of computation and memory. Until recently, RTM was therefore largely confined to 2D and/or post-stack imaging, but computer technology has now reached the point where 3D prestack RTM is feasible (Yoon et al., 2003, 2004; Bednar and Bednar, 2006; Farmer et al., 2006; Guitton et al., 2007; Jones et al., 2007). The core of the algorithm is the crosscorrelation of two wavefields at the same time level, one computed by stepping forward in time, the other computed by stepping backwards in time. The forward recursion is usually carried out first, in which case the entire time history must be made available during the backwards recursion in order to compute the imaging condition. Several options can be used to arrange the calculation, differing in the amount of memory and computation required. The next section describes the realization of a particular approach to RTM based on leapfrog time stepping for a scalar field (pressure).
The following section compares the various strategies which can be used to organize the computations. Finally, an example of 3D RTM applied to wide-azimuth data from the Gulf of Mexico is presented. SIMULATION AND REVERSE-TIME MIGRATION The forward problem involves marching this scheme for n = 0, 1, ..., N-1. The simulation of synthetic seismic data dn is related to pn by a sampling operator Sn which extracts time samples of the wavefield at receiver positions, at the time sample rate of the output data traces. The implementation used for the examples shown below does not assume any relation between the computation grid and the acquisition geometry, or between the simulation time step and the sample rate of the seismic traces. It uses bilinear interpolation for the spatial variables and cubic interpolation in time. Similarly, the source field is adjoint-interpolated onto the computational grid. The Laplacian is approximated using an eighth-order centered difference scheme with optimized coefficients (Ye and Chu, 2005; Etgen, 2007), and Perfectly Matched Layers (PML) absorbing boundary conditions (Cohen, 2001) are used to simulate unbounded-domain wave propagation.
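The memory-heavy strategy described above (run the forward recursion first, store the whole time history, then crosscorrelate during the backwards recursion) can be sketched in one dimension. This is an illustrative sketch only: it uses a second-order Laplacian stencil instead of the paper's eighth-order optimized scheme, omits PML boundaries, and all function and variable names are my own.

```python
import numpy as np

def laplacian_1d(p, dx):
    # second-order centered difference (the paper uses an
    # eighth-order stencil with optimized coefficients)
    lap = np.zeros_like(p)
    lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
    return lap

def rtm_image(src_wavelet, rec_trace, v, dx, dt, nx, nt, isrc, irec):
    """Zero-lag crosscorrelation imaging condition: the source wavefield
    is stored at every step of the forward recursion and correlated with
    the receiver wavefield during the backwards recursion (the strategy
    that trades memory for computation)."""
    # forward recursion: leapfrog time stepping, keep every snapshot
    p0, p1 = np.zeros(nx), np.zeros(nx)
    snaps = np.zeros((nt, nx))
    for n in range(nt):
        p2 = 2.0 * p1 - p0 + (v * dt) ** 2 * laplacian_1d(p1, dx)
        p2[isrc] += src_wavelet[n]          # inject the source
        p0, p1 = p1, p2
        snaps[n] = p1
    # backwards recursion: inject the recorded trace in reverse time
    q0, q1 = np.zeros(nx), np.zeros(nx)
    image = np.zeros(nx)
    for n in range(nt - 1, -1, -1):
        q2 = 2.0 * q1 - q0 + (v * dt) ** 2 * laplacian_1d(q1, dx)
        q2[irec] += rec_trace[n]            # inject receiver data
        q0, q1 = q1, q2
        image += snaps[n] * q1              # crosscorrelate at the same time level
    return image
```

The `snaps` array is exactly the "entire time history" whose storage the paper proposes to avoid by recomputation; swapping it for checkpointing changes the memory/compute balance without changing the imaging condition.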
Summary Using converted waves, a new method called S-Zero Stack has been designed to capture the density contrast reliably at interfaces without resorting to inversion. Introduction Analyzing the Aki-Richards (1980) equation for converted waves (PS), I find that it is possible to decouple the effect of density contrast from that of shear-velocity contrast. The two terms are mixed when the P-wave incident angle is less than 30°, but they start to separate at a middle angle range (approximately 40°). The term related to shear-wave velocity reaches zero at an incident angle around 60°. However, the other term, which is related to the density contrast, does not reverse polarity until 90°. Furthermore, this density term reaches almost its maximum magnitude around 60°. In this paper, the theoretical decoupling is introduced, the method of S-Zero Stack is developed, and a test on a synthetic example is presented. Finally, the new method has been applied to real 4C/3D PS data and the result calibrated with a density log. S-Zero Stack reveals subsurface density anomalies reliably without resorting to inversion. It is simple but robust, even when there is noise in the common conversion point (CCP) gathers. Combined with the traditional P-wave AVO technique, S-Zero Stack of PS waves may help discriminate commercial gas from fizz. S-Zero Stack For comparison, the conventional full stack and near-angle stack (up to 20°) are shown in the 5th and 6th panels. Their differences from the S-Zero Stack panel can easily be seen. Notably, at times of 1160 ms and 1240 ms, there are noticeable anomalies on the S-Zero Stack which match the large density drops of the log perfectly, but no corresponding response can be found on the full stack or near stack panels at those two locations. The yellow arrows indicate all other locations where no response is shown on the conventional full stack or near stack but where the log shows obvious density changes.
Real data example S-Zero Stack has been applied to a 4C swath survey across the Pamberi-1 well location in the lower reverse L block of the Columbus basin, eastern offshore Trinidad in South America. The PS data were processed through anisotropic prestack depth migration, using cell-based tomography with amplitude-preserving considerations (Jones et al., 2007). Figure 5 shows the S-Zero Stack section of line 1005, which partially crosses the Pamberi-1 well. The upper part of Figure 5 is the S-Zero Stack of the whole line. The lower part of the deviated well crosses line 1005 and is overlaid on the S-Zero Stack section. The details of the S-Zero Stack section around the target horizon at a time of 2900 ms are shown in the lower part of Figure 5. The yellow arrow pinpoints a strong anomaly (red in color) on the S-Zero Stack, which corresponds to a density drop seen on the log at about 2920 ms. From petrophysical analysis of the area, the density drop in this target layer may be related to gas pay. Thus, the S-Zero Stack may be used for identifying gas prospects in the area.
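The abstract does not give an explicit formula for the stack, but its description (stack PS amplitudes around the incidence angle, near 60°, where the shear-velocity term vanishes while the density term is near its maximum) suggests a simple angle-windowed stack. The sketch below is my own illustration of that idea; the window centre and width are hypothetical parameters, not values from the paper.

```python
import numpy as np

def s_zero_stack(gather, angles, center=60.0, width=10.0):
    """Illustrative angle-range stack: average a common-conversion-point
    gather over the P-wave incidence angles where (per the abstract) the
    shear-velocity term of the PS reflectivity passes through zero, so the
    stacked amplitude is dominated by the density contrast.

    gather -- 2-D array (n_angles, n_samples) of PS amplitudes
    angles -- P-wave incidence angle (degrees) of each trace
    """
    gather = np.asarray(gather, dtype=float)
    angles = np.asarray(angles, dtype=float)
    # keep only the traces inside the "S-zero" angle window
    mask = np.abs(angles - center) <= width / 2.0
    if not mask.any():
        raise ValueError("no traces inside the stacking angle window")
    return gather[mask].mean(axis=0)
```

A near-angle stack for comparison (as in the 5th and 6th panels described above) would be the same call with, e.g., `center=10.0, width=20.0`.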
- South America (0.54)
- North America (0.37)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling (0.88)
- Geophysics > Seismic Surveying > Seismic Processing > Seismic Migration (0.55)
Summary We propose an Ensemble Kalman Filter (EnKF) based one-dimensional prestack seismic waveform inversion method for estimating elastic parameters. The basic idea is that the offset- or incident-angle-dependent data are inverted sequentially, similar to the way time-dependent data are used sequentially in petroleum engineering or groundwater hydrology. The proposed method is tested on synthetic data using both flat and good initial models. Introduction The estimation of petrophysical properties by prestack seismic waveform inversion is an active area of research. Prestack seismic waveform inversion is a challenging task because of its nonlinear and non-unique nature. Both local (Mora, 1987; Wood, 1993; Sen and Roy, 2003) and global optimization (Sen and Stoffa, 1991; Stoffa and Sen, 1991) methods have been reported to solve this problem. The advantage of local optimization over global optimization is its efficiency. Its limitations are twofold. First, local optimization needs gradient information, which is difficult to compute for a nonlinear forward modeling operator. The gradient can also be computed numerically, e.g. by finite differences, but such methods are not efficient for problems with a large number of parameters, which are common in prestack seismic waveform inversion. The second limitation of local optimization is that a good initial model is needed. Global optimization is designed to overcome the limitations of local optimization. Methods of this kind, such as simulated annealing (SA) and genetic algorithms (GA), require neither gradients nor a good initial model. The limitation of global optimization is the requirement of a large number of forward model evaluations. In recent years, the ensemble-Kalman-filter-based sequential inversion method has become a promising tool to address the limitations of gradient-based linear inversion and global inversion methods.
The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables. It has been successfully applied to assimilating data in weather forecasting (Evensen, 1994; Evensen and van Leeuwen, 1996; Evensen, 1997, 2003), groundwater hydrology (Reichle et al., 2002; Chen and Zhang, 2006) and petroleum engineering (Naeval et al., 2002, 2005; Gu and Oliver, 2004; Skjervheim, 2005, 2006). The EnKF is similar to global optimization methods in the sense that no gradient information is needed. At the same time, it is a sequential inversion, which assimilates data sequentially. In this paper, we propose an EnKF-based prestack waveform inversion. The main idea is that the offset- or incident-angle-dependent information is inverted sequentially, similar to the way time-dependent data are used sequentially in petroleum engineering or groundwater hydrology. Wavelet-transform parameterization is also used to reduce the number of model parameters. We use a synthetic dataset built from well-log data from the Gulf of Mexico to demonstrate the feasibility of our proposed method. The results show that the EnKF-based inversion method works well even when the initial model is only a constant mean. A byproduct of the EnKF method is that multiple realizations are obtained simultaneously.
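A single EnKF analysis step, assimilating one angle-dependent datum into an ensemble of model realizations, can be sketched as follows. This is a generic textbook EnKF update with perturbed observations, not the authors' code; the forward operator `h` stands in for the nonlinear modeling operator, and the wavelet-domain parameterization mentioned above is abstracted away.

```python
import numpy as np

def enkf_update(ensemble, d_obs, h, sigma_d, rng):
    """One EnKF analysis step: assimilate a single (angle-dependent)
    datum into an ensemble of model vectors. No gradient of h is needed.

    ensemble -- (n_members, n_params) array of prior model realizations
    d_obs    -- observed datum for the current offset/angle
    h        -- callable mapping a model vector to that datum
    sigma_d  -- observation-noise standard deviation
    """
    n, _ = ensemble.shape
    # predicted data for each ensemble member
    d_pred = np.array([h(m) for m in ensemble])
    m_mean = ensemble.mean(axis=0)
    d_mean = d_pred.mean(axis=0)
    # ensemble estimates of the model-data cross covariance and data variance
    c_md = (ensemble - m_mean).T @ (d_pred - d_mean) / (n - 1)
    c_dd = np.var(d_pred, ddof=1)
    gain = c_md / (c_dd + sigma_d ** 2)   # Kalman gain (scalar datum)
    # perturb the observation for each member and update
    d_pert = d_obs + sigma_d * rng.standard_normal(n)
    return ensemble + np.outer(d_pert - d_pred, gain)
```

Looping this update over offsets or angles reproduces the sequential assimilation idea described in the abstract, and the final ensemble delivers the "multiple realizations" byproduct directly.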
- Geophysics > Seismic Surveying > Seismic Processing > Seismic Migration (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (1.00)
Shallow Water 3D Surface-Related Multiple Modelling, Case Study.
Plasterien, P. (CGGVeritas Australia) | Gayne, M. (CGGVeritas Australia) | Lange, M. (CGGVeritas Australia) | Sarjono, Imam (CGGVeritas Australia) | Pica, A. (CGGVeritas France) | Poulain, G. (CGGVeritas France) | Leroy, Sylvain (CGGVeritas France) | Mosher, Chuck (Conoco-Phillips America) | Bril, Robert (Conoco-Phillips Australia) | Faulkner, Charlie (Conoco-Phillips Australia)
Summary The efficiency of multiple attenuation techniques depends on how shallow and how structurally complex the sea floor is. In a shallow-water environment there comes a point where the sea floor is theoretically too shallow for Surface-Related Multiple Elimination (SRME) techniques to efficiently model all multiples. In this Bonaparte basin case study we look at a range of multiple attenuation techniques. The 3D surface-related model-based modeling technique (SRMM), 2D SRME, and predictive deconvolution are applied at water depths ranging from 100 ms to 500 ms. Attempts are made at identifying the reasons for successes or failures at different water depths, and conclusions are drawn as to which of the three methods, or which combination of them, was the most efficient for a given water depth. The first aspect we analyse through this Australian case study is whether SRME / SRMM methods remain efficient at shallow water depths compared with predictive deconvolution. The second aspect is the relative efficiency of fully data-driven 2D SRME versus the 3D surface-related model-based modeling technique (SRMM) in this shallow and structurally non-complex water-bottom environment. Introduction Although in theory geophysicists should not apply multiple removal before surface-related multiple attenuation, modern marine processing sequences include predictive deconvolution before or after SRME / SRMM. In practice the two methods complement each other well: predictive deconvolution will tackle very short-period multiples in very shallow water environments, while SRME, although supposed to handle any multiples in the data, will thrive at slightly longer multiple periods. The reason for the theoretical failure of the SRME / SRMM techniques in shallow-water environments is that the convolution between the primary response and the total response is jeopardized because the primary response (the water bottom) is inaccurately recorded.
Indeed, due to missing near offsets (near offsets are on the order of 150 m), most of the true primary water-bottom reflection is not recorded properly and, in addition, might interfere with the direct arrival (Verschuur, 2006). At the least, it is not recorded accurately enough for the SRME convolution to successfully predict multiples at the small offsets, which in any case need to be interpolated. Predictive deconvolution, on the other hand, is known to be very effective as it relies only on the periodicity of the multiple wavefield and not on the water-bottom reflection itself. In the first part of this case study we look at the relative efficiency of 2D SRME / 3D SRMM versus predictive deconvolution. One of the well-known advantages of 3D surface-related model-based modeling techniques (SRMM) compared with the fully data-driven solution (SRME) is that, although they may require interpolation between streamers, they do not require interpolation between sources. This makes them operational and ready for the convolutional demultiple process for any acquisition geometry (including wide azimuth and OBC). On this dataset example, once the reflectivity model had been prepared, turnaround on a test sail line was comparable to 2D SRME, although requiring greater computing capacity, creating good conditions for testing and comparison.
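Predictive deconvolution's reliance on periodicity alone, noted above, can be illustrated with a minimal gapped Wiener prediction filter: the filter is designed from the trace autocorrelation at the prediction lag, and the predicted (multiple) energy is subtracted. This is a generic single-trace sketch with assumed parameter names, not the processing flow used in the case study.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, nfilt, lag, eps=0.001):
    """Gapped predictive deconvolution of a single trace: design a Wiener
    prediction filter from the trace autocorrelation (exploiting only the
    periodicity of the multiples, not the water-bottom reflection itself)
    and subtract the predicted multiple energy.

    nfilt -- filter length in samples
    lag   -- prediction lag in samples (approx. the multiple period)
    """
    nt = len(trace)
    # one-sided autocorrelation of the trace
    ac = np.correlate(trace, trace, mode="full")[nt - 1:]
    r = ac[:nfilt].copy()
    r[0] *= 1.0 + eps            # prewhitening for numerical stability
    g = ac[lag:lag + nfilt]      # right-hand side at the prediction lag
    f = solve_toeplitz(r, g)     # Toeplitz normal equations
    # predicted (multiple) part of the trace, delayed by the lag
    pred = np.convolve(trace, f)[:nt]
    out = trace.copy()
    out[lag:] -= pred[:nt - lag]
    return out
```

Because the design depends only on the autocorrelation, the method works even where the water-bottom primary itself is poorly recorded, which is exactly the regime described above.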
Summary Although the gravitational effect of Earth's atmosphere has relatively small values, it is generally recommended to account for it in precision gravimetry. Since the effect is height-dependent, it is especially worth considering when the survey covers a broad range of gravity station heights and where the survey is performed close to a continental coast. Previously, the Earth's topography was not considered significant when calculating the atmospheric correction for subtraction from the theoretical ellipsoidal gravity at the station. In fact, the Earth's surface is not flat over the continents, and this variation in height must produce an additional influence upon the values of such a correction. We show, using several examples, that accounting for the Earth's topography significantly changes the values from those calculated in the conventional way. The necessary calculations can be performed efficiently using a newly derived formula for the gravitational effect of a spherical shell with variable density. Introduction The existing theory of the atmospheric correction assumes that normal ellipsoidal gravity includes the gravitation of the atmospheric mass as if it represented a regular part of the mass of the normal Earth bounded by the surface of the normal ellipsoid (Torge, 1989, p. 54; Hinze et al., 2005, p. J28). Regarding the theoretical background, the works of Ecker and Mittermayer (1969) and Wenzel (1985) are cited. In the latter study, an approximation valid within the range of topographic heights was derived. Such an approach assumes the atmosphere to consist of homogeneous layers. However, this assumption is not realistic, especially in or near high mountains. In fact, Torge (1989) was aware of the fact that the actual distribution of the atmospheric mass differs from the normal atmosphere. This is due both to the topography and to latitude.
Torge (1989) assumes the total effect of those large-scale deviations to be less than 0.01 mGal (presumably a misprint?) and quotes the work of Anderson et al. (1975) in this context. Within the last 15 years, some new papers dealing with the influence of topography have been published. Sjöberg (1993) did not agree that the effect of topography would be negligible. He introduced a topography-dependent "gravity terrain correction for the removal of the atmosphere" to the original atmospheric correction as defined by Ecker and Mittermayer (1969). He arrives at an interesting approximation of that topography-dependent correction term in the form of a linear function of height at the point of calculation. Ramillien (2002), using a spherical harmonic approach, considered the Earth's topography and obtained a global amplitude range for the "gravity contribution of a simplified dry and steady atmosphere as it would be measured at sea or by a satellite" of about 0.150 mGal. Nahavandchi (2004) improved the older methods of calculating the "direct atmospheric gravity effect" (Sjöberg, 1999; Sjöberg and Nahavandchi, 2000) by deriving a new (approximate) formula for such an effect. He combined local topographic information with the set of spherical harmonic coefficients representing the global component of the Earth's topography. Previously, only the last-mentioned consideration was in use, as when deriving equation (2).
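For reference, the conventional (topography-free) height-dependent atmospheric correction that the abstract argues against is often quoted as a quadratic in station height. The coefficients below follow the approximation commonly attributed to Wenzel (1985); treat them as an assumption on my part rather than a formula taken from this paper.

```python
def atmospheric_correction(h):
    """Conventional height-dependent atmospheric correction in mGal,
    following the quadratic approximation commonly attributed to
    Wenzel (1985); h is the gravity-station height in metres.
    Note: this conventional form ignores the surrounding topography,
    which is precisely the limitation the abstract addresses."""
    return 0.874 - 9.9e-5 * h + 3.56e-9 * h ** 2
```

The correction is largest at sea level and decays with height, which is why surveys spanning a broad range of station heights (as noted above) are the ones where the choice of formula matters.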
Summary We used a combination of two methods to explore the elastic properties of clays: molecular simulation on montmorillonite, as a function of interlayer water content and stress; and nanoindentation measurements on various reference clays, where mechanical properties are determined from the strain response to loading of a sharp indenter on the sample. Modeling and experiments offered us insights and assisted in the verification and interpretation of our results. The sample orientation in our models corresponds to a Reuss assembly, that is, deformation occurs mainly in the interlayer. Thus, we take the Young's modulus from our simulations to be that of the interlayer. Young's modulus is insensitive to water content in wet montmorillonite. The Young's modulus of kaolinite from nanoindentation was 2.59 GPa. For montmorillonite, the nanoindentation results (7.26 GPa) fall within the range of our simulation results (4.75-16.17 GPa), indicating that the weaker interlayer accounts for most of the deformation and dominates the clay property. This agreement between the independent modeling and experimental approaches validates our results. Introduction Since shales are the most abundant sedimentary rock on Earth and clays are present in abundance in shales, the elastic properties of clays are of utmost importance in soil science and geophysics. Several other disciplines, such as materials science, biological science, geology, and engineering, also use clays in various ways. The clay minerals are hydrous aluminum silicates and are classified as phyllosilicates, or layer silicates. All layer silicates are constructed from two modular units: a sheet of corner-linked tetrahedra and a sheet of edge-linked octahedra. These sheets can have different unbalanced layer charges depending on the clay type.
The smectite group of clay minerals, with a tetrahedra-octahedra-tetrahedra structure, is able to expand and contract its structure while maintaining crystallographic integrity (Moore and Reynolds, 1989). Expansion takes place as water or some polar molecule, such as ethylene glycol, enters the interlayer space. The layers expand because the interlayer cations, such as Na+, Ca2+, K+, Li+, etc., are attracted more to water than to the relatively small layer charge. In addition to the clay and water interactions at depth, external stress also plays an important role in clay swelling. The swelling of clay minerals in the presence of an aqueous solution can produce strong adverse effects in the exploration and production of gas and oil (Zhang et al., 2006). As pore-filling materials, they block hydraulic pathways and decrease permeability and porosity considerably but do not affect seismic wave propagation. Pore-lining clay, on the other hand, will considerably alter seismic properties without a large effect on porosity (Prasad, 2001). Knowledge of the elastic properties of clay is therefore essential for the interpretation and modeling of the seismic response of clay-bearing formations. The polymer, paper, ceramic, medicine, automobile, and various other industries also use clays. The presence of clays can increase or decrease the damage caused in an earthquake depending on whether they weaken or strengthen the underlying rock.
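The Reuss (iso-stress) interpretation invoked in the summary, in which the compliant interlayer dominates the effective stiffness because compliances add in series, can be illustrated with a volume-weighted harmonic average. The function below is my own sketch; the moduli in the usage note are placeholders, not values from the paper.

```python
import numpy as np

def reuss_modulus(moduli, fractions):
    """Reuss (iso-stress) average of a layered assembly: compliances 1/E
    add in proportion to volume fraction, so the softest layer dominates
    the effective modulus -- consistent with the interpretation above
    that the weak interlayer accounts for most of the deformation.

    moduli    -- layer Young's moduli (e.g. GPa)
    fractions -- corresponding volume fractions (normalized internally)
    """
    moduli = np.asarray(moduli, dtype=float)
    fractions = np.asarray(fractions, dtype=float)
    fractions = fractions / fractions.sum()
    return 1.0 / np.sum(fractions / moduli)
```

For example, a stack that is half stiff sheet (say 100 GPa) and half soft interlayer (say 1 GPa) has a Reuss modulus of about 2 GPa: the soft member controls the result, which is why the simulations can attribute their Young's modulus to the interlayer.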
- Geology > Mineral > Silicate > Phyllosilicate (1.00)
- Geology > Structural Geology > Tectonics > Plate Tectonics > Earthquake (0.54)
- Geology > Rock Type > Sedimentary Rock > Clastic Rock > Mudrock > Shale (0.45)
- Energy > Oil & Gas > Upstream (1.00)
- Materials > Chemicals > Commodity Chemicals > Petrochemicals (0.54)
- Well Drilling > Drilling Fluids and Materials > Drilling fluid selection and formulation (chemistry, properties) (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Reservoir geomechanics (1.00)
Summary DNSC07 is a new global, ocean-wide, satellite-altimetry-derived gravity field computed at the Danish National Space Center (DTU, Denmark) with a spatial resolution of 1 arc-minute by 1 arc-minute, covering all marine regions of the world including the Arctic Ocean up to the North Pole. For more than a decade, satellite altimetry has been used to determine the marine gravity field. All present satellite-based global marine gravity fields are based on the GEOSAT and ERS-1 Geodetic Mission (GM) data combined with other satellite-derived datasets. The most urgent outstanding problem with most existing marine gravity fields (i.e. KMS02, Sandwell and Smith V12, NTU 01, GSFC00) was to improve the coverage and quality of the satellite observations in order to gain higher accuracy in the derived marine gravity field, particularly at high latitudes and in coastal regions where many sedimentary basins are found. By double-retracking the entire ERS-1 GM mission using a highly advanced expert-based system of multiple retrackers, with subsequent repacking/retracking, we have been able to obtain not only higher quality data but also many more data than seen before. Especially in coastal and ice-covered regions, the new DNSC07 global marine gravity field is superior to other satellite-based global marine gravity fields. We will present the new high-resolution global marine gravity field, compare it with existing marine gravity surveys in several regions of the world (Indonesia, the Florida Keys and East Greenland), and demonstrate how much accuracy has been gained by retracking the satellite observations compared with existing global marine gravity fields.
Introduction The DNSC07 global marine gravity field DNSC07A was derived using satellite altimetry from the ERS-1 and GEOSAT geodetic missions, retracked using a highly advanced expert-based system of multiple retrackers. This enables accurate ranging both to the open ocean surface and to all ice-covered regions within the +/- 82° latitude coverage of the ERS satellites. Augmenting these data with ICESat data and with gravity anomalies from the Arctic Gravity Project (ArcGP) enables the continuation of the gravity anomaly field all the way up to the North Pole. The DNSC07A altimetry-derived gravity anomaly field was derived with respect to a very high degree (2160) Earth Gravitational Model designated PGM07B and a consistent mean Dynamic Ocean Topography model designated DOT07A, derived jointly by NGA and its contractor SGT, Inc. During the last two decades, altimetry has increased vastly in accuracy, from meters to centimeters (Wunch and Zlotniki, 1984; Chelton et al., 2001), and has opened up a whole new suite of scientific problems that can now be addressed using altimetry. Once the altimetric range observations of the distance between the satellite and the sea surface have been corrected for orbital height above the ellipsoid and for the small distortion of the speed of the radar pulse (e.g., Chelton et al., 2001), they provide the sea surface height.
- North America > Greenland (0.25)
- Asia > Indonesia (0.24)
- North America > United States (0.16)
- Information Technology > Communications > Networks (0.55)
- Information Technology > Artificial Intelligence (0.48)
High-resolution Reservoir Characterization By 2-D Model-driven Seismic Bayesian Inversion: an Example From a Tertiary Deltaic Clinoform System In the North Sea
Tetyukhina, Daria (Delft University of Technology) | Luthi, Stefan M. (Delft University of Technology) | van Vliet, Lucas J. (Delft University of Technology) | Wapenaar, Kees (Delft University of Technology)
Summary In order to retrieve a high-resolution reservoir model from seismic and well data, an approach was developed based on an a priori layered model from well data, specifically the acoustic impedances derived from the sonic and density
- Europe > United Kingdom > North Sea (1.00)
- Europe > Norway > North Sea (1.00)
- Europe > North Sea (1.00)
- Geophysics > Seismic Surveying > Seismic Processing (1.00)
- Geophysics > Seismic Surveying > Seismic Modeling > Velocity Modeling > Seismic Inversion (0.48)
- Europe > United Kingdom > North Sea > Southern North Sea > Southern Gas Basin (0.99)
- Europe > Poland > North Sea > Southern North Sea > Southern Gas Basin (0.99)
- Europe > Netherlands > North Sea > Southern North Sea > Southern Gas Basin (0.99)
- Europe > Belgium > North Sea > Southern North Sea > Southern Gas Basin (0.99)
- Reservoir Description and Dynamics > Reservoir Characterization > Seismic processing and interpretation (1.00)
- Reservoir Description and Dynamics > Reservoir Characterization > Sedimentology (1.00)
- Reservoir Description and Dynamics > Formation Evaluation & Management > Open hole/cased hole log analysis (1.00)
SUMMARY In this paper, we introduce a preconditioner for seismic imaging, i.e., for the inversion of the linearized Born scattering operator. This preconditioner approximately corrects for the "square root" of the normal operator, i.e., the demigration-migration operator. The approach consists of three parts, namely (i) a left preconditioner, defined by a fractional time integration designed to make the migration operator zero order, and two right preconditioners that apply (ii) a scaling in the physical domain accounting for spherical spreading, and (iii) a curvelet-domain scaling that corrects for spatial and reflector-dip dependent amplitude errors. We show that a combination of these preconditioners leads to a significant improvement in the convergence of iterative least-squares solutions of the seismic imaging problem based on reverse-time migration operators. INTRODUCTION Over the years, extensive research has been done to reduce the computational costs of seismic imaging. Improvements in this area are particularly important during iterative least-squares migration, where the linear Born scattering operator is inverted with iterative Lanczos methods, such as LSQR (Paige and Saunders, 1982; De Roeck, 2002). Examples of this method can be found in the literature (see e.g. Nemeth et al., 1999; Chavent and Plessix, 1999; Kuhl and Sacchi, 2003). The most successful methods to reduce the cost of migration are the so-called scaling methods, where the action of the compound linearized modeling-migration operator, known as the Hessian or normal operator, is replaced by a diagonal scaling in some domain; see e.g. contributions from Claerbout and Nichols (1994); Rickett (2003); Guitton (2004), and more recently from Herrmann et al. (2008) and Symes (2008).
These methods vary in degree of sophistication with regard to the estimation of the diagonal, through migrated-image to remigrated-image matching, and in the way the scaling is applied, i.e., by division in the physical domain or via sparsity promotion in the curvelet domain as reported in Herrmann et al. (2008). In all these methods, amplitudes are restored by applying the scaling as a post-processing step after migration. In this paper, we take this line of research a step further by using the above scaling argument to define an appropriate preconditioning of the system of equations involved in linearized Born scattering. To illustrate the improvements in the migrated image and in the convergence of least-squares migration, we consider three levels of preconditioning. First, we correct for the order of the normal operator by introducing a left preconditioner consisting of a fractional time integration. This first level is consistent with earlier work reported by Herrmann et al. (2008) and Symes (2008). The next level of preconditioning consists of a simple diagonal scaling in the physical domain to compensate for the spherical spreading of seismic waves. As a last step, we include a curvelet-domain scaling as part of the right preconditioning. We conclude by studying the performance of these different levels of preconditioning on a synthetic example using a reverse-time "wave-equation" migration code with optimal checkpointing (Symes, 2007).
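The effect of a right preconditioner on an iterative Lanczos solver can be sketched generically: solve min ||A diag(w) y - d|| with LSQR and map back via m = diag(w) y. The sketch below uses a dense matrix standing in for the Born operator and a diagonal weight standing in for the physical-domain (spherical-spreading) scaling; it illustrates the preconditioning pattern only, not the authors' fractional-integration or curvelet-domain implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def preconditioned_lsqr(A, d, w, iter_lim=100):
    """Right-preconditioned least-squares inversion:
    solve min || A diag(w) y - d ||_2, then return m = diag(w) y.

    A -- stand-in for the linearized Born modeling operator
    w -- diagonal right preconditioner (e.g. a physical-domain scaling);
         a scaling that clusters the singular values speeds up LSQR.
    """
    n = len(w)
    # wrap A diag(w) as a matrix-free operator with its adjoint,
    # as required by matrix-free Lanczos solvers such as LSQR
    op = LinearOperator(
        (A.shape[0], n),
        matvec=lambda y: A @ (w * y),
        rmatvec=lambda r: w * (A.T @ r),
    )
    y = lsqr(op, d, iter_lim=iter_lim)[0]
    return w * y
```

A left preconditioner would be applied the same way, by wrapping rows instead of columns; in the paper's setting the left factor is the fractional time integration and the right factors are the spreading and curvelet-domain scalings.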