Abstract In this work, we investigate different approaches to history matching of imperfect reservoir models while accounting for model error. The first approach (the base-case scenario) relies on direct Bayesian inversion using iterative ensemble smoothing with annealing schedules, without accounting for model error. In the second approach, the residual obtained after calibration is used to iteratively update the covariance matrix of the total error, which is a combination of model error and data error. In the third approach, a PCA-based error model is used to represent the model discrepancy during history matching. However, the prior for the PCA weights is quite subjective and generally hard to define; here, the prior statistics of the model-error parameters are estimated using pairs of accurate and inaccurate models. The fourth approach, inspired by Köpke et al. (2017), relies on building an orthogonal basis for the error-model misfit component, which is obtained from the difference between the PCA-based error model and the corresponding actual realizations of the prior error. The fifth approach is similar to the third, except that an additional covariance matrix of the error-model misfit is also computed from the prior model-error statistics and added to the covariance matrix of the measurement error. The sixth approach, inspired by Oliver and Alfonzo (2018), combines the second and third approaches: a PCA-based error model is used along with the iterative update of the covariance matrix of the total error during history matching. Based on the results, we conclude that a good parameterization of the error model is needed to obtain good estimates of the physical model parameters and to provide better predictions. In this study, the last three approaches (4, 5, and 6) outperform the others in terms of the quality of the estimated parameters and the prediction accuracy (reliability of the calibrated models).
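One ingredient shared by several of these approaches is estimating prior model-error statistics from pairs of accurate and inaccurate model runs and combining them with the measurement-error covariance. The sketch below illustrates that bookkeeping only; the function names and toy data are assumptions for illustration, and the ensemble smoother itself is not shown.

```python
# Hypothetical sketch: estimate model-error statistics from paired runs
# (accurate vs. inaccurate simulations of the same parameter sets) and
# build a total-error covariance as C_total = C_data + C_model.

def model_error_samples(accurate_runs, inaccurate_runs):
    """Per-pair discrepancy d_i = g_accurate(m_i) - g_inaccurate(m_i)."""
    return [[a - b for a, b in zip(acc, inacc)]
            for acc, inacc in zip(accurate_runs, inaccurate_runs)]

def covariance(samples):
    """Sample covariance matrix of a list of equal-length vectors."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for s in samples:
        c = [s[j] - mean[j] for j in range(d)]
        for j in range(d):
            for k in range(d):
                cov[j][k] += c[j] * c[k] / (n - 1)
    return cov

def total_error_covariance(c_data, c_model):
    """Model and data errors assumed independent, so covariances add."""
    return [[c_data[j][k] + c_model[j][k] for k in range(len(c_data))]
            for j in range(len(c_data))]
```

For example, with three paired runs producing two observed quantities each, `covariance(model_error_samples(acc, inacc))` gives the prior model-error covariance that approaches 3 and 5 would feed into the inversion.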
Deshpande, Alisha (University of Southern California) | Dong, Yining (University of Southern California) | Li, Gang (University of Southern California) | Zheng, Yingying (University of Southern California) | Qin, Si-Zhao (Joe) (University of Southern California) | Brenskelle, Lisa A. (Chevron U.S.A. Inc.)
Abstract A frequent problem experienced throughout industry is that of missing or poor-quality data in data historians. While this can have many causes, the end result is that data required to perform analyses needed to improve facility operations may be unavailable. This generally occasions delays and wastes valuable time, as the data analyst must manually “clean up” the data before using it, or could even result in erroneous conclusions if the data is used as-is without any corrections. This work has developed a dynamic principal component analysis (DPCA) model-based method to detect the presence of erroneous data, identify which sensor is at fault, and reconstruct corrected values for that sensor, to be stored in the historian. However, the DPCA model-based method is not appropriate for all sensors, so a second method for detecting errors in data from a single sensor and calculating corrected values has also been developed. Both methods work on streaming data, and thus make corrections continuously in near real time. The DPCA model-based method has been successfully tested in the field by injecting errors such as missing values, bias, spikes, drift, and frozen signals into real streaming operating data from a Chevron facility. The single-sensor data-cleansing methodology has not yet undergone field testing, but has been tested offline using operating data into which errors such as drift, spikes, frozen signals, and missing values have been introduced. Use of these methods can ensure that good-quality data for needed analyses is available in the data historian, thereby saving analyst time and assuring that erroneous conclusions are not reached by using faulty data.
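The abstract does not disclose the single-sensor method itself, but the flavor of streaming error detection and correction it describes can be sketched as follows. This is a hypothetical illustration only (rolling-median spike and frozen-signal checks), not the authors' methodology; the function name, window sizes, and thresholds are all assumptions.

```python
# Hypothetical single-sensor streaming check, NOT the paper's method:
# flags spikes (large jump from a rolling median) and frozen signals
# (no change over a window), and substitutes the rolling median as a
# corrected value. Thresholds and windows are illustrative only.

from collections import deque
from statistics import median

def cleanse_stream(values, window=5, spike_thresh=3.0, frozen_len=4):
    """Return a list of (raw, corrected, flag) for each incoming sample."""
    history = deque(maxlen=window)
    out = []
    for i, v in enumerate(values):
        flag, corrected = "ok", v
        if len(history) == window:
            m = median(history)
            if abs(v - m) > spike_thresh:
                flag, corrected = "spike", m  # replace outlier with median
        # frozen: the last frozen_len raw samples are identical
        if i + 1 >= frozen_len and len(set(values[i + 1 - frozen_len:i + 1])) == 1:
            flag = "frozen"
        history.append(corrected)
        out.append((v, corrected, flag))
    return out
```

For example, `cleanse_stream([1.0, 1.1, 0.9, 1.0, 1.05, 9.0])` flags the final sample as a spike and substitutes the rolling median of the preceding cleaned values.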
Current pipeline inspection tools allow a detailed description of the pipeline's physical condition throughout the system length. Such tools provide a large number of samples, usually contained in extensive data files.
Pipeline engineers build up models based on pipeline physical data among other information. Diameter, wall thickness, maximum operating pressure, segment length and level profiles are key factors for a proper pipeline modeling.
Mistakes in the data selection during the configuration stage may lead to systematic errors in the model's estimations.
Large profile data sets are impractical and require significant computing power. The challenge lies in selecting the best representative points from the large profile while controlling the accuracy of the new data set.
This paper describes an algorithm used for downsizing the initial profiles while constraining the final set by parameters such as maximum error, minimum distance between data points, and maximum number of points in the final data set.
It is assumed that a simulator performs hydraulic calculations for those points configured in the elevation profile (dominant points). Of course, the simulator may also perform calculations on any intermediate point by assuming level interpolation or some other technique.
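The core idea of selecting dominant points under a maximum-error constraint can be sketched with a recursive greatest-deviation split (in the spirit of Ramer-Douglas-Peucker). This is an illustrative sketch, not the paper's algorithm: the minimum-distance and maximum-point-count constraints it mentions are omitted for brevity, and all names are assumptions.

```python
# Illustrative sketch: recursively keep the point whose elevation deviates
# most from the straight line between the current segment endpoints, until
# every dropped point lies within max_error of the interpolated profile.

def downsize_profile(points, max_error):
    """points: list of (distance, elevation) tuples; returns a reduced list."""
    def interp(p0, p1, x):
        if p1[0] == p0[0]:
            return p0[1]
        t = (x - p0[0]) / (p1[0] - p0[0])
        return p0[1] + t * (p1[1] - p0[1])

    def split(lo, hi, keep):
        worst, idx = 0.0, None
        for i in range(lo + 1, hi):
            err = abs(points[i][1] - interp(points[lo], points[hi], points[i][0]))
            if err > worst:
                worst, idx = err, i
        if idx is not None and worst > max_error:
            keep.add(idx)           # this point is dominant; recurse on both sides
            split(lo, idx, keep)
            split(idx, hi, keep)

    keep = {0, len(points) - 1}     # always keep the profile endpoints
    split(0, len(points) - 1, keep)
    return [points[i] for i in sorted(keep)]
```

A simulator interpolating linearly between the retained points, as described above, would then see every dropped elevation reproduced to within `max_error`.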
An overburden with variable thickness can obscure the response of underlying geophysical features. For example, the gravity response of an increased thickness of low-density overburden might not be distinguishable from that of a deeper sandstone hydrothermally altered to clay. When the overburden is conductive, its thickness can be determined from the rate of decay of the off-time airborne electromagnetic data. However, the off-time decay of a thin or resistive overburden is small and difficult to measure. Previous studies have used the on-time resistive-limit response of a single component to successfully map apparent ground conductance in resistive areas. Quantitative resistive-limit models exist for thin-sheet, half-space, thin-sheet-over-half-space, and thick-sheet-over-half-space models. This study uses horizontal- and vertical-component data to estimate the thickness (and conductivities) of a two-layer model across the survey profile.
Presentation Date: Thursday, October 18, 2018
Start Time: 8:30:00 AM
Location: 213B (Anaheim Convention Center)
Presentation Type: Oral
This paper describes the development and testing in simulation of a coastline-following preview controller for the DELFIMx autonomous surface craft (ASC) that takes into account the reference characteristics ahead of the vehicle. The presented solution is based on the definition of an error vector to be driven to zero by the path-following controller. The proposed methodology for controller design adopts a polytopic Linear Parameter Varying (LPV) representation with piecewise-affine dependence on the chosen parameters to accurately describe the error dynamics. The controller synthesis problem is formulated as a discrete-time H2 control problem for LPV systems and solved using Linear Matrix Inequalities (LMIs). To increase path-following performance, a preview controller design technique is used. The resulting nonlinear controller is implemented with the D-Methodology within the scope of gain-scheduling control theory. To build the reference path from laser range finder measurements, an automatic reference-path reconstruction technique is presented that employs B-splines computed using least-squares coefficient estimation. The final control system is tested in simulation with a full nonlinear model of the DELFIMx catamaran.

INTRODUCTION Marine biologists and researchers depend on technology to conduct their studies on the time and space scales that suit the phenomena under study. Several oceanography missions, such as bathymetric operations and sea-floor characterization, can be performed automatically by Autonomous Surface Craft (ASC). ASC vehicles not only serve research purposes but can also be used to perform automatic inspection of rubble-mound breakwaters, as required by the MEDIRES project (Silvestre et al., 2004). Within the scope of this project, the autonomous catamaran DELFIMx, built at IST-ISR, will be used for automatic marine data acquisition.
The vessel is a major redesign of the DELFIM Catamaran, developed within the scope of the European MAST-III Asimov project that set forth the goal of achieving coordinated operation of the INFANTE autonomous underwater vehicle and the DELFIM ASC and thereby ensuring fast data communications between the two vehicles.
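The least-squares B-spline fitting underlying the reference-path reconstruction mentioned above can be sketched as follows. This is a minimal pure-Python illustration of the general technique (Cox-de Boor basis evaluation plus normal equations), not the DELFIMx implementation; the knot layout, spline degree, and data are assumptions.

```python
# Minimal sketch of least-squares B-spline fitting: build the basis with
# the Cox-de Boor recursion, assemble the design matrix, and solve the
# normal equations A^T A c = A^T y for the control coefficients.

def bspline_basis(i, k, knots, x):
    """i-th B-spline basis function of degree k at x (Cox-de Boor)."""
    if k == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k] != knots[i]:
        left = ((x - knots[i]) / (knots[i + k] - knots[i])
                * bspline_basis(i, k - 1, knots, x))
    right = 0.0
    if knots[i + k + 1] != knots[i + 1]:
        right = ((knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, knots, x))
    return left + right

def solve(A, b):
    """Gaussian elimination with partial pivoting (assumes A nonsingular)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_lsq_bspline(xs, ys, knots, k):
    """Least-squares spline coefficients for samples (xs, ys)."""
    n = len(knots) - k - 1
    A = [[bspline_basis(j, k, knots, x) for j in range(n)] for x in xs]
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    Aty = [sum(A[r][i] * ys[r] for r in range(len(xs))) for i in range(n)]
    return solve(AtA, Aty)

def eval_bspline(coefs, knots, k, x):
    return sum(c * bspline_basis(j, k, knots, x) for j, c in enumerate(coefs))
```

In a path-reconstruction setting, the same fit would be applied to each coordinate of the laser range finder point cloud against arc length, yielding a smooth reference path for the preview controller.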