Total, Paris, France

1. Introduction

With recent developments and application of geographical information system (GIS) capabilities, geohazard assessments can benefit from detailed data integration. The result of this integration is a conceptual geological model that supports the geohazard assessment and is provided to the engineering team in charge of a facility design. The engineering team, however, expects predictions (time, place, magnitude and probability of an event), while most geoscientists can only offer an improved forecast (a general statement of future possibilities). Geological interpretation relies upon a combination of data and knowledge of the local geology, but a high degree of subjectivity may be introduced by the geologist when interpreting different datasets. To reduce this subjectivity, the interpreter should evaluate the reliability of the model, taking into account the uncertainties related to each dataset. The quantification of uncertainties may be carried out for each dataset, but that of the conceptual geological model as a whole is not easy to determine. This paper reviews best practice in data integration by means of a GIS and details the uncertainties that should be taken into account. It also addresses considerations related to geostatistics and probabilities, in order to provide a reliable conceptual geological model for geohazard assessment. Geotechnical engineering was long regarded as part of civil engineering, relying upon the principles of mechanics and hydraulics. Today, it is recognised as a natural bridge between geology and civil engineering, and in many areas it requires an integrated and multidisciplinary approach.
Abdollahzadeh, Asaad (Heriot-Watt University) | Reynolds, Alan (Heriot-Watt University) | Christie, Michael (Heriot-Watt University) | Corne, David W. (Heriot-Watt University) | Davies, Brian J. (BP) | Williams, Glyn J.J. (BP)
Prudent decision making in subsurface assets requires reservoir uncertainty quantification. In a typical uncertainty-quantification study, reservoir models must be updated using the observed response from the reservoir by a process known as history matching. This involves solving an inverse problem, finding reservoir models that produce, under simulation, a similar response to that of the real reservoir. However, this requires multiple expensive multiphase-flow simulations. Thus, uncertainty-quantification studies employ optimization techniques to find acceptable models to be used in prediction. Different optimization algorithms and search strategies are presented in the literature, but they are generally unsatisfactory because of slow convergence to the optimal regions of the global search space, and, more importantly, failure in finding multiple acceptable reservoir models. In this context, a new approach is offered by estimation-of-distribution algorithms (EDAs). EDAs are population-based algorithms that use models to estimate the probability distribution of promising solutions and then generate new candidate solutions.
This paper explores the application of EDAs, including univariate and multivariate models. We discuss two histogram-based univariate models and one multivariate model, the Bayesian optimization algorithm (BOA), which employs Bayesian networks for modeling. By considering possible interactions between variables and exploiting explicitly stored knowledge of such interactions, EDAs can accelerate the search process while preserving search diversity. Unlike most existing approaches applied to uncertainty quantification, the Bayesian network allows the BOA to build solutions using flexible rules learned from the models obtained, rather than fixed rules, leading to better solutions and improved convergence. The BOA is naturally suited to finding good solutions in complex high-dimensional spaces, such as those typical in reservoir-uncertainty quantification.
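The univariate, histogram-based variant described above can be sketched in a few lines. The following is an illustrative toy implementation, not the authors' code: the objective is a stand-in "misfit" to a known true model, and all parameter names and settings are invented for the example.

```python
import random

def univariate_histogram_eda(objective, dim, bounds, pop_size=60,
                             n_bins=10, select_frac=0.3, generations=40):
    """Minimize `objective` with a univariate marginal (histogram) EDA.

    Each generation: rank the population, keep the best fraction,
    estimate one histogram per variable from the survivors, then
    sample the next population from those histograms.
    """
    lo, hi = bounds
    width = (hi - lo) / n_bins
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: max(2, int(select_frac * pop_size))]
        # Per-variable histogram: count elite solutions falling in each bin.
        hists = []
        for d in range(dim):
            counts = [1e-9] * n_bins  # tiny prior keeps every bin reachable
            for sol in elite:
                b = min(int((sol[d] - lo) / width), n_bins - 1)
                counts[b] += 1
            hists.append(counts)
        # Sample a new population: pick a bin per variable, then a point in it.
        pop = []
        for _ in range(pop_size):
            sol = []
            for d in range(dim):
                b = random.choices(range(n_bins), weights=hists[d])[0]
                sol.append(lo + (b + random.random()) * width)
            pop.append(sol)
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

# Toy misfit: squared distance to a known "true model"
# (a stand-in for the history-match error of a flow simulation).
random.seed(1)
true_model = [0.3, 0.7, 0.5]
misfit = lambda m: sum((a - b) ** 2 for a, b in zip(m, true_model))
best = univariate_histogram_eda(misfit, dim=3, bounds=(0.0, 1.0))
```

A multivariate EDA such as the BOA replaces the independent per-variable histograms with a Bayesian network learned from the elite set, so that dependencies between variables are captured when sampling new candidates.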
We demonstrate the effectiveness of EDAs by applying them to the well-known synthetic PUNQ-S3 case with multiple wells. This allows us to verify the methodology in a well-controlled setting. Results show better estimation of uncertainty when compared with some traditional population-based algorithms.
Park, J.H. (Geotechnical Engineering and Tunneling Research Division, Korea Institute of Construction Technology) | Kim, D. (Geotechnical Engineering and Tunneling Research Division, Korea Institute of Construction Technology) | Kwak, K. (Geotechnical Engineering and Tunneling Research Division, Korea Institute of Construction Technology) | Chung, M. (Geotechnical Engineering and Tunneling Research Division, Korea Institute of Construction Technology) | Chung, C.K. (Department of Civil and Environmental Engineering, Seoul National University)
Resistance factors of the conventional load and resistance factor design (LRFD) are calibrated from reliability analyses using the load and resistance distributions. In pile design practice, proof pile load tests are conducted to verify the pile designs. Typically, proof pile load tests are performed until the axial load on the pile head reaches twice its design load. Considering that piles are usually designed with factors of safety higher than 2.0 (typically 3.0), pile failures are rarely observed during proof pile load tests. When installation of a large number of piles is scheduled within a site, at least several proof pile load tests are required. For a given site, the proof pile load test results, indicating how many failures were observed out of the total number of tests, can be regarded as valuable information reflecting the site characteristics. This information can be quantified by a likelihood function. Applying Bayesian theory with the obtained likelihood function enables the conventional resistance factors to be updated. Based on a pile load test database collected from Korean sites, the site-specific adaptive resistance factors calculated via Bayesian updating varied over wide ranges, 0.27 to 0.96 and 0.19 to 0.68 for target reliability indices of 2.33 and 3.0, respectively, depending on the proof pile load test results.
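The core Bayesian step can be illustrated with a minimal sketch. This is not the paper's likelihood function: it simply treats the proof-test outcome as a binomial observation of an unknown failure probability and updates a discretized prior over a grid; the test counts and the uniform prior are invented for the example.

```python
from math import comb

def posterior_failure_prob(prior_grid, n_tests, n_failures):
    """Grid-based Bayesian update of the probability p that a pile
    fails a proof load test, with a binomial likelihood.

    prior_grid: list of (p, prior_weight) pairs; weights need not sum to 1.
    Returns the posterior mean of p.
    """
    posterior = []
    for p, w in prior_grid:
        # Binomial likelihood of observing n_failures out of n_tests.
        like = comb(n_tests, n_failures) * p**n_failures * (1 - p)**(n_tests - n_failures)
        posterior.append((p, w * like))
    z = sum(w for _, w in posterior)
    return sum(p * w / z for p, w in posterior)

# Hypothetical site: uniform prior over a coarse grid of failure probabilities.
grid = [(i / 100, 1.0) for i in range(1, 100)]
p_no_fail = posterior_failure_prob(grid, n_tests=10, n_failures=0)   # 0 failures in 10 tests
p_two_fail = posterior_failure_prob(grid, n_tests=10, n_failures=2)  # 2 failures in 10 tests
```

A site with observed proof-test failures ends up with a higher posterior failure probability, which is the mechanism by which the site-specific resistance factors move within the wide ranges reported above.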
One way to mitigate formation damage is to design and execute underbalanced drilling in all phases of the operation, such as drilling, tripping and completion. Field cases of underbalanced drilling failure showed high formation damage, which motivated the need for expert systems in underbalanced drilling to achieve higher production rates.
Many underbalanced drilling operations have been analyzed, resulting in the optimum practices outlined in this paper. To the best of the authors' knowledge, no systematic guidelines for underbalanced drilling exist.
The objective of this paper is to propose a set of guidelines for optimal underbalanced drilling operations by integrating current best practices through a decision-making system based on Artificial Bayesian Intelligence. Optimum underbalanced drilling practices collected from data, models, and experts' opinions are integrated into a Bayesian network (BN) to simulate likely scenarios of its use, honoring efficient practices as certain parameters are varied.
The proposed decision-making model follows a causal, uncertainty-based approach capable of simulating realistic conditions for the use of underbalanced drilling operations. For instance, by varying the type of UBD (flow, aerated, etc.), the operation and the formation properties, the system will indicate the optimum tripping and connection procedures. The developed model also covers UBD techniques in different scenarios, such as fractured formations and low- and high-permeability formations.
The model also shows optimum solutions to problems related to underbalanced drilling, such as well control, completion, drilling multiple reservoirs with different pressures, and the selection of associated drilling equipment.
The advantage of the artificial Bayesian intelligence method is that it can be updated easily when dealing with different opinions.
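The causal, uncertainty-based reasoning described above can be illustrated with a toy discrete Bayesian network. All node names, states and probabilities below are invented for illustration and are not taken from the paper's model; the point is only the mechanism of fixing evidence on some nodes and reading off the induced probabilities on others.

```python
from itertools import product

# Toy network: Permeability -> FormationDamageRisk <- UBDType
p_perm = {"low": 0.4, "high": 0.6}          # prior on formation permeability
p_ubd = {"aerated": 0.5, "flow": 0.5}       # prior on UBD technique
# Conditional probability table: P(risk = "high" | perm, ubd) (illustrative numbers).
p_risk_high = {
    ("low", "aerated"): 0.15,
    ("low", "flow"): 0.30,
    ("high", "aerated"): 0.45,
    ("high", "flow"): 0.25,
}

def risk_given(evidence):
    """P(risk = 'high' | evidence) by full enumeration over the joint."""
    num = den = 0.0
    for perm, ubd in product(p_perm, p_ubd):
        # Skip configurations inconsistent with the fixed evidence.
        if evidence.get("perm", perm) != perm or evidence.get("ubd", ubd) != ubd:
            continue
        joint = p_perm[perm] * p_ubd[ubd]
        num += joint * p_risk_high[(perm, ubd)]
        den += joint
    return num / den

# "Vary the parameters, read off the recommendation": for a high-permeability
# formation, compare the damage risk of the two candidate UBD techniques.
r_aerated = risk_given({"perm": "high", "ubd": "aerated"})
r_flow = risk_given({"perm": "high", "ubd": "flow"})
```

The ease of updating noted above corresponds to re-estimating the conditional probability tables: when experts disagree, only the affected CPT rows change, while the network structure and the inference procedure stay the same.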