He, Jincong (Chevron Energy Technology Company) | Tanaka, Shusei (Chevron Energy Technology Company) | Wen, Xian-Huan (Chevron Energy Technology Company) | Kamath, Jairam (Chevron Energy Technology Company)
Production forecasting for oil and gas reservoirs is highly uncertain due to various subsurface uncertainties. Rapid interpretation of measurement data and rapid updating of the probabilistic forecast are crucial for reducing uncertainty and devising high-side capture or low-side mitigation plans. The traditional history-matching and forecasting workflow has a long cycle time and requires a large number of simulations. In this work, we propose a novel method to rapidly update the prediction S-curves given early production data, without performing additional simulations or model updates after the data come in.
The proposed method consists of several steps. Before the data come in, we perform an ensemble of simulations to calculate the correlation between the measurement data (e.g., BHP) and the business objective (e.g., EUR). After the data come in, we first perform a model validation step based on the Mahalanobis distance and statistical testing to determine the validity of the model given the observation data. If the model passes the validation test, principal component analysis (PCA) is applied to precondition the observation data, detecting and modifying the components of the observed response that cannot be explained by the simulated responses. Finally, an analytical formula based on a multi-Gaussian assumption is used to estimate the posterior S-curve of the business objective.
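The three post-data steps can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the ensemble sizes, the chi-squared validation test, the 95% PCA energy cutoff, and the regularization jitter are all illustrative assumptions. The final update is standard Gaussian conditioning of the objective J on the data d.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical prior ensemble: N models, nd data points each (e.g., monthly BHP),
# plus one scalar business objective per model (e.g., EUR).
N, nd = 200, 24
J = rng.normal(100.0, 15.0, size=N)                      # objective (prior samples)
D = np.outer(J, np.linspace(0.5, 1.0, nd)) \
    + rng.normal(0.0, 5.0, size=(N, nd))                 # simulated data responses

mu_d = D.mean(axis=0)
Dc = D - mu_d
C_dd = np.cov(D, rowvar=False)                           # data-data covariance
eps = 1e-6 * np.eye(nd)                                  # jitter for conditioning

# --- Step 1: model validation via Mahalanobis distance ---
d_obs = D[0] + rng.normal(0.0, 1.0, size=nd)             # stand-in observation
r = d_obs - mu_d
m2 = r @ np.linalg.solve(C_dd + eps, r)                  # squared Mahalanobis distance
p_value = 1.0 - stats.chi2.cdf(m2, df=nd)                # chi-squared test
valid = p_value > 0.05                                   # accept model if not rejected

# --- Step 2: PCA preconditioning of the observation ---
# Project d_obs onto the leading principal components of the simulated
# responses; the discarded residual is the part the model cannot explain.
U, s, Vt = np.linalg.svd(Dc, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95)) + 1
P = Vt[:k]                                               # leading PCs (k x nd)
d_proj = mu_d + P.T @ (P @ (d_obs - mu_d))               # preconditioned observation

# --- Step 3: analytical Gaussian update of the objective ---
C_Jd = Dc.T @ (J - J.mean()) / (N - 1)                   # objective-data covariance
K = np.linalg.solve(C_dd + eps, C_Jd)                    # "gain" vector
mu_J_post = J.mean() + K @ (d_proj - mu_d)               # posterior mean of J
var_J_post = J.var(ddof=1) - C_Jd @ K                    # posterior variance of J
```

The posterior mean and variance define the updated S-curve of the objective under the multi-Gaussian assumption; no simulator call is made after `d_obs` arrives, since `D` and `J` were pre-computed.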
The approach has been successfully applied in a Brugge waterflood benchmark study, in which the first two years of production data (rate and BHP) were used to update the S-curve of the estimated ultimate recovery. The study highlighted several key advantages of the proposed method. Compared with traditional history-matching methods, our method focuses on the data-objective relationship and thus circumvents the need to update the model parameters and states. The ensemble of simulations can be pre-computed, and no additional simulation is needed after the data arrive. In addition, the proposed method is insensitive to the number of parameters and does not require a numerical proxy for the simulations, which is normally needed with traditional sampling-based methods.
To the best of our knowledge, the proposed workflow, including the model validation and de-noising techniques, is novel. It is also general enough to be used in other model-based data interpretation applications.