SPE-119197-PA: Continuous Reservoir-Simulation-Model Updating and Forecasting Improves Uncertainty Quantification



TRANSCRIPT


Continuous Reservoir-Simulation-Model Updating and Forecasting Improves Uncertainty Quantification

Chang Liu, SPE, and Duane A. McVay, SPE, Texas A&M University

Summary

Most reservoir-simulation studies are conducted in a static context, at a single point in time, using a fixed set of historical data for history matching. Time and budget constraints usually result in significant reduction in the number of uncertain parameters and incomplete exploration of the parameter space, which results in underestimation of forecast uncertainty and less-than-optimal decision making. Markov Chain Monte Carlo (MCMC) methods have been used in static studies for rigorous exploration of the parameter space for quantification of forecast uncertainty, but these methods suffer from long burn-in times and many required runs for chain stabilization.

In this paper, we apply the MCMC method in a real-time reservoir-modeling application. The system operates in a continuous process of data acquisition, model calibration, forecasting, and uncertainty quantification. The system was validated on the PUNQ (production forecasting with uncertainty quantification) synthetic reservoir in a simulated multiyear continuous-modeling scenario, and it yielded probabilistic forecasts that narrowed with time. Once the continuous MCMC simulation process has been established sufficiently, the continuous approach usually allows generation of a reasonable probabilistic forecast at a particular point in time with many fewer models than the traditional application of the MCMC method in a one-time, static simulation study starting at the same time.

With continuous operation over the many years of typical reservoir life, many more realizations can be run than with traditional approaches. This allows more-thorough investigation of the parameter space and more-complete quantification of forecast uncertainty. More importantly, the approach provides a mechanism for, and can thus encourage, calibration of uncertainty estimates over time. Greater investigation of the uncertain parameter space and calibration of uncertainty estimates by using a continuous modeling process should improve the reliability of probabilistic forecasts significantly.

Introduction

A central goal in reservoir management is determining how to produce oil and gas reservoirs effectively and profitably (Thakur 1996). Reservoir simulation is regarded as a critical tool in modern reservoir management (Thomas 1986). It enables the assessment of reservoir properties and, when a forecast run is made, an assessment of future production and reserves. These assessments feed directly into the process for making reservoir-management decisions, such as whether and where to drill new wells, or whether an enhanced-recovery scheme should be implemented.

Given the considerable uncertainty in forecasting reservoir performance, it is generally accepted that quantifying the uncertainty in the forecasts enables better decision making. When quantifying the uncertainty, all possible outcomes of uncertain events should be considered and assigned probabilities in order to generate a reliable probability-density function of the result of interest (e.g., reserves). However, quantifying uncertainty in production forecasts is not a trivial undertaking. Fully assessing all the possible outcomes for a petroleum reservoir is quite challenging because the reservoir parameter space, the set of all possible combinations of reservoir parameters, is essentially infinite. It is usually necessary to reduce the number of uncertain parameters in order to make the problem tractable.

More than 30 years ago, Capen (1976) demonstrated that professionals in the petroleum industry often significantly underestimate uncertainty in their assessments. Despite the considerable work on this problem since then, more-recent study [e.g., Floris et al. (2001)] has shown that, even when we explicitly try to quantify uncertainty in simulation studies, we still tend to underestimate it. Unreliable uncertainty quantification is likely to result in poor investment decision making.

Part of the reason for unreliable uncertainty estimation is the cost involved. Reservoir studies are usually expensive because of the time and effort required to tune many parameters in order to history match a simulation model. Attempting to quantify forecast uncertainty adds to the costs. Consequently, simulation studies are usually conducted only at discrete points in the life of a reservoir (e.g., when considering a major investment). As such, less-significant reservoir-management decisions often do not warrant the expense of a simulation study and thus must proceed without simulation results (or without simulation results from an up-to-date, calibrated model). This could lead to suboptimal operations and potentially significant economic consequences.

This paper addresses two needs: (1) the need for improved reliability in uncertainty quantification and (2) the need for a calibrated simulation model and up-to-date probabilistic forecasts available at any time.

Background

Uncertainty-Quantification Techniques. In the last 10 years, significant work on developing more-rigorous uncertainty-quantification techniques has been presented in the literature. The PUNQ work was a thorough treatment of uncertainty quantification in production forecasts in which several industrial and academic partners used different methods to quantify uncertainty (Bos 1999; Floris et al. 2001). The overall objective of the PUNQ project was to determine whether a method can be developed that propagates the combined reservoir-modeling, reservoir-parameter, and well-observation uncertainties to production-forecasting uncertainty in a formally unbiased way.

Quantifying the uncertainty of production forecasts is integrally tied to history matching. Performing a history match and quantifying forecast uncertainty consist of several steps. First, the reservoir is defined in terms of a parameter set describing the geometry and flow properties. Next, the uncertain parameters of the reservoir are identified and assigned prior probability distributions. In the history match, reservoir models are sampled from the prior distributions, and the difference between simulation results and observed data is quantified in an objective function. The multivariate posterior distribution resulting from the history match is then used in quantifying forecast uncertainty. Many methods exist for history matching (i.e., searching reasonable models to minimize the objective function). One group is gradient techniques. The primary goal of these methods is optimization, which means the result is often the one reservoir model that best fits the observed data. Gradient methods can be suboptimal because they can be trapped in local minima rather than finding the global minimum. Uncertainty quantification may be suboptimal with the gradient method because it involves sampling only in the neighborhood of one or more minima in the objective function.

Copyright 2010 Society of Petroleum Engineers

This paper (SPE 119197) was accepted for presentation at the SPE Reservoir Simulation Symposium, The Woodlands, Texas, USA, 2-4 February 2009, and revised for publication. Original manuscript received for review 6 July 2009. Revised manuscript received for review 7 May 2010. Paper peer approved 12 May 2010.

    Another group is global optimization techniques. These attempt to circumvent the local-minima problem by sampling from the overall distribution. One class of global optimization techniques is the genetic algorithm (GA) (Goldberg 1989), which has a variety of applications. GAs are based loosely upon the rules that govern genetics in nature. In a GA, generations of unique reservoir models are created by mixing parameter values of previously run models in a process known as breeding. Models sampled by the GA can be used for history matching and quantification of forecast uncertainty (Romero et al. 2000).

The MCMC sampling technique is similar to the genetic algorithm in that the next model in the chain relies on some properties of the previous one. However, MCMC is statistically more rigorous. The method has been used widely as a strong tool to sample from a complicated distribution function, especially when we do not know the exact form of that function. In reservoir-modeling research, the MCMC method has been applied as a method for exploring posterior distributions in Bayesian inference (Wadsley 2005; Ma et al. 2006). First, a random model is sampled from a prior distribution, which is the starting point of the Markov chain. Then, the next model is chosen randomly with some constraints related to the previous model already in the chain. After the chain is run long enough, we are able to use the models in the chain to generate the posterior distribution. Unfortunately, MCMC methods often suffer from long burn-in times and require many runs for chain stabilization (Meyn and Tweedie 1993). Burn-in is the sampling that occurs between the random initial value and chain stabilization (i.e., the point in the chain when it begins to sample correctly from the target distribution). The burn-in period is usually characterized by a declining objective-function value, and it is discarded while the rest of the chain is used to represent the target distribution.
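Operationally, the burn-in cutoff is usually picked from the objective-function trace itself. The following is a minimal heuristic sketch (our illustration, not a method from the cited literature), assuming the trace is available as an array:

```python
# Heuristic burn-in cutoff: discard the leading part of the chain while a
# moving average of the objective function is still declining. Illustrative
# only; window and tolerance would be tuned per problem.
import numpy as np

def burn_in_cutoff(obj_values, window=500, tol=0.0):
    """Return the first chain index after which the moving-average
    objective stops declining by more than `tol` over one window."""
    o = np.asarray(obj_values, dtype=float)
    ma = np.convolve(o, np.ones(window) / window, mode="valid")
    drops = ma[:-window] - ma[window:]      # decline over one window
    stable = np.nonzero(drops <= tol)[0]
    return int(stable[0]) if stable.size else len(o)
```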

Reservoir Monitoring and Ensemble Kalman Filter (EnKF). The EnKF is a Monte Carlo approach used in weather forecasting (Bishop et al. 2001) and is being investigated actively in the petroleum industry for continuous reservoir monitoring and model updating and for quantifying forecast uncertainty (Nævdal et al. 2005; Gu and Oliver 2005; Bianco et al. 2007; Devegowda et al. 2007; Aanonsen et al. 2009). In both weather modeling and reservoir modeling, the EnKF starts with an ensemble of models conditioned to all available data at some start time. When new data become available, the EnKF updates each of these models using statistical information derived from the ensemble of models and model-predicted data. The assimilation of data in the EnKF is typically done sequentially rather than simultaneously. That is, to assimilate new data, the forward model has to be run only from the previous data assimilation, rather than from the start time. The EnKF generates a suite of plausible model realizations, conditioned to all static and dynamic data, which can be used in making probabilistic forecasts.

    The EnKF appears to be well suited for weather modeling. First, in weather modeling there is no porous medium, whose properties we are trying to determine, through which the fluid flows. The state variables are properties of the fluid, such as pressure, temperature, and relative humidity. Second, weather is an ongoing dynamic process, with no definite start and end times. The EnKF can be started at any time by estimating the values of the state variables and modeling from that point forward.

Petroleum-reservoir modeling is fundamentally different from weather modeling in these two respects. First, in reservoir modeling, in addition to determining the dynamic state variables (pressure and saturation), we are also trying to determine the unknown properties (e.g., permeability k and porosity φ) of an essentially static porous medium through which the fluid flows. These static properties also become state variables, which makes the reservoir problem more difficult to solve in this respect. Second, petroleum reservoirs have a definite start time and a finite life. The start time usually corresponds to an equilibrium state, for which it is much easier to estimate the initial values of the state variables than for the initial dynamic state of a weather system. In this respect, the reservoir problem is easier to solve.

The EnKF is computationally efficient, relatively easy to implement, and has manageable storage requirements for large problems. Being able to assimilate data sequentially is one of the key attractions of the EnKF for use in reservoir monitoring. However, sequential data assimilation is also responsible for one of its disadvantages: namely, that static properties such as k and φ change with time in individual ensemble members, which is physically unrealistic. While this inconsistency in reservoir properties can be resolved by rerunning the simulator from time zero using the updated parameters, forecasts from the last data assimilation may not always be consistent with the model rerun from time zero, because consistency is guaranteed only for linear systems and petroleum-reservoir systems are usually nonlinear (Aanonsen et al. 2009). The EnKF is thus an approximation for most reservoir problems. Rerunning from time zero also decreases the computational efficiency of the method. The EnKF can also produce nonphysical results, such as saturations outside the range of 0 to 1, which is troublesome, and there are many remaining challenges associated with nonlinear systems, non-Gaussian priors, and application to large-scale field problems (Aanonsen et al. 2009). The EnKF currently appears to be the most promising method available for reservoir monitoring and closed-loop reservoir management, and the literature on the EnKF is vast. However, much of that literature is written to address the many difficulties in its implementation. All this leads us to the question: are there alternatives to the EnKF that should also be considered for continuous reservoir monitoring and forecast-uncertainty quantification? In particular, are there alternative methods that allow us to take advantage of the fact that we can easily model the complete finite life of a petroleum reservoir, which helps to avoid some of the difficulties of the EnKF as it is typically employed, such as nonphysical results and changes in static properties with time?

Continuous Approach. Sequential-data-assimilation methods such as the EnKF are being investigated to quantify forecast uncertainty because, currently, simultaneous-data-assimilation methods are too inefficient in the context of traditional, one-time simulation studies. Because the static data are sparse, the uncertain reservoir-parameter space is usually extremely large, even with a coarse parameterization. Because we cannot test every possible model, most techniques attempt to quantify uncertainty with a minimum number of simulation runs (Bos 1999). Techniques such as gradient methods attempt to quantify uncertainty using a few hundred runs to find one or more minima, whereas MCMC and GA applications typically employ thousands of runs to obtain reasonable uncertainty ranges. While all these techniques would benefit from more samples, there are practical time and budget limitations, because these uncertainty-quantification methods typically are applied in the context of one-time studies (fixed period of history data and fixed prediction period).

Holmes et al. (2007) proposed using a continuous simulation process to mitigate this problem. They proposed a continuous process of history matching and forecasting, incorporating new data when available, over the entire span of a reservoir's life. They demonstrated a multiyear continuous-simulation experiment (with actual time compressed considerably), using the PUNQ synthetic reservoir, in which history matching was started at 4 years into the reservoir's life and continued until 9 years, incorporating new dynamic data into the history matching as they became available. They used a GA-based sampling approach, with each sample consisting of a complete simulation run from time zero to the end of history (i.e., the current end of history at the actual time of each run). The model and objective function used in the GA included all the data available at the actual time of each run. Despite the lack of statistical rigor in their GA-based sampling method, the production forecast for the PUNQ synthetic reservoir bracketed the truth case and showed an uncertainty range similar to those of other published studies (Bos 1999). Holmes et al. (2007) also demonstrated the continuous simulation process in a 3-month-long live field test with real-time data incorporation into the simulation process. With large simulation models, this continuous approach offers the potential to make tens of thousands of simulation runs over the life of the reservoir; with smaller models, hundreds of thousands or millions of runs may be possible. Taking advantage of time and running many more simulation models should yield a more thorough exploration of the parameter space and better probabilistic forecasts. The authors did not investigate the primary limitation of a continuous-simulation approach: namely, that the objective-function definition changes over time as more data are assimilated.

The objectives of our work were to further investigate the merits of a continuous-reservoir-modeling process for forecast-uncertainty quantification and to improve upon the method proposed by Holmes et al. (2007). Specifically, we incorporated MCMC, a more statistically rigorous sampling method, and we investigated the impact of a changing objective-function definition over time. In the remainder of this paper, we describe our implementation of an MCMC-based continuous-reservoir-modeling method and then demonstrate its application to the PUNQ synthetic reservoir model.

Continuous Simulation Process

Our continuous simulation process for history matching and generating probabilistic forecasts requires the combination of several components. First, the uncertain reservoir parameters and their prior distributions must be determined. An objective function is then defined to quantify the difference between simulation results and observed data. We apply the MCMC method to sample the uncertain parameter space and generate reservoir models, and code was generated to run the simulations automatically. Each MCMC sample consists of a complete simulation run from time zero to the end of history, assimilating all the data available at the time of each run. Thus, static reservoir properties remain constant throughout each run, which avoids one of the primary weaknesses of the EnKF method. A forecast run is made along with the history-match run for each sampled model, and the results are stored. As the continuous history-matching process proceeds, new data are obtained from the field and are added to the objective function over time. The updated objective function is then employed in subsequent sampling. Finally, the results of individual forecast runs are combined into probabilistic forecasts. The details of these components are described in the following sections.
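The components just listed can be viewed as a single loop. The sketch below is our own schematic of that loop, not code from the paper; sample_prior, propose, run_simulator, get_new_data, and objective are hypothetical placeholders, and the acceptance test assumes the Gaussian posterior of Eqs. 2 and 3 below, so that the Metropolis ratio reduces to exp[-0.5 (O_proposed - O_current)].

```python
# Schematic of the continuous MCMC process: data acquisition, model
# calibration, forecasting, and uncertainty quantification in one loop.
# All callables are assumed placeholders, not the authors' implementation.
import numpy as np

def continuous_mcmc(sample_prior, propose, run_simulator, get_new_data,
                    objective, n_steps):
    d_obs = []                        # observed data, grows over time
    m = sample_prior()                # starting model (random prior sample)
    chain, forecasts = [], []
    o_curr = None
    for step in range(n_steps):
        new_data = get_new_data(step)          # may be empty this step
        if new_data:
            d_obs.extend(new_data)             # objective definition changes
            o_curr = None                      # re-evaluate current model
        if o_curr is None:
            hist, fcst = run_simulator(m)      # complete run from time zero
            o_curr = objective(hist, d_obs)
        m_prop = propose(m)                    # random-walk proposal
        hist_p, fcst_p = run_simulator(m_prop)
        o_prop = objective(hist_p, d_obs)
        if np.random.rand() < min(1.0, np.exp(-0.5 * (o_prop - o_curr))):
            m, o_curr, fcst = m_prop, o_prop, fcst_p   # accept proposal
        chain.append(m)                # accepted model or repeated current one
        forecasts.append(fcst)         # forecast stored with each sample
    return chain, forecasts
```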

Parameterization. Before running any simulations, it is necessary first to determine which uncertain parameters should be considered. Once identified, we then assign prior distributions (usually continuous) to quantitatively represent the uncertainty in these parameters. The type of distribution that is assigned is usually based on reservoir-characterization data. The continuous-simulation process is not tied to any particular parameterization method. The case studies presented in this paper use constant multiplier regions, the traditional approach to simulation history matching. The regions are selected to be consistent with the known geological character of the reservoir.

Objective Function. The posterior distribution is defined in a Bayesian framework, where the prior distribution is revised, conditioned on the observed dynamic data,

$$P(m \mid d_{obs}) \propto P(d_{obs} \mid m)\, P(m), \qquad (1)$$

where $d_{obs}$ represents the observed dynamic data, such as water cut from the field, and $m$ represents the uncertain parameters. $P(m)$ represents the prior probability distribution of the uncertain parameters, $P(d_{obs} \mid m)$ is the likelihood function related to our observed data, and $P(m \mid d_{obs})$ is the posterior distribution. We make the common assumption that the prior model and the data errors follow Gaussian distributions, even though any distributional form could be used in theory. Gaussian distributions are assumed for computational efficiency and because, at least for the prior, reservoir properties such as porosity and log(permeability) are usually normally distributed. We often do not know the distribution form for data errors, so a normal distribution is a reasonable assumption. With this assumption, the posterior distribution $P(m \mid d_{obs})$ has the following form (Howson and Urbach 1993):

$$P(m \mid d_{obs}) \propto \exp\left\{-\frac{1}{2}\left[(m - m_{pr})^T C_m^{-1} (m - m_{pr}) + \left(g(m) - d_{obs}\right)^T C_D^{-1} \left(g(m) - d_{obs}\right)\right]\right\}, \qquad (2)$$

where $g(m)$ is the simulated reservoir response corresponding to the proposed $m$, $m_{pr}$ is the prior mean of the parameters, $C_m$ is the parameter covariance, and $C_D$ is the data covariance.

Although history matching is conducted using the posterior distribution, it is convenient later in the paper to refer to the more traditional objective function. This term is a subset of the posterior-distribution function,

$$O(m) = (m - m_{pr})^T C_m^{-1} (m - m_{pr}) + \left(g(m) - d_{obs}\right)^T C_D^{-1} \left(g(m) - d_{obs}\right), \qquad (3)$$

where $(m - m_{pr})^T C_m^{-1} (m - m_{pr})$ is the prior term and $\left(g(m) - d_{obs}\right)^T C_D^{-1} \left(g(m) - d_{obs}\right)$ quantifies the mismatch between the model and the observed dynamic data. Thus, as a consequence of the posterior-distribution construction under the Bayesian framework, the objective function considers both prior static data and observed dynamic data.
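For concreteness, Eq. 3 (and, through it, the exponent of Eq. 2) transcribes directly into NumPy. This is a minimal sketch under the stated Gaussian assumptions; the argument names and the use of precomputed inverse covariances are our choices, not the paper's code:

```python
# Direct transcription of Eq. 3; inputs are NumPy arrays. m_pr is the prior
# mean, C_m_inv and C_D_inv the inverse parameter/data covariances, g_m the
# simulated response for model m, d_obs the observed dynamic data.
import numpy as np

def objective(m, m_pr, C_m_inv, g_m, d_obs, C_D_inv):
    dm = m - m_pr
    dd = g_m - d_obs
    prior_term = dm @ C_m_inv @ dm    # mismatch with the prior
    data_term = dd @ C_D_inv @ dd     # mismatch with observed dynamic data
    return prior_term + data_term

def log_posterior(m, m_pr, C_m_inv, g_m, d_obs, C_D_inv):
    # Unnormalized log of Eq. 2
    return -0.5 * objective(m, m_pr, C_m_inv, g_m, d_obs, C_D_inv)
```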

Metropolis-Hastings (M-H) MCMC Algorithm. The main objective of the MCMC method is to construct a Markov chain whose stationary distribution matches the posterior distribution. The posterior distribution is typically defined on a high-dimensional parameter space and often has multiple modes. We use the M-H MCMC approach (Hastings 1970), which is often applied to sample from complicated posterior distributions. The random-walk M-H sampling process is as follows (a code sketch follows the list):

(1) Randomly sample a set of parameters from the prior distribution, denoted as $m_{t_1}$.

(2) From state $t_i$ to state $t_{i+1}$, propose $m_{t_{i+1}} = m_{t_i} + \sigma\epsilon$, where $\epsilon$ is a standard normal random variable and $\sigma$ is a scale factor, and compute the acceptance ratio

$$R = \min\left(1, \frac{P(m_{t_{i+1}} \mid d_{obs})}{P(m_{t_i} \mid d_{obs})}\right).$$

(3) Randomly draw a number $y$ from a uniform distribution between 0 and 1. If $y \le R$, accept $m_{t_{i+1}}$ into the chain. If $y > R$, put $m_{t_i}$ in the chain again.

(4) Go back to Step 2.
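A minimal Python transcription of Steps 1 through 4 follows, assuming a log_posterior callable that returns log P(m|d_obs) up to an additive constant (for Eq. 2, -0.5 O(m)); this is a sketch of the standard random-walk M-H recipe, not the authors' code:

```python
# Random-walk Metropolis-Hastings following Steps 1-4 above.
import numpy as np

def random_walk_mh(log_posterior, m0, sigma, n_samples, rng=None):
    rng = rng or np.random.default_rng()
    m = np.asarray(m0, dtype=float)     # Step 1: prior sample as start
    chain = [m]
    logp = log_posterior(m)
    for _ in range(n_samples - 1):
        # Step 2: propose m' = m + sigma * eps, with eps ~ N(0, I)
        m_prop = chain[-1] + sigma * rng.standard_normal(m.shape)
        logp_prop = log_posterior(m_prop)
        log_r = min(0.0, logp_prop - logp)   # log of R = min(1, ratio)
        # Step 3: accept with probability R; otherwise repeat current model
        if np.log(rng.uniform()) <= log_r:
            chain.append(m_prop)
            logp = logp_prop
        else:
            chain.append(chain[-1])
        # Step 4: continue the loop
    return np.array(chain)
```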

Continuous Data Assimilation. At various points in time during the continuous simulation process, new data from the field become available. It is advantageous to include new data in the process as quickly as possible, because it is generally assumed that more information from the field leads to better forecasts and assessments of uncertainty. As more data are added, Eq. 3 will include more observed and simulation data points. Thus, the definition of the observed-data-misfit term in our objective function will change with time. A changing objective function in the MCMC method is not statistically rigorous. A large change in the objective-function definition (because of, for example, breakthrough in one or more wells) may cause a large shift in the posterior distribution, which may result in a new burn-in period as the chain moves toward the new posterior distribution (effectively starting the chain over). However, in a continuous-simulation process, new data assimilations will usually result in small incremental changes in the objective-function definition and, thus, only small shifts in the posterior distribution. It is our hypothesis that new burn-in periods associated with new data assimilations will be minimal and that the process may actually benefit by running continuously over time. That is, after a data assimilation at a particular point in time, fewer models may be required to achieve a stable, representative chain with a continuous process as compared to a new MCMC chain that is started from a random sample with the same observed data available at that time. This is because, in the continuous process, the chain continues from a chain established before the data assimilation that is close enough, owing to the only small differences in objective-function definition. In quantifying forecast uncertainty, we anticipate being able to use longer, although approximate, chains than would be achievable from traditional one-time MCMC studies (because of time constraints). We hypothesize that reasonable probabilistic forecasts can be generated despite the approximation in the MCMC process.
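In implementation terms, a data assimilation in this continuous process amounts to appending the new observations and their error statistics to d_obs and C_D, which redefines the misfit term of Eq. 3 for all subsequent sampling. A sketch, assuming independent measurement errors so that C_D is diagonal (an assumption of this illustration):

```python
# One data assimilation: extend d_obs and the data-error standard
# deviations; the (diagonal) inverse data covariance is then rebuilt.
import numpy as np

def assimilate(d_obs, sigma_d, new_values, new_sigmas):
    d_obs = np.concatenate([d_obs, np.asarray(new_values, dtype=float)])
    sigma_d = np.concatenate([sigma_d, np.asarray(new_sigmas, dtype=float)])
    C_D_inv = np.diag(1.0 / sigma_d**2)
    return d_obs, sigma_d, C_D_inv
```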

Probabilistic Forecasts. A forecast run is made along with the history-match run for each sampled model. This is usually a single base-case forecast run with extension of current operating conditions. If alternative operating scenarios are being considered, they can be investigated by making multiple forecast runs for each history-match run, with appropriate use of restart files. The individual forecast results are stored for later use. The final step in the process is to combine the production forecasts for individual Markov-chain samples into a probabilistic forecast when desired. In the continuous MCMC process, probabilistic forecasts can be generated at any time by using a sufficient number of the most recently sampled models. If the frequency of data assimilation is low and run times are fast enough, then it may be possible to use for prediction only those models that have been matched to all data to date and that are past burn-in (if there is any burn-in). If the data-assimilation frequency is high and/or run times are long, then it may be necessary to include for prediction some models from before the most recent data assimilation (an approximate chain, as discussed in the preceding subsection). A balance will ordinarily need to be struck between using a small number of models, to minimize the impact of the changing objective function, and a large number of models, to maximize the chance of obtaining a stationary chain distribution that is representative of the posterior distribution. The impact of using an approximate chain on probabilistic forecasts is investigated and discussed in the following case study.
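As a sketch of this final step, assuming each sample's forecast has already been reduced to a scalar (e.g., cumulative production at 16.5 years) and that only the most recent n_keep models are used, the quantiles and empirical CDFs reported in the case study could be computed as follows:

```python
# Combine per-sample forecast results into a probabilistic forecast:
# P10/P50/P90 quantiles and an empirical CDF from the most recent models.
import numpy as np

def probabilistic_forecast(forecasts, n_keep):
    recent = np.asarray(forecasts, dtype=float)[-n_keep:]
    p10, p50, p90 = np.percentile(recent, [10, 50, 90])
    x = np.sort(recent)                          # CDF support
    cdf = np.arange(1, x.size + 1) / x.size      # empirical CDF values
    return (p10, p50, p90), (x, cdf)
```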

Case Study of PUNQ-S3 Reservoir

We used the PUNQ-S3 synthetic reservoir as a test case for our proposed new method. The PUNQ-S3 is ideal because it has been studied by many authors with a focus on quantifying uncertainty in production forecasts. Results generated using a variety of history-matching and uncertainty-quantification techniques have been compared in several publications [e.g., Floris et al. (2001); Barker et al. (2001)]. The PUNQ-S3 is also an ideal test case because it is relatively small and runs very quickly; multiyear continuous-simulation experiments can be run in practical time periods. The reservoir is a five-layer, three-phase synthetic reservoir based on an actual field operated by Elf (Bos 1999). The reservoir has a domal structure, is bounded by a fault to the east and south, and is in communication with a fairly strong aquifer to the northwest (Fig. 1). A small gas cap overlies the oil column. The six producing wells are all located near the gas/oil contact. The simulation grid is 19×28×5, with 1,761 active gridblocks.

Parameterization. In the PUNQ problem, the only uncertain reservoir properties are the porosity and permeability distributions. The process we used for identifying uncertain parameters and assigning prior distributions is consistent with previous applications in the PUNQ studies (Bos 1999; Floris et al. 2001). In our PUNQ-S3 model, we assumed that porosity is normally distributed, while permeability is log-normally distributed. Instead of sampling porosity and permeability values directly, the uncertain parameters used in our study are porosity and permeability multipliers. These multipliers are applied to base porosity and permeability values specified in the simulation data set. The effect is the same as sampling porosity and permeability values directly, but this approach simplifies the implementation. The base reservoir description consists of uniform porosity and permeability values for each layer. On the basis of the constant average porosity values given by Barker et al. (2001), horizontal and vertical permeabilities by layer were calculated using Eqs. 4 and 5 (Gu and Oliver 2005) (Table 1).

$$\log_{10}(k_h) = 9.02\,\phi + 0.77, \qquad (4)$$

$$k_v = 0.31\,k_h + 3.12. \qquad (5)$$
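Eq. 5 can be checked against Table 1 below: applied to the tabulated horizontal permeabilities, it reproduces the vertical-permeability column to within about 1 md (rounding). The sketch below evaluates both correlations and is illustrative only; the tabulated base values need not follow Eq. 4 exactly.

```python
import numpy as np

def kh_from_phi(phi):
    """Eq. 4: log10(kh) = 9.02*phi + 0.77, kh in md."""
    return 10.0 ** (9.02 * np.asarray(phi, dtype=float) + 0.77)

def kv_from_kh(kh):
    """Eq. 5: kv = 0.31*kh + 3.12, in md."""
    return 0.31 * np.asarray(kh, dtype=float) + 3.12

# Check Eq. 5 against Table 1's horizontal-permeability column:
kh_table = [432.0, 33.0, 432.0, 196.0, 654.0]
print(kv_from_kh(kh_table))   # [137.04  13.35 137.04  63.88 205.86]
                              # cf. Table 1: 137, 13, 137, 64, 205
```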

We parameterized the PUNQ-S3 model using six homogeneous regions per layer (Fig. 2). The regions are based on the geological description, which indicates that the reservoir is marked by wide northwest/southeast-trending high-quality streaks in Layers 1, 3, and 5 (Bos 1999; Floris et al. 2001). We considered the regions to be independent of each other. A total of 90 uncertain parameters (m in the objective function, Eq. 3) exist: 5 layers × 6 regions per layer × 3 properties (porosity, vertical-permeability, and horizontal-permeability multipliers) per region. The prior distribution for the porosity multiplier was a normal distribution with median 1 and standard deviation of 0.3 (Barker et al. 2001). To prevent extreme and unrealistic values, the porosity-multiplier distribution was truncated at a maximum value of 2.28 and a minimum value of 0. On the basis of Eqs. 4 and 5, both vertical- and horizontal-permeability multipliers were assigned log-normal distributions with median 1 and standard deviation of 1.35. The permeability-multiplier distributions were truncated at a maximum value of 4 and a minimum value of 0. The prior cumulative distribution functions (CDFs) used for porosity and permeability multipliers are shown in Figs. 3 and 4, respectively.

Fig. 1: Structure of the PUNQ synthetic reservoir.

TABLE 1: BASE POROSITY AND PERMEABILITY VALUES BY LAYER FOR THE PUNQ-S3 RESERVOIR MODEL

Layer   Porosity   Horizontal Permeability (md)   Vertical Permeability (md)
1       0.17       432                            137
2       0.08       33                             13
3       0.17       432                            137
4       0.16       196                            64
5       0.19       654                            205
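A sketch of sampling these truncated priors by simple rejection follows. Treating the quoted 1.35 as the log-space standard deviation of the log-normal multipliers (so the median stays at 1) is our reading, not a detail stated in the paper:

```python
# Rejection sampling of the truncated prior multipliers described above.
import numpy as np

def sample_truncated(draw, lo, hi, rng, max_tries=10_000):
    """Draw repeatedly until the sample lands in [lo, hi]."""
    for _ in range(max_tries):
        x = draw(rng)
        if lo <= x <= hi:
            return x
    raise RuntimeError("truncation bounds too tight")

rng = np.random.default_rng(0)
# Porosity multiplier: Normal(median 1, sd 0.3), truncated to [0, 2.28]
poro_mult = sample_truncated(lambda r: r.normal(1.0, 0.3), 0.0, 2.28, rng)
# Permeability multiplier: log-normal with median 1 (ln-mean 0),
# assumed ln-sd 1.35, truncated to [0, 4]
perm_mult = sample_truncated(
    lambda r: r.lognormal(mean=0.0, sigma=1.35), 0.0, 4.0, rng)
```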

Static Case. To provide a basis of comparison for the continuous MCMC approach, we first ran a static MCMC case. In this static case, we history matched all observed data simultaneously at 9 years. Thus, the analysis was done as if we were conducting a traditional, one-time simulation study at a time of 9 years. The available observed data (d_obs in the objective function, Eq. 3) consisted of well bottomhole pressure (WBHP), well gas/oil ratio (WGOR), and well water-cut (WWCT) data from field inception through 8.04 years (Table 2) (Bos 1999). We used measurement errors consistent with Bos (1999) as well. The standard deviation for shut-in pressure was set three times smaller than that for flowing pressure (1 and 3 bar, respectively). The standard deviation for gas/oil ratio was set at 10% before gas breakthrough and 25% after gas breakthrough. Similarly, for water cut, 2% before and 5% after water breakthrough were used. A total of 17,400 models were run in this static MCMC chain. The objective function shows a general, though noisy, decline during the burn-in period of approximately 7,000 models, after which the chain stabilizes, or levels off (Fig. 5 shows the objective function vs. model number for the first 9,000 models of the static case). An individual forecast run to 16.5 years was made along with every history-match run. The forecast run was a base-case forecast in which no changes were made to the operating conditions. The 10, 50, and 90% quantiles for forecasted production at 16.5 years, obtained using the approximately 10,400 models in the chain after the burn-in period, are shown as MCMC-static in Fig. 6, which compares our probabilistic forecasts to forecasts published in Floris et al. (2001) and forecasts generated using the EnKF method (Gu and Oliver 2005). The static MCMC forecast compares reasonably well with the MCMC forecast from the Norwegian Computing Centre (NCC), shown as NCC-MCMC in Fig. 6 (Barker et al. 2001). Both our static MCMC forecast and the NCC-MCMC forecast perform very well in comparison to most of the other methods used.
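The measurement-error model just described maps directly onto the diagonal of C_D. The sketch below encodes it; interpreting the gas/oil-ratio and water-cut percentages as relative errors is our assumption for illustration:

```python
# Per-datum error standard deviations for building a diagonal C_D,
# following the error model described above (Bos 1999).

def pressure_sigma(shut_in: bool) -> float:
    # 1 bar for shut-in pressure, 3 bar for flowing pressure
    return 1.0 if shut_in else 3.0

def gor_sigma(gor: float, after_gas_breakthrough: bool) -> float:
    # 10% before gas breakthrough, 25% after (relative errors assumed)
    return (0.25 if after_gas_breakthrough else 0.10) * gor

def wct_sigma(wct: float, after_water_breakthrough: bool) -> float:
    # 2% before water breakthrough, 5% after (relative errors assumed)
    return (0.05 if after_water_breakthrough else 0.02) * wct
```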

Continuous Case. We next simulated a real-time modeling scenario in which the PUNQ-S3 reservoir was continuously simulated from 4.5 to 10 years. In other words, we assumed that we started history matching and forecasting at an actual time of 4.5 years in the reservoir's life and continued this process until 10 years, an actual time period of 5.5 years. We assumed that 9,000 model runs could be made in a year of actual time and, thus, made 49,500 simulation runs over the actual 5.5-year time frame. This equates to approximately one run per hour, which is not atypical in the petroleum industry today for fieldwide simulation runs. Since the PUNQ-S3 model actually takes only seconds to run, we were able to run the 5.5-year experiment in a matter of days.

The observed data, d_obs, available with time are shown in Table 2. During the experiment, data assimilations were conducted at 4.5, 5, 6, 7, 8, and 9 years. At each data assimilation, we added to the objective function all of the new observed data obtained during the previous year, except at 4.5 years (4.5 years of data) and 5 years (half a year of data). Thus, the number of observed data points in the objective function increases with time as we add new data, which causes the objective-function value to increase with time (Fig. 7). With the possible exception of the third period, between 6 and 7 years, no significant burn-in period (continuously declining objective function) is observed. The third period corresponds to the time of gas breakthrough in two wells. This indicates that longer burn-in times may be required if new data provide significantly different and unique information about the reservoir. However, because the other data assimilations involve small incremental changes, they apparently benefit from previous continuous runs and experience little burn-in.

Fig. 2: Multiplier regions (distance X vs. distance Y, in meters).

Fig. 3: Prior CDF for porosity multiplier.

Fig. 4: Prior CDF for permeability multiplier.

Fig. 5 compares the objective functions for the static MCMC case and the continuous MCMC case for the 9,000 models sampled between 9 and 10 years with all observed data. For the static case, this assumes that we decided to start a typical, one-time simulation study at 9 years. We assumed that the simulation study lasted 1 year, ending at 10 years. We started a new MCMC chain and, over the year, were able to make 9,000 simulation runs, given the approximately 1-hour run time. For the continuous case, we assumed that we had been history matching continuously since 4.5 years and that we simply continued this process from 9 to 10 years. Thus, the comparison is assumed to be at an actual time of 10 years and is made between the first 9,000 models from the static case described in the preceding section and models 40,501 through 49,500 from the continuous case (Fig. 5). As discussed in the preceding section, the static-case objective function decreases for most of the period (the first 7,000 models), indicating a significant burn-in period. In the continuous case, we do not see an early-time portion in which the objective function decreases significantly. The chain appears to be relatively stable throughout this year-long period, apparently benefiting from all the continuous runs made before this time period, even though these prior runs were made considering fewer observed data. Thus, it appears that, once the continuous MCMC process has been established sufficiently, fewer samples will be required to generate a stationary chain at any particular time with the continuous MCMC process than with a static MCMC process starting at the same time by sampling randomly from the prior distribution. While significant burn-in could be experienced if new data provide significantly different information about the reservoir, we expect the burn-in time to be less than that required for a static MCMC case starting at that time, owing to the benefit of the previous runs made. At worst, it would be the same burn-in time as the static case. We expect the overall benefit, in terms of reduced burn-in times, to be significant because most data assimilations will usually involve small, incremental changes.

In the continuous MCMC case, an individual forecast run to 16.5 years was made along with every history-match run. Again, the individual forecast run was a base-case forecast in which no changes were made to the operating conditions. Since the PUNQ problem is synthetic, accounting for the difference between assumed future operating conditions and actual operating conditions at a later time is not an issue, since these conditions are identical. This will not be the case in an actual reservoir. Probabilistic forecasts were made at 5, 6, 7, 8, 9, and 10 years using sampled models in the chain available at these respective times. These probabilistic forecasts were created by taking only the 9,000 runs made over the past year (or 4,500 runs over the past half-year at 5 years). The CDFs of these forecasts are shown together in Fig. 8 and are discussed further below.

TABLE 2: OBSERVED DATA (EACH SHADED SEQUENCE CORRESPONDS TO A DIFFERENT DATA ASSIMILATION IN THE CONTINUOUS CASE)

Time (days)   Time (years)   Time Data Assimilated (years)   WBHP (BARSA)   WGOR (Sm3/Sm3)   WWCT (Sm3/Sm3)
1.01          0.00           4.50                            6              -                -
91            0.25           4.50                            6              -                -
182           0.50           4.50                            6              -                -
274           0.75           4.50                            6              -                -
366           1.00           4.50                            6              -                -
1,461         4.00           4.50                            6              -                -
1,642         4.50           4.50                            -              1                -
1,826         5.00           5.00                            6              5                -
1,840         5.04           6.00                            6              -                -
1,841         5.04           6.00                            -              1                -
2,008         5.50           6.00                            -              2                -
2,192         6.00           6.00                            6              4                -
2,206         6.04           7.00                            6              -                -
2,373         6.50           7.00                            -              2                -
2,557         7.00           7.00                            6              4                -
2,571         7.04           8.00                            6              -                -
2,572         7.04           8.00                            -              -                1
2,738         7.50           8.00                            -              2                1
2,922         8.00           8.00                            6              4                6
2,936         8.04           9.00                            6              -                -

Fig. 5: Comparison of objective-function value between static case and continuous case, with 9,000 models made between 9 and 10 years.

Fig. 6 compares our continuous MCMC probabilistic forecasts to our static MCMC forecasts, the forecasts published in Floris et al. (2001), and the EnKF forecasts (Gu and Oliver 2005). All the forecasts shown were made using all the observed data through 8.04 years, including the EnKF methods, except for our continuous MCMC forecasts made at different points in the history. The results are displayed in the form of 10, 50, and 90% quantiles of the predicted probability density function (PDF). The median (P50, denoted by the horizontal line on each vertical bar in Fig. 6) of our continuous MCMC forecast made using the 9,000 samples between 9 and 10 years, which includes all the history data, is very close to the truth case. The P10-P90 range for this case is smaller than that of any other published forecast, including the NCC-MCMC forecast. This continuous MCMC forecast CDF between 9 and 10 years is narrower than the static MCMC forecast CDF made using the first 9,000 models between 9 and 10 years (Fig. 9), which is not surprising because these 9,000 static MCMC models include burn-in (Fig. 5). However, this continuous MCMC forecast range between 9 and 10 years is also narrower than the static MCMC forecast made using the 10,400 models after burn-in (Fig. 6). Again, we believe that the continuous MCMC performs well because it benefits from all the continuous runs made before this time period, even though these prior runs were made considering fewer observed data. We note that the continuous MCMC approach performs well despite the relatively coarse regional parameterization that was used. Horizontal-permeability fields for the P50 continuous MCMC case made using the 9,000 samples between 9 and 10 years are shown in Fig. 10. The maps are geologically realistic because the regions were specified to be consistent with known geological trends (i.e., wide northwest/southeast-trending high-quality streaks in Layers 1, 3, and 5). Although these trends are not present in the truth case for Layers 2 and 4, these two layers are of lower quality and, thus, do not have as much of an impact on the history match and forecast.

Fig. 6: A comparison of forecasts from the static case (with 10,400 models after burn-in) and the synthetic continuous case to published forecasts for the PUNQ reservoir. Results are in the form of 10, 50, and 90% quantiles of cumulative production (std m3 ×10^6), against the truth case.

Fig. 7: Objective function vs. models of an MCMC chain in continuous case (objective-function value and number of observed values vs. models in chain).

Fig. 8: Continuous-case forecast CDFs. A comparison of the CDFs for various forecasts made during each year (or half-year).

We can make some general observations about the progression of continuous MCMC forecast distributions shown in Figs. 6 and 8. First, we see that the medians of our continuous MCMC forecasts are initially far from the truth case (approximately 3.53×10^6 std m3) and move year by year toward the truth case (3.87×10^6 std m3). Correspondingly, the forecast uncertainty ranges (measured by the P10 and P90) shift over time, from ranges early in the history that do not bracket the truth case to ranges late in the history that do bracket the truth case. The medians and ranges shift over time because the likelihood term in our posterior distribution assumes more weight as more observed data are added, eventually leading to a posterior distribution dominated by the observed dynamic data. Second, we see that, as time progresses and more data are assimilated, the uncertainty ranges narrow. This makes sense because we would expect forecast uncertainty to decrease as we acquire more information about the reservoir.

Changing Objective Function. In the continuous test described above, new observed data were assimilated once per year, and the yearly probabilistic forecasts (Figs. 6 and 8) were generated using one year's worth of models. Thus, the objective-function definition was constant for each probabilistic forecast. Here we consider a more practical problem, one in which we want to generate a probabilistic forecast at a time when relatively few models have been run since the last data assimilation. This is more likely to occur in an actual field application, where the data-assimilation frequency is likely to be higher than the once per year assumed in the continuous test described in the preceding section. When the assimilation frequency is high, because the chain from the last data assimilation will usually not be long enough to generate the correct distribution when a forecast is desired, we will also have to rely on models sampled before the last data assimilation. This means the probabilistic forecast would be generated from samples with different objective-function compositions.

In the next experiment, we generated probabilistic forecasts each year by combining 2 years of sampled models together (except at 6 years, where the forecast was based on 1.5 years of sampled models). These distributions are similar to the distributions generated using only 1 year of samples (Fig. 11). The distributions obtained using 2 years of models still move toward the truth case and narrow over time, and the truth case still falls within the ranges of the forecasts in the later years, just as with the forecasts obtained using only 1 year of models. Since the distributions were generated with 2 years' worth of models, models were sampled with different objective-function definitions. Although this is not statistically rigorous, it still produces reasonable forecast distributions. Fig. 12 shows an extreme case in which the forecast CDFs are obtained using all models from previous years for each forecast. Although the truth case still falls within the ranges of the forecasts in the later years, as before, the uncertainty ranges do not narrow as much over time because we retain all the uncertainty from all previous years of samples, including the earliest samples, which were based on far fewer observed data. It is clear that we should not retain all samples from the entire continuous modeling period.

In actual field applications, in which the data-assimilation frequency will likely be much higher than we have assumed and individual simulation run times may be longer, the question remains concerning how many models in the continuous MCMC chain should be retained to generate reasonable posterior and forecast distributions. This will likely be problem specific and will require a balance between retaining many samples (longer chains for better posterior-distribution definition) vs. retaining fewer samples (more-uniform objective-function definition). Methods for testing convergence of MCMC chains, such as running multiple parallel chains with different starting models, may be useful in determining how many samples to retain; one such diagnostic is sketched below, after the figure captions. However, it is unlikely that chain convergence can be established unequivocally, because MCMC with a varying objective function is an approximation. This question should be the subject of future research.

Fig. 9: Comparison of cumulative-production CDFs between static case and continuous case, with 9,000 models made between 9 and 10 years.

Fig. 10: Maps of horizontal permeability for the P50 final history-matched model using the continuous MCMC approach.
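One standard diagnostic of the kind mentioned above is the Gelman-Rubin potential-scale-reduction factor across parallel chains started from different prior samples; values near 1 suggest the retained window is long enough. This is a textbook sketch, not a procedure from the paper:

```python
# Gelman-Rubin R-hat across parallel chains for one scalar quantity
# (e.g., forecast cumulative production), one value per retained model.
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (n_chains, n_samples)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)                # R-hat ~ 1 when converged
```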

The PUNQ example demonstrates that MCMC can be used for continuous history matching and forecast-uncertainty quantification. Once the continuous MCMC process has been established sufficiently, the method usually allows generation of a reasonable probabilistic forecast at a particular point in time with far fewer models than application of the MCMC method in a one-time, static simulation study starting at that same time. Modeling continuously over the long lives of reservoirs should allow the sampling of many more models, which should in turn allow the consideration of more uncertain parameters in reservoir parameterization. Together, these should result in the sampling of a much larger fraction of the uncertain parameter space.

Calibration of Uncertainty Estimates. Referring back to Fig. 6, we see that the first three continuous MCMC forecasts failed to bracket the truth case, and the forecast made using samples between 4.5 and 5 years is particularly far off. Thus, the forecast uncertainty is apparently underestimated in the early years. This could be caused by an underestimation of uncertainty in the prior distributions, an underestimation of the error in the observed data, or both. It is desirable to reliably quantify uncertainty at all times, even when not many dynamic data are available. To increase forecast uncertainty, we could increase the variance in either our prior distributions or the observed data, or both. To illustrate, we increased the standard deviations of our prior multipliers. The permeability-multiplier standard deviation was increased from 1.35 to 20, and the porosity-multiplier standard deviation was increased from 0.3 to 0.5. Fig. 13 shows the production-forecast CDFs with the enlarged prior distributions, while Fig. 14 presents the forecast-uncertainty ranges and compares them to the forecast ranges with the original prior distributions. The enlarged prior standard deviations yield wider forecast-uncertainty ranges in the early years (particularly 4.5-5 years and 5-6 years) that essentially bracket the truth case. The forecast ranges in later years, however, do not change as much, because of the larger impact of the likelihood function (observed data) in later years.

These results suggest another benefit of the continuous-simulation approach: calibration of uncertainty estimates. Capen (1976) pointed out that uncertainty estimates must be calibrated over time to ensure that they are reliable (e.g., P10-P90 ranges should bracket the actual result approximately 80% of the time). However, our PUNQ test was a synthetic test in which we know the truth case. Since we know the truth case, we could adjust the prior distributions until all the forecasts bracketed the truth case. How would this work in an actual field application in which we do not know the correct answer, without waiting until the end of the reservoir's life? The solution is suggested in Fig. 14. With the original prior distributions and observed-data errors, the forecast distributions shift in addition to narrowing over time. That is, subsequent forecast distributions are not bracketed by previous forecast distributions. In the enlarged-prior case, however, subsequent distributions are essentially bracketed by all previous distributions, which is what we should expect to see in practice if uncertainty is being quantified reliably. To ensure reliable uncertainty quantification in continuous simulation of an actual field, one must monitor the forecast distributions generated over time. If later forecast distributions are not bracketed by earlier distributions, then uncertainty is being underestimated somewhere, and either prior or observed-data uncertainty should be increased; a sketch of such a bracketing check follows.
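The monitoring rule just described is easy to automate. In this minimal sketch (the input layout is our assumption), underestimated uncertainty is flagged whenever a later P10-P90 range escapes an earlier one:

```python
# Calibration monitor: later forecast P10-P90 ranges should fall inside
# earlier ones if uncertainty is being quantified reliably.
import numpy as np

def bracketing_ok(forecast_sets):
    """forecast_sets: list of 1D arrays of forecast values, one array per
    forecast time, in chronological order."""
    ranges = [np.percentile(np.asarray(f, dtype=float), [10, 90])
              for f in forecast_sets]
    for i, (lo_i, hi_i) in enumerate(ranges):
        for lo_j, hi_j in ranges[i + 1:]:
            if lo_j < lo_i or hi_j > hi_i:
                return False   # uncertainty underestimated somewhere
    return True
```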

Fig. 11: A comparison of the CDFs for various forecasts made using 2 years of samples (solid lines) vs. forecasts made using 1 year of samples (dashed lines).

Fig. 12: Continuous-case forecast CDFs for various forecasts made using all the models in previous years.

Fig. 13: Continuous-case forecast CDFs with enlarged prior. A comparison of the CDFs for various forecasts made during each year (or half-year).

One approach to increasing forecast uncertainty would be to regenerate previous forecast distributions with increasing prior or observed-data uncertainty until the forecast distributions essentially bracket all subsequent distributions. This is likely to be computationally prohibitive in most cases. Another approach would be simply to increase prior and/or data uncertainty from that point forward in time. The question arises as to how much the prior or data uncertainty should be increased. Assuming that all information at hand was used in estimating the original prior and data-error distributions, there is little to go on other than the knowledge that the uncertainty must be greater. Thus, the increase to be made in either prior or data uncertainty will be somewhat arbitrary, although it may be possible to estimate the required increase on the basis of the degree of shifting in the forecast distributions over time. For example, if we are at an actual time of 6 years in the life of the PUNQ reservoir, then we have two forecast distributions available (4.5-5 years and 5-6 years on the left-hand side of Fig. 14). For the first distribution to bracket the second, we know that the P90 of the first distribution should be at least as large as the P90 of the second distribution. This is a percentage increase in the first distribution of approximately 32%, which it may be possible to relate approximately to appropriate increases in prior or data standard deviations. In any event, the increase in prior and/or data-error distributions will be approximate. It will be necessary to continue to monitor future forecast distributions and make additional adjustments in prior or data uncertainty if, in the future, earlier forecast distributions fail to bracket later distributions.

    Some may question whether it is realistic to expect to quantify uncertainty properly (i.e., to expect early forecast distributions to bracket later distributions), because this seldom happens in practice (Demirmen 2007). We as an industry fail to quantify uncertainty properly because we do not consider all uncertain parameters, we underestimate the uncertainty in prior distributions and dynamic data and, more generally, we fail to consider outcomes that we do not foresee as possibilities (Capen 1976). Capen argues, and we concur, that it is not only realistic but vital to quantify uncertainty properly in order to make good decisions. While it may seem that proper uncertainty quantification requires somewhat arbitrary increases in uncertainty, with a commitment to calibration of forecast-uncertainty estimates over time the required increases in prior or data uncertainty will become less arbitrary. However, tracking and calibration of forecast-uncertainty estimates is not easy; it requires resources, discipline, and corporate memory, which is why it is seldom practiced. A continuous history-matching and forecasting process can encourage and facilitate calibration of probabilistic forecasts and provide for increased sampling of the uncertain parameter space, both of which should lead to more-reliable probabilistic forecasts. We believe that the benefit to be gained from calibration of probabilistic forecasts can more than compensate for the lack of statistical rigor in the proposed continuous MCMC approach and that, ultimately, calibration is more important than the particular method used for forecast-uncertainty quantification.

    Limitations and Applicability
    The test case used in this paper, the PUNQ-S3 reservoir, is relatively small and runs quickly. Although we have not yet run the new method on a larger field case, we believe that the potential gains of a continuous-simulation approach are just as applicable to large fields and models as they are to small fields and models, because the principles and the primary issue addressed (i.e., improvement in uncertainty quantification) are independent of field and model size. In fact, because large models take longer to run, we believe that more can be gained by using time as an ally (i.e., by using continuous simulation to investigate more of the parameter space) in large fields and models than in small ones. Testing on larger field cases is the intended subject of further research.

    We used regional parameterization in the PUNQ case study, in line with traditional reservoir-simulation history matching. While more-sophisticated parameterization schemes are available, regional parameterization performed quite well in our case because the regions were chosen consistent with known geological information. However, the proposed continuous-simulation approach is essentially independent of the parameterization scheme. Benefits to be gained by using a continuous process (being able to consider more uncertain parameters and investigate more of the parameter space) should apply to other parameterization schemes as well.
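    As a minimal sketch of what regional parameterization means in practice, the snippet below expands one uncertain multiplier per geological region onto a permeability grid; the grid size, region labels, and multiplier values are hypothetical, not the PUNQ regions.

```python
import numpy as np

def apply_regional_multipliers(base_kh, region_ids, multipliers):
    """Expand one uncertain permeability multiplier per geological
    region onto the full simulation grid.

    base_kh     : base horizontal-permeability grid, mD (2D array)
    region_ids  : integer region label for each gridblock (same shape)
    multipliers : 1D array with one multiplier per region
    """
    return base_kh * multipliers[region_ids]

# Hypothetical 4x4 grid split into two regions.
base_kh = np.full((4, 4), 100.0)
region_ids = np.tile([0, 0, 1, 1], (4, 1))
print(apply_regional_multipliers(base_kh, region_ids, np.array([0.8, 1.5])))
```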

    Fig. 14: A comparison of continuous-case forecasts between the original prior distribution and the enlarged prior distribution.


    One of the advantages of the MCMC method is that it is a sequential sampling technique, which allows us to use complete simulation runs, with unchanging static properties from time zero to the end of history, for each sample. On the other hand, MCMC has a number of disadvantages. It can suffer from excessive burn-in, can require many simulation runs for chain stabilization, and can sample poorly when the chain is not long enough. In addition, it can have computational difficulties when the number of parameters is large. Thus, MCMC may not be the best choice for many problem types, at least in the context of traditional, static simulation studies. However, in the context of continuous simulation, when much more time is available for simulation runs, MCMC may be feasible for certain problem types for which it would otherwise not be practical.
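    For concreteness, the following is a minimal sketch of the Metropolis-Hastings step that underlies this sequential sampling, written to be consistent with the nomenclature (proposal m + σε, acceptance probability R compared against y ~ U(0, 1)); the log-posterior here is a cheap stand-in of our own, whereas in the actual workflow each evaluation requires a complete simulation run g(m) from time zero to the end of history.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_posterior(m):
    """Stand-in for log P(m|dobs); in the real workflow this evaluation
    requires a full simulation run g(m) over the whole history."""
    return -0.5 * np.sum(m**2)  # illustrative standard-normal posterior

def mh_step(m, sigma=0.5):
    """One Metropolis-Hastings transition: propose m' = m + sigma*eps,
    then accept with probability R = min(1, P(m'|dobs) / P(m|dobs))."""
    m_new = m + sigma * rng.standard_normal(m.shape)
    log_r = log_posterior(m_new) - log_posterior(m)
    if np.log(rng.uniform()) < log_r:  # y ~ U(0,1); accept if y < R
        return m_new
    return m  # reject: the chain stays at the current state

# A short chain over two hypothetical uncertain parameters.
chain = [np.zeros(2)]
for _ in range(1000):
    chain.append(mh_step(chain[-1]))
```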

    Finally, we note that the continuous history-matching and forecasting process is not limited to use of MCMC. For example, there could potentially be benefit to running EnKF continuously over time, as we have proposed with MCMC. With the advantage of years of run time, one could afford to run all simulations in the EnKF from time zero, eliminating the problem with inconsistent static properties, and could afford to run with more ensemble members, thus investigating more of the parameter space. The advantages of a continuous process (being able to consider more uncertain parameters and make many more runs because of the greater time available, and providing a framework for forecast-uncertainty calibration) would apply to other history-matching techniques as well.
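    To indicate what such a continuous EnKF would be updating, here is a minimal sketch of a single EnKF analysis step; the matrix shapes, names, and toy numbers are our assumptions, and localization, inflation, and the simulator forecast step are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(X, d_obs, H, C_D):
    """One EnKF analysis step on an ensemble X (n_params x n_members):
    shift each member toward its own perturbed observations using the
    Kalman gain built from the ensemble covariance and C_D."""
    n_members = X.shape[1]
    C_f = np.cov(X)  # ensemble estimate of the forecast covariance
    K = C_f @ H.T @ np.linalg.inv(H @ C_f @ H.T + C_D)  # Kalman gain
    D = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(d_obs.size), C_D, n_members).T  # perturbed observations
    return X + K @ (D - H @ X)

# Hypothetical toy problem: 3 parameters, 50 members, 2 observations.
X = rng.normal(0.0, 1.0, (3, 50))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
X_a = enkf_update(X, d_obs=np.array([0.5, -0.2]), H=H, C_D=0.1 * np.eye(2))
```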

    Summary and Conclusions
    The MCMC method was used in a synthetic continuous history-matching and probabilistic-forecasting case study. Assimilating data, history matching, and forecasting continuously over an actual 5.5-year time period in the PUNQ-S3 reservoir resulted in forecast-uncertainty ranges that narrowed with time and compared well with forecasts generated by previous authors. Once the MCMC chain is sufficiently established, the continuous MCMC approach usually allows generation of a reasonable probabilistic forecast at a particular point in time with many fewer models compared to the traditional application of the MCMC method in a one-time simulation study starting at the same time. This is because the continuous approach benefits from models run before the time at which the probabilistic forecast is generated.

    Modeling continuously over the long lives of reservoirs should allow the sampling of many more models and the consideration of more uncertain parameters in reservoir parameterization. Together, these should result in the sampling of a much larger fraction of the uncertain parameter space. The continuous-simulation approach also provides a mechanism for calibrating uncertainty estimates over time. If it is observed that forecast distributions generated over time do not bracket subsequent distributions, then one can increase the uncertainty in either the prior distribution or the observed data to increase the forecast uncertainty. Adjustments should continue to be made until subsequent forecast distributions are consistently bracketed by previous distributions. Greater investigation of the uncertain parameter space and calibration of uncertainty estimates by using a continuous-modeling process should improve the reliability of probabilistic forecasts significantly.

    Nomenclature
    CD = data covariance matrix
    Cm = parameter covariance matrix
    dobs = observed data
    dobs^ti = observed data at step ti
    g(m) = simulated reservoir response
    kh = horizontal permeability
    kv = vertical permeability
    m = uncertain parameters
    m^ti = uncertain parameters at step ti
    O(m) = objective function
    P(m) = prior probability distribution
    P(dobs|m) = likelihood function
    P(m|dobs) = posterior distribution
    R = probability for the new state to be accepted
    ti = state i
    ti+1 = state i + 1
    y = sample from uniform distribution between 0 and 1
    ε = standard normal random variable
    μ = prior mean
    σ = scale factor
    φ = porosity

    References
    Aanonsen, S.I., Nævdal, G., Oliver, D.S., Reynolds, A.C., and Vallès, B. 2009. The Ensemble Kalman Filter in Reservoir Engineering: A Review. SPE J. 14 (3): 393–412. SPE-117274-PA. doi: 10.2118/117274-PA.

    Barker, J.W., Cuypers, M., and Holden, L. 2001. Quantifying Uncertainty in Production Forecasts: Another Look at the PUNQ-S3 Problem. SPE J. 6 (4): 433–441. SPE-74707-PA. doi: 10.2118/74707-PA.

    Bianco, A., Cominelli, A., Dovera, L., Nævdal, G., and Vallès, B. 2007. History Matching and Production Forecast Uncertainty by Means of the Ensemble Kalman Filter: A Real Field Application. Paper SPE 107161 presented at the EAGE/EUROPEC Conference and Exhibition, London, 11–14 June. doi: 10.2118/107161-MS.

    Bishop, C.H., Etherton, B.J., and Majumdar, S.J. 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review 129 (3): 420–436. doi: 10.1175/1520-0493(2001)129<0420:ASWTET>2.0.CO;2.

    Bos, C.F.M. 1999. Production Forecasting with Uncertainty Quantification (PUNQ). Final report, Contract No. NITG 99-225-A, Fault Analysis Group, Dublin, Ireland (December 1999).

    Capen, E.C. 1976. The Difficulty of Assessing Uncertainty. J Pet Tech 28 (8): 843–850. SPE-5579-PA. doi: 10.2118/5579-PA.

    Demirmen, F. 2007. Reserves Estimation: The Challenge for the Industry. Distinguished Author Series, J Pet Tech 59 (5): 80–89. SPE-103434-PA.

    Devegowda, D., Arroyo, E., Datta-Gupta, A., and Douma, S.G. 2007. Efficient and Robust Reservoir Model Updating Using Ensemble Kalman Filter With Sensitivity Based Covariance Localization. Paper SPE 106144 presented at the SPE Reservoir Simulation Symposium, Houston, 26–28 February. doi: 10.2118/106144-MS.

    Floris, F.J.T., Bush, M.D., Cuypers, M., Roggero, F., and Syversveen, A.-R. 2001. Methods for quantifying the uncertainty of production forecasts: A comparative study. Petroleum Geoscience 7 (Supplement, 1 May): 87–96.

    Goldberg, D.E. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Columbus, Ohio, USA: Addison-Wesley.

    Gu, Y. and Oliver, D.S. 2005. History Matching of the PUNQ-S3 Reservoir Model Using the Ensemble Kalman Filter. SPE J. 10 (2): 217–224. SPE-89942-PA. doi: 10.2118/89942-PA.

    Hastings, W.K. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 (1): 97–109. doi: 10.1093/biomet/57.1.97.

    Holmes, J.C., McVay, D.A., and Senel, O. 2007. A System for Continuous Reservoir Simulation Model Updating and Forecasting. Paper SPE 107566 presented at the Digital Energy Conference and Exhibition, Houston, 11–12 April. doi: 10.2118/107566-MS.

    Howson, C. and Urbach, P. 1993. Scientific Reasoning: The Bayesian Approach, second edition. Chicago: Open Court Publishing Company.

    Ma, X., Al-Harbi, M., Datta-Gupta, A., and Efendiev, Y. 2006. A Multistage Sampling Method for Rapid Quantification of Uncertainty in History Matching Geological Models. Paper SPE 102476 presented at the SPE Annual Technical Conference and Exhibition, San Antonio, Texas, USA, 24–27 September. doi: 10.2118/102476-MS.

    Meyn, S.P. and Tweedie, R.L. 1993. Markov Chains and Stochastic Stability. London: Springer-Verlag.

    Nævdal, G., Johnsen, L.M., Aanonsen, S.I., and Vefring, E.H. 2005. Reservoir Monitoring and Continuous Model Updating Using Ensemble Kalman Filter. SPE J. 10 (1): 66–74. SPE-84372-PA. doi: 10.2118/84372-PA.

    Romero, C.E., Carter, J.N., Gringarten, A.C., and Zimmerman, R.W. 2000. A Modified Genetic Algorithm for Reservoir Characterisation. Paper SPE 64765 presented at the International Oil and Gas Conference and Exhibition in China, Beijing, 7–10 November. doi: 10.2118/64765-MS.


    Thakur, G.C. 1996. What is reservoir management? J Pet Tech 48 (6): 520–525. SPE-26289-MS. doi: 10.2118/26289-MS.

    Thomas, G.W. 1986. The Role of Reservoir Simulation in Optimal Reservoir Management. Paper SPE 14129 presented at the International Meeting on Petroleum Engineering, Beijing, 17–20 March. doi: 10.2118/14129-MS.

    Wadsley, A.W. 2005. Markov Chain Monte Carlo Methods for Reserves Estimation. Paper IPTC 10065 presented at the International Petroleum Technology Conference, Doha, Qatar, 21–23 November. doi: 10.2523/10065-MS.

    Chang Liu is a petroleum engineer with Schlumberger (SIS-Australia). E-mail: [email protected]. Previously, he spent 3 years as a graduate research assistant at Texas A&M U., conducting research in reservoir simulation and uncertainty quantification. Liu holds a BS degree in mathematics from Peking U. and an MS degree in petroleum engineering from Texas A&M U.

    Duane McVay is an associate professor in the department of petroleum engineering at Texas A&M U. in College Station, Texas, USA. E-mail: [email protected]. His research interests include reservoir simulation, uncertainty quantification, and unconventional reservoirs. Previously, McVay spent 16 years with S.A. Holditch & Associates, a petroleum engineering consulting firm. He is a Distinguished Member of SPE and currently serves on the SPE Editorial Review Committee. McVay holds BS, MS, and PhD degrees in petroleum engineering from Texas A&M U.