The projection methods from the previous chapter showed how to transform the infinite-dimensional fixed-point problem into a finite-dimensional one by choosing basis functions and imposing conditions that make the residual small. Different projection conditions (Galerkin orthogonality, collocation at points, least squares minimization) yield different finite-dimensional systems to solve.
However, we left unresolved the question of how to evaluate the Bellman operator itself. Applying $\mathrm{T}$ at any state $s$ requires computing an expectation:

$$(\mathrm{T} v)(s) = \max_{a \in \mathcal{A}} \left\{ r(s,a) + \gamma \int_{\mathcal{S}} v(s')\, p(ds' \mid s, a) \right\}.$$
For discrete state spaces with a manageable number of states, this expectation is a finite sum we can compute exactly. For continuous or high-dimensional state spaces, we need numerical integration. The projection methods framework is compatible with any quadrature scheme, but leaves the choice of integration method unspecified.
This chapter addresses the integration subproblem that arises at two levels in approximate dynamic programming. First, we must evaluate the transition expectation within the Bellman operator itself. Second, when using projection methods like Galerkin or least squares, we encounter outer integrations for enforcing orthogonality conditions or minimizing residuals over distributions of states. Both require numerical approximation in continuous or high-dimensional spaces.
We begin by examining deterministic numerical integration methods: quadrature rules that approximate integrals by evaluating integrands at carefully chosen points with associated weights. We discuss how to coordinate the choice of quadrature with the choice of basis functions to balance approximation accuracy and computational cost. Then we turn to Monte Carlo integration, which approximates expectations using random samples rather than deterministic quadrature points. This shift from deterministic to stochastic integration is what brings us into machine learning territory. When we replace exact transition probabilities with samples drawn from simulations or real interactions, projection methods combined with Monte Carlo integration become what the operations research community calls simulation-based approximate dynamic programming and what the machine learning community calls reinforcement learning. By relying on samples rather than explicit probability functions, we move from model-based planning to data-driven learning.
Evaluating the Bellman Operator with Numerical Quadrature¶
Before turning to Monte Carlo methods, we examine the structure of the numerical integration problem. When we apply the Bellman operator to an approximate value function $\hat v(\cdot\,; \boldsymbol\theta)$, we must evaluate integrals of the form:

$$\int_{\mathcal{S}} \hat v(s'; \boldsymbol\theta)\, p(ds' \mid s, a)$$

for each state $s$ and candidate action $a$.
This shows two independent approximations:
Value function approximation: We represent $v$ using basis functions $\phi_1, \dots, \phi_n$ and coefficients $\boldsymbol\theta$, writing $\hat v(s; \boldsymbol\theta) = \sum_{j=1}^{n} \theta_j \phi_j(s)$
Quadrature approximation: We approximate each integral numerically with a finite set of nodes and weights
These choices are independent but should be coordinated. To see why, consider what happens in projection methods when we iterate $\hat v_{k+1} = \Pi \mathrm{T} \hat v_k$. In practice, we cannot evaluate $\mathrm{T} \hat v_k$ exactly due to the integrals, so we compute $\hat{\mathrm{T}} \hat v_k$ instead, where $\hat{\mathrm{T}}$ denotes the Bellman operator with numerical quadrature.
The error in a single iteration can be bounded by the triangle inequality. Let $v^\star$ denote the true fixed point, $v^\star = \mathrm{T} v^\star$. Then:

$$\|\hat v_{k+1} - v^\star\| \;\le\; \underbrace{\|\Pi v^\star - v^\star\|}_{\text{basis error}} \;+\; \underbrace{\gamma\, \|\hat v_k - v^\star\|}_{\text{iteration error}} \;+\; \underbrace{\|\hat{\mathrm{T}} \hat v_k - \mathrm{T} \hat v_k\|}_{\text{quadrature error}}.$$

The first term is the best we can do with our basis (how well $\Pi v^\star$ approximates the true solution). The second term decreases with iterations when $\mathrm{T}$ is a contraction. The third term is the error from replacing exact integrals with quadrature, and it does not vanish as iterations proceed.
To make this concrete, consider evaluating $(\mathrm{T}\hat v)(s)$ for our current approximation $\hat v$. We want:

$$(\mathrm{T}\hat v)(s) = \max_{a} \left\{ r(s,a) + \gamma \int \hat v(s')\, p(ds' \mid s, a) \right\}.$$

But we compute instead:

$$(\hat{\mathrm{T}}\hat v)(s) = \max_{a} \left\{ r(s,a) + \gamma \sum_{i=1}^{q} w_i\, \hat v(s'_i) \right\},$$

where $\{(s'_i, w_i)\}_{i=1}^{q}$ are quadrature nodes and weights. If the quadrature error is large relative to the basis approximation quality, we cannot exploit the expressive power of the basis. For instance, degree-10 Chebyshev polynomials represent smooth functions to very high accuracy (the error decays spectrally with the degree), but combined with rectangle-rule quadrature (error $O(h)$ in the mesh size $h$), the quadrature term dominates the error bound. We pay the cost of storing and manipulating 10 coefficients but achieve only first-order convergence in the quadrature mesh size.
This echoes the coordination principle from continuous optimal control (Chapter on trajectory optimization): when transcribing a continuous-time problem, we use the same quadrature nodes for both the running cost integral and the dynamics integral. There, coordination ensures that “where we pay” aligns with “where we enforce” the dynamics. Here, coordination ensures that integration accuracy matches approximation accuracy. Both are instances of balancing multiple sources of error in numerical methods.
Standard basis-quadrature pairings achieve this balance:
Piecewise constant or linear elements with midpoint or trapezoidal rules
Chebyshev polynomials with Gauss-Chebyshev quadrature
Legendre polynomials with Gauss-Legendre quadrature
Hermite polynomials with Gauss-Hermite quadrature (for Gaussian shocks)
To make this concrete, we examine what these pairings look like in practice for collocation and Galerkin projection.
Orthogonal Collocation with Chebyshev Polynomials¶
Consider approximating the value function using Chebyshev polynomials of degree $n$:

$$\hat v(s; \boldsymbol\theta) = \sum_{j=0}^{n} \theta_j\, T_j(s),$$

where $T_j$ denotes the degree-$j$ Chebyshev polynomial of the first kind. For orthogonal collocation, we place $n+1$ collocation points at the zeros of $T_{n+1}$, denoted $s_1, \dots, s_{n+1}$. At each collocation point, we require the Bellman equation to hold exactly:

$$\hat v(s_i; \boldsymbol\theta) = \max_{a} \left\{ r(s_i, a) + \gamma \int \hat v(s'; \boldsymbol\theta)\, p(ds' \mid s_i, a) \right\}, \qquad i = 1, \dots, n+1.$$
The integral on the right must be approximated using quadrature. With Chebyshev-Gauss quadrature using the same nodes $s_1, \dots, s_{n+1}$ and weights $w_1, \dots, w_{n+1}$, this becomes:

$$\hat v(s_i; \boldsymbol\theta) = \max_{a} \left\{ r(s_i, a) + \gamma \sum_{k=1}^{n+1} w_k\, p(s_k \mid s_i, a)\, \hat v(s_k; \boldsymbol\theta) \right\}, \qquad i = 1, \dots, n+1.$$

Substituting the basis representation $\hat v(s; \boldsymbol\theta) = \sum_{j=0}^{n} \theta_j T_j(s)$:

$$\sum_{j=0}^{n} \theta_j T_j(s_i) = \max_{a} \left\{ r(s_i, a) + \gamma \sum_{k=1}^{n+1} w_k\, p(s_k \mid s_i, a) \sum_{j=0}^{n} \theta_j T_j(s_k) \right\}.$$

Rearranging:

$$\sum_{j=0}^{n} \theta_j T_j(s_i) = \max_{a} \left\{ r(s_i, a) + \gamma \sum_{j=0}^{n} \theta_j\, E_{ij}(a) \right\}, \qquad E_{ij}(a) = \sum_{k=1}^{n+1} w_k\, p(s_k \mid s_i, a)\, T_j(s_k).$$

This yields a system of $n+1$ nonlinear equations in the $n+1$ coefficients (one per collocation point):

$$\sum_{j=0}^{n} \theta_j T_j(s_i) - \max_{a} \left\{ r(s_i, a) + \gamma \sum_{j=0}^{n} \theta_j\, E_{ij}(a) \right\} = 0, \qquad i = 1, \dots, n+1.$$

The matrix elements $E_{ij}(a)$ can be precomputed once the quadrature nodes, weights, and transition probabilities are known. Solving this system gives the coefficient vector $\boldsymbol\theta$.
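To make the structure of this system concrete, here is a minimal sketch of orthogonal collocation on a toy one-dimensional problem. The reward function, transition density, and action set below are illustrative assumptions, not part of the text; the sketch simply builds the precomputable dynamics quantities and iterates the collocation equations to a fixed point.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# --- Hypothetical toy model on the state interval [-1, 1] --------------------
gamma = 0.95
actions = np.array([-0.5, 0.0, 0.5])            # assumed finite action set

def reward(s, a):                                # assumed reward function
    return -(s - a) ** 2

def trans_density(s_next, s, a):                 # assumed transition density p(s'|s,a)
    mu = np.clip(0.8 * s + 0.1 * a, -1, 1)
    return np.exp(-0.5 * ((s_next - mu) / 0.3) ** 2) / (0.3 * np.sqrt(2 * np.pi))

# --- Collocation nodes and Chebyshev basis -----------------------------------
n = 10                                           # polynomial degree
k = np.arange(n + 1)
nodes = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # zeros of T_{n+1}
Phi = C.chebvander(nodes, n)                     # Phi[i, j] = T_j(nodes[i])

# Gauss-Chebyshev weights adapted to a plain integral over [-1, 1]:
# int f(x) dx = int [f(x) sqrt(1-x^2)] / sqrt(1-x^2) dx ~ sum_k w_k f(x_k)
w = (np.pi / (n + 1)) * np.sqrt(1 - nodes ** 2)

# Precompute the "dynamics matrix": P[a, i, k] = w_k * p(nodes[k] | nodes[i], a)
P = np.array([[w * trans_density(nodes, s_i, a) for s_i in nodes] for a in actions])

# --- Fixed-point iteration on the coefficient vector theta --------------------
theta = np.zeros(n + 1)
for _ in range(200):
    v_nodes = Phi @ theta                        # v_hat evaluated at the collocation nodes
    q = np.array([reward(nodes, a) + gamma * P[j] @ v_nodes
                  for j, a in enumerate(actions)])      # shape (|A|, n+1)
    v_new = q.max(axis=0)                        # Bellman max at each collocation point
    theta_new = np.linalg.solve(Phi, v_new)      # enforce v_hat(s_i) = (T v_hat)(s_i)
    if np.max(np.abs(theta_new - theta)) < 1e-10:
        theta = theta_new
        break
    theta = theta_new
```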
Galerkin Projection with Hermite Polynomials¶
For a problem with Gaussian shocks, we might use Hermite polynomials $H_j$ weighted by the Gaussian density $\varphi(s)$. The Galerkin condition requires the Bellman residual to be orthogonal to every basis function:

$$\int \big[ (\mathrm{T}\hat v)(s) - \hat v(s; \boldsymbol\theta) \big]\, H_j(s)\, \varphi(s)\, ds = 0, \qquad j = 0, \dots, n.$$

Expanding the Bellman operator:

$$\int \left[ \max_{a} \left\{ r(s,a) + \gamma \int \hat v(s'; \boldsymbol\theta)\, p(ds' \mid s, a) \right\} - \hat v(s; \boldsymbol\theta) \right] H_j(s)\, \varphi(s)\, ds = 0.$$

We approximate this outer integral using Gauss-Hermite quadrature with nodes $s_m$ and weights $\omega_m$:

$$\sum_{m=1}^{M} \omega_m \left[ \max_{a} \left\{ r(s_m, a) + \gamma \int \hat v(s'; \boldsymbol\theta)\, p(ds' \mid s_m, a) \right\} - \hat v(s_m; \boldsymbol\theta) \right] H_j(s_m) = 0.$$

The inner integral (transition expectation) is also approximated using Gauss-Hermite quadrature:

$$\int \hat v(s'; \boldsymbol\theta)\, p(ds' \mid s_m, a) \approx \sum_{k=1}^{K} \tilde\omega_k\, \hat v\big(s'_{m,k}(a); \boldsymbol\theta\big),$$

where the nodes $s'_{m,k}(a)$ place the quadrature points on the Gaussian shock distribution conditional on $(s_m, a)$. Substituting the basis representation $\hat v(s; \boldsymbol\theta) = \sum_{i=0}^{n} \theta_i H_i(s)$ and collecting terms:

$$\sum_{m=1}^{M} \omega_m\, H_j(s_m) \max_{a} \left\{ r(s_m, a) + \gamma \sum_{k=1}^{K} \tilde\omega_k \sum_{i=0}^{n} \theta_i\, H_i\big(s'_{m,k}(a)\big) \right\} = \sum_{i=0}^{n} \theta_i \sum_{m=1}^{M} \omega_m\, H_j(s_m)\, H_i(s_m), \qquad j = 0, \dots, n.$$

This gives $n+1$ nonlinear equations in the $n+1$ unknowns $\theta_0, \dots, \theta_n$. The right-hand side simplifies using orthogonality: when using the same quadrature nodes for projection and integration, $\sum_{m} \omega_m H_j(s_m) H_i(s_m) = c_j\, \delta_{ij}$ (the discrete orthogonality of the Hermite polynomials under Gauss-Hermite quadrature).
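The inner Gauss-Hermite step is worth isolating. The sketch below assumes additive Gaussian shocks of the form $s' = f(s, a) + \sigma \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, 1)$ and a hypothetical mean-dynamics function `f`; under that assumption, the transition expectation reduces to a weighted sum over Hermite nodes after a change of variables.

```python
import numpy as np

# Gauss-Hermite nodes/weights for int e^{-x^2} g(x) dx ~ sum_k w_k g(x_k)
K = 10
x, w = np.polynomial.hermite.hermgauss(K)

def expected_value(v, s, a, f, sigma):
    """E[v(s')] with s' = f(s, a) + sigma * eps, eps ~ N(0, 1).

    The change of variables eps = sqrt(2) * x turns the Gaussian expectation
    into the Gauss-Hermite form, with weights w_k / sqrt(pi).
    """
    s_next = f(s, a) + sigma * np.sqrt(2.0) * x
    return np.sum(w * v(s_next)) / np.sqrt(np.pi)

# Hypothetical linear dynamics and a quadratic test function
f = lambda s, a: 0.9 * s + 0.1 * a
v = lambda s: s ** 2
print(expected_value(v, s=1.0, a=0.0, f=f, sigma=0.2))
# Exact answer for comparison: E[(0.9 + 0.2 eps)^2] = 0.81 + 0.04 = 0.85
```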
In both cases, the basis-quadrature pairing ensures that:
The quadrature nodes appear in both the transition expectation and the outer projection
The quadrature accuracy matches the polynomial approximation order
Precomputed matrices capture the dynamics, making iterations efficient
Monte Carlo Integration¶
Deterministic quadrature rules work well when the state space has low dimension, the transition density is smooth, and we can evaluate it cheaply at arbitrary points. In many stochastic control problems none of these conditions truly hold. The state may be high dimensional, the dynamics may be given by a simulator rather than an explicit density, and the cost of each call to the model may be large. In that regime, deterministic quadrature becomes brittle. Monte Carlo methods offer a different way to approximate expectations, one that relies only on the ability to sample from the relevant distributions.
Monte Carlo as randomized quadrature¶
Consider a single expectation of the form

$$\mu = \mathbb{E}_{x \sim p}[f(x)] = \int f(x)\, p(x)\, dx,$$

where $x \sim p$ and $f$ is some integrable function. Monte Carlo integration approximates $\mu$ by drawing $N$ independent samples $x_1, \dots, x_N \sim p$ and forming the sample average

$$\hat\mu_N = \frac{1}{N} \sum_{i=1}^{N} f(x_i).$$
This estimator has two basic properties that will matter throughout this chapter.
First, it is unbiased:

$$\mathbb{E}[\hat\mu_N] = \mu.$$
Second, its variance scales as $1/N$. If we write

$$\sigma^2 = \operatorname{Var}_{x \sim p}[f(x)],$$

then independence of the samples gives

$$\operatorname{Var}[\hat\mu_N] = \frac{\sigma^2}{N}.$$

The central limit theorem then says that for large $N$,

$$\hat\mu_N \approx \mu + \frac{\sigma}{\sqrt{N}}\, Z, \qquad Z \sim \mathcal{N}(0, 1),$$

so the integration error decays at rate $O(N^{-1/2})$. This rate is slow compared to high-order quadrature in low dimension, but it has one crucial advantage: it does not explicitly depend on the dimension of $x$. Monte Carlo integration pays in variance, not in an exponential growth in the number of nodes.
It is often helpful to view Monte Carlo as randomized quadrature. A deterministic quadrature rule selects nodes $x_1, \dots, x_N$ and weights $w_1, \dots, w_N$ in advance and computes

$$\int f(x)\, p(x)\, dx \approx \sum_{i=1}^{N} w_i\, f(x_i).$$

Monte Carlo can be written in the same form: if we draw $x_1, \dots, x_N$ from the density $p$, the sample average

$$\hat\mu_N = \sum_{i=1}^{N} \frac{1}{N}\, f(x_i)$$

is just a quadrature rule with random nodes and equal weights $w_i = 1/N$. More advanced Monte Carlo schemes, such as importance sampling, change both the sampling distribution and the weights, but the basic idea remains the same.
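A short experiment makes the $O(N^{-1/2})$ behavior visible. The sketch below estimates a Gaussian expectation with sample averages of increasing size; the particular integrand is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.cos(x)                 # illustrative integrand
true_value = np.exp(-0.5)               # E[cos(X)] for X ~ N(0, 1)

for N in [100, 10_000, 1_000_000]:
    x = rng.standard_normal(N)          # "random quadrature nodes" with equal weights 1/N
    est = f(x).mean()
    print(f"N = {N:>9d}   estimate = {est:.5f}   error = {abs(est - true_value):.2e}")
# The error shrinks roughly by a factor of 10 for every 100x increase in N,
# consistent with the O(N^{-1/2}) Monte Carlo rate.
```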
Monte Carlo evaluation of the Bellman operator¶
We now apply this to the Bellman operator. For a fixed value function $v$ and a given state-action pair $(s, a)$, the transition part of the Bellman operator is

$$\mathbb{E}_{s' \sim p(\cdot \mid s, a)}\big[ v(s') \big] = \int v(s')\, p(ds' \mid s, a).$$

If we can simulate next states $s'_1, \dots, s'_N$ from the transition kernel $p(\cdot \mid s, a)$, either by calling a simulator or by interacting with the environment, we can approximate this expectation by

$$\frac{1}{N} \sum_{i=1}^{N} v(s'_i).$$

If the immediate reward is also random, say

$$r_i \sim R(\cdot \mid s, a),$$

we can approximate the full one-step return

$$\mathbb{E}\big[ r + \gamma\, v(s') \,\big|\, s, a \big]$$

by

$$\frac{1}{N} \sum_{i=1}^{N} \big[ r_i + \gamma\, v(s'_i) \big],$$

where $(r_i, s'_i)$ are independent samples given $(s, a)$. Again, this is an unbiased estimator of the Bellman expectation for fixed $(s, a)$.
Plugging this into the Bellman operator gives a Monte Carlo Bellman operator:

$$(\hat{\mathrm{T}}_N v)(s) = \max_{a \in \mathcal{A}} \left\{ \frac{1}{N} \sum_{i=1}^{N} \big[ r_i(s, a) + \gamma\, v\big(s'_i(s, a)\big) \big] \right\}.$$

The expectation inside the braces is now replaced by a random sample average. In model-based settings, we implement this by simulating many next states for each candidate action $a$ at state $s$. In model-free settings, we obtain samples from real interactions and re-use them to estimate the expectation.

At this stage nothing about approximation or projection has entered yet. For a fixed value function $v$, Monte Carlo provides unbiased, noisy evaluations of $(\mathrm{T} v)(s)$. The approximation question arises once we couple this stochastic evaluation with basis functions and projections.
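A minimal sketch of this operator, assuming access to a hypothetical sampling routine `sample_next(s, a, N)` that returns `N` rewards and next states (exactly the interface a simulator provides):

```python
import numpy as np

def mc_bellman(v, s, actions, sample_next, gamma, N=32):
    """Noisy, unbiased evaluation of (T v)(s) via Monte Carlo.

    sample_next(s, a, N) is assumed to return arrays (rewards, next_states)
    of length N drawn from the reward and transition distributions.
    """
    returns = []
    for a in actions:
        r, s_next = sample_next(s, a, N)
        returns.append(np.mean(r + gamma * v(s_next)))   # sample-average one-step return
    return max(returns)                                   # maximization over noisy estimates
```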
Sampling the outer expectations¶
Projection methods introduce a second layer of integration. In Galerkin and least squares schemes, we choose a distribution $\mu$ over states (and sometimes actions) and enforce conditions of the form

$$\int R(s; \boldsymbol\theta)\, \psi_j(s)\, \mu(ds) = 0, \qquad j = 1, \dots, n.$$

Here $R(s; \boldsymbol\theta)$ is the residual function, such as

$$R(s; \boldsymbol\theta) = (\mathrm{T}\hat v)(s) - \hat v(s; \boldsymbol\theta),$$

and the $\psi_j$ are test functions or derivatives of the residual with respect to parameters.

Note a subtle but important shift in perspective from the previous chapter. There, the weight function in the inner product could be any positive weight function (not necessarily normalized). For Monte Carlo integration, however, we need a probability distribution we can sample from. We write $\mu$ for this sampling distribution and express projection conditions as integrals with respect to $\mu$: $\int R(s; \boldsymbol\theta)\, \psi_j(s)\, \mu(ds) = 0$. If a problem was originally formulated with an unnormalized weight $w(s)$, we must either (i) normalize it to define $\mu$, or (ii) use importance sampling with a different $\mu$ and reweight samples by $w(s)/\mu(s)$. In reinforcement learning, $\mu$ is typically the empirical state visitation distribution from collected trajectories.
These outer integrals over $\mu$ are generally not easier to compute than the inner transition expectations. Monte Carlo gives a way to approximate them as well. If we can draw states $s_1, \dots, s_M \sim \mu$, we can approximate, for example, the Galerkin orthogonality conditions by

$$\frac{1}{M} \sum_{m=1}^{M} R(s_m; \boldsymbol\theta)\, \psi_j(s_m) = 0, \qquad j = 1, \dots, n.$$

Similarly, a least squares objective

$$J(\boldsymbol\theta) = \int R(s; \boldsymbol\theta)^2\, \mu(ds)$$

is approximated by the empirical risk

$$\hat J(\boldsymbol\theta) = \frac{1}{M} \sum_{m=1}^{M} R(s_m; \boldsymbol\theta)^2.$$

If we now substitute Monte Carlo estimates for both the inner transition expectations and the outer projection conditions, we obtain fully simulation-based schemes. We no longer need explicit access to the transition kernel $p$ or the state distribution $\mu$. It is enough to be able to sample from them, either through a simulator or by interacting with the real system.
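Putting the two layers of sampling together, one fitted-value step with a linear-in-features value function can be written as an empirical least squares problem over sampled states. The feature map, sampler, and solver below are illustrative assumptions rather than a prescribed implementation.

```python
import numpy as np

def empirical_projection(theta, states, actions, sample_next, phi, gamma, N=1):
    """One fitted-value step: minimize the sampled squared Bellman residual.

    states:      array of states drawn from the sampling distribution mu
    phi(s):      feature vector of basis functions evaluated at s
    sample_next: assumed sampler returning (rewards, next_states) for (s, a, N)
    """
    v = lambda s, th: phi(s) @ th
    targets, features = [], []
    for s in states:
        # Monte Carlo Bellman backup at s (inner expectation, N samples per action)
        q = []
        for a in actions:
            r, s_next = sample_next(s, a, N)
            q.append(np.mean([r_i + gamma * v(sn, theta) for r_i, sn in zip(r, s_next)]))
        targets.append(max(q))
        features.append(phi(s))
    # Outer projection: least squares fit of the new value function coefficients
    X, y = np.array(features), np.array(targets)
    theta_new, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta_new
```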
Simulation-Based Projection Methods¶
Monte Carlo integration replaces both levels of integration in approximate dynamic programming. The transition expectation in the Bellman operator becomes a sample average $\frac{1}{N}\sum_{i=1}^{N} v(s'_i)$, and the outer integrals for projection become empirical averages over sampled states. Writing $\hat\Pi_M$ for projection using $M$ state samples and $\hat{\mathrm{T}}_N$ for the Monte Carlo Bellman operator with $N$ transition samples, the iteration becomes

$$\hat v_{k+1} = \hat\Pi_M\, \hat{\mathrm{T}}_N\, \hat v_k.$$

In the typical reinforcement learning setting, $N = 1$: each observed transition provides exactly one sample of the next state. This means we work with high-variance single-sample estimates rather than averaged returns. Variance reduction comes from aggregating information across different transitions through the function approximator and from collecting long trajectories with many transitions. We examine this single-sample constraint in detail after introducing Q-functions below.
Unlike deterministic quadrature, which introduces a fixed bias at each iteration, Monte Carlo introduces random error with zero mean but nonzero variance. However, combining Monte Carlo with the maximization in the Bellman operator creates a systematic problem: while the estimate of the expected return for any individual action is unbiased, taking the maximum over these noisy estimates introduces upward bias. This overestimation compounds through value iteration and degrades the resulting policies. We address this challenge in the next chapter on batch reinforcement learning.
Amortizing Action Selection via Q-Functions¶
Monte Carlo integration enables model-free approximate dynamic programming: we no longer need explicit transition probabilities $p(s' \mid s, a)$, only the ability to sample next states. However, one computational challenge remains. The standard formulation of an optimal decision rule is

$$\pi^\star(s) = \arg\max_{a \in \mathcal{A}} \left\{ r(s, a) + \gamma \int v^\star(s')\, p(ds' \mid s, a) \right\}.$$

Even with an optimal value function $v^\star$ in hand, extracting an action at state $s$ requires evaluating the transition expectation for each candidate action. In the model-free setting, this means we must draw Monte Carlo samples from each action's transition distribution every time we select an action. This repeated sampling "at inference time" wastes computation, especially when the same state is visited multiple times.
We can amortize this computation by working at a different level of representation. Define the state-action value function (or Q-function)

$$q^\star(s, a) = r(s, a) + \gamma \int v^\star(s')\, p(ds' \mid s, a).$$

The Q-function caches the result of evaluating each action at each state. Once we have $q^\star$, action selection reduces to a finite maximization:

$$\pi^\star(s) = \arg\max_{a \in \mathcal{A}} q^\star(s, a).$$

No integration appears in this expression. The transition expectation has been precomputed and stored in $q^\star$ itself.
The optimal Q-function satisfies its own Bellman equation. Substituting $v^\star(s') = \max_{a'} q^\star(s', a')$ into the definition of $q^\star$:

$$q^\star(s, a) = r(s, a) + \gamma \int \max_{a' \in \mathcal{A}} q^\star(s', a')\, p(ds' \mid s, a).$$

This defines a Bellman operator on Q-functions:

$$(\mathrm{T} q)(s, a) = r(s, a) + \gamma \int \max_{a' \in \mathcal{A}} q(s', a')\, p(ds' \mid s, a).$$

Like the Bellman operator on value functions, $\mathrm{T}$ is a $\gamma$-contraction in the sup-norm, guaranteeing a unique fixed point $q^\star$. We can thus apply the same projection and Monte Carlo techniques developed for value functions to Q-functions. The computational cost shifts from action selection (repeated sampling at decision time) to training (evaluating expectations during the iterations of approximate value iteration). Once $q^\star$ is learned, acting is cheap.

The algorithm above evaluates the expectation at each state-action pair. In principle, we could approximate this integral using many Monte Carlo samples: draw $N$ next states from $p(\cdot \mid s, a)$ and average to reduce variance. This is standard Monte Carlo integration applied to the Bellman operator.
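The same Monte Carlo machinery applies directly at the Q-level. Here is a hedged sketch of one backup and of greedy action selection, reusing the hypothetical `sample_next` interface from above:

```python
import numpy as np

def mc_q_backup(q, s, a, actions, sample_next, gamma, N=32):
    """Monte Carlo estimate of (T q)(s, a) = E[r + gamma * max_a' q(s', a')]."""
    r, s_next = sample_next(s, a, N)
    targets = [r_i + gamma * max(q(sn, ap) for ap in actions)
               for r_i, sn in zip(r, s_next)]
    return np.mean(targets)

def greedy_action(q, s, actions):
    """Acting with a Q-function requires no integration or sampling."""
    return max(actions, key=lambda a: q(s, a))
```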
The Single-Sample Paradigm¶
Most reinforcement learning algorithms do not follow this pattern. Instead, they work with datasets of transition tuples $(s, a, r, s')$, where each tuple records a single observed next state from executing action $a$ in state $s$. Each transition provides exactly one sample of the dynamics: the target becomes $r + \gamma \max_{a'} q(s', a')$ using only the single observed $s'$.

This design choice has two origins. First, it reflects genuine constraints in some problem settings. A robot executing action $a$ in state $s$ observes one outcome and cannot rewind to the exact same state to sample alternative continuations. Real physical systems, biological experiments, and costly interactions (clinical trials, manufacturing processes) provide one sample per executed action. However, many domains where RL is applied do not face this constraint. Game emulators support state saving and restoration, physics simulators can reset to arbitrary configurations, and software environments can be cloned cheaply. The single-sample pattern persists in these settings not from necessity but from algorithmic convention.
Second, and perhaps more fundamentally, the single-sample paradigm emerged from reinforcement learning’s roots in computational neuroscience and models of biological intelligence. Early work viewed RL as a model of animal learning, where organisms experience one consequence per action and must learn from streaming sensory data. This perspective shaped the field’s algorithmic agenda: develop methods that learn from single sequential experiences, as biological agents do. Temporal difference learning, Q-learning, and actor-critic methods all follow this template.
These origins matter because they established design choices that persist even when the original constraints do not apply. In many practical applications, we have fast simulators that support arbitrary state initialization (e.g., physics engines for robotics, game emulators for Atari). We could collect multiple next-state samples per $(s, a)$ pair to reduce variance in target computation. However, most algorithms do not exploit this capability. The transition tuple $(s, a, r, s')$ with $N = 1$ remains the standard data structure, and methods compensate for high variance through other means: larger replay buffers, function approximation that pools information across samples, multi-step returns, and careful optimization strategies.

This creates an asymmetry in how algorithms use Monte Carlo integration. For the outer expectation (sampling which states to evaluate at), algorithms freely adjust sample size: offline methods reuse entire datasets, replay buffers store thousands of transitions, online methods process one sample per step. For the inner expectation (sampling next states to evaluate the Bellman operator at a given $(s, a)$), the overwhelming majority of practical methods use $N = 1$ regardless of whether the environment permits larger samples.
The single-sample paradigm shapes three aspects of algorithm design:
Data structure: Algorithms store and manipulate tuples $(s, a, r, s')$, not state-action pairs with multiple next-state samples
Variance reduction: Comes from aggregation across different samples through function approximation and from trajectory length, not from repeated sampling at fixed $(s, a)$
Target computation: Each transition yields exactly one target $r + \gamma \max_{a'} q(s', a')$ evaluated at the single observed $s'$
The algorithms that follow in the next chapters all adopt this single-sample structure. Understanding its origins clarifies when the design choice might be revisited: in domains with cheap, resettable simulators and high target variance, averaging multiple next-state samples per $(s, a)$ pair offers a direct variance reduction mechanism that is rarely exploited in current practice.
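The contrast between the two regimes fits in a few lines. The first function below computes the standard single-sample target from an observed transition tuple; the second averages over several resampled next states, which is only possible when the environment can be reset to $(s, a)$ (the `resample` routine is an assumption about such a simulator).

```python
import numpy as np

def single_sample_target(q, transition, actions, gamma):
    """Standard RL target from one observed tuple (s, a, r, s')."""
    s, a, r, s_next = transition
    return r + gamma * max(q(s_next, ap) for ap in actions)

def averaged_target(q, s, a, resample, actions, gamma, N=16):
    """Variance-reduced target, feasible only with a resettable simulator.

    resample(s, a, N) is assumed to return N independent (reward, next_state) pairs.
    """
    samples = resample(s, a, N)
    return np.mean([r + gamma * max(q(sn, ap) for ap in actions)
                    for r, sn in samples])
```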
Overestimation Bias and Mitigation Strategies¶
Monte Carlo integration provides unbiased estimates of individual expectations, but when we apply the Bellman operator’s maximization to these noisy estimates, a systematic problem arises: taking the maximum over noisy values creates upward bias. This overestimation compounds through iterative algorithms and can severely degrade the quality of learned policies. This section examines the sources of this bias and presents two approaches for mitigating it.
Sources of Overestimation Bias¶
When we apply Monte Carlo integration to the Bellman operator, each individual expectation is unbiased. The problem arises when we maximize over these noisy estimates. To make this precise, consider a given state-action pair $(s, a)$ and value function $v$. Define the true continuation value

$$Q(s, a) = r(s, a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\big[ v(s') \big]$$

and its Monte Carlo approximation with $N$ samples:

$$\hat Q_N(s, a) = r(s, a) + \frac{\gamma}{N} \sum_{i=1}^{N} v(s'_i), \qquad s'_i \sim p(\cdot \mid s, a).$$

Each estimator is unbiased: $\mathbb{E}[\hat Q_N(s, a)] = Q(s, a)$. The Monte Carlo Bellman operator is

$$(\hat{\mathrm{T}}_N v)(s) = \max_{a \in \mathcal{A}} \hat Q_N(s, a).$$

While each $\hat Q_N(s, a)$ is unbiased, the maximization introduces systematic overestimation. To see why, let $a^\star \in \arg\max_a Q(s, a)$ be the truly optimal action. The maximum of any collection is at least as large as any particular element:

$$\mathbb{E}\left[ \max_{a} \hat Q_N(s, a) \right] \;\ge\; \mathbb{E}\big[ \hat Q_N(s, a^\star) \big] = Q(s, a^\star) = \max_{a} Q(s, a).$$

The inequality is strict whenever multiple actions have nonzero variance. The maximization selects whichever action happens to receive a positive noise realization, and that inflated estimate contributes to the target value. This is Jensen's inequality for the max operator: for random variables $X_1, \dots, X_K$, $\mathbb{E}[\max_k X_k] \ge \max_k \mathbb{E}[X_k]$, since the max is convex. Even though we start with unbiased estimators, taking the maximum breaks unbiasedness.
Operationally, this means that repeatedly sampling fresh states and taking the max will not, on average, converge to the true value but systematically land above it. Unlike noise that averages out, this bias persists no matter how many samples we draw. Worse, it compounds across iterations as overestimates feed into future target computations.
Monte Carlo value iteration applies $\hat{\mathrm{T}}_N$ repeatedly. The overestimation bias does not stay confined to a single iteration: it accumulates through the recursion. At iteration 2, we compute Monte Carlo estimates of $\mathrm{T} \hat v_1$, but $\hat v_1$ is already biased upward from iteration 1. Averaging an overestimated function and then maximizing over noisy estimates compounds the bias: $\mathbb{E}[\hat v_{k+1}] \ge \mathrm{T}\, \mathbb{E}[\hat v_k]$ at every iteration. Without correction, this feedback loop can produce severely inflated value estimates that yield poor policies.
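A small simulation exposes the effect. Suppose every action has the same true value of zero, so the correct maximum is zero; the average of the maximum over noisy, individually unbiased estimates is nonetheless clearly positive. The numbers of actions, trials, and the noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
num_actions, num_trials, sigma = 10, 100_000, 1.0

# Unbiased estimates of Q(s, a) = 0 for every action, one row per trial
q_hat = rng.normal(0.0, sigma, size=(num_trials, num_actions))

print("mean of individual estimates:", q_hat.mean())             # ~ 0 (unbiased)
print("mean of max over actions:   ", q_hat.max(axis=1).mean())  # ~ 1.54 (biased upward)
# For 10 standard normals, E[max] is about 1.54 even though every true value is 0.
```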
Learning the Bias Correction¶
One approach, developed by Keane & Wolpin (1994) in the context of dynamic discrete choice models, treats the bias itself as a quantity to be learned and subtracted. For a given value function $v$ and Monte Carlo sample size $N$, define the bias at state $s$ as

$$b_N(s) = \mathbb{E}\left[ \max_{a} \hat Q_N(s, a) \right] - \max_{a} Q(s, a).$$

While we cannot compute $b_N(s)$ directly (we lack both the expectation and the exact Bellman application), the bias has structure. It depends on observable quantities: the number of actions $|\mathcal{A}|$, the sample size $N$, and the spread of action values. When one action dominates, $b_N(s)$ is small. When several actions have similar values, noise is more likely to flip the maximizer, increasing $b_N(s)$.

Rather than deriving $b_N(s)$ analytically, Keane and Wolpin proposed learning it empirically. The strategy follows a "simulate on a subset, interpolate everywhere" template common in econometric dynamic programming. At a carefully chosen set of states, we run both high-fidelity simulation (many samples or exact integration) and the low-fidelity $N$-sample estimate used in value iteration. The gap between these estimates provides training data for the bias. We then fit a regression model that predicts $b_N(s)$ from features of the state and action-value distribution. During value iteration, we subtract the predicted bias from the raw Monte Carlo maximum.

Useful features for predicting the bias include the spread of action values (from a separate high-fidelity simulation), the number of actions, and gaps to the best action. These are cheap to compute and track regimes where maximization bias is large. The procedure can be summarized as: simulate both fidelities at a set of training states, regress the observed gap on these features, and subtract the predicted bias inside each value iteration step.
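A hedged rendering of this template in code, with an assumed linear bias model and illustrative features (this is a sketch of the general idea, not the authors' exact specification):

```python
import numpy as np

def fit_bias_model(training_states, q_low, q_high, features):
    """Fit a linear model predicting the maximization bias from cheap features.

    q_low(s)  -> array of low-fidelity (few-sample) action-value estimates at s
    q_high(s) -> array of high-fidelity (many-sample) action-value estimates at s
    features(q_values) -> feature vector (e.g., spread, gap to best, action count)
    """
    X, y = [], []
    for s in training_states:
        lo, hi = q_low(s), q_high(s)
        X.append(features(lo))
        y.append(np.max(lo) - np.max(hi))      # observed gap = empirical bias of the noisy max
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return coef

def corrected_target(q_low_values, features, coef):
    """Subtract the predicted bias from the raw Monte Carlo maximum."""
    return np.max(q_low_values) - features(q_low_values) @ coef
```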
The regression model is trained offline by comparing high and low-fidelity simulations at a grid of states, then applied during each value iteration step. This approach has been influential in econometrics for structural estimation problems. However, it has seen limited adoption in reinforcement learning. The computational overhead is substantial: it requires high-fidelity simulation at training states, careful feature engineering, and maintaining the regression model throughout learning. More critically, the circular dependency between the bias estimate and the value function can amplify errors if the bias regression is misspecified.
Analytical Bias Correction¶
Lee et al. (2013) and Lee & Powell (2019) developed an alternative that derives the bias analytically using extreme value theory rather than learning it empirically. Their analysis considers the distributional structure of the noise in Monte Carlo estimates. When rewards are Gaussian with variance $\sigma^2$, the bias in the maximum of the estimated values can be computed from extreme value theory.
For a single-state MDP where all randomness comes from stochastic rewards, suppose that for each action $a$, the observed reward is $\hat r(s, a) = r(s, a) + \varepsilon_a$ where $\varepsilon_a \sim \mathcal{N}(0, \sigma^2)$. When we form the target $\max_a \hat r(s, a)$, we are taking the maximum of $M = |\mathcal{A}|$ independent Gaussians. After appropriate centering and scaling, this maximum converges in distribution to a Gumbel random variable. The expected bias can be computed as:

$$B \;=\; \mathbb{E}\left[ \max_{a} \hat r(s, a) \right] - \max_{a} r(s, a) \;\approx\; \sigma \left( b_M + \frac{\gamma_E}{\sqrt{2 \ln M}} \right), \qquad b_M = \sqrt{2 \ln M} - \frac{\ln \ln M + \ln 4\pi}{2 \sqrt{2 \ln M}},$$

where $M$ is the number of actions and $\gamma_E \approx 0.5772$ is the Euler-Mascheroni constant. This quantity is positive and grows with the number of actions $M$. The bias scales with the noise level $\sigma$ and with the action space size.

The corrected target becomes:

$$y = \max_{a} \hat Q(s, a) - B(s, a),$$

where the bias correction term is:

$$B(s, a) = \hat\sigma(s, a) \left( b_M + \frac{\gamma_E}{\sqrt{2 \ln M}} \right).$$

Here $\hat\sigma(s, a)$ is the estimated standard deviation of the reward at state-action pair $(s, a)$. This can be computed from multiple observed reward samples at the same pair if available.
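The sketch below compares the standard Gumbel-based approximation to the expected maximum of $M$ independent Gaussians against a direct simulation. The asymptotic expression used here is the textbook extreme-value form and may differ in finite-$M$ details from the exact correction in the cited papers.

```python
import numpy as np

def expected_max_gaussian(M, sigma=1.0):
    """Gumbel (extreme value) approximation to E[max of M iid N(0, sigma^2)].

    Standard asymptotic form; it is loose for small M and tightens as M grows.
    """
    gamma_e = 0.5772156649
    b_M = np.sqrt(2 * np.log(M)) - (np.log(np.log(M)) + np.log(4 * np.pi)) / (2 * np.sqrt(2 * np.log(M)))
    return sigma * (b_M + gamma_e / np.sqrt(2 * np.log(M)))

rng = np.random.default_rng(0)
for M in [5, 20, 100]:
    simulated = rng.normal(size=(50_000, M)).max(axis=1).mean()
    print(f"M = {M:>3d}   approximation = {expected_max_gaussian(M):.3f}   simulated = {simulated:.3f}")
```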
This approach is exact for single-state MDPs where Q-values equal expected rewards and all bias comes from reward noise. For general multi-state MDPs, the situation is more complex. The bias arises from the max operator over Q-values at the next state $s'$, not from reward stochasticity at the current state. Lee and Powell's multistate extension addresses this by introducing a separate correction term for transition stochasticity, estimated empirically by averaging over multiple sampled next states from the same $(s, a)$ pair, rather than using an analytical formula. The single-state analytical correction remains valuable for bandit problems and as a first-order approximation when reward variance dominates, but it does not directly address max-operator bias from function approximation error in the Q-values themselves. The correction is optimal specifically for Gaussian errors; if the true errors have heavier tails, the correction may be insufficient.

Lee-Powell's analytical formula can be viewed as deriving what Keane-Wolpin learned empirically, replacing the regression model with a closed-form expression from extreme value theory. The trade-off is generality versus computational cost: Keane-Wolpin adapts to any noise structure but requires expensive high-fidelity simulations at training states; Lee-Powell assumes Gaussian noise but evaluates instantly once $\hat\sigma$ is estimated. Both subtract a correction term from targets while preserving the squared error loss.
An alternative to explicit bias correction is to replace the hard max operator with a soft maximum. The regularized MDP chapter discusses smooth Bellman operators that use logsumexp or Gaussian uncertainty-weighted aggregation to avoid the discontinuity and overestimation inherent in taking a hard maximum. These approaches modify the Bellman operator itself rather than correcting its bias after the fact, preserving squared error loss while constructing smoother targets. Another approach, examined in the next chapter, changes the loss function itself to match the noise structure in TD targets.
Decoupling Selection and Evaluation¶
An alternative approach modifies the estimator itself to break the coupling that creates bias. In the standard Monte Carlo update $y = \max_a \hat Q(s, a)$, the same noisy estimate both selects which action looks best and provides the value assigned to that action. To see the problem, decompose the estimator into its mean and noise:

$$\hat Q(s, a) = Q(s, a) + \varepsilon_a,$$

where $\varepsilon_a$ is zero-mean noise. Whichever action happens to have the largest $Q(s, a) + \varepsilon_a$ gets selected, and that same positive noise inflates the target value:

$$y = \max_{a} \hat Q(s, a) = Q(s, a^{\text{sel}}) + \varepsilon_{a^{\text{sel}}},$$

where $a^{\text{sel}} = \arg\max_a \hat Q(s, a)$. This coupling (using the same random variable for both selection and evaluation) produces $\mathbb{E}[y] \ge \max_a Q(s, a)$.
Double Q-learning (Van Hasselt et al., 2016) breaks this coupling by maintaining two independent Monte Carlo estimates. Conceptually, for each action $a$ at state $s$, we would draw two separate sets of samples from $p(\cdot \mid s, a)$:

$$\hat Q^{(1)}(s, a) = Q(s, a) + \varepsilon^{(1)}_a, \qquad \hat Q^{(2)}(s, a) = Q(s, a) + \varepsilon^{(2)}_a,$$

where $\varepsilon^{(1)}_a$ and $\varepsilon^{(2)}_a$ are independent zero-mean noise terms. We use the first estimate to select the action but the second to evaluate it:

$$a^{\text{sel}} = \arg\max_{a} \hat Q^{(1)}(s, a), \qquad y = \hat Q^{(2)}(s, a^{\text{sel}}).$$

Note that $a^{\text{sel}}$ is a random variable. It is the result of maximizing over noisy estimates $\hat Q^{(1)}(s, \cdot)$, and therefore depends on the noise $\varepsilon^{(1)}$. The target value uses the second estimator $\hat Q^{(2)}$, evaluating it at the randomly selected action $a^{\text{sel}}$. Expanding the target value to make the dependencies explicit:

$$y = Q(s, a^{\text{sel}}) + \varepsilon^{(2)}_{a^{\text{sel}}},$$
where $a^{\text{sel}}$ itself depends on $\varepsilon^{(1)}$. To see why this eliminates evaluation bias, write the expectation as a nested expectation:

$$\mathbb{E}[y] = \mathbb{E}\Big[\, \mathbb{E}\big[\, Q(s, a^{\text{sel}}) + \varepsilon^{(2)}_{a^{\text{sel}}} \,\big|\, a^{\text{sel}} \,\big] \Big].$$

The inner expectation, conditioned on the selected action $a^{\text{sel}} = a$, equals:

$$\mathbb{E}\big[ Q(s, a) + \varepsilon^{(2)}_a \,\big|\, a^{\text{sel}} = a \big] = Q(s, a) + \mathbb{E}\big[ \varepsilon^{(2)}_a \,\big|\, a^{\text{sel}} = a \big] = Q(s, a),$$

because $\mathbb{E}[\varepsilon^{(2)}_a \mid a^{\text{sel}} = a] = 0$. Why does this conditional expectation vanish? Because $a^{\text{sel}}$ is a random variable determined entirely by $\varepsilon^{(1)}$ (through the argmax operation), while $\varepsilon^{(2)}$ is independent of $\varepsilon^{(1)}$. Therefore, $\varepsilon^{(2)}$ is independent of $a^{\text{sel}}$. By the fundamental property of independence, for any random variables $X$ and $Y$ where $X \perp Y$, we have $\mathbb{E}[X \mid Y] = \mathbb{E}[X]$. Applying this:

$$\mathbb{E}\big[ \varepsilon^{(2)}_a \,\big|\, a^{\text{sel}} = a \big] = \mathbb{E}\big[ \varepsilon^{(2)}_a \big] = 0,$$

where the final equality holds because each $\varepsilon^{(2)}_a$ has zero mean. Taking the outer expectation:

$$\mathbb{E}[y] = \mathbb{E}\big[ Q(s, a^{\text{sel}}) \big].$$
The evaluation noise contributes zero on average because it is independent of the selection. However, double Q-learning does not eliminate all bias. Two distinct sources of bias arise when maximizing over noisy estimates:
Selection bias: Using $\hat Q^{(1)}$ to choose the action tends to select actions that received positive noise. The noise causes us to favor actions that got lucky, so $a^{\text{sel}}$ is not uniformly distributed but skewed toward actions with positive $\varepsilon^{(1)}_a$. This means $\mathbb{E}\big[\varepsilon^{(1)}_{a^{\text{sel}}}\big] > 0$.

Evaluation bias: In the standard estimator, we use the same noisy estimate that selected $a^{\text{sel}}$ to also evaluate it. If $a^{\text{sel}}$ was selected because $\varepsilon_{a^{\text{sel}}}$ happened to be large and positive, that same inflated estimate provides the target value. This creates additional bias beyond the selection effect.

Double Q-learning eliminates evaluation bias by using independent noise for evaluation: even though $a^{\text{sel}}$ is biased toward actions with positive $\varepsilon^{(1)}_a$, the evaluation uses $\varepsilon^{(2)}_{a^{\text{sel}}}$, which is independent of the selection, so $\mathbb{E}\big[\varepsilon^{(2)}_{a^{\text{sel}}}\big] = 0$. Selection bias remains, but removing evaluation bias substantially reduces total overestimation.
Remark 2 (Implementation via different Q-functions)
The conceptual framework above assumes we can draw multiple independent samples from $p(\cdot \mid s, a)$ for each state-action pair. In practice, this would require resetting a simulator to the same state and sampling multiple times, which is often infeasible. The practical implementation achieves the same effect differently: maintain two Q-functions that are trained on different data or updated at different times (e.g., one is a slowly-updating target network). Since the two Q-functions experience different noise realizations during training, their errors remain less correlated than if we used a single Q-function for both selection and evaluation. This is how Double DQN works, as we'll see in the next chapter on online learning.
The following algorithm implements this principle with Monte Carlo integration. We maintain two Q-functions and alternate which one selects actions and which one evaluates them. The bias reduction mechanism applies whether we store Q-values in a table or use function approximation.
Note that this algorithm is written in the theoretical form that assumes we can draw $N$ samples from each $(s, a)$ pair. In practice with single-trajectory data ($N = 1$, one quadruple $(s, a, r, s')$ per transition), the double Q-learning principle still applies, but we work with the single observed next state and maintain two Q-functions that are trained on different subsets of data or updated at different frequencies to achieve the independence needed for bias reduction.
At lines 9 and 12, $q_1$ selects the action but $q_2$ evaluates it. The noise in $q_1$ influences which action is chosen, but the independent noise in $q_2$ does not inflate the evaluation. Lines 10 and 13 apply the same principle symmetrically for updating $q_2$.
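As a complement to the pseudocode referenced above (the line numbers refer to the original algorithm listing), here is a minimal tabular sketch of the decoupled target computation with two Q-tables; the dictionary representation and the coin flip that chooses which table to update are implementation choices, not part of the original listing.

```python
import numpy as np

def double_q_targets(q1, q2, s_next, actions, r, gamma):
    """Compute the two decoupled targets for a transition ending in s_next.

    q1, q2 are dictionaries mapping (state, action) to value estimates.
    """
    # q1 selects, q2 evaluates (target used to update q1)
    a_sel_1 = max(actions, key=lambda a: q1[(s_next, a)])
    y1 = r + gamma * q2[(s_next, a_sel_1)]
    # Symmetrically: q2 selects, q1 evaluates (target used to update q2)
    a_sel_2 = max(actions, key=lambda a: q2[(s_next, a)])
    y2 = r + gamma * q1[(s_next, a_sel_2)]
    return y1, y2

def double_q_update(q1, q2, transition, actions, gamma, alpha=0.1, rng=np.random):
    s, a, r, s_next = transition
    y1, y2 = double_q_targets(q1, q2, s_next, actions, r, gamma)
    if rng.random() < 0.5:                 # update one table at a time
        q1[(s, a)] += alpha * (y1 - q1[(s, a)])
    else:
        q2[(s, a)] += alpha * (y2 - q2[(s, a)])
```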
Weighted Estimators: A Third Approach¶
The Keane-Wolpin and double Q-learning approaches represent two strategies for addressing overestimation bias: explicit correction and decoupled estimation. A third approach, developed by D'Eramo et al. (2016), replaces the hard maximum with a probability-weighted average that accounts for uncertainty in value estimates.
The maximum estimator (ME) used in standard Q-learning takes the maximum of sample mean Q-values:

$$\hat\mu_{\text{ME}} = \max_{a} \hat Q(s, a),$$

where $\hat Q(s, a)$ is the sample mean Q-value for action $a$. While each individual $\hat Q(s, a)$ may be unbiased, the maximum is systematically upward biased. The bias can be bounded by:

$$0 \;\le\; \mathbb{E}\big[\hat\mu_{\text{ME}}\big] - \max_{a} Q(s, a) \;\le\; \sqrt{\frac{M - 1}{M} \sum_{a} \frac{\sigma_a^2}{n_a}},$$

where $M$ is the number of actions, $\sigma_a^2$ is the variance of Q-values for action $a$, and $n_a$ is the number of samples used to estimate it.
The double estimator (DE) from double Q-learning reduces this bias by decoupling selection and evaluation, but introduces negative bias:

$$\hat\mu_{\text{DE}} = \hat Q^{(2)}\big(s, \arg\max_{a} \hat Q^{(1)}(s, a)\big), \qquad \mathbb{E}\big[\hat\mu_{\text{DE}}\big] \le \max_{a} Q(s, a).$$

This pessimism can be problematic when one action is clearly superior to others, as the estimator may systematically undervalue the optimal action.
The weighted estimator (WE) provides a middle ground. Instead of taking a hard maximum, compute a weighted average where weights reflect the probability that each action is optimal. Under a Gaussian approximation for the sampling distribution of Q-value estimates, the weight for action $a$ is:

$$w_a = \int_{-\infty}^{\infty} \frac{1}{\hat\sigma_a}\, \phi\!\left( \frac{x - \hat Q(s, a)}{\hat\sigma_a} \right) \prod_{a' \ne a} \Phi\!\left( \frac{x - \hat Q(s, a')}{\hat\sigma_{a'}} \right) dx,$$

where $\phi$ and $\Phi$ are the standard normal PDF and CDF, and $\hat\sigma_a$ is the standard error of $\hat Q(s, a)$. This integral computes the probability that action $a$ achieves the maximum by integrating over all possible true values $x$. For each $x$, the $\phi$ term gives the probability density that action $a$ has true value $x$, while the product of $\Phi$ terms gives the probability that all other actions have true values below $x$.
The weighted estimator then becomes:

$$\hat\mu_{\text{WE}} = \sum_{a} w_a\, \hat Q(s, a).$$

D'Eramo et al. established that this estimator satisfies $\mathbb{E}\big[\hat\mu_{\text{DE}}\big] \le \mathbb{E}\big[\hat\mu_{\text{WE}}\big]$ and $\mathbb{E}\big[\hat\mu_{\text{WE}}\big] \le \mathbb{E}\big[\hat\mu_{\text{ME}}\big]$. The weighted estimator thus provides a middle ground between the optimism of ME and the pessimism of DE.
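A small sketch of the weight computation by numerical integration on a grid, under the Gaussian approximation; the grid resolution and integration range are ad hoc choices.

```python
import numpy as np
from scipy.stats import norm

def weighted_estimator(means, std_errs, grid_size=2001):
    """Probability-weighted estimate of max_a E[Q(s, a)] in the style of D'Eramo et al.

    means[a]    : sample-mean Q-value for action a
    std_errs[a] : standard error of that sample mean (Gaussian approximation)
    """
    means, std_errs = np.asarray(means, float), np.asarray(std_errs, float)
    x = np.linspace((means - 8 * std_errs).min(), (means + 8 * std_errs).max(), grid_size)
    dx = x[1] - x[0]
    weights = np.empty(len(means))
    for a in range(len(means)):
        pdf_a = norm.pdf(x, means[a], std_errs[a])
        cdf_others = np.prod([norm.cdf(x, means[b], std_errs[b])
                              for b in range(len(means)) if b != a], axis=0)
        weights[a] = np.sum(pdf_a * cdf_others) * dx    # P(action a is the maximizer)
    weights /= weights.sum()                            # guard against discretization error
    return float(weights @ means), weights

# Example: one clearly better action versus two noisy competitors
value, w = weighted_estimator(means=[1.0, 0.2, 0.1], std_errs=[0.3, 0.3, 0.3])
print(value, w)   # most of the weight falls on the first action
```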
The relative performance depends on the problem structure. When one action is clearly superior (the value gap is large relative to the estimation noise), the signal dominates the noise and ME performs well with minimal overestimation. When multiple actions have similar values (the gaps are small relative to the noise), eliminating overestimation becomes critical and DE excels despite its pessimism. In intermediate cases, which represent most practical scenarios, WE adapts to the actual uncertainty and avoids both extremes.
The weighted estimator can be incorporated into Q-learning by maintaining variance estimates alongside Q-values and computing the weighted target at each step. The weights themselves can serve as an exploration policy that naturally adapts to estimation uncertainty. This approach adapts automatically to heterogeneous uncertainty across actions and provides balanced bias, but requires maintaining variance estimates (adding memory overhead) and computing integrals (expensive for large action spaces). The method assumes Gaussian sampling distributions, though it proves robust in practice even when this assumption is violated.
The weighted estimator relates to the smooth Bellman operators discussed in the regularized MDP chapter. Both replace the discontinuous hard maximum with smooth aggregation, but weighted estimation adapts to state-action-specific uncertainty while logsumexp applies uniform smoothing via temperature. The choice among ME, DE, and WE depends on computational constraints and problem characteristics. When variance estimates are available and action spaces are small, WE offers a principled approach that balances the extremes of overestimation and underestimation. When maintaining variance is expensive or action spaces are large, double Q-learning provides a simpler alternative that eliminates evaluation bias without explicit probability weighting.
Summary and Forward Look¶
This chapter developed the simulation-based approach to approximate dynamic programming by replacing analytical integration with Monte Carlo sampling. We showed how deterministic quadrature rules give way to randomized estimation when transition densities are expensive to evaluate or the state space is high-dimensional. We need only the ability to sample from transition distributions, not to compute their densities explicitly.
Three foundational components emerged:
Monte Carlo Bellman operator: Replace exact expectations with sample averages $\frac{1}{N}\sum_{i=1}^{N}\big[r_i + \gamma\, v(s'_i)\big]$, typically with $N = 1$ in practice
Q-functions: Cache state-action values to amortize the cost of action selection, eliminating the need for repeated sampling at decision time
Bias mitigation: Address the systematic overestimation that arises from maximizing over noisy estimates through four approaches: learning empirical corrections (Keane-Wolpin), analytical adjustment from extreme value theory (Lee-Powell), decoupling selection from evaluation (double Q-learning), and probability-weighted aggregation (weighted estimators)
The algorithms presented (Parametric Q-Value Iteration, Keane-Wolpin, Double Q) remain in the theoretical setting where we can draw multiple samples from each state-action pair and choose optimization parameters freely. The next chapter addresses the practical constraints of reinforcement learning: working with fixed offline datasets of single-sample transitions, choosing function approximators and optimization strategies, and understanding the algorithmic design space that yields methods like FQI, NFQI, and DQN.
- Keane, M. P., & Wolpin, K. I. (1994). The Solution and Estimation of Discrete Choice Dynamic Programming Models by Simulation and Interpolation: Monte Carlo Evidence. The Review of Economics and Statistics, 76(4), 648. 10.2307/2109768
- Lee, D., Defourny, B., & Powell, W. B. (2013). Bias-corrected Q-learning to control max-operator bias in Q-learning. 2013 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), 93–99. 10.1109/adprl.2013.6614994
- Lee, D., & Powell, W. B. (2019). Bias-Corrected Q-Learning With Multistate Extension. IEEE Transactions on Automatic Control, 64(10), 4011–4023. 10.1109/tac.2019.2912443
- Van Hasselt, H., Guez, A., & Silver, D. (2016). Deep Reinforcement Learning with Double Q-learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1).
- D’Eramo, C., Nuara, A., Restelli, M., & Nowe, A. (2016). Estimating the Maximum Expected Value through Gaussian Approximation. Proceedings of the 33rd International Conference on Machine Learning, 48, 1032–1040.