Posts

Anytime Error Minimization Search

Last edited: August 8, 2025

Big picture: combining off-line and on-line approaches may be the best way to tackle large POMDPs.

Try planning:

  • only where we are
  • only where we can reach

Take into account three factors:

  1. uncertainty in the value function
  2. reachability from the current belief
  3. actions that are likely optimal

It allows policy improvement over any base policy.
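A minimal sketch of how the three factors above might combine into an expansion score, in the spirit of an AEMS-style expected-error heuristic; the node fields (`U`, `L`, `depth`, `path_prob`, `path_actions_look_optimal`) are illustrative names, not from the paper:

```python
def error_contribution(node, gamma=0.95):
    """Score a fringe belief node for expansion (AEMS-style sketch).

    Combines the three factors listed above:
      * uncertainty:  the bound gap U(b) - L(b) at the fringe belief
      * reachability: probability of reaching the node from b_0,
                      discounted by gamma**depth
      * action choice: only paths whose actions currently look optimal
                       (e.g. maximize the upper bound at each ancestor) count
    All node fields here are assumed/illustrative, not from the notes.
    """
    if not node.path_actions_look_optimal:
        return 0.0
    uncertainty = node.U - node.L
    reachability = (gamma ** node.depth) * node.path_prob
    return reachability * uncertainty


def pick_fringe_to_expand(fringe_nodes, gamma=0.95):
    # Expand where the expected contribution to the root's error is largest.
    return max(fringe_nodes, key=lambda n: error_contribution(n, gamma))
```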

Setup

Discrete POMDPs:

  • \(L\): lower bound on the optimal value function
  • \(U\): upper bound on the optimal value function
  • \(b_0\): current belief
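To make the roles of these symbols concrete (assuming \(V^*\) denotes the optimal value function, a symbol not written out in the notes above), the bounds sandwich the optimal value, and their gap at a belief is the error the search tries to shrink:

\[
L(b) \le V^*(b) \le U(b),
\qquad
e(b) = U(b) - L(b).
\]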

The algorithm runs two main phases per iteration: expand the most promising fringe node, then propagate the tightened bounds back up to the root.
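A sketch of the anytime loop under those assumptions; `expand`, `backup`, and the `root` node's methods (`fringe`, `actions`, `L_action`) are hypothetical helpers, and `pick_fringe_to_expand` is the scoring sketch above:

```python
import time

def aems_plan(root, expand, backup, time_budget_s=1.0, gamma=0.95):
    """Anytime planning loop sketch: two phases per iteration.

    Phase 1: expand the fringe node with the largest error contribution.
    Phase 2: back the tightened L/U bounds up from that node to the root.
    `expand(node)` and `backup(node)` are assumed helpers, not a real API.
    """
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline and root.fringe():
        best = pick_fringe_to_expand(root.fringe(), gamma)
        expand(best)   # phase 1: grow the tree where the error is largest
        backup(best)   # phase 2: propagate tighter bounds toward the root
    # Acting greedily w.r.t. the lower bound at the root keeps the result at
    # least as good as the base policy that the lower bound comes from.
    return max(root.actions, key=lambda a: root.L_action(a))
```

Stopping at any time still yields a usable action, since the lower bound only tightens as the tree grows.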

AP Phys C EM Things to Do

Last edited: August 8, 2025

  • Review all the names of units and their SI conversions
  • Review all the time-constant equations (see the recap below)
  • “Amperian Loop”
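A quick recap of the time constants the second item refers to, assuming the standard series RC and RL circuits covered in AP Physics C E&M:

\[
\tau_{RC} = RC,
\qquad
\tau_{RL} = \frac{L}{R}.
\]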