Houjun Liu

POMCP

POMCP improves on previous Monte-Carlo tree search methods, which were not competitive with PBVI, SARSOP, etc.; those offline solvers, however, are affected by the curse of history.

key point: Monte-Carlo rollouts with best-first tree search + an unweighted particle filter (instead of categorical beliefs)
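
As a rough illustration (a hypothetical Python sketch with made-up state names, not from the paper), the unweighted particle filter replaces an explicit categorical distribution over states with a bag of sampled states, so drawing from \(B(h)\) becomes a uniform draw over particles:

  import random

  # categorical belief: an explicit probability per state (made-up states)
  categorical_belief = {"s0": 0.5, "s1": 0.3, "s2": 0.2}

  # unweighted particle belief: a bag of sampled states; a state's probability
  # is simply its relative frequency among the particles
  particle_belief = random.choices(
      population=list(categorical_belief.keys()),
      weights=list(categorical_belief.values()),
      k=1000,
  )

  # sampling s ~ B(h) is then a uniform draw over the particles
  s = random.choice(particle_belief)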

Background

  • History: a trajectory of actions and observations, \(h = \{a_1, o_1, \dots\}\)
  • generative model: from the current state, we draw a random sample of a possible next state \(s' \sim T(\cdot \mid s,a)\) (conditioned on the action taken), together with the reward \(R(s,a)\)
  • Rollout: keep sampling actions and next states with the generative model, rolling forward and accumulating the discounted future reward (see the sketch after this list)
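
A minimal sketch of these two pieces in Python, assuming a hypothetical two-state toy problem (the step function, ACTIONS, and rewards below are invented purely for illustration):

  import random

  ACTIONS = ["stay", "switch"]

  # hypothetical generative model G(s, a) -> (s', r): sample the next state and
  # reward without ever writing T or R down explicitly
  def step(s, a):
      s_next = random.choice([0, 1])                       # s' ~ T(. | s, a)
      r = 1.0 if (s_next == 1 and a == "stay") else 0.0    # R(s, a)
      return s_next, r

  # rollout: follow a random policy to the horizon, accumulating discounted reward
  def rollout(s, gamma=0.95, depth=0, max_depth=20):
      if depth >= max_depth:
          return 0.0
      a = random.choice(ACTIONS)
      s_next, r = step(s, a)
      return r + gamma * rollout(s_next, gamma, depth + 1, max_depth)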

Monte-Carlo tree search

  1. loop:
    1. sample a state \(s\) from the belief distribution \(B(h)\) (i.e., draw a particle) and use it as the node state
    2. loop until we reach a leaf:
      1. select an action to explore using UCB1
      2. sample the next state, observation, and reward from the generative model
    3. add a leaf node, with a child node for each available action
    4. perform a Rollout from the leaf to estimate its value
    5. backpropagate the obtained value, with discounting, back up the tree via the POMDP Bellman backup (see the sketch after this list)
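
A hedged Python sketch of this simulation loop, reusing the hypothetical step, ACTIONS, and rollout above and adding a made-up observation model (none of these names come from the original paper):

  import math
  import random

  N = {}   # visit counts, keyed by history h or by (h, a)
  Q = {}   # value estimates, keyed by (h, a)

  # hypothetical noisy observation of the next state, for the toy problem above
  def observe(s_next, a):
      return s_next if random.random() < 0.8 else 1 - s_next

  # UCB1: prefer actions with high estimated value or low visit count
  def ucb1_action(h, c=1.0):
      def score(a):
          if N.get((h, a), 0) == 0:
              return float("inf")
          return Q[(h, a)] + c * math.sqrt(math.log(N[h]) / N[(h, a)])
      return max(ACTIONS, key=score)

  def simulate(s, h, depth, gamma=0.95, max_depth=20):
      if depth >= max_depth:
          return 0.0
      if h not in N:                                   # reached a leaf:
          N[h] = 0                                     # expand it with a child
          for a in ACTIONS:                            # per available action,
              N[(h, a)], Q[(h, a)] = 0, 0.0
          return rollout(s, gamma, depth, max_depth)   # then roll out
      a = ucb1_action(h)                               # UCB1 exploration
      s_next, r = step(s, a)                           # generative model
      o = observe(s_next, a)
      total = r + gamma * simulate(s_next, h + (a, o), depth + 1, gamma, max_depth)
      N[h] += 1                                        # backpropagate: keep a
      N[(h, a)] += 1                                   # running average of the
      Q[(h, a)] += (total - Q[(h, a)]) / N[(h, a)]     # discounted returns
      return total

  # one search: repeatedly sample a particle s ~ B(h) and simulate from the root
  def search(particles, n_simulations=1000):
      root = ()
      for _ in range(n_simulations):
          simulate(random.choice(particles), root, 0)
      return max(ACTIONS, key=lambda a: Q[(root, a)])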

During runtime, we choose the action with the best estimated value, execute it, prune the tree down to the subtree consistent with that action and the received observation, and repeat the search from the new root.
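
A hedged sketch of that outer loop, again built on the hypothetical pieces above; the belief update here is a simple rejection-style particle filter that keeps only samples consistent with the received observation, and the real-environment interaction is left as a placeholder:

  def act_and_update(particles, root=()):
      # choose the root action with the best estimated value
      best_a = max(ACTIONS, key=lambda a: Q.get((root, a), float("-inf")))
      # ... execute best_a in the real environment and receive an observation ...
      o = 1  # placeholder for the real observation
      # prune: the child history (best_a, o) becomes the new search root
      new_root = root + (best_a, o)
      # unweighted particle update: resample, keeping states that match o
      new_particles = []
      while len(new_particles) < len(particles):
          s = random.choice(particles)
          s_next, _ = step(s, best_a)
          if observe(s_next, best_a) == o:
              new_particles.append(s_next)
      return best_a, new_particles, new_root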