Why This Matters

Online POMDP planning requires an accurate belief-state representation to make decisions under uncertainty, but particle filtering struggles when the received observation provides highly informative evidence that moves the belief far from the prior distribution. AIROAS is innovative because it applies importance-sampling tempering specifically to belief-node updates in the planning tree, improving on standard particle filters by carefully controlling the transition from prior to posterior belief through a sequence of intermediate distributions.
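To see why a single-step importance update fails on surprising observations, it helps to look at the effective sample size (ESS) of the importance weights. The following is a minimal numeric sketch, not the paper's implementation: it assumes a hypothetical 1-D standard-normal prior belief and a Gaussian observation model, and shows that a small tempering exponent flattens the weights so far more particles stay effective.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.standard_normal(5000)   # prior belief, N(0, 1) (assumption)
obs, noise = 2.0, 0.1                   # sharp, surprising observation (assumption)

def ess(logw):
    """Effective sample size of the normalized importance weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

# log g(o | s) under the assumed Gaussian observation model
loglik = -0.5 * ((obs - particles) / noise) ** 2

one_shot = ess(loglik)        # weights collapse onto a handful of particles
tempered = ess(0.1 * loglik)  # first small tempering increment: flatter weights
print(one_shot, tempered)     # the tempered ESS is substantially larger
```

Chaining several such small increments, with resampling in between, is the core idea behind bridging the prior and posterior through intermediate distributions.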

What We Did

AIROAS introduces Annealed Importance Resampling for Observation Adaptation in online POMDP planning, addressing the challenge of representing belief states when direct sampling from the optimal posterior distribution is infeasible. The approach maintains particle diversity through annealed importance resampling, constructing smoothly interpolated intermediate distributions that bridge the proposal and target distributions. This enables more efficient belief updates and better planning performance than standard particle filtering, particularly at deep search nodes, where observation uncertainty is high.
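The bridging scheme can be sketched as follows. This is a simplified, hypothetical 1-D Gaussian example, not the paper's implementation: intermediate targets proportional to prior(s) * g(o|s)^beta_k are traversed by reweighting, resampling, and a Metropolis rejuvenation move at each increment.

```python
import numpy as np

def log_prior(s):
    # standard-normal prior over a 1-D state (assumption for this sketch)
    return -0.5 * s ** 2

def log_lik(s, obs, noise=0.1):
    # Gaussian observation model g(o | s) (assumption for this sketch)
    return -0.5 * ((obs - s) / noise) ** 2

def annealed_belief_update(particles, obs, betas, rng, step=0.05):
    """Anneal from prior to posterior through bridge targets
    pi_k(s) proportional to prior(s) * g(o|s)**beta_k."""
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        # incremental importance weights for this bridge step
        logw = (b_next - b_prev) * log_lik(particles, obs)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # resample toward the current tempered target
        particles = rng.choice(particles, size=particles.size, p=w)
        # Metropolis rejuvenation move leaving pi_k invariant
        prop = particles + step * rng.standard_normal(particles.size)
        log_acc = (log_prior(prop) + b_next * log_lik(prop, obs)) \
                - (log_prior(particles) + b_next * log_lik(particles, obs))
        accept = rng.random(particles.size) < np.exp(np.minimum(log_acc, 0.0))
        particles = np.where(accept, prop, particles)
    return particles

rng = np.random.default_rng(0)
prior_particles = rng.standard_normal(2000)      # belief before the observation
posterior_particles = annealed_belief_update(
    prior_particles, obs=2.0, betas=np.linspace(0, 1, 11), rng=rng)
```

With a conjugate Gaussian setup like this, the annealed particle set should land near the analytic posterior (mean roughly 1.98 for these parameters), whereas a single resampling step from the same prior sample would concentrate on the few particles that happen to sit near the observation.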

Key Results

Experimental evaluation across multiple challenging POMDP planning domains demonstrates that AIROAS significantly improves planning performance by reducing particle degeneracy at deeper search nodes. The approach yields a more effective belief representation and better decision quality than standard particle filtering while remaining computationally efficient enough for online planning.

Cite This Paper

@article{airoas_zhang,
  title = {Observation Adaptation via Annealed Importance Resampling for Partially Observable Markov Decision Processes},
  author = {Zhang, Yunuo and Luo, Baiting and Mukhopadhyay, Ayan and Dubey, Abhishek},
  year = {2025},
  month = {sep},
  journal = {Proceedings of the International Conference on Automated Planning and Scheduling},
  volume = {35},
  number = {1},
  pages = {306--314},
  doi = {10.1609/icaps.v35i1.36132},
  url = {https://ojs.aaai.org/index.php/ICAPS/article/view/36132},
  abstractnote = {Partially observable Markov decision processes (POMDPs) are a general mathematical model for sequential decision-making in stochastic environments under state uncertainty. POMDPs are often solved online, which enables the algorithm to adapt to new information in real time. Online solvers typically use bootstrap particle filters based on importance resampling for updating the belief distribution. Since directly sampling from the ideal state distribution given the latest observation and previous state is infeasible, particle filters approximate the posterior belief distribution by propagating states and adjusting weights through prediction and resampling steps. However, in practice, the importance resampling technique often leads to particle degeneracy and sample impoverishment when the state transition model poorly aligns with the posterior belief distribution, especially when the received observation is noisy. We propose an approach that constructs a sequence of bridge distributions between the state-transition and optimal distributions through iterative Monte Carlo steps, better accommodating noisy observations in online POMDP solvers. Our algorithm demonstrates significantly superior performance compared to state-of-the-art methods when evaluated across multiple challenging POMDP domains.},
  keywords = {partially observable Markov decision processes, importance sampling, particle filtering, belief state representation, online planning, Monte Carlo methods},
}

Quick Info
Year 2025
Keywords
partially observable Markov decision processes, importance sampling, particle filtering, belief state representation, online planning, Monte Carlo methods
Research Areas
POMDP planning, scalable AI