Why This Matters

Most decision-making algorithms assume either fully known environments or stationary dynamics, neither of which holds in real-world systems such as emergency response, where conditions change unpredictably. This work explicitly addresses the challenge of maintaining safety and performance as the environment evolves: by combining risk-aware tree search with Bayesian uncertainty quantification, the approach lets agents learn quickly from new data while avoiding the blanket pessimism that would otherwise sacrifice performance.
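The split between aleatoric (irreducible) and epistemic (reducible-with-data) uncertainty that the paper relies on can be illustrated with a count-based Dirichlet belief over discrete transition outcomes. This is a minimal hypothetical sketch, not the authors' implementation; the class name `TransitionBelief` and its methods are illustrative assumptions.

```python
import math

class TransitionBelief:
    """Dirichlet belief over next-state outcomes for one (state, action) pair.

    Illustrative only: shows how aleatoric and epistemic uncertainty can be
    separated in a count-based posterior; the paper's belief representation
    may differ.
    """
    def __init__(self, n_outcomes, prior=1.0):
        # Symmetric Dirichlet prior: pseudo-count `prior` per outcome.
        self.counts = [prior] * n_outcomes

    def update(self, outcome):
        # Observing a transition adds one count for that outcome.
        self.counts[outcome] += 1

    def mean(self):
        # Posterior-mean categorical distribution over outcomes.
        total = sum(self.counts)
        return [c / total for c in self.counts]

    def aleatoric(self):
        # Entropy of the mean categorical: randomness that remains even
        # with perfect knowledge of the transition probabilities.
        return -sum(p * math.log(p) for p in self.mean() if p > 0)

    def epistemic(self):
        # Uncertainty about the probabilities themselves; shrinks as
        # evidence accumulates (sum of Dirichlet marginal variances).
        total = sum(self.counts)
        return sum(p * (1 - p) for p in self.mean()) / (total + 1)
```

With a deterministic transition, repeated observations drive both quantities down; with a genuinely stochastic one, epistemic uncertainty shrinks while aleatoric uncertainty persists, which is exactly the signal that lets an agent stop being pessimistic where it is confident.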

What We Did

This paper addresses adaptive decision-making in non-stationary Markov decision processes (NS-MDPs), where the environment changes over time and the agent's learned policy may become outdated. The researchers develop Adaptive Monte Carlo Tree Search (ADA-MCTS), which combines offline learning of stored policy values with online Monte Carlo tree search to handle environments whose dynamics and reward structures can both shift. The method employs a dual-phase adaptive sampling strategy that balances exploring unfamiliar regions of the state space against exploiting promising actions, informed by both the previous policy and current estimates of the environment.
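The exploration/exploitation balance described above can be sketched as a UCB-style action-selection rule with an added penalty for epistemic uncertainty, so the agent acts pessimistically only where its knowledge of the updated dynamics is still poor. This is a hypothetical sketch, not the exact ADA-MCTS formulation; the `node_stats` bookkeeping and the `pessimism` weight are illustrative assumptions.

```python
import math

def select_action(node_stats, c=1.4, pessimism=1.0):
    """Pick an action at a tree node via UCB with an uncertainty penalty.

    node_stats maps action -> (visits, mean_value, epistemic), where
    `epistemic` is an estimate of how poorly known that action's dynamics
    are. High epistemic uncertainty discounts the value estimate
    (pessimism where knowledge is stale); the UCB bonus still drives
    exploration of rarely tried actions.
    """
    total = sum(v for v, _, _ in node_stats.values()) or 1

    def score(stats):
        visits, value, epistemic = stats
        explore = c * math.sqrt(math.log(total + 1) / (visits + 1))
        return value - pessimism * epistemic + explore

    return max(node_stats, key=lambda a: score(node_stats[a]))
```

For example, given two actions with equal empirical value, the rule prefers the one whose dynamics the agent is confident about, while an almost-unvisited action can still win through the exploration bonus.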

Key Results

The proposed approach adapts faster than standard Monte Carlo tree search and other baselines across multiple environments, including control and navigation tasks. The method learns updated policies for new environments while remaining robust to changing dynamics. Experiments on well-established open-source benchmarks show that the uncertainty-aware sampling strategy enables faster convergence and better performance than approaches that treat environment changes monolithically, demonstrating the value of explicitly modeling uncertainty.

Cite This Paper

@inproceedings{baiting2024AAMAS,
  author = {Luo, Baiting and Zhang, Yunuo and Dubey, Abhishek and Mukhopadhyay, Ayan},
  booktitle = {Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems},
  title = {Act as You Learn: Adaptive Decision-Making in Non-Stationary Markov Decision Processes},
  year = {2024},
  address = {Richland, SC},
  pages = {1301--1309},
  acceptance = {20},
  publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
  series = {AAMAS '24},
  abstract = {A fundamental challenge in sequential decision-making is dealing with non-stationary environments, where exogenous environmental conditions change over time. Such problems are traditionally modeled as non-stationary Markov decision processes (NS-MDP). However, existing approaches for decision-making in NS-MDPs have two major shortcomings: first, they assume that the updated environmental dynamics at the current time are known (although future dynamics can change); and second, planning is largely pessimistic, i.e., the agent acts ``safely'' to account for the non-stationary evolution of the environment. We argue that both these assumptions are invalid in practice: updated environmental conditions are rarely known, and as the agent interacts with the environment, it can learn about the updated dynamics and avoid being pessimistic, at least in states whose dynamics it is confident about. We present a heuristic search algorithm called Adaptive Monte Carlo Tree Search (ADA-MCTS) that addresses these challenges. We show that the agent can learn the updated dynamics of the environment over time and then act as it learns, i.e., if the agent is in a region of the state space about which it has updated knowledge, it can avoid being pessimistic. To quantify ``updated knowledge,'' we disintegrate the aleatoric and epistemic uncertainty in the agent's updated belief and show how the agent can use these estimates for decision-making. We compare the proposed approach with multiple state-of-the-art approaches in decision-making across multiple well-established open-source problems and empirically show that our approach is faster and more adaptive without sacrificing safety.},
  contribution = {colab},
  isbn = {9798400704864},
  keywords = {non-stationary environments, adaptive learning, decision-making under uncertainty, Monte Carlo tree search, policy learning, risk-aware planning, dynamic systems},
  location = {Auckland, New Zealand},
  numpages = {9}
}
Research Areas

POMDP, scalable AI, middleware