The Adoption Problem

Our planning algorithms can compute high-quality decisions for transit dispatch, emergency response, and energy management. Our state estimation methods can maintain accurate beliefs about the world despite noisy sensors and changing conditions. But none of this matters if the people who must act on these recommendations don’t trust them.

This is the adoption problem, and it is fundamentally about explainability. When a transit dispatcher receives an AI recommendation to send a reserve bus to a particular corridor, their first question is not “is this optimal?” — it is “why?” Why this corridor and not the one with the ongoing disruption? What would happen if I sent the bus somewhere else instead? And when the dispatcher has domain knowledge the algorithm lacks — this driver doesn’t handle hills well, that route has construction the map doesn’t show — there is no way to inject that knowledge into the system’s reasoning.

Through extensive engagement with transit agencies, fleet operators, and energy managers, we have found that two gaps consistently block the adoption of AI-driven decision support in practice. First, existing optimization algorithms are purely data-driven and cannot incorporate the tacit, experience-based knowledge that human operators rely on every day. Dispatchers know things that no dataset captures: driver preferences, unofficial road closures, regulatory nuances specific to their service area. Second, the decisions that planning algorithms produce are opaque — operators cannot understand why a particular recommendation was made or what would happen under an alternative scenario. Without the ability to ask questions and get trustworthy answers, operators treat AI systems as black boxes to be overridden rather than partners to collaborate with.

Our research addresses both gaps through a program that spans formal explainability methods, human-guided search, and the broader challenge of human-AI teaming for sequential decision-making.


Explainable Planning Through Formal Logic

The challenge of explaining planning decisions is harder than it appears. A Monte Carlo Tree Search algorithm explores thousands of possible futures, building a search tree where each branch represents a sequence of actions and their probabilistic consequences. The final recommendation emerges from the aggregate statistics of this search — not from a single chain of reasoning that can be easily narrated. Simply asking a large language model to “explain” the decision produces fluent but unreliable text: it may sound right without being grounded in the actual computation that produced the recommendation.
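To make concrete why there is no single reasoning chain to narrate, here is a toy sketch (all action names and numbers are invented for illustration) of how an MCTS recommendation emerges purely from aggregate branch statistics:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One action node in a (toy) MCTS search tree."""
    action: str
    visits: int = 0
    total_reward: float = 0.0
    children: list = field(default_factory=list)

    @property
    def mean_reward(self) -> float:
        return self.total_reward / self.visits if self.visits else 0.0

# After thousands of rollouts, the root's children carry aggregate statistics.
root_children = [
    Node("send_reserve_bus_corridor_A", visits=6200, total_reward=4960.0),
    Node("send_reserve_bus_corridor_B", visits=2100, total_reward=1470.0),
    Node("hold_reserve_bus", visits=1700, total_reward=850.0),
]

# The recommendation is simply the most-visited child: there is no single
# chain of reasoning to narrate, only these aggregate numbers.
best = max(root_children, key=lambda n: n.visits)
print(best.action, round(best.mean_reward, 2))  # → send_reserve_bus_corridor_A 0.8
```

Any faithful explanation must therefore be grounded in these per-branch statistics rather than in a post-hoc narrative.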

Our LogiEx framework addresses this by combining the rigor of formal logic with the accessibility of natural language. The approach works in three stages. First, when an operator poses a question — “Why was vehicle 3 assigned to passenger 5 instead of vehicle 2?” — the system translates the natural-language query into a formal logical specification using computation tree logic (CTL), which is well-suited to reasoning about the branching structure of search trees. Second, a verification module checks the logical specification against the actual MCTS search tree, extracting the specific evidence that supports or refutes the query: the rollout statistics, the constraint violations, the probability estimates at the relevant decision nodes. Third, this verified evidence is rendered into natural language that the operator can read and act on — not generic boilerplate, but a specific explanation grounded in the computation: “Vehicle 2 was not assigned because it had an 80% probability of exceeding the time window due to sampled traffic congestion on I-40.”
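The three stages can be illustrated with a deliberately simplified Python sketch. Nothing here is the actual LogiEx implementation: the CTL template, the violation threshold, and the stored rollout statistics are all invented for illustration.

```python
# Stage 1: map a contrastive query onto a CTL-style template.
def to_ctl(query_vehicle: int) -> str:
    # "Why not vehicle v?" -> does the subtree rooted at assigning v
    # eventually violate a constraint on (most) sampled paths?
    return f"AF(time_window_violation | vehicle={query_vehicle})"

# Stage 2: verify the specification against recorded rollout statistics.
search_tree = {
    # vehicle id -> rollout counts and the dominant violation cause
    2: {"rollouts": 500, "violations": 400, "cause": "sampled traffic congestion on I-40"},
    3: {"rollouts": 520, "violations": 30, "cause": None},
}

def verify(vehicle: int, threshold: float = 0.5) -> dict:
    stats = search_tree[vehicle]
    p = stats["violations"] / stats["rollouts"]
    return {"holds": p >= threshold, "p_violation": p, "cause": stats["cause"]}

# Stage 3: render the verified evidence into natural language.
def explain(vehicle: int) -> str:
    ev = verify(vehicle)
    if ev["holds"]:
        return (f"Vehicle {vehicle} was not assigned because it had an "
                f"{ev['p_violation']:.0%} probability of exceeding the time "
                f"window due to {ev['cause']}.")
    return f"Vehicle {vehicle} satisfied all checked constraints."

print(to_ctl(2))
print(explain(2))
```

The key property the sketch preserves is that the natural-language sentence in stage 3 is derived from evidence checked in stage 2, not generated freely by a language model.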

This architecture ensures that explanations are both faithful (they reflect the actual reasoning of the planner) and accessible (they are expressed in terms operators understand). In evaluations, LogiEx achieves up to 7.9x higher semantic similarity and 1.6x higher factual consistency compared to directly prompting LLMs, and user studies with CPS practitioners show it is consistently the most preferred form of explanation.

This work progressed from CTL-based explainability for MCTS (ECAI 2024), to an LLM-integrated version (AAMAS 2025), to the full system with human-guided search (ICCPS 2026) — each step addressing a limitation of the previous one.


Human-Guided Search

Explanations are necessary but not sufficient. Operators don’t just want to understand decisions — they want to influence them. A dispatcher who knows that a particular road is flooded (information not yet in any sensor feed) needs to tell the system, and the system needs to incorporate that knowledge into its planning immediately.
This heading marker is unused; see replace below.

This insight motivates Human-Guided Search (HuGS), introduced in our ICCPS 2026 work. HuGS transforms operators from passive recipients of AI recommendations into active participants who can steer the planning process. The mechanism supports three classes of interaction.

Contrastive queries (“why” questions) allow operators to understand why one action was chosen over another. The system traces the relevant branch in the search tree, retrieves the rollout statistics and applied constraints, and identifies the minimal differentiating factors — the specific reasons the chosen action was preferred. This doesn’t require re-solving the problem; the evidence already exists in the search tree.
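As a rough illustration (the statistics and tolerance below are invented), a contrastive query can be answered by diffing statistics that the search tree already stores:

```python
# Per-branch statistics recorded during the original search -- no re-solve needed.
tree_stats = {
    "assign_vehicle_3": {"visits": 4100, "mean_reward": 0.82,
                         "constraint_violation_rate": 0.05},
    "assign_vehicle_2": {"visits": 1800, "mean_reward": 0.41,
                         "constraint_violation_rate": 0.80},
}

def differentiating_factors(chosen: str, alt: str, tol: float = 0.1) -> dict:
    """Return the statistics on which the two branches meaningfully differ."""
    factors = {}
    for key in tree_stats[chosen]:
        a, b = tree_stats[chosen][key], tree_stats[alt][key]
        denom = max(abs(a), abs(b), 1e-9)
        if abs(a - b) / denom > tol:  # relative difference above tolerance
            factors[key] = (a, b)
    return factors

diff = differentiating_factors("assign_vehicle_3", "assign_vehicle_2")
print(sorted(diff))  # the minimal set of statistics separating the two choices
```

Because everything is read from cached statistics, the query costs a dictionary scan rather than a new planning run.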

Counterfactual queries (“what-if” scenarios) let operators explore alternatives. When an operator asks “What if Jane picked up passenger 5 next?”, the system translates this into a temporary constraint and performs a localized, incremental re-search — not a full re-solve from scratch, but a targeted exploration of the affected portion of the decision space. Within seconds, the operator sees what the consequences of that alternative would be, grounded in the same probabilistic model the planner uses.
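A hypothetical sketch of the idea: pin the queried action with a temporary constraint and sample rollouts only under that branch. The simulator and its scores are stand-ins, not the system's actual models.

```python
import random

random.seed(0)

def rollout(first_action: str) -> float:
    """Stand-in for a domain simulator scoring one sampled future."""
    base = {"pickup_passenger_5": 0.55, "pickup_passenger_7": 0.70}[first_action]
    return base + random.uniform(-0.1, 0.1)

def what_if(pinned_action: str, n_rollouts: int = 200) -> float:
    # Temporary constraint: every rollout must start with the pinned action,
    # so only the affected portion of the decision space is explored.
    scores = [rollout(pinned_action) for _ in range(n_rollouts)]
    return sum(scores) / len(scores)

# "What if Jane picked up passenger 5 next?" -- compare against the current plan.
print(round(what_if("pickup_passenger_5"), 2),
      round(what_if("pickup_passenger_7"), 2))
```

The constraint is discarded after the query, so the original plan and its statistics are untouched.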

Real-time plan refinement closes the loop entirely. When an operator provides new information — a driver needs an early break, a vehicle has broken down, a road is closed — the system converts this input into formal constraints that modify the planning objective and trigger incremental replanning. The planner prunes invalidated branches of the search tree while preserving the statistical estimates on unaffected nodes, producing an updated plan that respects the operator’s input without discarding all prior computation.
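A minimal sketch of the pruning step, with invented route names and statistics: the operator's input becomes a constraint that invalidates some branches, while the statistical estimates on surviving branches are kept so replanning can resume from them.

```python
# Branch statistics accumulated by the planner before the operator's input.
search_tree = {
    "via_route_9":  {"visits": 3000, "mean_reward": 0.9, "uses_route": 9},
    "via_route_12": {"visits": 2500, "mean_reward": 0.7, "uses_route": 12},
    "via_route_15": {"visits": 900,  "mean_reward": 0.6, "uses_route": 15},
}

def apply_closure(tree: dict, closed_route: int) -> dict:
    """Drop branches invalidated by the new constraint; keep the rest intact."""
    return {a: s for a, s in tree.items() if s["uses_route"] != closed_route}

# Operator: "Route 9 is closed."
updated = apply_closure(search_tree, closed_route=9)

# Prior computation on the surviving branches is preserved, so incremental
# replanning can resume from these estimates instead of starting from scratch.
best = max(updated, key=lambda a: updated[a]["visits"])
print(best)  # → via_route_12
```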

Together, these capabilities transform the relationship between human operators and AI planners. The system is no longer a static optimizer that produces a take-it-or-leave-it recommendation. It becomes an adaptive teammate that explains its reasoning, answers questions, explores alternatives, and incorporates human knowledge — all in real time.


Negotiation as a Form of Human-AI Coordination

Not all human-AI interaction takes the form of an operator querying a planner. In some settings, the challenge is coordinating between an optimization system and the people whose behavior it depends on. EV charging is a clear example: a building operator wants to minimize peak energy costs by controlling when vehicles charge and discharge, but drivers have their own preferences about departure times and minimum charge levels. The operator can’t simply dictate charging schedules — drivers will opt out if the system doesn’t respect their needs.

Our CONSENT framework (AAMAS 2026) addresses this through structured negotiation with formal guarantees. The system generates a menu of incentive-backed charging options for each driver, calibrated to their stated flexibility. The mechanism is provably strategy-proof (there is no benefit to misrepresenting preferences) and budget-feasible (the building operator never overspends on incentives), and participation is strictly voluntary: drivers choose from the menu or opt out entirely. The result is a coordination protocol that aligns operator and driver objectives: building operators see costs drop by over 3.5% compared to an optimized, non-negotiating smart charging policy, while drivers’ charging expenses fall 22% below retail rates.
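The shape of the mechanism can be sketched in a few lines. All parameters here (incentive amounts, inconvenience costs, budget) are invented for illustration; the real framework's incentive calibration and guarantees are considerably more involved.

```python
# Menu of incentive-backed options offered to one driver.
menu = [
    # delay departure (hours), reduce target SoC (%), incentive ($)
    {"delay_h": 0.0, "soc_cut": 0, "incentive": 0.00},  # opt-out: charge as requested
    {"delay_h": 1.0, "soc_cut": 0, "incentive": 2.50},
    {"delay_h": 2.0, "soc_cut": 5, "incentive": 5.00},
]

def driver_utility(option, cost_per_hour=1.5, cost_per_soc=0.4):
    """Driver's net benefit: incentive minus their private inconvenience cost."""
    return (option["incentive"]
            - option["delay_h"] * cost_per_hour
            - option["soc_cut"] * cost_per_soc)

def choose(menu, budget=10.0):
    # Budget feasibility: never offer more incentive than the operator can pay.
    feasible = [o for o in menu if o["incentive"] <= budget]
    best = max(feasible, key=driver_utility)
    # Voluntary participation: the opt-out option guarantees utility >= 0,
    # so a driver is never made worse off by the mechanism.
    return best if driver_utility(best) >= 0 else menu[0]

picked = choose(menu)
print(picked["delay_h"], picked["incentive"])  # → 1.0 2.5
```

Because each driver simply picks the option maximizing their own utility, truthful choice is in their interest, which is the intuition behind strategy-proofness.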

CONSENT illustrates a broader principle: in systems that involve multiple stakeholders with different objectives, the AI’s role is not to override human preferences but to find solutions that respect them. This requires making the system’s reasoning transparent (so stakeholders understand the trade-offs) and interactive (so stakeholders can express their constraints and see how the system responds).


The Broader Vision: Human-AI Teaming

The research threads above — formal explainability, human-guided search, and negotiation-based coordination — converge on a larger vision for how AI should operate in high-stakes, human-centered domains.

Current AI systems for fleet management, transit dispatch, and logistics treat routing and scheduling as closed-world optimization problems: data goes in, a solution comes out, and humans either accept or manually override it. This creates a fundamental tension. The algorithms can handle the combinatorial complexity that overwhelms human planners, but they can’t access the tacit knowledge that operators accumulate through years of experience — knowledge about specific drivers, local road conditions, regulatory nuances, and community needs that no dataset captures.

Our research is building toward a new paradigm where the boundary between human expertise and algorithmic optimization is permeable in both directions. Operators can inject their knowledge into the planning process through natural-language directives that are translated into formal constraints the planner respects. The planner can explain its reasoning through verified, grounded explanations that operators can interrogate. And when conditions change — a vehicle breaks down, a will-call trip arrives, a driver requests a schedule change — the system adapts incrementally rather than recomputing from scratch, fast enough to support genuine collaboration rather than batch-mode handoffs.

This vision is grounded in the recognition that the most effective decision-making systems will not be fully autonomous. They will be collaborative: transparent enough that humans understand the reasoning, interactive enough that humans can guide the search, and responsive enough that the system adapts to human input in real time. The formal foundations we are building — in logic-based explainability, interruptible planning, and neurosymbolic constraint translation — provide the scientific basis for this kind of human-AI teaming at operational scale.

Selected Publications:

  1. Z. An, X. Wang, H. Baier, Z. Chen, A. Dubey, A. Mukhopadhyay, T. T. Johnson, J. Sprinkle, and M. Ma, LogiEx: Logic-Integrated Explanations for Stochastic Planning in Cyber-Physical Systems, in Proceedings of the HSCC/ICCPS 2026: 29th ACM International Conference on Hybrid Systems: Computation and Control and 17th ACM/IEEE International Conference on Cyber-Physical Systems, 2026.
    @inproceedings{iccps2026_logiex,
      author = {An, Ziyan and Wang, Xia and Baier, Hendrik and Chen, Zirong and Dubey, Abhishek and Mukhopadhyay, Ayan and Johnson, Taylor T. and Sprinkle, Jonathan and Ma, Meiyi},
      title = {LogiEx: Logic-Integrated Explanations for Stochastic Planning in Cyber-Physical Systems},
      year = {2026},
      booktitle = {Proceedings of the HSCC/ICCPS 2026: 29th ACM International Conference on Hybrid Systems: Computation and Control and 17th ACM/IEEE International Conference on Cyber-Physical Systems},
      location = {Saint Malo, France},
      keywords = {explainable AI, transit planning, formal logic, large language models, Monte Carlo tree search, knowledge graphs, human-AI interaction},
      note = {Acceptance rate: 28\%; Regular Paper; Track: Systems and Applications},
      series = {HSCC/ICCPS '26},
      what = {LogiEx integrates formal logic and large language models to provide explainable sequential planning for human-centered cyber-physical systems like intelligent transportation. The system combines Monte Carlo Tree Search planning with logical reasoning to generate trustworthy explanations for planning decisions. LogiEx categorizes user queries into three types: those answerable from existing search trees, those requiring human-guided search, and those requiring background knowledge. The framework generates logical evidence supporting planning decisions and translates this into natural language explanations that users can verify.},
      why = {Traditional planning algorithms like Monte Carlo Tree Search achieve strong performance but lack transparency, making their outputs unsuitable for high-stakes CPS applications where users need to understand and trust system decisions. LogiEx is innovative because it bridges the transparency gap by combining stochastic search with formal logic verification, allowing the system to explain not just what actions it recommends but why those actions are justified given domain constraints and objectives. This addresses a critical safety concern in AI-driven CPS.},
      results = {Quantitative evaluation demonstrates LogiEx achieves up to 7.9x higher semantic similarity and 1.6x higher factual consistency compared to LLM-only baselines on explaining transportation planning decisions. User studies validate that the framework provides faithful, consistent explanations that help users understand the planning process while maintaining the ability to ask follow-up questions for deeper reasoning.},
      project_tags = {transit, planning, Explainable AI}
    }
    

    Human-centered cyber-physical systems (CPS), such as intelligent transportation services, warehouse robotics operated by human supervisors, and healthcare infrastructures involving clinicians and medical staff, increasingly rely on Artificial Intelligence (AI)-driven sequential decision-making under uncertainty. However, the lack of transparent reasoning in these systems limits trust, verifiability, and human oversight. This challenge is particularly acute for planning algorithms like Monte Carlo Tree Search (MCTS), whose stochastic search processes are opaque to engineers and operators. To address this gap, we introduce LogiEx, a logic-integrated framework that combines large language models (LLMs) with formal methods to generate trustworthy explanations for planning behavior. LogiEx transforms free-form user queries into logical statements with templated variables, then verifies whether evidence extracted from the decision process aligns with both the environment state and the constraints of the stochastic planning model. This enables grounded explanations across a wide range of user questions—from factual retrieval to comparative reasoning. LogiEx also supports Human-Guided Search (HuGS), allowing users to pose conditional “what-if” queries that trigger new, scenario-specific searches, ensuring that humans are not passive observers but active participants who can steer and refine the planning process. We evaluate LogiEx through both quantitative assessments and user studies, finding that it consistently outperforms baselines, achieving up to 7.9x higher semantic similarity (BERTScore) and 1.6x higher factual consistency (FactCC) compared to baseline LLMs, and is the most preferred form of explanation among CPS practitioners.

  2. Z. An, X. Wang, H. Baier, Z. Chen, A. Dubey, T. T. Johnson, J. Sprinkle, A. Mukhopadhyay, and M. Ma, LogiEx: Integrating Formal Logic and Large Language Model for Explainable Planning, in Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025), 2025.
    @inproceedings{an2025logiex,
      author = {An, Ziyan and Wang, Xia and Baier, Hendrik and Chen, Zirong and Dubey, Abhishek and Johnson, Taylor T. and Sprinkle, Jonathan and Mukhopadhyay, Ayan and Ma, Meiyi},
      booktitle = {Proceedings of the 24th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2025)},
      title = {LogiEx: Integrating Formal Logic and Large Language Model for Explainable Planning},
      year = {2025},
      note = {Extended Abstract},
      acceptance = {40},
      contribution = {colab}
    }
    
  3. Z. An, H. Baier, A. Dubey, A. Mukhopadhyay, and M. Ma, Enabling MCTS Explainability for Sequential Planning Through Computation Tree Logic, in ECAI 2024 - 27th European Conference on Artificial Intelligence, 2024.
    @inproceedings{an2024enablingmctsexplainabilitysequential,
      author = {An, Ziyan and Baier, Hendrik and Dubey, Abhishek and Mukhopadhyay, Ayan and Ma, Meiyi},
      title = {Enabling MCTS Explainability for Sequential Planning Through Computation Tree Logic},
      year = {2024},
      archiveprefix = {arXiv},
      booktitle = {{ECAI} 2024 - 27th European Conference on Artificial Intelligence},
      contribution = {colab},
      acceptance = {23},
      eprint = {2407.10820},
      location = {Santiago de Compostela, Spain},
      primaryclass = {cs.AI},
      url = {https://arxiv.org/abs/2407.10820},
      what = {This paper presents a computation tree logic (CTL)-based explainable Monte Carlo tree search framework for sequential decision-making in transportation systems. The work develops a systematic approach to translate non-technical user queries into CTL logic formulas that can be verified against MCTS search trees. The framework incorporates specialized explainers that generate human-readable natural language responses describing why the planning algorithm selected particular actions, with explanations tailored to user concerns about route efficiency, time constraints, and alternative options.},
      why = {Monte Carlo tree search is a powerful planning algorithm but its decision-making process is opaque to non-technical users like transit dispatchers. This work addresses a critical gap by making sequential planning algorithms interpretable without sacrificing performance. The innovation lies in bridging formal verification methods with practical natural language explanation, enabling AI-based transit systems to communicate their reasoning to human operators who must ultimately trust and validate the recommendations.},
      results = {The CTL-based explainer successfully generates human-readable explanations for MCTS decisions in transit routing scenarios with different query types including efficiency queries, contrastive queries, and tree exploration questions. User studies with 82 participants demonstrate that the proposed framework significantly outperforms baseline visualization methods in user understanding, satisfaction, and trust. The approach maintains computational efficiency while providing comprehensive explanations suitable for real-world transit dispatch applications.},
      keywords = {explainable AI, Monte Carlo tree search, sequential planning, natural language explanation, transportation, user interpretability, transit routing, human-AI interaction},
      project_tags = {transit, Explainable AI, middleware}
    }
    

    Monte Carlo tree search (MCTS) is one of the most capable online search algorithms for sequential planning tasks, with significant applications in areas such as resource allocation and transit planning. Despite its strong performance in real-world deployment, the inherent complexity of MCTS makes it challenging to understand for users without technical background. This paper considers the use of MCTS in transportation routing services, where the algorithm is integrated to develop optimized route plans. These plans are required to meet a range of constraints and requirements simultaneously, further complicating the task of explaining the algorithm’s operation in real-world contexts. To address this critical research gap, we introduce a novel computation tree logic-based explainer for MCTS. Our framework begins by taking user-defined requirements and translating them into rigorous logic specifications through the use of language templates. Then, our explainer incorporates a logic verification and quantitative evaluation module that validates the states and actions traversed by the MCTS algorithm. The outcomes of this analysis are then rendered into human-readable descriptive text using a second set of language templates. The user satisfaction of our approach was assessed through a survey with 82 participants. The results indicated that our explanatory approach significantly outperforms other baselines in user preference.

  4. R. Sen, F. Liu, J. P. Talusan, A. Pettet, Y. Suzue, M. Bailey, A. Mukhopadhyay, and A. Dubey, CONSENT: A Negotiation Framework for Leveraging User Flexibility in Vehicle-to-Building Charging under Uncertainty, in Proceedings of the 25th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2026), 2026.
    @inproceedings{sen2026negotiations,
      author = {Sen, Rishav and Liu, Fangqi and Talusan, Jose Paolo and Pettet, Ava and Suzue, Yoshinori and Bailey, Mark and Mukhopadhyay, Ayan and Dubey, Abhishek},
      title = {CONSENT: A Negotiation Framework for Leveraging User Flexibility in Vehicle-to-Building Charging under Uncertainty},
      booktitle = {Proceedings of the 25th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2026)},
      year = {2026},
      note = {Acceptance rate: 25\%},
      location = {Paphos, Cyprus},
      publisher = {International Foundation for Autonomous Agents and Multiagent Systems},
      series = {AAMAS '26},
      keywords = {vehicle-to-building, energy management, negotiation, demand response, incentive design, semi-Markov decision processes, user flexibility},
      what = {CONSENT is a negotiation framework that enables coordination between EV owners and smart buildings under uncertainty in vehicle-to-building charging systems. The work formulates the V2B charging problem as a semi-Markov decision process with negotiation between buildings and users. The system offers personalized charging options based on user flexibility constraints, building energy efficiency goals, and uncertainty in EV arrival patterns, allowing users to express preferences through bounded SoC and departure time adjustments while buildings optimize charging schedules.},
      why = {Vehicle-to-building energy coordination creates a fundamental conflict: buildings want to minimize peak demand costs while users want convenient, low-cost charging. Existing approaches either assume full system control or fail to capture real-world incentive-based coordination where users voluntarily participate. CONSENT is innovative because it explicitly bridges technical control with behavioral negotiation, using formal constraint handling and incentive design to enable mutually beneficial cooperation without requiring users to fully comply with building preferences.},
      results = {Simulation and user study evaluation demonstrates that CONSENT generates mutually beneficial outcomes: building operators lower costs by over 3.5% compared to an optimized, non-negotiating smart charging policy, while users' charging expenses fall 22% below the utility's retail energy rate. The framework proves effective at aligning disparate objectives through structured negotiation, reducing operational costs while ensuring voluntary user participation.},
      project_tags = {energy, CPS, planning}
    }
    

    The growth of Electric Vehicles (EVs) creates a conflict in vehicle-to-building (V2B) settings between building operators, who face high energy costs from uncoordinated charging, and drivers, who prioritize convenience and a full charge. To resolve this, we propose a negotiation-based framework that, by design, guarantees voluntary participation, strategy-proofness, and budget feasibility. It transforms EV charging into a strategic resource by offering drivers a range of incentive-backed options for modest flexibility in their departure time or requested state of charge (SoC). Our framework is calibrated with user survey data and validated using real operational data from a commercial building and an EV manufacturer. Simulations show that our negotiation protocol creates a mutually beneficial outcome: lowering the building operator’s costs by over 3.5% compared to an optimized, non-negotiating smart charging policy, while simultaneously reducing user charging expenses by 22% below the utility’s retail energy rate. By aligning operator and EV user objectives, our framework provides a strategic bridge between energy and mobility systems, transforming EV charging from a source of operational friction into a platform for collaboration and shared savings.