Transactive Energy Systems

The goal of this project is to develop transactive energy systems. Transactive energy systems have emerged as a transformative solution to the problems faced by distribution system operators due to the increasing use of distributed energy resources and the rapid growth of renewable energy generation. They are tightly coupled cyber-physical systems, which require resilient and robust financial markets where transactions can be submitted and cleared, while ensuring that erroneous or malicious transactions cannot destabilize the grid. Over the last five years, we have used this research vertical to drive our work on resilient decentralized CPS and have developed a novel middleware platform called TRANSAX, which enables participants to trade in an energy futures market. TRANSAX improves efficiency by finding feasible matches for energy trades, reducing the load on the distribution system operator. It provides privacy to participants by anonymizing their trading activity through a distributed mixing service, while also enforcing constraints that limit trading activity based on safety requirements, such as keeping power flow below line capacity. One of the key innovations in TRANSAX is a novel hybrid solver concept that combines the trustworthiness of distributed ledgers with the efficiency of conventional computational platforms. This hybrid architecture ensures the integrity of data and computational results as long as a majority of the ledger nodes are secure, while allowing complex computation to be performed by a set of redundant and efficient solvers. We collaborate actively with Prof. Aron Lazka, University of Houston, on this project.
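
As a simplified illustration of the market-clearing idea described above, the sketch below greedily matches buy and sell offers while capping the total traded quantity at a line-capacity limit. The function name, the (price, quantity) offer format, and the midpoint pricing rule are assumptions made for illustration, not the actual TRANSAX clearing algorithm.

```python
def match_offers(bids, asks, line_capacity):
    """Greedily match energy buy/sell offers (hypothetical simplification
    of futures-market clearing). bids and asks are (price, quantity)
    pairs. Returns a list of (quantity, clearing_price) matches whose
    total quantity stays within the feeder line capacity."""
    bids = sorted(bids, key=lambda b: -b[0])   # highest willingness-to-pay first
    asks = sorted(asks, key=lambda a: a[0])    # cheapest sellers first
    matches, flow = [], 0
    i = j = 0
    while i < len(bids) and j < len(asks) and flow < line_capacity:
        bid_price, bid_qty = bids[i]
        ask_price, ask_qty = asks[j]
        if bid_price < ask_price:
            break                              # no further profitable trades
        qty = min(bid_qty, ask_qty, line_capacity - flow)
        matches.append((qty, (bid_price + ask_price) / 2))  # midpoint pricing
        flow += qty
        bids[i] = (bid_price, bid_qty - qty)
        asks[j] = (ask_price, ask_qty - qty)
        if bids[i][1] == 0:
            i += 1
        if asks[j][1] == 0:
            j += 1
    return matches
```

For example, a single 5-unit bid at price 10 against a 5-unit ask at price 8, under a 3-unit capacity cap, clears as one 3-unit trade at the midpoint price of 9; the safety constraint binds before the market does.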

This project is funded in part by the National Science Foundation and in part by Siemens, CT.

Smart Public Transit - Transit Hub

This project addresses the problem of urban transportation and congestion by building analytical tools that help customers and transit agencies reduce uncertainty and optimize transit operations. We address this problem on three fronts: data analytics; planning and analysis tools for understanding and projecting the impact of transportation choices; and scalable data stores that enable cities to operate their own data lakes and analytics engines. As part of the project, we also created an application called Transit Hub. Recently, we have been studying the endogenous uncertainties and costs of transit operations as part of the energy optimization project.

This project has been supported in part by the National Science Foundation and Siemens, CT.

Mobility for all - Harnessing Emerging Transit Solutions for Underserved Communities

Public transportation infrastructure is an essential component in cultivating equitable communities. However, public transit agencies have historically struggled to achieve this because they are often severely resource-constrained: they must trade off between concentrating service into routes that serve large numbers of people and spreading service out to ensure that people everywhere have access to at least some service. A solution that holds great promise for improving public transit systems is the integration of fixed-route services with microtransit systems: multi-passenger transportation services that serve passengers using dynamically generated routes and may expect passengers to make their way to and from common pick-up or drop-off points. However, most microtransit systems have failed in the past due to a lack of community engagement, an inability to handle the operational uncertainty of integrating with fixed-route transit, and an inability to handle the system-level optimization challenges. This project takes a socio-relational approach to community engagement in collaboration with the Chattanooga Area Regional Transportation Authority (CARTA), designs a community-centric microtransit service that augments fixed-line public transit networks (improving transit accessibility), and demonstrates its effectiveness in the representative city of Chattanooga. The outcome of the project will be a deployment-ready software system that an agency can use to design and operate a microtransit service effectively. The algorithmic toolchain will be complemented by mechanisms to optimally select the parameters and sustainably manage the data required by the algorithms. In addition, the project will provide a set of exemplar case studies and a validated social methodology for engaging the community and learning their requirements, which will be fed into the algorithms. This will potentially impact a wide range of U.S. cities that do not have well-developed transit systems, as the project will not only provide a reusable operations system but also demonstrate how integrated socio-technical research and strong community engagement can provide a pattern for sustainability and expansion.

This project is funded by the National Science Foundation.

Smart Emergency Response

The objective of this research is to understand and improve the resource coordination and dispatch mechanisms used by first responders. To this end, we are building StatResp, an open-source integrated toolchain that helps first responders understand where and when incidents occur and how to allocate responders in anticipation of incidents. This is important because first responders are constrained by limited resources and must attend to different types of incidents, such as traffic accidents, fires, and distress calls. Solving this problem requires not just sending the nearest emergency responder, but sometimes proactively placing emergency vehicles in regions with higher incident likelihood. Sending the nearest available responder by Euclidean distance ignores road networks and their congestion, as well as where the resources are stationed. Greedily assigning resources to incidents can pull resources away from their stations, increasing response times if a future incident occurs in the area where the responder should have been positioned. In prior art, as well as in practice, incident forecasting and response are typically siloed by category and department, reducing the effectiveness of prediction and precluding efficient coordination of resources. Further, most of these approaches are offline and fail to capture the dynamically changing environments in which critical emergency response occurs. Statistical and algorithmic approaches to emergency response have therefore received significant attention in the last few decades. Governments in urban areas are increasingly adopting methods that enable smart statistical emergency response: a combination of forecasting models and visualization tools to understand where and when incidents occur, and optimization approaches to allocate and dispatch responders. Please refer to the preprint of our survey paper for more information.
Ultimately, the methods developed in this work can be applied to other domains where multi-resource spatio-temporal scheduling is a challenge. We collaborate with Prof. Yevgeniy Vorobeychik, WUSTL; Prof. Hemant Purohit, GMU; and Prof. Saideep Nannapaneni, Wichita State.
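
A minimal sketch of the proactive-dispatch idea discussed above: rather than picking the responder nearest to the incident, the score also penalizes pulling a responder away from a station whose region has a high likelihood of future incidents. The names, the additive scoring rule, and the one-dimensional distance placeholder are all hypothetical; a real system would use road-network travel times and a learned incident-forecasting model.

```python
def dispatch(responders, incident, risk):
    """Choose a responder for an incident. Each responder is a dict with
    a 'location' and a home 'station'; `risk` maps stations to the
    expected cost of leaving that station's region uncovered."""
    def travel_time(r):
        # placeholder for pre-computed road-network travel time;
        # Euclidean/1-D distance is used here only for illustration
        return abs(r["location"] - incident)

    def score(r):
        # proactive score: travel time plus the future-risk penalty of
        # pulling this responder away from its station
        return travel_time(r) + risk.get(r["station"], 0.0)

    return min(responders, key=score)
```

With two equidistant responders, the purely greedy choice is a tie, but the proactive score sends the one whose home region is at lower risk, keeping coverage where the next incident is more likely.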

This project is funded in part by the National Science Foundation.

Secure and Trustworthy Middleware for Integrated Energy and Mobility in Smart Connected Communities

The rapid evolution of data-driven analytics, the Internet of Things (IoT), and cyber-physical systems (CPS) is fueling a growing set of Smart and Connected Communities (SCC) applications, including smart transportation and smart energy. However, deploying such technological solutions without proper security mechanisms makes them susceptible to data integrity and privacy attacks, as observed in a large number of recent incidents. The goal of this project is to develop a framework to ensure data privacy, data integrity, and trustworthiness in smart and connected communities. The innovativeness of the project lies in the collaborative effort between teams of researchers from the US and Japan. As part of the project, the research team is developing privacy-preserving algorithms and models for anomaly detection and for trust and reputation scoring, used by application providers for data integrity and information assurance. Toward that goal, we are also studying trade-offs between security, privacy, trust levels, resources, and performance using two exemplar applications: smart mobility and smart energy exchange in communities.

This project is funded by the National Science Foundation.

Augmenting and Advancing Cognitive Performance of Control Room Operators for Power Grid Resiliency

The goal of the project is to investigate the mechanisms required to integrate recent advances from cognitive neuroscience, artificial intelligence, machine learning, data science, cybersecurity, and power engineering to augment power grid operators for better performance. Two key parameters influencing human performance in the dynamic attentional control (DAC) framework are working memory (WM) capacity, the ability to maintain information in the focus of attention, and cognitive flexibility (CF), the ability to use feedback to redirect decision making in fast-changing system scenarios. The project will achieve its goals by analyzing the WM, CF, and performance of power grid operators during extreme events; augmenting cognitive performance through advanced machine-learning-based decision support tools and adaptive human-machine systems; and developing theory-driven training simulators for advancing the cognitive performance of human operators for enhanced grid resilience. We are building a new set of algorithms for data-driven event detection, anomaly flag processing, root cause analysis, and decision support using a Tree Augmented Naive Bayesian network (TAN) structure, a Minimum Weighted Spanning Tree (MWST) built with the Mutual Information (MI) metric, and unsupervised learning improved for online learning and decision making. In addition, we use a discrete event model that captures the causal and temporal relationships between failure modes (causes) and discrepancies (effects) in a system, thereby modeling failure cascades while taking into account propagation constraints imposed by operating modes, protection elements, and timing delays. This formalism, called a Temporal Causal Diagram (TCD), can model the effects of faults and protection mechanisms and can incorporate fine-grained, physics-based diagnostics into an integrated, system-level diagnostics scheme. This project is in collaboration with Prof. Gautam Biswas from ISIS and Prof. Anurag Srivastava from Washington State University.
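
A Chow-Liu-style construction underlies MI-weighted spanning trees of the kind mentioned above: estimate pairwise mutual information from data, then build a maximum-weight spanning tree (equivalently, a minimum spanning tree on negated MI). The sketch below, with assumed function names and a dictionary-of-samples input format, illustrates the idea; it is not the project's implementation.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (nats) between two discrete variables."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * math.log(pj / ((px[x] / n) * (py[y] / n)))
    return mi

def mi_spanning_tree(data):
    """Maximum-weight spanning tree over the variables in `data`
    (variable name -> list of observations), with edges weighted by
    mutual information; Kruskal's algorithm with union-find."""
    names = list(data)
    edges = sorted(
        ((mutual_information(data[a], data[b]), a, b)
         for a, b in combinations(names, 2)),
        reverse=True)                    # strongest dependencies first
    parent = {v: v for v in names}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    tree = []
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                     # keep edge only if it joins components
            parent[ra] = rb
            tree.append((a, b, w))
    return tree
```

On toy data where two variables are perfectly correlated and a third is independent, the tree's strongest edge connects the correlated pair, which is exactly the dependency structure a TAN-style model would exploit.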

This project is funded by the National Science Foundation.

Resilient Information Architecture Platform for the Smart Grid

The future of the Smart Grid for electrical power depends on computer software that must be robust, reliable, effective, and secure. This software will continuously grow and evolve while operating and controlling a complex physical system that modern life and the economy depend on. The project aims at engineering and constructing the foundation for such software: a ‘platform’ that provides core services for building effective and powerful apps, not unlike apps on smartphones. The platform is designed by using and advancing state-of-the-art results from electrical, computer, and software engineering; it will be documented as an open standard and prototyped as an open-source implementation.

This project has been funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000666 and funded in part by a grant from Siemens, CT.

Addressing Transit Accessibility and Public Health Challenges due to COVID-19

The COVID-19 pandemic has not only disrupted the lives of millions but also created exigent operational and scheduling challenges for public transit agencies. Agencies are struggling to maintain transit accessibility with reduced resources, changing ridership patterns, vehicle capacity constraints due to social distancing, and reduced services due to driver unavailability. A number of transit agencies have also begun to help local food banks deliver food to shelters, which further strains the available resources if not planned optimally. At the same time, the lack of situational information is creating a challenge for riders, who need to understand what seating is available on the vehicles to ensure sufficient distancing. In partnership with the transit agencies of Chattanooga, TN, and Nashville, TN, and with Prof. Aron Lazka, University of Houston, we are rapidly developing integrated transit operational optimization algorithms, which will provide proactive scheduling and allocation of vehicles to transit and cargo trips, considering exigent vehicle maintenance requirements (i.e., disinfection). A key component of the research is the design of privacy-preserving camera-based ridership detection methods that can help provide commuters with real-time information on available seats considering social-distancing constraints.

This project is funded by the National Science Foundation.

Model-based Intent-Driven Adaptive Software (MIDAS)

The goal of this project is to develop a new approach to evolutionary software development and deployment that extends the results of model-based software engineering and provides an integrated, end-to-end framework for building software that is focused on growth and adaptation. The envisioned technology is based on the concept of a ‘Model Design Language’ (MDL) that supports the expression of the developer’s objectives (the ‘what’), intentions (the ‘how’), and constraints (the ‘limitations’) related to the software artifacts to be produced. The ‘models’ represented in this language are called the ‘design models’ for the software artifact(s), and they encompass more than what we express today in software models. We consider software development a continuous process, as in the DevOps paradigm, where the software undergoes continuous change, improvement, and extension; our goal is to build the tools to support this. The main idea is that changes in the requirements will lead the designer/developer to make changes in the ‘design model’, which in turn will result in changes in the generated artifacts, or in the target system at run-time, as needed. Such tool support is essential for developers, as expensive manual rework cannot be avoided without it. This project is in collaboration with Prof. Gabor Karsai and Prof. Daniel Balasubramanian at ISIS and Alessandro Coglio at Kestrel.

This project is funded by DARPA.

Integrated Microgrid Control Platform

Dynamic formation of networked microgrids from heterogeneous components is not a solved problem. System integrators must often assemble a microgrid from available components that communicate different information, at different rates, using different protocols. Due to variations in microgrid architectures and their generation and load mix, each microgrid solution is customized and site-specific. Building on the Resilient Information Architecture Platform for Smart Grid, the goal of this project is to demonstrate a technology for microgrid integration and control based on distributed computing techniques, advanced software engineering methods, and state-of-the-art control algorithms that provides a scalable and reusable solution: a highly configurable Integrated Microgrid Control Platform (IMCP). Our solution addresses the heterogeneity problem by encapsulating the specifics of protocols into reusable device software components with common interfaces, and the dynamic grid management and reconfiguration problem with advanced distributed algorithms that form the foundation for a decentralized and expandable microgrid controller. Investigators: Prof. Gabor Karsai (PI), Prof. Abhishek Dubey (Co-PI), and Prof. Srdjan Lukic (Co-PI).

This project has been funded in part by the DOD ESTCP program.

Interdisciplinary Approach to Prepare Undergraduates for Data Science Using Real-World Data from High Frequency Monitoring Systems

With support from the National Science Foundation (NSF) Improving Undergraduate STEM Education Program, and in collaboration with Prof. Gautam Biswas, this project aims to support the incorporation of data science concepts and skill development into undergraduate courses in biology, computer science, engineering, and environmental science. Through a collaboration between Virginia Tech, Vanderbilt University, and North Carolina Agricultural and Technical State University, we are developing interdisciplinary learning modules based on high-frequency, real-time data from water and traffic monitoring systems. The learning module topics will include Interdisciplinary Learning, Data Analytics, and Industry Partnerships. These topics will facilitate the incorporation of real-world data sets to enhance the student learning experience, and they are broad enough to incorporate other data sets in the future. Such expertise will better prepare students to enter the STEM workforce, especially those STEM professions that focus on smart and connected computing. The project will investigate how and in what ways the modules support student learning of data science. The project is also investigating how implementation of the modules varies across the collaborating institutions. It is expected that the project will define key considerations for integrating data science concepts into STEM courses and will host workshops to introduce faculty to these considerations and strategies so they can incorporate the learning modules into the STEM courses that they teach. Collaborators: V. Lohani, R. Dymond, & K. Xia (Virginia Tech); G. Biswas, Erin Hotchkiss, C. Vanags (Vanderbilt); M.K. Jha, N. Aryal, & E.H. Park (North Carolina Agricultural & Technical State University).

This project is funded by the National Science Foundation.

High-dimensional Data-driven Energy optimization for Multi-Modal transit Agencies (HD-EMMA)

The goal of the project is to enable the development and evaluation of tools to promote energy efficiency within the mobility-as-a-service system currently operational in Chattanooga. For this purpose, we are developing real-time datasets containing engine telemetry, including engine speed, GPS position, fuel usage, and state of charge (for electric vehicles) from all vehicles, in addition to traffic congestion, current events in the city, and braking and acceleration patterns. These high-dimensional datasets allow us to train accurate data-driven predictors, using deep neural networks, of energy consumption for various routes and schedules. CARTA is planning to use these predictors for the energy optimization of its fleet of vehicles. We plan to evaluate our framework by comparing the energy consumption, comfort, and other metrics of the routes and schedules found using our data-driven framework to those of existing routes and schedules. We believe that such predictors will revolutionize the transportation sector in a way similar to the capabilities provided by high-definition maps in autonomous driving. This project complements the DOE national labs' effort on vehicle energy consumption models by exploiting new data to investigate the impacts of road and driver factors on vehicle energy consumption. We collaborate actively with Prof. Aron Lazka, University of Houston; Philip Pugliese, Chattanooga Regional Transit Authority; and Prof. Yuche Chen, University of South Carolina on this project.
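
As a toy stand-in for the deep-neural-network predictors described above, the sketch below fits a least-squares line relating a single telemetry feature (say, average speed) to per-trip energy use. The function name and data format are assumptions for illustration; the project's actual predictors are high-dimensional deep models trained on the full telemetry and traffic data.

```python
def fit_energy_model(samples):
    """Closed-form least-squares fit of energy = slope * feature + intercept.
    `samples` is a list of (feature, energy) pairs; returns a predictor
    function. A stand-in for the deep models used in the project."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept
```

Such a predictor, once trained, can score candidate routes and schedules by their expected energy use, which is the role the learned models play in the fleet-optimization loop.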

This project is funded by the Department of Energy.

Building Resilient Electric Grid

Reliable operation of cyber-physical systems (CPS) of societal importance such as Smart Electric Grids is critical for the seamless functioning of a vibrant economy. Sustained power outages can lead to major disruptions over large areas costing millions of dollars. Efficient computational techniques and tools that curtail such systematic failures by performing fault diagnosis and prognostics are therefore necessary. The Smart Electric Grid is a CPS: it consists of networks of physical components (including generation, transmission, and distribution facilities) interfaced with cyber components (such as intelligent sensors, communication networks, and control software). We are developing new methods to build models for the smart grid representing the failure dependencies in both physical and cyber components. These models will be used to build an integrated system-wide solution for diagnosing faults and predicting future failure propagations that can account for existing protection mechanisms. The original contribution of this work is in the integrated modeling of failures on multiple levels in a large distributed cyber-physical system and the development of novel, hierarchical, robust, online algorithms for diagnostics and prognostics.

This project has been supported in part by National Science Foundation grants.

CHARIOT (Cyber-pHysical Application aRchItecture with Objective-based reconfiguraTion)

The CHARIOT (Cyber-pHysical Application aRchItecture with Objective-based reconfiguraTion) project aims to address the challenges of designing and maintaining the extensible CPS found in smart cities. CHARIOT is an application architecture that enables the design, analysis, deployment, and maintenance of extensible CPS through a novel design-time modeling tool and run-time computation infrastructure. In addition to physical properties, timing properties, and resource requirements, CHARIOT also considers the heterogeneity and resilience of these systems. The CHARIOT design environment follows a modular objective decomposition approach for developing and managing the system. Each objective is mapped to one or more data workflows implemented by different software components. This function-to-component association enables us to assess the impact of individual failures on the system objectives. The runtime architecture of CHARIOT provides a universal cyber-physical component model that allows distributed CPS applications to be constructed using software components and hardware devices without being tied to any specific platform or middleware. It extends the principles of health management, software fault tolerance, and goal-based design.
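
The function-to-component association described above can be illustrated with a small sketch: given a mapping from each objective to the components it requires, the impact of a set of component failures on system objectives reduces to a set intersection. The names and dictionary format here are hypothetical simplifications of CHARIOT's design models.

```python
def affected_objectives(objective_map, failed_components):
    """Return the system objectives impacted by a set of component
    failures. `objective_map` maps each objective to the components
    its data workflows require (a simplified function-to-component
    association)."""
    failed = set(failed_components)
    return sorted(obj for obj, comps in objective_map.items()
                  if failed & set(comps))
```

A reconfiguration engine could then use this impact set to decide which objectives need their workflows redeployed onto healthy components.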

This project has been supported in part by a grant from Siemens Corporate Technology and in part by National Science Foundation grants.

Blockchain Middleware for Multi-stakeholder Cyber physical systems

We are focusing on creating smart and connected community solutions that give participants the capability not only to exchange data and services in a decentralized and perhaps anonymous manner, but also to preserve an immutable and auditable record of all transactions in the system. Blockchains form a key component of these platforms because they enable participants to reach consensus on any state variable in the system without relying on a trusted third party or trusting each other. Distributed consensus not only solves the trust issue but also provides fault tolerance, since consensus is always reached on the correct state as long as the number of faulty nodes is below a threshold. However, it also introduces new assurance challenges, such as privacy and correctness, that must be addressed before protocols and implementations can live up to their potential. For instance, smart contracts deployed in practice are riddled with bugs and security vulnerabilities. Our group has been working on a number of projects in this area, including work on transactive energy systems. Our research focuses on both the reusable middleware aspect and the foundational technologies required to ensure the rigor and correctness of the platform. We collaborate actively with Prof. Aron Lazka, University of Houston, on this project.

The work in this area has been supported by grants from Siemens, CT, and the National Science Foundation.

Assuring Cyber-Physical Systems with Learning Enabled Components

In recent years, AI-based components have been heavily used in CPS. Despite their impressive capability, using them in safety-critical applications is challenging because they learn from training data, and subtle changes in the images during testing can cause these components to predict erroneously. In addition, testing and verifying these components is complex and sometimes not possible; as a result, safety and assurance case development for systems using these components is complicated. The group, in collaboration with Prof. Gabor Karsai, Prof. Taylor Johnson, Prof. Xenofon Koutsoukos, Prof. Ted Bapty, and Prof. Janos Sztipanovits, has been focusing on methods to identify anomalies and recover from failures, as well as to develop system-level safety assurance arguments. So far, the SCOPE-Lab research group has developed a methodology that uses a class of variational autoencoders called Beta-VAE, in combination with dissimilarity metrics such as the Kullback-Leibler divergence, to perform anomaly detection on the input data streams. Once an anomaly is detected, we use a weighted simplex strategy to transition to a safe controller. Instead of using only a single control output (as in the Simplex Architecture), we designed a weighted ensemble of the two control outputs. The weights are computed dynamically to improve the balance of safety versus performance of the system. We are also working on a methodology to semi-automate the generation of assurance cases for CPS with AI components. We have also built a testbed called DeepNNCar for experimentation and validation of these approaches.
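
The weighted-ensemble idea above can be sketched as follows. The function name, the linear weighting rule, and the threshold are illustrative assumptions: in the actual work, the anomaly score comes from a Beta-VAE-based detector using KL divergence, and the weights are computed by a more principled dynamic rule.

```python
def blended_control(u_performance, u_safety, anomaly_score, threshold=1.0):
    """Weighted-simplex blend of two controller outputs. As the anomaly
    score grows toward the threshold, authority shifts from the
    high-performance (e.g. learning-enabled) controller to the safe
    controller; at or beyond the threshold, the safe controller has
    full authority."""
    w = min(anomaly_score / threshold, 1.0)   # weight on the safe controller
    return (1.0 - w) * u_performance + w * u_safety
```

Unlike the classical Simplex switch, which hands over control all at once, this blend degrades performance gradually as confidence in the input data drops.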

This project is funded by DARPA.


Distributed Real-time Embedded Managed Systems (DREMS)

In this project, we designed and implemented a secure information architecture for the DARPA System F6 program. The information architecture platform we developed is a layered stack comprising a novel real-time operating system, middleware, and a component layer. This work further enabled Distributed Real-time Embedded Managed Systems (DREMS), a special class of distributed embedded computing systems that are remotely controlled and managed but operate in, and are integrated into, a local physical environment. The complete software platform and a model-driven software development toolchain that can be used to design, implement, and operate DREMS can be obtained upon request.

Development of the DREMS code base was supported by the DARPA System F6 program through NASA ARC.


Software has become a key enabler and integrator for modern systems. Understanding the physical mechanics of software fault propagation is difficult for a general class of systems. Without this knowledge, we often see that the software breaks and the system breaks as a result. In this project, we studied techniques, patterns, and architectural frameworks for making software-intensive systems more resilient. In this work, we accepted that software is going to fail and developed techniques that can be used to compare different designs for resiliency. We also studied the trade-off between redundancy and runtime reconfiguration. Finally, we designed tools for mapping distributed application configuration models to reliability block diagrams and using the redundancy information to compute resilience metrics for comparing alternative deployments. More information and the tools are available.
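
Reliability block diagrams of the kind mentioned above reduce to two basic compositions: series blocks, where every block must work, and parallel (redundant) blocks, where the system fails only if all replicas fail. A minimal sketch, with assumed function names:

```python
from math import prod

def series(reliabilities):
    """Series composition: the chain works only if every block works."""
    return prod(reliabilities)

def parallel(reliabilities):
    """Parallel (redundant) composition: fails only if all blocks fail."""
    return 1.0 - prod(1.0 - r for r in reliabilities)
```

For example, two components that are each 0.9 reliable give 0.81 in series but 0.99 in parallel, which is the quantitative trade-off between redundancy and resource cost that the deployment-comparison tools evaluate.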

This project was sponsored by the Air Force Research Laboratory.