AI for Cyber-Physical Systems
Context: The fundamental advantage of AI methods is their ability to handle high-dimensional state spaces and to learn decision procedures and control algorithms from data rather than from hand-built models. This is critical because real-world state spaces are complex, dynamic, and hard to model. Our goal is to bridge this gap: to learn abstract representations of these state spaces and to develop decision procedures for the research verticals we investigate - proactive emergency response systems, transit management systems, and electric power grids. The challenge is that AI components learn from training data, which may not cover the real-world distribution, and testing and verifying these components is complex and sometimes not possible. Our work in this area therefore focuses on developing the decision procedures used in the system, along with runtime monitors and assurance cases that show the system will remain safe at runtime even if a component (software, hardware, or AI) fails. We deploy these procedures in the research domains we work in.
Innovation and Research Products:
ReSonAte - We have designed a dynamic assurance approach called ReSonAte that computes the likelihood of unsafe conditions or system failures from the safety requirements, the assumptions made at design time, past failures in a given operating context, and the likelihood of component failures. The approach has been demonstrated in two separate autonomous-system simulators: CARLA and an unmanned underwater vehicle simulator. We evaluated it across 600 simulation scenes covering distribution shifts, component failures, and scenes with a high likelihood of collision (based on past observations), and showed that our methodology achieves a precision of 73% and a recall of 79%. We are currently working on methods to dynamically estimate and learn the conditional probabilities and to improve precision. On average, the framework takes 0.3 milliseconds to compute a risk score.
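To make the idea of a context-conditioned risk score concrete, the following is a minimal, hypothetical sketch (not ReSonAte's actual model): a hazard only leads to a collision if every safety barrier fails, and runtime evidence from monitors (e.g., a detected distribution shift) raises the conditional failure probability of a barrier. All component names and probabilities are illustrative assumptions.

```python
# Hypothetical sketch of a dynamic risk computation in the spirit of
# ReSonAte. Barrier names and probabilities are made-up examples.

def collision_risk(p_hazard: float, p_barrier_fails: dict) -> float:
    """Likelihood that a hazard propagates to a collision, assuming
    the hazard is stopped unless every (independent) barrier fails."""
    p = p_hazard
    for _name, p_fail in p_barrier_fails.items():
        p *= p_fail  # hazard passes this barrier only if it fails
    return p

# Design-time (static) estimate:
static_risk = collision_risk(
    0.05, {"lec_controller": 0.02, "emergency_brake": 0.10})

# Runtime estimate after an assurance monitor reports a distribution
# shift, which raises the LEC barrier's conditional failure probability:
runtime_risk = collision_risk(
    0.05, {"lec_controller": 0.40, "emergency_brake": 0.10})
```

The point of the sketch is only that the same structural model yields a higher risk score once monitor evidence degrades a conditional probability; the real framework conditions these probabilities on the observed operating context.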
Assurance Monitors - While ReSonAte monitors safety assurance at the system level, we also need component-level monitors to detect anomalous behavior. Monitors for anomalies such as data-validity, pre-condition, post-condition, and user-code failures have been designed for conventional CPS, but a CPS with learning-enabled components (LECs) also requires assurance monitors that can detect out-of-distribution (OOD) inputs. For this, we have designed an assurance monitor based on a β-Variational Autoencoder (β-VAE) [2,3], which can detect whether an input (e.g., an image) to an LEC is OOD. It can also diagnose the precise change in the input (e.g., brightness, blurriness, occlusion, or a weather change) that caused the input to become OOD. Conceptually, we use the β-VAE to generate an interpretable latent representation of the inputs and then establish a correspondence between latent units and generative factors, enabling latent-space-based OOD detection and diagnosis. This disentanglement-based diagnosis capability of the β-VAE monitor is the key innovation of this work, and our analysis shows it can also be utilized as a multi-class classifier for multi-label datasets.
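A minimal sketch of the latent-space detection and diagnosis step, assuming the β-VAE encoder already exists and we start from its per-unit outputs (mu, logvar). The per-unit thresholds and the unit-to-factor mapping are hypothetical stand-ins for the calibration step described above.

```python
import numpy as np

# Sketch: latent-space OOD detection and diagnosis with a beta-VAE.
# The encoder is assumed; we work from its outputs for each latent unit.

def kl_per_unit(mu: np.ndarray, logvar: np.ndarray) -> np.ndarray:
    """KL divergence of each latent unit q(z_i|x) = N(mu_i, var_i)
    from the standard normal prior, computed per dimension."""
    return 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)

def detect_and_diagnose(mu, logvar, thresholds, factor_of_unit):
    """Flag the input as OOD if any latent unit exceeds its calibrated
    threshold, and report which generative factors those units encode."""
    kl = kl_per_unit(np.asarray(mu, float), np.asarray(logvar, float))
    ood_units = np.where(kl > thresholds)[0]
    return len(ood_units) > 0, [factor_of_unit[i] for i in ood_units]

# Hypothetical calibration: unit 0 <-> brightness, unit 1 <-> blur.
factors = {0: "brightness", 1: "blur"}
thresholds = np.array([1.5, 1.5])

# Encoder output for one input: unit 0 deviates strongly from the prior.
is_ood, causes = detect_and_diagnose([2.5, 0.1], [0.0, 0.0],
                                     thresholds, factors)
```

Because a disentangled latent unit tracks a single generative factor, the unit that crosses its threshold directly names the likely cause of the shift (here, a brightness change) - this is the diagnosis capability the monitor exploits.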
Runtime Recovery Procedures - We have also developed runtime recovery procedures that manage the health of the system. At runtime, they use system design information - the system information flow, requirement and function-decomposition models, and temporal failure propagation graphs - to identify problems and recover the system by solving a dynamic constraint problem. The goal of this runtime problem is to identify the component configuration that provides the lowest risk of subsequent failures given the currently available resources and environment information.
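The reconfiguration step can be sketched as a toy constraint problem: enumerate candidate configurations, discard those that violate a resource constraint, and select the one with the lowest estimated failure risk. The components, risk values, and power costs below are invented for illustration; a real deployment would derive them from the design models and failure propagation graphs.

```python
import itertools

# Toy sketch of runtime reconfiguration as a constraint problem.
# (controller_name, failure_risk, power_cost) - illustrative values.
controllers = [("lec_autopilot", 0.30, 2.0), ("safety_controller", 0.10, 1.0)]
sensors     = [("camera", 0.20, 1.0), ("lidar", 0.05, 2.0)]

def best_configuration(available_power: float):
    """Return (risk, controller, sensor) for the feasible configuration
    with the lowest risk of subsequent failure, or None if infeasible."""
    best = None
    for (c, c_risk, c_pw), (s, s_risk, s_pw) in itertools.product(
            controllers, sensors):
        if c_pw + s_pw > available_power:        # resource constraint
            continue
        # Pipeline fails if either component fails (independence assumed).
        risk = 1.0 - (1.0 - c_risk) * (1.0 - s_risk)
        if best is None or risk < best[0]:
            best = (risk, c, s)
    return best
```

Brute-force enumeration is only viable for tiny spaces; the actual procedures solve this as a dynamic constraint problem so the search scales to realistic configuration spaces.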
DeepNNCar - To test the efficacy of our methods on embedded systems, we have developed a low-cost research testbed designed in our lab. It is built on the chassis of a Traxxas Slash 2WD 1/10-scale RC car and is fitted with a forward-looking USB camera, an IR optocoupler, and a 2D LIDAR. The robot's speed and steering are controlled using pulse-width modulation (PWM) by varying the duty cycle. For autonomous driving, the robot uses a modified NVIDIA DAVE-II CNN model that takes the front-camera image and the current speed as inputs and predicts the steering. More information about the car is available in this Medium article. Videos of DeepNNCar with different controllers are available here.
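The PWM actuation scheme can be illustrated with a small sketch: a normalized command from the controller is mapped linearly onto a duty-cycle range. The specific duty-cycle range below is an assumption for illustration, not the testbed's calibrated values.

```python
# Illustrative mapping from a normalized command to a PWM duty cycle,
# as used for speed and steering on a testbed like DeepNNCar.
# The 10%-20% duty-cycle range here is an assumed example.

def to_duty_cycle(command: float, lo: float, hi: float) -> float:
    """Map a command in [-1, 1] linearly to a duty cycle in [lo, hi],
    clamping out-of-range commands to the valid interval."""
    command = max(-1.0, min(1.0, command))
    return lo + (command + 1.0) * (hi - lo) / 2.0

# Steering example: -1 (full left) .. +1 (full right) -> 10%..20% duty.
center = to_duty_cycle(0.0, 10.0, 20.0)   # neutral steering -> 15.0
```

In practice the CNN's predicted steering would be clamped and converted this way before being written to the PWM pin each control cycle.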
Follow-ups: Further information is available at the following links.