Why This Matters

Comprehensively testing autonomous vehicles before real-world deployment is essential for safety assurance, but manual test case generation is extremely time-consuming and expensive. This work is innovative because it provides an automated mechanism for generating adversarial test cases that expose weaknesses in autonomous driving systems. Combining a domain-specific scenario description language with intelligent samplers enables effective exploration of the space of testing scenarios.

What We Did

This paper presents ANTI-CARLA, a framework for adversarial testing of autonomous vehicles using the CARLA simulator. The system combines a test case description language, a scenario generator, and samplers to automatically generate and evaluate test cases that cause the tested system to fail. The approach uses domain-specific modeling languages for specifying testing scenarios and integrates with any CARLA-compatible driving pipeline.
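The sampler-driven adversarial search can be sketched as a minimal random-search loop over operating conditions. This is an illustrative stand-in only: the parameter names, the scoring function, and `run_simulation` are hypothetical placeholders, not ANTI-CARLA's actual scenario-description schema or CARLA interface.

```python
import random

# Hypothetical operating-condition space; names are illustrative,
# not ANTI-CARLA's actual scenario-description language.
PARAM_SPACE = {
    "precipitation": (0.0, 100.0),
    "fog_density": (0.0, 100.0),
    "sun_altitude": (-90.0, 90.0),
    "traffic_density": (0.0, 1.0),
}

def sample_scenario(rng):
    """Draw one set of operating conditions uniformly at random."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_SPACE.items()}

def run_simulation(scenario):
    """Placeholder for executing the scenario in CARLA and scoring the
    driving pipeline; lower scores mean worse driving performance."""
    return 100.0 - 0.4 * scenario["precipitation"] - 0.5 * scenario["fog_density"]

def adversarial_search(n_trials=200, fail_threshold=40.0, seed=0):
    """Random-search sampler: collect scenarios whose driving score
    falls below the failure threshold, i.e. scenarios that fail the
    system under test."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_trials):
        scenario = sample_scenario(rng)
        score = run_simulation(scenario)
        if score < fail_threshold:
            failures.append((score, scenario))
    failures.sort(key=lambda t: t[0])  # worst scenarios first
    return failures

failures = adversarial_search()
```

In the real framework, the random sampler above would be replaced by one of the framework's smarter samplers, and `run_simulation` by an actual CARLA run of the pipeline under test; the loop structure (sample conditions, execute, score, keep failures) is the same.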

Key Results

The framework was evaluated on a driving pipeline trained with the Learning By Cheating (LBC) approach. Although LBC reaches 100% accuracy on the CARLA benchmark, ANTI-CARLA effectively and automatically found a range of diverse failure cases. The identified failure modes provide insights for improving autonomous driving controllers.

Cite This Paper

@inproceedings{ramakrishna2022anticarla,
  author = {Ramakrishna, Shreyas and Luo, Baiting and Kuhn, Christopher B. and Karsai, Gabor and Dubey, Abhishek},
  booktitle = {2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title = {ANTI-CARLA: An Adversarial Testing Framework for Autonomous Vehicles in CARLA},
  year = {2022},
  pages = {2620-2627},
  abstract = {Despite recent advances in autonomous driving systems, accidents such as the fatal Uber crash in 2018 show these systems are still susceptible to edge cases. Such systems need to be thoroughly tested and validated before being deployed in the real world to avoid such events. Testing in open-world scenarios can be difficult, time-consuming, and expensive. These challenges can be addressed by using driving simulators such as CARLA instead. A key part of such tests is adversarial testing, in which the goal is to find scenarios that lead to failures of the given system. While several independent efforts in adversarial testing have been made, a well-established testing framework that enables adaptive stress testing has yet to be made available for CARLA. We therefore propose ANTI-CARLA, an adversarial testing framework in CARLA. The operating conditions in which a given system should be tested are specified in a scenario description language. The framework offers an adversarial search mechanism that searches for operating conditions that will fail the tested system. In this way, ANTI-CARLA extends the CARLA simulator with the capability of performing adversarial testing on any given driving pipeline. We use ANTI-CARLA to test the driving pipeline trained with Learning By Cheating (LBC) approach. The simulation results demonstrate that ANTI-CARLA can effectively and automatically find a range of failure cases despite LBC reaching an accuracy of 100% in the CARLA benchmark.},
  contribution = {lead},
  doi = {10.1109/ITSC55140.2022.9921776},
  keywords = {autonomous vehicles, adversarial testing, test case generation, scenario description, CARLA simulator}
}
Research Areas

CPS, ML for CPS, Explainable AI