Why This Matters
Learning-enabled components in autonomous systems are widely used for perception, but they often fail on out-of-distribution (OOD) inputs not seen during training, creating safety risks. Traditional OOD detection approaches struggle with multi-label datasets in which multiple environmental factors vary simultaneously. This work is notable because it offers a practical, generative-model-based approach to detecting OOD images in such complex scenarios, supporting the safe deployment of learning-enabled autonomous systems.