Preimage Approximation for Neural Networks and Beyond

Abstract:

Neural network verification mainly focuses on local robustness properties. However, it is often important to know whether a given property holds globally for the whole input domain and, if not, for what proportion of the input the property is true. While exact preimage generation can construct an equivalent representation of a neural network that aids such (quantitative) global robustness verification, it is intractable at scale. In this work, we propose an efficient and practical anytime algorithm for generating symbolic under-approximations of the preimage of neural networks based on linear relaxation. Our algorithm iteratively reduces the volume approximation error by partitioning the input region into subregions, on which the neural network relaxation bounds become tighter. We further employ sampling and differentiable approximations of the volume to prioritize which regions to split and to optimize the parameters of the relaxation, leading to faster improvement and more compact under-approximations. Evaluation results demonstrate that our approach generates preimage approximations significantly faster than exact methods and scales to neural network controllers for which exact preimage generation is intractable. We also demonstrate an application of our approach to quantitative global verification.
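The branch-and-refine idea in the abstract can be illustrated with a minimal sketch. The code below is not the paper's algorithm (which uses linear relaxation with optimizable parameters); it is a simplified, hypothetical variant that uses plain interval bound propagation on a tiny 1-D ReLU network, certifies subintervals whose output lower bound clears a threshold, and prioritizes splitting the widest undecided subregion. All names and the toy network weights are invented for illustration.

```python
import heapq

# Toy 1-hidden-layer ReLU net: f(x) = sum_j W2[j]*relu(W1[j]*x + B1[j]) + B2
# (hypothetical weights chosen so the preimage of f(x) >= 0 is two intervals)
W1, B1 = [1.0, -1.0], [0.0, 0.5]
W2, B2 = [1.0, 1.0], -0.6

def ibp_bounds(lo, hi):
    """Sound interval bounds on f over [lo, hi] via interval propagation."""
    out_lo, out_hi = B2, B2
    for w1, b1, w2 in zip(W1, B1, W2):
        pre_lo = min(w1 * lo, w1 * hi) + b1
        pre_hi = max(w1 * lo, w1 * hi) + b1
        post_lo, post_hi = max(pre_lo, 0.0), max(pre_hi, 0.0)  # ReLU
        if w2 >= 0:
            out_lo += w2 * post_lo; out_hi += w2 * post_hi
        else:
            out_lo += w2 * post_hi; out_hi += w2 * post_lo
    return out_lo, out_hi

def preimage_under_approx(lo, hi, thresh=0.0, max_splits=400):
    """Anytime under-approximation of {x in [lo, hi] : f(x) >= thresh}.
    Returns a union of certified subintervals; its volume only grows as
    more splits are spent, mirroring the anytime behaviour described above."""
    certified = []
    heap = [(-(hi - lo), lo, hi)]  # widest undecided region first
    splits = 0
    while heap and splits < max_splits:
        _, a, b = heapq.heappop(heap)
        f_lo, f_hi = ibp_bounds(a, b)
        if f_lo >= thresh:
            certified.append((a, b))   # whole subregion lies in the preimage
        elif f_hi < thresh:
            continue                   # provably outside: discard
        else:
            m = 0.5 * (a + b)          # undecided: split and refine bounds
            heapq.heappush(heap, (-(m - a), a, m))
            heapq.heappush(heap, (-(b - m), m, b))
            splits += 1
    return certified
```

For this toy network, f(x) = relu(x) + relu(0.5 - x) - 0.6, whose exact preimage of f(x) >= 0 on [-1, 1] is [-1, -0.1] ∪ [0.6, 1] (volume 1.3); the certified volume approaches this from below as the split budget grows. The real algorithm replaces intervals with parameterized linear relaxations and uses sampled, differentiable volume estimates to choose splits and tune the relaxation.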

 

Bio: 
Xiyue Zhang (张喜悦) is a research associate in the Department of Computer Science at the University of Oxford. She received her PhD in Applied Mathematics from the School of Mathematical Sciences, Peking University, in 2022, and her BSc in Information and Computing Science in 2017. Her research focuses on ensuring the safety and trustworthiness of AI systems, with emphasis on automated verification of deep learning models, safety verification of deep-learning-enabled systems, and related applications such as AI security. Her recent work centers on abstraction and verification techniques for deep learning systems.