Deep learning techniques are now being considered for embedded critical applications, which require strong assurance that the system behaves reliably. Formal methods are used at an industrial level in avionics and have been successful in verifying traditional (non-machine-learning) software systems. By contrast, the use of formal verification techniques in machine learning remains extremely limited. The goal of this PhD is to explore the use of abstract interpretation to verify neural networks, with a focus on avionic applications. We address two key challenges currently limiting a broader application of formal verification to data science: the lack of specifications, and the difficulty of achieving both scalability and precision. We will explore the verification of functional properties and go beyond the state of the art by tackling complex, realistic properties through our collaboration with Airbus. The expected work will consist in: 1) selecting, with the help of our contacts at Airbus, representative examples of neural networks and properties of interest; 2) defining their semantics and specifications; 3) designing adapted abstract domains, from both the semantic and algorithmic points of view; 4) formally proving their soundness using abstract interpretation; 5) implementing them and experimenting on the case studies, using this feedback to improve the abstractions of step 3.
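To illustrate the kind of analysis involved, here is a minimal sketch (not part of the proposal) of one of the simplest abstract domains used for neural-network verification, the interval (box) domain: an input box is propagated through a tiny fully-connected ReLU network, yielding a sound but coarse over-approximation of all reachable outputs. The network weights are hypothetical, chosen only for illustration.

```python
import numpy as np

def affine_bounds(W, b, lo, hi):
    # Soundly propagate the box [lo, hi] through x -> W @ x + b,
    # using the center/radius form of interval arithmetic.
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def relu_bounds(lo, hi):
    # ReLU is monotone, so applying it to each bound is exact
    # on the box (though the box itself loses relational information).
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-2-1 ReLU network.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.5])

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])  # input box [0,1]^2
lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
lo, hi = affine_bounds(W2, b2, lo, hi)
# Every concrete output for inputs in [0,1]^2 is guaranteed
# to lie in the final box [lo, hi].
```

Verifying a functional property then amounts to checking it on the output box, e.g. that the bound stays below a safety threshold; the precision/scalability trade-off mentioned above arises because tighter domains (zonotopes, polyhedra) shrink these boxes at a higher algorithmic cost.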
Keywords: deep learning, neural networks, verification, abstract interpretation, embedded systems, avionics
This PhD research project has been submitted for funding to the Sorbonne Center for Artificial Intelligence (SCAI). The PhD candidate selected by the project leader will therefore take part in the project selection process (a written application and an interview) to obtain the funding.