
dc.contributor.author: Brucker, AD
dc.contributor.author: Stell, A
dc.date.accessioned: 2022-12-12T11:47:58Z
dc.date.issued: 2023-03-03
dc.date.updated: 2022-12-12T10:55:13Z
dc.description.abstract: Neural networks are being used successfully to solve classification problems, e.g., for detecting objects in images. It is well known, however, that neural networks are susceptible to small changes of their input: situations in which such a slight change, often hardly noticeable to a human expert, results in a misclassification are called adversarial examples. If such inputs are used for adversarial attacks, the consequences can be life-threatening, for example, in image classification systems used in autonomous cars or medical diagnosis. Systems employing neural networks, e.g., for safety- or security-critical functionality, are a particular challenge for formal verification, which usually expects a formal specification (e.g., given as source code in a programming language for which a formal semantics exists). Such a formal specification does not, per se, exist for neural networks. In this paper, we address this challenge by presenting a formal embedding of feedforward neural networks into Isabelle/HOL and discussing desirable properties for neural networks in critical applications. Our Isabelle-based prototype can import neural networks trained in TensorFlow, and we demonstrate our approach using a neural network trained for the classification of digits on a dot-matrix display. (An illustrative sketch of such an embedding follows this record.) [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.identifier.citation: Vol. 14000, pp. 427-444 [en_GB]
dc.identifier.doi: 10.1007/978-3-031-27481-7_24
dc.identifier.grantnumber: 670002170 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/132003
dc.identifier: ORCID: 0000-0002-6355-1200 (Brucker, Achim D)
dc.language.iso: en [en_GB]
dc.publisher: Springer [en_GB]
dc.relation.url: https://doi.org/10.5281/zenodo.7418170 [en_GB]
dc.rights.embargoreason: Under embargo until 3 March 2024 in compliance with publisher policy [en_GB]
dc.rights: © 2023 Springer
dc.subject: Isabelle/HOL [en_GB]
dc.subject: Formalization [en_GB]
dc.subject: Verification [en_GB]
dc.subject: Neural Network [en_GB]
dc.title: Verifying Feedforward Neural Networks for Classification in Isabelle/HOL [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2022-12-12T11:47:58Z
dc.contributor.editor: Chechik, M
dc.contributor.editor: Katoen, J-P
dc.contributor.editor: Leucker, M
dc.identifier.isbn: 978-3-031-27481-7
dc.description: This is the author accepted manuscript. The final version is available from Springer via the DOI in this record [en_GB]
dc.description: Formal Methods: 25th International Symposium, FM 2023, Lübeck, Germany, 6-10 March 2023 [en_GB]
dc.description: Data Availability Statement: The formalisation and case studies are available to view on Zenodo: https://doi.org/10.5281/zenodo.7418170. The materials include both the Isabelle/HOL implementation and the detailed documentation generated by Isabelle.
dc.identifier.journal: Lecture Notes in Computer Science [en_GB]
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2022-12-12
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2022-12-12T10:55:16Z
refterms.versionFCD: AM
refterms.panel: B [en_GB]
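Illustrative sketch: the abstract above describes a formal embedding of feedforward neural networks into Isabelle/HOL. The following is a minimal, hypothetical sketch of what such an embedding of dense layers over real vectors could look like. It is not the authors' formalisation (that is available via the Zenodo link above); the theory name and the constants dot, relu, layer and network are invented for illustration, and ReLU is assumed as the activation function.

theory FFNN_Sketch
  imports Complex_Main
begin

(* Dot product of two real vectors, represented as lists. *)
definition dot :: "real list ⇒ real list ⇒ real" where
  "dot xs ys = sum_list (map2 (λa b. a * b) xs ys)"

(* ReLU activation; assumed here, other activations work analogously. *)
definition relu :: "real ⇒ real" where
  "relu x = max x 0"

(* One dense layer: W is the weight matrix as a list of rows, b the bias vector. *)
definition layer :: "real list list ⇒ real list ⇒ real list ⇒ real list" where
  "layer W b x = map2 (λw bi. relu (dot w x + bi)) W b"

(* A feedforward network is the left-to-right composition of its dense layers. *)
definition network :: "(real list list × real list) list ⇒ real list ⇒ real list" where
  "network ls x = foldl (λv Wb. layer (fst Wb) (snd Wb) v) x ls"

end

A classifier of the kind described in the abstract would then report the index of the largest output component, and properties such as robustness against small input perturbations can be stated as HOL propositions about network applied to perturbed inputs.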

