dc.contributor.author | Zhang, T | |
dc.contributor.author | Ruan, W | |
dc.contributor.author | Fieldsend, JE | |
dc.date.accessioned | 2022-07-05T14:33:31Z | |
dc.date.issued | 2023-03-17 | |
dc.date.updated | 2022-07-05T13:54:10Z | |
dc.description.abstract | In safety-critical deep learning applications, robustness measurement is a vital pre-deployment phase. However, existing robustness verification methods are not sufficiently practical for deploying machine learning systems in the real world. On the one hand, these methods attempt to claim that no perturbations can "fool" deep neural networks (DNNs), which may be too stringent in practice. On the other hand, existing works rigorously consider Lp-bounded additive perturbations on the pixel space, although perturbations such as colour shifting and geometric transformations occur more practically and frequently in the real world. Thus, from the practical standpoint, we present a novel and general probabilistic robustness assessment method (PRoA) based on adaptive concentration, which can measure the robustness of deep learning models against functional perturbations. PRoA can provide statistical guarantees on the probabilistic robustness of a model, i.e., the probability of failure encountered by the trained model after deployment. Our experiments demonstrate the effectiveness and flexibility of PRoA in terms of evaluating probabilistic robustness against a broad range of functional perturbations, and PRoA scales well to various large-scale deep neural networks compared to existing state-of-the-art baselines. For the purpose of reproducibility, we release our tool on GitHub: https://github.com/TrustAI/PRoA. | en_GB |
dc.description.sponsorship | Engineering and Physical Sciences Research Council (EPSRC) | en_GB |
dc.description.sponsorship | Exeter-CSC scholarship | en_GB |
dc.identifier.citation | In: Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2022, edited by Massih-Reza Amini, Stéphane Canu, Asja Fischer, Tias Guns, Petra Kralj Novak, and Grigorios Tsoumakas, pp. 154–170. Lecture Notes in Computer Science, vol. 13715 | |
dc.identifier.doi | 10.1007/978-3-031-26409-2_10 | |
dc.identifier.grantnumber | EP/R026173/1 | en_GB |
dc.identifier.grantnumber | 202108060090 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/130163 | |
dc.identifier | ORCID: 0000-0003-4881-2406 (Zhang, Tianle) | |
dc.language.iso | en | en_GB |
dc.publisher | Springer | en_GB |
dc.rights.embargoreason | Under embargo until 17 March 2024 in compliance with publisher policy | en_GB |
dc.rights | © 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG | |
dc.subject | Verification | en_GB |
dc.subject | Probabilistic Robustness | en_GB |
dc.subject | Functional Perturbation | en_GB |
dc.subject | Neural Networks | en_GB |
dc.title | PRoA: A Probabilistic Robustness Assessment against Functional Perturbations | en_GB |
dc.type | Conference paper | en_GB |
dc.date.available | 2022-07-05T14:33:31Z | |
exeter.location | Grenoble, France | |
dc.description | This is the author accepted manuscript. The final version is available from Springer via the DOI in this record | en_GB |
dc.description | ECML PKDD 2022: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Grenoble, France, 19–23 September 2022 | |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
dcterms.dateAccepted | 2022-06-14 | |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2022-06-14 | |
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_GB |
refterms.dateFCD | 2022-07-05T13:54:12Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2024-03-17T00:00:00Z | |
refterms.panel | B | en_GB |
pubs.name-of-conference | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases | |