dc.contributor.author: Zhang, T
dc.contributor.author: Ruan, W
dc.contributor.author: Fieldsend, JE
dc.date.accessioned: 2022-07-05T14:33:31Z
dc.date.issued: 2023-03-17
dc.date.updated: 2022-07-05T13:54:10Z
dc.description.abstract: In safety-critical deep learning applications, robustness measurement is a vital pre-deployment phase. However, existing robustness verification methods are not sufficiently practical for deploying machine learning systems in the real world. On the one hand, these methods attempt to claim that no perturbations can "fool" deep neural networks (DNNs), which may be too stringent in practice. On the other hand, existing works consider only Lp-bounded additive perturbations in pixel space, although perturbations such as colour shifting and geometric transformations occur far more frequently in the real world. Thus, from a practical standpoint, we present PRoA, a novel and general probabilistic robustness assessment method based on adaptive concentration, which can measure the robustness of deep learning models against functional perturbations. PRoA provides statistical guarantees on the probabilistic robustness of a model, i.e., the probability of failure encountered by the trained model after deployment. Our experiments demonstrate the effectiveness and flexibility of PRoA in evaluating probabilistic robustness against a broad range of functional perturbations, and show that it scales to various large-scale deep neural networks better than existing state-of-the-art baselines. For reproducibility, we release our tool on GitHub: https://github.com/TrustAI/PRoA. (An illustrative sketch of this probabilistic-robustness estimate follows the record below.) [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.description.sponsorship: Exeter-CSC scholarship [en_GB]
dc.identifier.citation: In: Machine Learning and Knowledge Discovery in Databases: ECML PKDD 2022, edited by Massih-Reza Amini, Stéphane Canu, Asja Fischer, Tias Guns, Petra Kralj Novak, and Grigorios Tsoumakas. Lecture Notes in Computer Science, vol. 13715, pp. 154–170
dc.identifier.doi: 10.1007/978-3-031-26409-2_10
dc.identifier.grantnumber: EP/R026173/1 [en_GB]
dc.identifier.grantnumber: 202108060090 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/130163
dc.identifier: ORCID: 0000-0003-4881-2406 (Zhang, Tianle)
dc.language.iso: en [en_GB]
dc.publisher: Springer [en_GB]
dc.rights.embargoreason: Under embargo until 17 March 2024 in compliance with publisher policy [en_GB]
dc.rights: © 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
dc.subject: Verification [en_GB]
dc.subject: Probabilistic Robustness [en_GB]
dc.subject: Functional Perturbation [en_GB]
dc.subject: Neural Networks [en_GB]
dc.title: PRoA: A Probabilistic Robustness Assessment against Functional Perturbations [en_GB]
dc.type: Conference paper [en_GB]
dc.date.available: 2022-07-05T14:33:31Z
exeter.location: Grenoble, France
dc.description: This is the author accepted manuscript. The final version is available from Springer via the DOI in this record. [en_GB]
dc.description: ECML PKDD 2022: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Grenoble, France, 19–23 September 2022
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateAccepted: 2022-06-14
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2022-06-14
rioxxterms.type: Conference Paper/Proceeding/Abstract [en_GB]
refterms.dateFCD: 2022-07-05T13:54:12Z
refterms.versionFCD: AM
refterms.dateFOA: 2024-03-17T00:00:00Z
refterms.panel: B [en_GB]
pubs.name-of-conference: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
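
The abstract above refers to statistical guarantees on a model's failure probability under functional perturbations. As a minimal illustrative sketch, and not PRoA's actual implementation (see the GitHub repository linked in the abstract for that), the Python snippet below estimates such a failure probability by Monte Carlo sampling of a hypothetical brightness-shift perturbation and attaches a fixed-sample Hoeffding confidence bound; PRoA itself uses adaptive concentration inequalities, and the names model, x, n_samples, and delta here are assumptions for illustration only.

import math
import numpy as np

def estimate_failure_probability(model, x, n_samples=1000, delta=0.01):
    """Monte Carlo estimate of the probability that a random functional
    perturbation (here, a hypothetical brightness shift) changes the
    model's prediction, with a Hoeffding-style confidence half-width.

    Illustrative sketch only: PRoA uses adaptive concentration
    inequalities rather than this fixed-sample Hoeffding bound.
    """
    y_ref = np.argmax(model(x))                # prediction on the clean input
    failures = 0
    for _ in range(n_samples):
        shift = np.random.uniform(-0.1, 0.1)   # random brightness offset
        x_pert = np.clip(x + shift, 0.0, 1.0)  # keep pixels in valid range
        if np.argmax(model(x_pert)) != y_ref:  # prediction flipped -> failure
            failures += 1
    p_hat = failures / n_samples
    # Hoeffding's inequality: with probability >= 1 - delta,
    # the true failure probability lies within eps of p_hat.
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n_samples))
    return p_hat, eps

Roughly speaking, an adaptive-concentration approach such as PRoA's keeps drawing samples until the bound is tight enough to accept or reject a given failure-probability threshold at the requested confidence level, rather than fixing n_samples in advance as this sketch does.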

