Show simple item record

dc.contributor.author    Zhang, C
dc.date.accessioned    2024-06-11T17:14:24Z
dc.date.issued    2024-06-10
dc.date.updated    2024-06-10T08:53:52Z
dc.description.abstract    Learning-enabled systems (LESs) have demonstrated remarkable promise across computer science and cyber-physical applications. Powered by machine learning techniques, these systems excel at tasks such as autonomous robotics, adaptive cruise control, and medical diagnosis. However, machine learning models, especially deep learning models, are vulnerable to adversarial perturbations, raising safety concerns for learning-enabled systems. Adversarial techniques can expose unsafe behavior of machine learning models by searching for adversarial examples, but they cannot guarantee safety or robustness when no adversarial example is found. Verification, on the other hand, provides safety guarantees for deep learning models: it can check whether learning-enabled systems satisfy certain safety properties, ensuring their reliability and security. This thesis focuses on robustness verification for three typical learning-enabled systems: deep learning systems, neural network control systems (NNCSs), and out-of-distribution (OOD) detection systems. For deep learning systems, a model-agnostic verification framework called DeepAgn is developed. It solves the neural network (NN) verification problem by reachability estimation, computing the output range of an NN through Lipschitz optimization and a nested scheme; it then determines the maximum safe radius by binary search, generating the ground-truth adversarial example. For NNCSs, a verification tool named DeepNNC is built that computes reachable sets for NNCSs with disturbed initial states. First, the Lipschitz continuity of closed-loop NNCSs is established by unrolling and eliminating the loops; the reachable-set estimation problem is then transformed into a sequence of Lipschitz optimization problems. The estimation error of DeepNNC is not affected by the control time, thus reducing the wrapping effect in NNCS verification. For OOD detection systems, the tool Vood is introduced, designed to assess the robustness of OOD detection systems against both noise-based and naturally occurring perturbations. Vood estimates the Lipschitz constant for noise perturbations by leveraging extreme value theory (EVT), and addresses functional perturbations through space-filling Lipschitz optimization. The verification solutions introduced in this thesis outperform baseline methods in both efficiency and accuracy. The proposed methods exploit Lipschitz optimization, eliminating the need for detailed information about the learning-enabled systems. Moreover, the verification tools presented in this thesis can handle complex models, providing a safety guarantee for real-world learning-enabled systems. This thesis marks an important step toward trustworthy artificial intelligence by analyzing the overall robustness of intelligent systems and mitigating the risks associated with unexpected or adverse situations. By providing confidence in the operation of learning-enabled systems, this work envisions an expansion in the application of trustworthy intelligent systems across diverse domains, such as autonomous vehicles, industrial robots, and intelligent medical diagnosis.    en_GB
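The abstract describes DeepAgn as coupling Lipschitz-based reachability estimation with a binary search for the maximum safe radius. The Python sketch below illustrates only that outer binary-search loop; `max_safe_radius` and the `is_safe` oracle are hypothetical names, and the oracle is a placeholder standing in for a sound reachability check, not DeepAgn's actual implementation.

    # Hypothetical sketch of the binary-search loop the abstract mentions;
    # `is_safe(r)` stands in for a sound reachability check that certifies
    # all inputs within radius r of a given point keep the model's output
    # safe. Not DeepAgn's actual code.
    def max_safe_radius(is_safe, r_lo=0.0, r_hi=1.0, tol=1e-4):
        """Largest radius in [r_lo, r_hi] certified safe, to tolerance tol.

        Assumes monotonicity: once a radius is unsafe, larger ones are too.
        """
        if not is_safe(r_lo):
            return 0.0            # unsafe even at the lower bound
        if is_safe(r_hi):
            return r_hi           # the whole interval is certified safe
        while r_hi - r_lo > tol:
            mid = 0.5 * (r_lo + r_hi)
            if is_safe(mid):
                r_lo = mid        # certified safe: raise the lower bound
            else:
                r_hi = mid        # falsified: shrink the upper bound
        return r_lo               # certified-safe radius within tol

    # Toy usage with an oracle whose true maximum safe radius is 0.37.
    print(max_safe_radius(lambda r: r <= 0.37))   # prints ~0.37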
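The abstract also states that Vood estimates Lipschitz constants for noise perturbations via extreme value theory (EVT). The sketch below shows the generic EVT recipe (sample local slopes, take per-batch maxima, and fit a reverse Weibull distribution whose location parameter is the fitted right endpoint); the function name, sampling scheme, and parameters are illustrative assumptions, not Vood's implementation.

    # Hypothetical EVT-based local Lipschitz estimate, assuming NumPy and
    # SciPy; not Vood's actual code.
    import numpy as np
    from scipy.stats import weibull_max

    def evt_lipschitz(f, x0, radius=0.1, n_batches=50, batch_size=200, seed=0):
        rng = np.random.default_rng(seed)
        maxima = []
        for _ in range(n_batches):
            # Random directions on the unit sphere, scaled to `radius`.
            d = rng.normal(size=(batch_size, x0.size))
            d *= radius / np.linalg.norm(d, axis=1, keepdims=True)
            slopes = np.abs(f(x0 + d) - f(x0[None, :])) / radius
            maxima.append(slopes.max())
        # The reverse Weibull's location parameter is its right endpoint,
        # i.e. the EVT estimate of the maximum attainable local slope.
        _, loc, _ = weibull_max.fit(maxima)
        return loc

    # Toy usage: for f(x) = sum(sin(x)) near 0, the local slope bound is
    # about sqrt(3) = 1.73.
    f = lambda X: np.sin(X).sum(axis=-1)
    print(evt_lipschitz(f, np.zeros(3)))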
dc.identifier.uri    http://hdl.handle.net/10871/136254
dc.language.iso    en    en_GB
dc.publisher    University of Exeter    en_GB
dc.rights.embargoreason    This thesis is embargoed until 31/Dec/2025 as the author wishes to publish their research.    en_GB
dc.subject    Trustworthy AI    en_GB
dc.subject    Verification    en_GB
dc.title    Robustness Verification on Learning-enabled Systems with Provable Guarantees    en_GB
dc.type    Thesis or dissertation    en_GB
dc.date.available    2024-06-11T17:14:24Z
dc.contributor.advisor    Ruan, Wenjie
dc.contributor.advisor    Min, Geyong
dc.contributor.advisor    Wang, Zhongdong
dc.publisher.department    Computer Science
dc.rights.uri    http://www.rioxx.net/licenses/all-rights-reserved    en_GB
dc.type.degreetitle    PhD in Computer Science
dc.type.qualificationlevel    Doctoral
dc.type.qualificationname    Doctoral Thesis
rioxxterms.version    NA    en_GB
rioxxterms.licenseref.startdate    2024-06-10
rioxxterms.type    Thesis    en_GB

