Show simple item record

dc.contributor.author	Wu, H
dc.date.accessioned	2024-10-21T15:25:36Z
dc.date.issued	2024-10-21
dc.date.updated	2024-10-21T12:32:00Z
dc.description.abstract	Advances in deep neural networks have opened a new era of robotics: intelligent robots. Compared with traditional robots that perform repetitive tasks under manual control or predefined rules, intelligent robots perceive their environments more comprehensively and make more sophisticated decisions across a variety of tasks. For example, autonomous vehicles, one type of mobile robot, rely on deep learning models to perceive their surroundings and navigate complex environments. However, it is no longer a secret that deep learning models are vulnerable to adversarial attacks. Recent research reveals that deep neural networks can be fooled by adding human-imperceptible perturbations to the input data, posing threats to autonomous vehicles that rely on deep neural networks for image classification, object detection, tracking, and other tasks. This thesis addresses the question: can we develop practical adversarial attacks against deep learning applications? To answer it, two real-time white-box attacks against the NVIDIA end-to-end driving model are presented. The end-to-end driving model takes images captured by the front camera as input and outputs a steering angle. We design both image-specific and image-agnostic attacks that alter the steering angle, intentionally deviating it from the model's original output. For modular autonomous driving systems, we devise real-time white-box attacks against object detection models. These attacks generate human-imperceptible perturbations of arbitrary shape to fabricate objects at desired locations. We further introduce a Human-in-the-Middle hardware attack that injects a Universal Adversarial Perturbation (UAP) into a USB camera. Evaluation results on the VOC2012 and CARLA autonomous driving datasets show that our attacks produce more stable false bounding boxes than previous work. Additionally, the attack significantly reduces the tracking accuracy of the Tracking-By-Detection (TBD) framework. Lastly, we propose a distributed black-box attack that accelerates attacks on machine-learning cloud services. By targeting cloud APIs directly rather than local models, we avoid a mistake made in prior research, which gained an unfair advantage by applying the perturbation before image encoding and preprocessing.	en_GB
dc.identifier.uri	http://hdl.handle.net/10871/137741
dc.identifier	ORCID: 0000-0002-1778-1814 (Wu, Han)
dc.language.iso	en	en_GB
dc.publisher	University of Exeter	en_GB
dc.subject	Adversarial Attacks	en_GB
dc.subject	Deep Learning	en_GB
dc.subject	Object Detection	en_GB
dc.subject	Autonomous Driving	en_GB
dc.title	Practical Adversarial Attacks against Deep Learning Models	en_GB
dc.type	Thesis or dissertation	en_GB
dc.date.available	2024-10-21T15:25:36Z
dc.contributor.advisor	Wahlström, Johan
dc.contributor.advisor	Rowlands, Sareh
dc.publisher.department	Computer Science
dc.rights.uri	http://www.rioxx.net/licenses/all-rights-reserved	en_GB
dc.type.degreetitle	PhD in Computer Science
dc.type.qualificationlevel	Doctoral
dc.type.qualificationname	Doctoral Thesis
rioxxterms.version	NA	en_GB
rioxxterms.licenseref.startdate	2024-10-21
rioxxterms.type	Thesis	en_GB
refterms.dateFOA	2024-10-21T15:27:18Z

