Show simple item record

dc.contributor.author	Alamri, F
dc.contributor.author	Dutta, A
dc.date.accessioned	2021-09-17T11:19:03Z
dc.date.issued	2022-01-01
dc.description.abstract	Most existing Zero-Shot Learning (ZSL) methods focus on learning a compatibility function between the image representation and class attributes. A few others concentrate on learning image representations that combine local and global features. However, existing approaches still fail to address the bias towards the seen classes. In this paper, we propose implicit and explicit attention mechanisms to address this bias problem in ZSL models. We formulate the implicit attention mechanism as a self-supervised image rotation-angle prediction task, which steers the model towards the specific image features that help solve the task. The explicit attention mechanism is realised through the multi-headed self-attention of a Vision Transformer model, which learns to map image features to the semantic space during training. We conduct comprehensive experiments on three popular benchmarks: AWA2, CUB and SUN. Our proposed attention mechanisms prove effective, achieving state-of-the-art harmonic mean on all three datasets.	en_GB
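The abstract describes two components that lend themselves to a compact illustration: an implicit attention signal from a self-supervised rotation-prediction pretext task, and explicit attention from the multi-headed self-attention of a Vision Transformer that maps image features to the attribute (semantic) space. The PyTorch sketch below shows how such a pair of heads could be wired together. It is not the authors' implementation: every layer size, the attr_dim=85 default (the AWA2 attribute count), the MSE compatibility loss, and the names ZSLAttentionSketch and rotate_batch are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ZSLAttentionSketch(nn.Module):
        """Hypothetical sketch (not the paper's code): a ViT-style encoder
        supplies the explicit multi-head self-attention; an auxiliary
        rotation-prediction head supplies the implicit, self-supervised
        signal."""

        def __init__(self, img_size=224, patch=16, dim=256, heads=8, depth=4,
                     attr_dim=85, n_rotations=4):  # attr_dim=85 mirrors AWA2
            super().__init__()
            n_patches = (img_size // patch) ** 2
            # Standard ViT front end: patch embedding, CLS token, positions.
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            # Explicit path: project the image feature into attribute space.
            self.to_semantic = nn.Linear(dim, attr_dim)
            # Implicit path: predict which of the 4 rotations was applied.
            self.rotation_head = nn.Linear(dim, n_rotations)

        def forward(self, x):
            b = x.size(0)
            tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D)
            tokens = torch.cat([self.cls_token.expand(b, -1, -1), tokens], dim=1)
            feats = self.encoder(tokens + self.pos_embed)
            cls = feats[:, 0]  # global image representation from the CLS token
            return self.to_semantic(cls), self.rotation_head(cls)

    def rotate_batch(x):
        """Self-supervised pretext: rotate each image by a random multiple
        of 90 degrees; the rotation index becomes the label to predict."""
        labels = torch.randint(0, 4, (x.size(0),))
        rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                               for img, k in zip(x, labels)])
        return rotated, labels

    # One illustrative training step on random data.
    model = ZSLAttentionSketch()
    images = torch.randn(8, 3, 224, 224)
    class_attrs = torch.randn(8, 85)  # stand-in class attribute vectors
    rotated, rot_labels = rotate_batch(images)
    sem_pred, rot_logits = model(rotated)
    loss = (F.mse_loss(sem_pred, class_attrs)        # compatibility term
            + F.cross_entropy(rot_logits, rot_labels))  # rotation term
    loss.backward()

In this reading, the rotation loss acts as an auxiliary objective alongside the attribute-compatibility loss, encouraging the encoder to attend to features that generalise beyond the seen classes; the actual loss formulation in the paper may differ.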
dc.description.sponsorship	Defence Science and Technology Laboratory	en_GB
dc.description.sponsorship	Alan Turing Institute	en_GB
dc.identifier.citation	Vol. 13024, pp. 467-483	en_GB
dc.identifier.doi	10.1007/978-3-030-92659-5_30
dc.identifier.uri	http://hdl.handle.net/10871/127096
dc.language.iso	en	en_GB
dc.publisher	Springer	en_GB
dc.rights.embargoreason	Under embargo until 1 January 2023 in compliance with publisher policy
dc.rights	© Springer Nature Switzerland AG 2021
dc.subject	Zero-shot Learning	en_GB
dc.subject	Attention Mechanism	en_GB
dc.subject	Self-Supervised Learning	en_GB
dc.subject	Vision Transformer	en_GB
dc.title	Implicit and Explicit Attention for Zero-Shot Learning	en_GB
dc.type	Conference paper	en_GB
dc.date.available	2021-09-17T11:19:03Z
dc.identifier.issn	0302-9743
dc.description	This is the author accepted manuscript. The final version is available from Springer via the DOI in this record.	en_GB
dc.description	Pattern Recognition: 43rd DAGM German Conference, DAGM GCPR 2021, 28 September - 1 October 2021, Bonn, Germany, edited by Christian Bauckhage, Juergen Gall, and Alexander Schwing	en_GB
dc.identifier.journal	Lecture Notes in Computer Science	en_GB
dc.rights.uri	http://www.rioxx.net/licenses/all-rights-reserved	en_GB
pubs.funder-ackownledgement	Yes	en_GB
dcterms.dateAccepted	2021-08-22
rioxxterms.version	AM	en_GB
rioxxterms.licenseref.startdate	2021-08-22
rioxxterms.type	Conference Paper/Proceeding/Abstract	en_GB
refterms.dateFCD	2021-09-15T13:57:22Z
refterms.versionFCD	AM
refterms.panel	B	en_GB

