Show simple item record

dc.contributor.author: He, S
dc.contributor.author: Tavakoli, HR
dc.contributor.author: Borji, A
dc.contributor.author: Mi, Y
dc.contributor.author: Pugeault, N
dc.date.accessioned: 2019-03-12T13:04:06Z
dc.date.issued: 2020-01-09
dc.description.abstract: Recently, data-driven deep saliency models have achieved high performance and have outperformed classical saliency models, as demonstrated by results on datasets such as MIT300 and SALICON. Yet, a large gap remains between the performance of these models and the inter-human baseline. Outstanding questions include what these models have learned, how and where they fail, and how they can be improved. This article attempts to answer these questions by analyzing the representations learned by individual neurons located at the intermediate layers of deep saliency models. To this end, we follow the approach of existing deep saliency models, that is, borrowing a pre-trained model of object recognition to encode the visual features and learning a decoder to infer the saliency. We consider two cases, when the encoder is used as a fixed feature extractor and when it is fine-tuned, and compare the inner representations of the network. To study how the learned representations depend on the task, we fine-tune the same network using the same image set but for two different tasks: saliency prediction versus scene classification. Our analyses reveal that: 1) some visual regions (e.g. head, text, symbol, vehicle) are already encoded within various layers of the network pre-trained for object recognition; 2) using modern datasets, we find that fine-tuning pre-trained models for saliency prediction makes them favor some categories (e.g. head) over others (e.g. text); 3) although deep models of saliency outperform classical models on natural images, the converse is true for synthetic stimuli (e.g. pop-out search arrays), evidence of a significant difference between human and data-driven saliency models; and 4) we confirm that, after fine-tuning, the change in inner representations is mostly due to the task and not to the domain shift in the data. [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.identifier.citation: CVPR 2019, 16-20 June 2019, Long Beach, California, US [en_GB]
dc.identifier.doi: 10.1109/CVPR.2019.01045
dc.identifier.grantnumber: EP/N035399/1 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/36411
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_GB]
dc.rights: © 2020 IEEE
dc.subject: data visualisation
dc.subject: feature extraction
dc.subject: image classification
dc.subject: image representation
dc.subject: learning (artificial intelligence)
dc.subject: neural nets
dc.subject: object recognition
dc.subject: Low-level Vision
dc.subject: Deep Learning
dc.title: Understanding and Visualizing Deep Visual Saliency Models [en_GB]
dc.type: Conference paper [en_GB]
dc.date.available: 2019-03-12T13:04:06Z
dc.description: This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record. [en_GB]
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dcterms.dateAccepted: 2019-03-04
exeter.funder: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
rioxxterms.version: AM [en_GB]
rioxxterms.licenseref.startdate: 2019-03-04
rioxxterms.type: Conference Paper/Proceeding/Abstract [en_GB]
refterms.dateFCD: 2019-03-11T22:36:07Z
refterms.versionFCD: AM
refterms.dateFOA: 2020-02-11T10:58:01Z
refterms.panel: B [en_GB]
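
The abstract above describes an encoder-decoder pipeline in which a network pre-trained for object recognition encodes the visual features and a learned decoder infers the saliency map, with the encoder either kept fixed or fine-tuned. The sketch below only illustrates that general setup; it is not the authors' code, and the VGG-16 backbone, decoder layers, and `freeze_encoder` flag are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a deep saliency model of the kind the abstract describes:
# a pre-trained object-recognition backbone as encoder, plus a small learned
# decoder that predicts a saliency map. Backbone and decoder choices here are
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn
from torchvision import models


class SaliencyNet(nn.Module):
    def __init__(self, freeze_encoder: bool = True):
        super().__init__()
        # Convolutional layers of an ImageNet-pre-trained VGG-16, reused as a
        # feature encoder (weights are downloaded on first use).
        self.encoder = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        if freeze_encoder:
            # "Fixed feature extractor" case: encoder weights are not updated.
            for p in self.encoder.parameters():
                p.requires_grad = False
        # Lightweight decoder: map encoder features to a single-channel map.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, kernel_size=1),
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = self.encoder(x)        # B x 512 x H/32 x W/32 for VGG-16
        sal = self.decoder(feats)      # B x 1 x H/32 x W/32
        # Upsample back to the input resolution and squash to [0, 1].
        sal = nn.functional.interpolate(
            sal, size=(h, w), mode="bilinear", align_corners=False
        )
        return torch.sigmoid(sal)


if __name__ == "__main__":
    model = SaliencyNet(freeze_encoder=True)   # False for the fine-tuned case
    x = torch.randn(1, 3, 224, 224)
    print(model(x).shape)                      # torch.Size([1, 1, 224, 224])
```

Setting `freeze_encoder=False` corresponds to the fine-tuned case the abstract compares against the fixed-feature-extractor case; only the decoder is trained when the flag is left at its default.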

