
dc.contributor.author	He, S
dc.contributor.author	Pugeault, N
dc.date.accessioned	2017-07-28T11:33:04Z
dc.date.issued	2017-08
dc.description.abstract	Deep convolutional neural networks have achieved impressive performance on a broad range of problems, beating prior art on established benchmarks, but it often remains unclear what representations are learnt by those systems and how they achieve such performance. This article examines the specific problem of saliency detection, where benchmarks are currently dominated by CNN-based approaches, and investigates the properties of the learnt representation by visualizing the artificial neurons’ receptive fields. We demonstrate that fine-tuning a pre-trained network on the saliency detection task leads to a profound transformation of the network’s deeper layers. Moreover, we argue that this transformation leads to the emergence of receptive fields conceptually similar to the centre-surround filters hypothesized by early research on visual saliency.	en_GB
dc.description.sponsorship	This work was supported by the EPSRC project DEVA EP/N035399/1.	en_GB
dc.identifier.citation	34th International Conference on Machine Learning, 6-11 August 2017, Sydney, Australia	en_GB
dc.identifier.uri	http://hdl.handle.net/10871/28702
dc.language.iso	en	en_GB
dc.publisher	International Machine Learning Society	en_GB
dc.relation.url	http://www.machinelearning.org/icml.html	en_GB
dc.rights.embargoreason	Under embargo until after conference	en_GB
dc.rights	Copyright 2017 by the author(s).	en_GB
dc.title	Deep saliency: What is learnt by a deep network about saliency?	en_GB
dc.type	Conference paper	en_GB
dc.description	2nd Workshop on Visualization for Deep Learning	en_GB
dc.description	This is the author accepted manuscript. The final version is available from the International Machine Learning Society via the URL in this record.	en_GB

