Simple item record

dc.contributor.author	Mendez, O
dc.contributor.author	Hadfield, S
dc.contributor.author	Pugeault, N
dc.contributor.author	Bowden, R
dc.date.accessioned	2019-10-30T15:56:53Z
dc.date.issued	2019-09-28
dc.description.abstract	The use of human-level semantic information to aid robotic tasks has recently become an important area for both Computer Vision and Robotics. This has been enabled by advances in Deep Learning that allow consistent and robust semantic understanding. Leveraging this semantic vision of the world has allowed human-level understanding to naturally emerge from many different approaches. Particularly, the use of semantic information to aid in localisation and reconstruction has been at the forefront of both fields. Like robots, humans also require the ability to localise within a structure. To aid this, humans have designed high-level semantic maps of our structures called floorplans. We are extremely good at localising in them, even with limited access to the depth information used by robots. This is because we focus on the distribution of semantic elements, rather than geometric ones. Evidence of this is that humans are normally able to localise in a floorplan that has not been scaled properly. In order to grant this ability to robots, it is necessary to use localisation approaches that leverage the same semantic information humans use. In this paper, we present a novel method for semantically enabled global localisation. Our approach relies on the semantic labels present in the floorplan. Deep Learning is leveraged to extract semantic labels from RGB images, which are compared to the floorplan for localisation. While our approach is able to use range measurements if available, we demonstrate that they are unnecessary as we can achieve results comparable to state-of-the-art without them.	en_GB
dc.description.sponsorship	EPSRC	en_GB
dc.description.sponsorship	Innovate UK	en_GB
dc.description.sponsorship	NVIDIA Corporation	en_GB
dc.identifier.citation	Published online 28 September 2019	en_GB
dc.identifier.doi	10.1007/s11263-019-01239-4
dc.identifier.grantnumber	EP/R512217/1	en_GB
dc.identifier.grantnumber	EP/R03298X/1	en_GB
dc.identifier.grantnumber	104273	en_GB
dc.identifier.uri	http://hdl.handle.net/10871/39402
dc.language.iso	en	en_GB
dc.publisher	Springer Verlag	en_GB
dc.rights	Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.	en_GB
dc.subject	Robotics	en_GB
dc.subject	Localisation	en_GB
dc.subject	Deep Learning	en_GB
dc.subject	Semantic	en_GB
dc.subject	MCL	en_GB
dc.subject	Monte-Carlo	en_GB
dc.subject	Turtlebot	en_GB
dc.subject	ROS	en_GB
dc.subject	Human-level	en_GB
dc.subject	Segmentation	en_GB
dc.subject	Indoor	en_GB
dc.title	SeDAR: Reading Floorplans Like a Human—Using Deep Learning to Enable Human-Inspired Localisation	en_GB
dc.type	Article	en_GB
dc.date.available	2019-10-30T15:56:53Z
dc.identifier.issn	0920-5691
dc.description	This is the final version. Available from Springer Verlag via the DOI in this record.	en_GB
dc.identifier.journal	International Journal of Computer Vision	en_GB
dc.rights.uri	http://www.rioxx.net/licenses/all-rights-reserved	en_GB
dcterms.dateAccepted	2019-09-18
rioxxterms.version	VoR	en_GB
rioxxterms.licenseref.startdate	2019-09-18
rioxxterms.type	Journal Article/Review	en_GB
refterms.dateFCD	2019-10-30T15:52:03Z
refterms.versionFCD	VoR
refterms.dateFOA	2019-10-30T15:56:56Z
refterms.panel	B	en_GB
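
A note on the method summarised in dc.description.abstract above: the paper describes comparing semantic labels extracted from RGB images against the labels in a floorplan, within a Monte-Carlo localisation (MCL) framework (see the MCL and Monte-Carlo subject terms). The sketch below is an illustration only, not the authors' SeDAR implementation: the grid floorplan, the label codes, the ray-casting helper, and the hit_p likelihood model are all hypothetical assumptions introduced here to show the shape of a label-only MCL measurement update.

import numpy as np

# Hypothetical semantic floorplan: a grid whose cells carry labels rather
# than just occupancy. 0 = free, 1 = wall, 2 = door, 3 = window.
FLOORPLAN = np.zeros((50, 50), dtype=int)
FLOORPLAN[0, :] = FLOORPLAN[-1, :] = FLOORPLAN[:, 0] = FLOORPLAN[:, -1] = 1
FLOORPLAN[0, 20:25] = 2   # a door in the north wall
FLOORPLAN[25:30, -1] = 3  # a window in the east wall

def raycast_label(pose, bearing, floorplan, max_range=60):
    """Walk along a ray from `pose` and return the first non-free label hit."""
    x, y, theta = pose
    for r in np.arange(0.5, max_range, 0.5):
        cx = int(x + r * np.cos(theta + bearing))
        cy = int(y + r * np.sin(theta + bearing))
        if not (0 <= cx < floorplan.shape[0] and 0 <= cy < floorplan.shape[1]):
            return 1                      # treat leaving the map as hitting a wall
        if floorplan[cx, cy] != 0:
            return floorplan[cx, cy]
    return 0

def mcl_update(particles, weights, observed, bearings, floorplan, hit_p=0.8):
    """One semantic MCL measurement update: reweight each particle by how well
    the labels ray-cast in the floorplan agree with the observed labels, then
    resample. Note that only label agreement is used, no range measurements."""
    for i, p in enumerate(particles):
        expected = [raycast_label(p, b, floorplan) for b in bearings]
        matches = sum(e == o for e, o in zip(expected, observed))
        # Simple per-ray likelihood: hit_p if labels agree, (1 - hit_p) otherwise.
        weights[i] *= hit_p ** matches * (1 - hit_p) ** (len(observed) - matches)
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: particles spread uniformly over the map; in the paper's setting the
# observed labels would come from a semantic segmentation network applied to
# the RGB image, which is not modelled here.
rng = np.random.default_rng(0)
particles = np.column_stack([rng.uniform(1, 49, 200),
                             rng.uniform(1, 49, 200),
                             rng.uniform(-np.pi, np.pi, 200)])
weights = np.full(200, 1.0 / 200)
bearings = np.linspace(-np.pi / 4, np.pi / 4, 9)   # camera field of view
observed = [raycast_label((25, 25, 0.0), b, FLOORPLAN) for b in bearings]
particles, weights = mcl_update(particles, weights, observed, bearings, FLOORPLAN)

Repeating the update as the robot moves (with a motion model perturbing the particles between updates) would concentrate the particle set on poses whose expected label pattern matches the observations, which is the intuition behind the label-driven localisation the abstract describes.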

