dc.contributor.author | Saputra, MRU | |
dc.contributor.author | de Gusmao, PPB | |
dc.contributor.author | Lu, CX | |
dc.contributor.author | Almalioglu, Y | |
dc.contributor.author | Rosa, S | |
dc.contributor.author | Chen, C | |
dc.contributor.author | Wahlstrom, J | |
dc.contributor.author | Wang, W | |
dc.contributor.author | Markham, A | |
dc.contributor.author | Trigoni, N | |
dc.date.accessioned | 2020-07-22T14:30:34Z | |
dc.date.issued | 2020-01-24 | |
dc.description.abstract | Visual odometry shows excellent performance in a wide range of environments. However, in visually-denied scenarios (e.g. heavy smoke or darkness), pose estimates degrade or even fail. Thermal cameras are commonly used for perception and inspection when the environment has low visibility. However, their use in odometry estimation is hampered by the lack of robust visual features. In part, this is because the sensor measures the ambient temperature profile rather than scene appearance and geometry. To overcome this issue, we propose a Deep Neural Network model for thermal-inertial odometry (DeepTIO) that incorporates a visual hallucination network to provide the thermal network with complementary information. The hallucination network is taught to predict fake visual features from thermal images by using Huber loss. We also employ selective fusion to attentively fuse the features from three different modalities, i.e. thermal, hallucination, and inertial features. Extensive experiments are performed on hand-held and mobile robot data in benign and smoke-filled environments, showing the efficacy of the proposed model. | en_GB |
dc.identifier.citation | Vol. 5, pp. 1672 - 1679 | en_GB |
dc.identifier.doi | 10.1109/lra.2020.2969170 | |
dc.identifier.uri | http://hdl.handle.net/10871/122084 | |
dc.language.iso | en | en_GB |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_GB |
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_GB |
dc.subject | Localization | en_GB |
dc.subject | sensor fusion | en_GB |
dc.subject | deep learning in robotics and automation | en_GB |
dc.subject | thermal-inertial odometry | en_GB |
dc.title | DeepTIO: a deep thermal-inertial odometry network with visual hallucination | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2020-07-22T14:30:34Z | |
dc.description | This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record | en_GB |
dc.identifier.journal | IEEE Robotics and Automation Letters | en_GB |
dc.rights.uri | http://www.rioxx.net/licenses/all-rights-reserved | en_GB |
dcterms.dateAccepted | 2020-01-09 | |
rioxxterms.version | AM | en_GB |
rioxxterms.licenseref.startdate | 2020-01-24 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2020-07-22T14:28:28Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2020-07-22T14:30:39Z | |
refterms.panel | B | en_GB |