Show simple item record

dc.contributor.author             Bianchi, FM
dc.contributor.author             Livi, L
dc.contributor.author             Mikalsen, KØ
dc.contributor.author             Kampffmeyer, M
dc.contributor.author             Jenssen, R
dc.date.accessioned               2019-10-16T12:45:20Z
dc.date.issued                    2019-07-19
dc.description.abstract           Learning compressed representations of multivariate time series (MTS) facilitates data analysis in the presence of noise and redundant information, and when the number of variates and time steps is large. However, classical dimensionality reduction approaches are designed for vectorial data and cannot deal explicitly with missing values. In this work, we propose a novel autoencoder architecture based on recurrent neural networks to generate compressed representations of MTS. The proposed model can process inputs of variable length and is specifically designed to handle missing data. Our autoencoder learns fixed-length vectorial representations whose pairwise similarities are aligned to a kernel function that operates in input space and handles missing values. This makes it possible to learn good representations even in the presence of a significant amount of missing data. To show the effectiveness of the proposed approach, we evaluate the quality of the learned representations in several classification tasks, including those involving medical data, and we compare against other methods for dimensionality reduction. Subsequently, we design two frameworks based on the proposed architecture: one for imputing missing data and another for one-class classification. Finally, we analyze under what circumstances an autoencoder with recurrent layers can learn better compressed representations of MTS than feed-forward architectures.  en_GB
dc.description.sponsorship        Norwegian Research Council  en_GB
dc.identifier.citation            Vol. 96, article 106973  en_GB
dc.identifier.doi                 10.1016/j.patcog.2019.106973
dc.identifier.grantnumber         239844  en_GB
dc.identifier.uri                 http://hdl.handle.net/10871/39236
dc.language.iso                   en  en_GB
dc.publisher                      Elsevier for Pattern Recognition Society  en_GB
dc.rights.embargoreason           Under embargo until 19 July 2020 in compliance with publisher policy  en_GB
dc.rights                         © 2019. This version is made available under the CC-BY-NC-ND 4.0 license: https://creativecommons.org/licenses/by-nc-nd/4.0/  en_GB
dc.subject                        Representation learning  en_GB
dc.subject                        Multivariate time series  en_GB
dc.subject                        Autoencoders  en_GB
dc.subject                        Recurrent neural networks  en_GB
dc.subject                        Kernel methods  en_GB
dc.title                          Learning representations of multivariate time series with missing data  en_GB
dc.type                           Article  en_GB
dc.date.available                 2019-10-16T12:45:20Z
dc.identifier.issn                0031-3203
dc.description                    This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record.  en_GB
dc.identifier.journal             Pattern Recognition  en_GB
dc.rights.uri                     https://creativecommons.org/licenses/by-nc-nd/4.0/  en_GB
dcterms.dateAccepted              2019-07-15
rioxxterms.version                AM  en_GB
rioxxterms.licenseref.startdate   2019-07-15
rioxxterms.type                   Journal Article/Review  en_GB
refterms.dateFCD                  2019-10-16T12:42:46Z
refterms.versionFCD               AM
refterms.panel                    B  en_GB
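
The abstract describes the core mechanism: a recurrent autoencoder whose fixed-length codes are trained so that their pairwise similarities match an input-space kernel matrix computed by a kernel that tolerates missing values. The sketch below illustrates that idea in PyTorch under stated assumptions; the layer sizes, the alignment weight alpha, and the precomputed per-batch kernel matrix K are illustrative choices, not the authors' exact implementation.

# Minimal sketch of a kernel-aligned recurrent autoencoder (illustrative;
# layer sizes, `alpha`, and the precomputed kernel matrix `K` are assumptions,
# not the authors' exact implementation).
import torch
import torch.nn as nn

class KernelAlignedRAE(nn.Module):
    def __init__(self, n_vars, code_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(n_vars, hidden_dim, batch_first=True)
        self.to_code = nn.Linear(hidden_dim, code_dim)   # fixed-length code
        self.decoder = nn.GRU(code_dim, hidden_dim, batch_first=True)
        self.to_output = nn.Linear(hidden_dim, n_vars)

    def forward(self, x):
        # x: (batch, time, n_vars); missing entries are assumed to be
        # imputed or masked upstream, as is common for RNN inputs.
        _, h = self.encoder(x)                # h: (1, batch, hidden_dim)
        z = self.to_code(h.squeeze(0))        # codes: (batch, code_dim)
        # Feed the code at every decoding step to reconstruct the input.
        z_seq = z.unsqueeze(1).expand(-1, x.size(1), -1)
        dec, _ = self.decoder(z_seq)
        return self.to_output(dec), z

def kernel_aligned_loss(x, x_rec, z, K, alpha=0.1):
    # Reconstruction error plus alignment between the Frobenius-normalized
    # Gram matrix of the codes and the input-space kernel matrix K,
    # where K comes from a kernel that handles missing values.
    reconstruction = ((x - x_rec) ** 2).mean()
    G = z @ z.t()
    alignment = (G / G.norm() - K / K.norm()).pow(2).sum()
    return reconstruction + alpha * alignment

Given a mini-batch x of shape (batch, time, n_vars) and a matching (batch, batch) kernel matrix K, model(x) returns the reconstruction and the codes, and kernel_aligned_loss combines both objectives; after training, the codes serve as the compressed representations the abstract uses for classification, imputation, and one-class classification.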


