Show simple item record

dc.contributor.author: Deng, J
dc.contributor.author: Roussos, A
dc.contributor.author: Chrysos, G
dc.contributor.author: Ververas, E
dc.contributor.author: Kotsia, I
dc.contributor.author: Shen, J
dc.contributor.author: Zafeiriou, S
dc.date.accessioned: 2019-04-11T12:46:59Z
dc.date.issued: 2018-11-29
dc.description.abstract: In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile poses. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both the Menpo 2D and Menpo 3D benchmarks. In the Menpo 2D benchmark, different visible landmark configurations are designed for semi-frontal and profile faces, thus making 2D face alignment full-pose. In the Menpo 3D benchmark, a unified landmark configuration is designed for both semi-frontal and profile faces based on the correspondence with a 3D face model, thus making face alignment not only full-pose but also corresponding to the real-world 3D space. Based on the considerable number of annotated images, we organised the Menpo 2D Challenge and the Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. The results of these challenges demonstrate that recent deep learning architectures, when trained with abundant data, lead to excellent results. We also provide a very simple yet effective solution, named the Cascade Multi-view Hourglass Model, to 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations in a joint way. We not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment. [en_GB]
dc.description.sponsorship: Imperial College London [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council (EPSRC) [en_GB]
dc.identifier.citation: Published online 29 November 2018 [en_GB]
dc.identifier.doi: 10.1007/s11263-018-1134-y
dc.identifier.grantnumber: EP/N007743/1 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/36785
dc.language.iso: en [en_GB]
dc.publisher: Springer Verlag [en_GB]
dc.rights: © The Author(s) 2018. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. [en_GB]
dc.subject: 2D face alignment [en_GB]
dc.subject: 3D face alignment [en_GB]
dc.subject: Menpo challenge [en_GB]
dc.title: The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking [en_GB]
dc.type: Article [en_GB]
dc.date.available: 2019-04-11T12:46:59Z
dc.identifier.issn: 0920-5691
dc.description: This is the final version. Available on open access from Springer Verlag via the DOI in this record [en_GB]
dc.identifier.journal: International Journal of Computer Vision [en_GB]
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ [en_GB]
dcterms.dateAccepted: 2018-11-14
rioxxterms.version: VoR [en_GB]
rioxxterms.licenseref.startdate: 2018-01-01
rioxxterms.type: Journal Article/Review [en_GB]
refterms.dateFCD: 2019-04-11T12:33:08Z
refterms.versionFCD: VoR
refterms.dateFOA: 2019-04-11T12:47:03Z
refterms.panel: B [en_GB]
refterms.depositException: publishedGoldOA
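
As an editorial illustration of the joint 2D/3D supervision that the abstract attributes to the Cascade Multi-view Hourglass Model, the following minimal PyTorch sketch shows one way such supervision could be wired onto a shared hourglass feature map. It is not the authors' implementation; the class name, landmark counts, loss weighting and feature shape are assumptions made here for illustration only.

# Minimal sketch (hypothetical, not the authors' code) of joint supervision
# from 2D and 3D landmark heatmaps on top of shared hourglass features.
import torch
import torch.nn as nn

class JointHeatmapHead(nn.Module):
    """Predicts 2D and 3D-aware landmark heatmaps from shared features."""
    def __init__(self, feat_ch=256, n_2d=68, n_3d=84):
        super().__init__()
        self.head_2d = nn.Conv2d(feat_ch, n_2d, kernel_size=1)  # 2D landmark heatmaps
        self.head_3d = nn.Conv2d(feat_ch, n_3d, kernel_size=1)  # 3D landmark heatmaps

    def forward(self, feats):
        return self.head_2d(feats), self.head_3d(feats)

def joint_loss(pred_2d, pred_3d, gt_2d, gt_3d, w_3d=1.0):
    # Joint supervision: sum of per-pixel MSE losses on the 2D and 3D heatmaps.
    mse = nn.functional.mse_loss
    return mse(pred_2d, gt_2d) + w_3d * mse(pred_3d, gt_3d)

if __name__ == "__main__":
    feats = torch.randn(2, 256, 64, 64)   # features from one hourglass stage
    head = JointHeatmapHead()
    p2d, p3d = head(feats)
    loss = joint_loss(p2d, p3d, torch.zeros_like(p2d), torch.zeros_like(p3d))
    print(loss.item())

In the cascaded setting the abstract describes, such a loss would presumably be applied after each hourglass stage, with the 2D and 3D targets rendered from the Menpo 2D and Menpo 3D annotations respectively.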

