The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking
dc.contributor.author | Deng, J | |
dc.contributor.author | Roussos, A | |
dc.contributor.author | Chrysos, G | |
dc.contributor.author | Ververas, E | |
dc.contributor.author | Kotsia, I | |
dc.contributor.author | Shen, J | |
dc.contributor.author | Zafeiriou, S | |
dc.date.accessioned | 2019-04-11T12:46:59Z | |
dc.date.issued | 2018-11-29 | |
dc.description.abstract | In this article, we present the Menpo 2D and Menpo 3D benchmarks, two new datasets for multi-pose 2D and 3D facial landmark localisation and tracking. In contrast to previous benchmarks such as 300W and 300VW, the proposed benchmarks contain facial images in both semi-frontal and profile poses. We introduce an elaborate semi-automatic methodology for providing high-quality annotations for both the Menpo 2D and Menpo 3D benchmarks. In the Menpo 2D benchmark, different visible landmark configurations are designed for semi-frontal and profile faces, thus making 2D face alignment full-pose. In the Menpo 3D benchmark, a unified landmark configuration is designed for both semi-frontal and profile faces based on the correspondence with a 3D face model, thus making face alignment not only full-pose but also corresponding to real-world 3D space. Based on the considerable number of annotated images, we organised the Menpo 2D Challenge and the Menpo 3D Challenge for face alignment under large pose variations in conjunction with CVPR 2017 and ICCV 2017, respectively. The results of these challenges demonstrate that recent deep learning architectures, when trained with abundant data, lead to excellent results. We also provide a very simple yet effective solution, named the Cascade Multi-view Hourglass Model, for 2D and 3D face alignment. In our method, we take advantage of all 2D and 3D facial landmark annotations in a joint way. We not only capitalise on the correspondences between the semi-frontal and profile 2D facial landmarks but also employ joint supervision from both 2D and 3D facial landmarks. Finally, we discuss future directions on the topic of face alignment. | en_GB |
dc.description.sponsorship | Imperial College London | en_GB |
dc.description.sponsorship | Engineering and Physical Sciences Research Council (EPSRC) | en_GB |
dc.identifier.citation | Published online 29 November 2018 | en_GB |
dc.identifier.doi | 10.1007/s11263-018-1134-y | |
dc.identifier.grantnumber | EP/N007743/1 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/36785 | |
dc.language.iso | en | en_GB |
dc.publisher | Springer Verlag | en_GB |
dc.rights | © The Author(s) 2018. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | en_GB |
dc.subject | 2D face alignment | en_GB |
dc.subject | 3D face alignment | en_GB |
dc.subject | Menpo challenge | en_GB |
dc.title | The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2019-04-11T12:46:59Z | |
dc.identifier.issn | 0920-5691 | |
dc.description | This is the final version. Available on open access from Springer Verlag via the DOI in this record | en_GB |
dc.identifier.journal | International Journal of Computer Vision | en_GB |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en_GB |
dcterms.dateAccepted | 2018-11-14 | |
rioxxterms.version | VoR | en_GB |
rioxxterms.licenseref.startdate | 2018-01-01 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2019-04-11T12:33:08Z | |
refterms.versionFCD | VoR | |
refterms.dateFOA | 2019-04-11T12:47:03Z | |
refterms.panel | B | en_GB |
refterms.depositException | publishedGoldOA |