
dc.contributor.author: Koujan, MR
dc.date.accessioned: 2022-03-14T12:11:55Z
dc.date.issued: 2022-03-14
dc.date.updated: 2022-03-12T22:40:37Z
dc.description.abstract: Human faces have always been of special interest to researchers in the computer vision and graphics communities. There has been an explosion in the number of studies on accurately modelling, analysing and synthesising realistic faces for various applications. The importance of human faces stems from the fact that they are an invaluable means of effective communication, recognition, behaviour analysis, conveying emotions, etc. Therefore, efficiently addressing the automatic visual perception of human faces could open up many influential applications in various domains, e.g. virtual/augmented reality, computer-aided surgery, security and surveillance, entertainment, and many more. However, the vast variability associated with the geometry and appearance of human faces captured in unconstrained videos and images renders their automatic analysis and understanding very challenging even today. The primary objective of this thesis is to develop novel methodologies of 3D computer vision for human faces that go beyond the state of the art and achieve unprecedented quality and robustness. In more detail, this thesis advances the state of the art in 3D facial shape reconstruction and tracking, fine-grained 3D facial motion estimation, expression recognition and facial synthesis with the aid of 3D face modelling. We give special attention to the case where the input comes from monocular imagery captured under uncontrolled settings, a.k.a. in-the-wild data. This kind of data is available in abundance on the internet nowadays. Analysing these data pushes the boundaries of currently available computer vision algorithms and opens up many crucial new applications in industry. We define the four vision problems targeted in this thesis (3D facial reconstruction & tracking, fine-grained 3D facial motion estimation, expression recognition and facial synthesis) as the four essential 3D-based systems for automatic facial behaviour understanding and show how they rely on each other. Finally, to aid the research conducted in this thesis, we collect and annotate a large-scale video dataset of monocular facial performances. All of our proposed methods demonstrate very promising quantitative and qualitative results when compared with state-of-the-art methods. [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/129044
dc.publisher: University of Exeter [en_GB]
dc.title: 3D Face Modelling, Analysis and Synthesis [en_GB]
dc.type: Thesis or dissertation [en_GB]
dc.date.available: 2022-03-14T12:11:55Z
dc.contributor.advisor: Roussos, Anastasios Tassos
dc.publisher.department: Engineering, Mathematics and Physical Sciences
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dc.type.degreetitle: PhD in Computer Science
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctoral Thesis
rioxxterms.version: NA [en_GB]
rioxxterms.licenseref.startdate: 2022-03-14
rioxxterms.type: Thesis [en_GB]
refterms.dateFOA: 2022-03-14T12:12:04Z

