dc.contributor.author    Abuhammad, H
dc.date.accessioned    2019-10-24T11:20:23Z
dc.date.issued    2019-10-21
dc.description.abstract    We present a new automated approach for facial expression recognition of seven emotions. The main objective of this thesis is to build a model that can classify spontaneous facial expressions, rather than acted ones, and to apply this model to images and videos. Moreover, we investigate whether combining more than one image feature descriptor improves the classification rate, and how effective texture descriptors are on video sequences. Three types of texture features extracted from static images are combined: Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG) and Dense Speeded Up Robust Features (D-SURF). The resulting features are classified using random forests. The use of random forests allows the most important feature types and facial locations for emotion classification to be identified; regions around the eyes, forehead, sides of the nose and mouth are found to be the most significant. Classifying these important features with random forests and Support Vector Machines gives better performance than using all of the extracted facial features, and we achieve better than state-of-the-art accuracies using multiple texture feature descriptors. Current emotion recognition datasets comprise posed portraits of actors displaying emotions. To evaluate the recognition algorithms on spontaneous facial expressions, we introduce an unposed dataset called "Emotional Faces in the Wild" (eLFW), a citizen-labelling of 1310 faces from the Labelled Faces in the Wild data. To collect these data, we built a website and asked citizens to label photos according to the emotion displayed; they were also asked to label a selection of KDEF faces. Examining the common misclassifications shows that, much as people do, machine algorithms perform worst at distinguishing between sad, angry and fearful expressions. We describe a new weighted voting algorithm for multi-class classification, in which the predictions of classifiers trained on pairs of classes are combined with weights learned using an evolutionary algorithm. This method yields superior results, particularly for the hard-to-distinguish emotions. Applying the method to the DynEmo video database, we investigated methods of smoothing the classifier predictions in order to exploit the temporal continuity of emotions and thereby reduce classification error. Several smoothing techniques were investigated and optimised; simple moving average and linear-fit Lowess smoothing performed best. (Code sketches illustrating these techniques follow this record.)    en_GB
dc.identifier.uri    http://hdl.handle.net/10871/39319
dc.publisher    University of Exeter    en_GB
dc.subject    Facial Emotion    en_GB
dc.subject    Texture Descriptors    en_GB
dc.subject    Emotion Classification    en_GB
dc.subject    SVM    en_GB
dc.subject    Random Forests    en_GB
dc.title    Emotion Classification Using Combinations of Texture Descriptors    en_GB
dc.type    Thesis or dissertation    en_GB
dc.date.available    2019-10-24T11:20:23Z
dc.contributor.advisor    Everson, R    en_GB
dc.contributor.advisor    Christmas, J    en_GB
dc.publisher.department    Computer Science    en_GB
dc.rights.uri    http://www.rioxx.net/licenses/all-rights-reserved    en_GB
dc.type.degreetitle    Doctor of Philosophy in Computer Science    en_GB
dc.type.qualificationlevel    Doctoral    en_GB
dc.type.qualificationname    Doctoral Thesis    en_GB
rioxxterms.version    NA    en_GB
rioxxterms.licenseref.startdate    2019-10-21
rioxxterms.type    Thesis    en_GB
refterms.dateFOA    2019-10-24T11:20:30Z
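
The descriptor-combination step described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis implementation: it extracts LBP and HOG features with scikit-image, concatenates them, and fits a scikit-learn random forest whose feature_importances_ rank the descriptor dimensions (and hence facial regions). D-SURF is omitted here because SURF is only available via opencv-contrib; all function names and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the thesis code) of combining texture descriptors
# and classifying with a random forest. Assumes equally sized, aligned
# grayscale face crops; D-SURF is omitted (it requires opencv-contrib).
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(gray, P=8, R=1):
    """Histogram of uniform LBP codes over the face patch."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def combined_features(gray):
    """Concatenate LBP and HOG descriptors for one face image."""
    h = hog(gray, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
    return np.concatenate([lbp_histogram(gray), h])

def train_emotion_forest(face_imgs, labels, n_trees=500):
    """Fit a random forest on the combined descriptors; the forest's
    feature_importances_ indicate which descriptor dimensions matter most."""
    X = np.stack([combined_features(im) for im in face_imgs])
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    clf.fit(X, labels)
    return clf, clf.feature_importances_
```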
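The weighted voting algorithm for multi-class classification can likewise be sketched. In the thesis the per-classifier weights are learned with an evolutionary algorithm; in this hypothetical sketch they are simply passed in, and each pairwise classifier's vote is scaled by its weight before the winning emotion is chosen.

```python
# Sketch of weighted one-vs-one voting for 7-way emotion classification.
# pair_clfs maps a class pair (i, j) to a binary classifier predicting
# i or j; weights maps the same pair to a learned vote weight (found by
# an evolutionary algorithm in the thesis, assumed given here).
import numpy as np

def weighted_pairwise_predict(pair_clfs, weights, X, n_classes=7):
    votes = np.zeros((len(X), n_classes))
    for (i, j), clf in pair_clfs.items():
        for row, pred in enumerate(clf.predict(X)):
            votes[row, pred] += weights[(i, j)]   # weighted vote for i or j
    return votes.argmax(axis=1)                   # emotion with most weight
```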
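Finally, the temporal-smoothing step used on the DynEmo videos can be illustrated. The sketch below applies a simple moving average to per-frame class scores before taking the argmax, exploiting the temporal continuity of emotions; the thesis also evaluates linear-fit Lowess smoothing, which performed comparably well. The window width is an illustrative assumption.

```python
# Sketch of temporal smoothing of per-frame classifier scores.
# frame_scores has shape (n_frames, n_classes); window is the
# moving-average width in frames.
import numpy as np

def smoothed_predictions(frame_scores, window=9):
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, frame_scores)
    return smoothed.argmax(axis=1)   # one emotion label per frame
```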

