Show simple item record

dc.contributor.author: Zhu, H
dc.contributor.author: Kong, X
dc.contributor.author: Xie, W
dc.contributor.author: Huang, X
dc.contributor.author: Shen, L
dc.contributor.author: Liu, L
dc.contributor.author: Gunes, H
dc.contributor.author: Song, S
dc.date.accessioned: 2024-10-30T10:15:08Z
dc.date.issued: 2024-10-28
dc.date.updated: 2024-10-29T20:56:03Z
dc.description.abstract: Human facial reactions play crucial roles in dyadic human-human interactions, where individuals (i.e., listeners) with varying cognitive processing styles may display different but appropriate facial reactions in response to an identical behaviour expressed by their conversational partners. While several existing facial reaction generation approaches are capable of generating multiple appropriate facial reactions (AFRs) in response to each given human behaviour, they fail to take humans' personalised cognitive processes into account when generating AFRs. In this paper, we propose the first online personalised multiple appropriate facial reaction generation (MAFRG) approach, which learns a unique personalised cognitive style from the target human listener's previous facial behaviours and represents it as a set of network weight shifts. These personalised weight shifts are then applied to edit the weights of a pre-trained generic MAFRG model, allowing the resulting personalised model to naturally mimic the target human listener's cognitive process when reasoning about multiple AFR generation. Experimental results show that our approach not only substantially outperforms all existing approaches in generating more appropriate and diverse generic AFRs, but also serves as the first reliable personalised MAFRG solution. [en_GB]
dc.description.sponsorship: Engineering and Physical Sciences Research Council [en_GB]
dc.description.sponsorship: National Natural Science Foundation of China [en_GB]
dc.description.sponsorship: Guangdong Basic and Applied Basic Research Foundation [en_GB]
dc.description.sponsorship: Guangdong Provincial Key Laboratory [en_GB]
dc.description.sponsorship: National Natural Science Foundation of China [en_GB]
dc.format.extent: 9495-9504
dc.identifier.citation: 32nd ACM International Conference on Multimedia (MM '24), 28 October-1 November 2024, Melbourne, Victoria, pp. 9495-9504 [en_GB]
dc.identifier.doi: https://doi.org/10.1145/3664647.3680752
dc.identifier.grantnumber: EP/Y018281/1 [en_GB]
dc.identifier.grantnumber: 82261138629 [en_GB]
dc.identifier.grantnumber: 2023A1515010688 [en_GB]
dc.identifier.grantnumber: 2023B1212060076 [en_GB]
dc.identifier.grantnumber: 62001173 [en_GB]
dc.identifier.grantnumber: 62171188 [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/137834
dc.language.iso: en_US [en_GB]
dc.publisher: Association for Computing Machinery [en_GB]
dc.relation.url: https://github.com/xk0720/PerFRDiff [en_GB]
dc.rights: ©2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. [en_GB]
dc.subject: Facial Reaction [en_GB]
dc.subject: Personalisation [en_GB]
dc.subject: Weight Editing [en_GB]
dc.title: PerFRDiff: Personalised weight editing for multiple appropriate facial reaction generation [en_GB]
dc.type: Conference paper [en_GB]
dc.date.available: 2024-10-30T10:15:08Z
dc.identifier.isbn: 979-8-4007-0686-8/24/10
dc.description: This is the final version. Available from the Association for Computing Machinery via the DOI in this record. [en_GB]
dc.description: Our code is made available at https://github.com/xk0720/PerFRDiff. [en_GB]
dc.relation.ispartof: Proceedings of the 32nd ACM International Conference on Multimedia
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/ [en_GB]
dcterms.dateAccepted: 2024
rioxxterms.version: VoR [en_GB]
rioxxterms.licenseref.startdate: 2024-10-28
rioxxterms.type: Conference Paper/Proceeding/Abstract [en_GB]
refterms.dateFCD: 2024-10-30T09:57:32Z
refterms.versionFCD: AM
refterms.dateFOA: 2024-10-30T10:16:23Z
refterms.panel: B [en_GB]
refterms.dateFirstOnline: 2024-10-28
pubs.name-of-conference: MM '24: The 32nd ACM International Conference on Multimedia
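
Note: the abstract above describes representing a listener's personalised cognitive style as a set of network weight shifts that edit the weights of a pre-trained generic MAFRG model. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea (W' = W + Delta W); the class, argument names, and dimensions are invented for illustration and are not taken from the PerFRDiff codebase linked in this record.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalisedWeightEditor(nn.Module):
    """Illustrative sketch only: predicts a per-listener weight shift (Delta W)
    from a summary of the listener's past facial behaviour and adds it to the
    weights of a frozen, pre-trained generic layer (W' = W + Delta W)."""

    def __init__(self, behaviour_dim: int, generic_layer: nn.Linear):
        super().__init__()
        self.generic_layer = generic_layer
        for p in self.generic_layer.parameters():   # keep the generic model frozen
            p.requires_grad = False
        out_f, in_f = self.generic_layer.weight.shape
        # Hypothetical shift head: maps the behaviour summary to a flattened Delta W.
        self.shift_head = nn.Linear(behaviour_dim, out_f * in_f)

    def forward(self, speaker_feat: torch.Tensor, listener_history: torch.Tensor):
        # listener_history: one vector summarising the target listener's previous
        # facial behaviour, shape (behaviour_dim,).
        delta_w = self.shift_head(listener_history).view_as(self.generic_layer.weight)
        personalised_w = self.generic_layer.weight + delta_w
        # Apply the personalised weights to the speaker features, shape (batch, in_f).
        return F.linear(speaker_feat, personalised_w, self.generic_layer.bias)

# Toy usage with made-up dimensions.
generic = nn.Linear(128, 64)   # stands in for one layer of a pre-trained generic model
editor = PersonalisedWeightEditor(behaviour_dim=32, generic_layer=generic)
out = editor(torch.randn(4, 128), torch.randn(32))
print(out.shape)               # torch.Size([4, 64])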

