PerFRDiff: Personalised weight editing for multiple appropriate facial reaction generation
dc.contributor.author | Zhu, H | |
dc.contributor.author | Kong, X | |
dc.contributor.author | Xie, W | |
dc.contributor.author | Huang, X | |
dc.contributor.author | Shen, L | |
dc.contributor.author | Liu, L | |
dc.contributor.author | Gunes, H | |
dc.contributor.author | Song, S | |
dc.date.accessioned | 2024-10-30T10:15:08Z | |
dc.date.issued | 2024-10-28 | |
dc.date.updated | 2024-10-29T20:56:03Z | |
dc.description.abstract | Human facial reactions play crucial roles in dyadic human-human interactions, where individuals (i.e., listeners) with varying cognitive process styles may display different but appropriate facial reactions in response to an identical behaviour expressed by their conversational partners. While several existing facial reaction generation approaches are capable of generating multiple appropriate facial reactions (AFRs) in response to each given human behaviour, they fail to take humans' personalised cognitive processes into account in AFR generation. In this paper, we propose the first online personalised multiple appropriate facial reaction generation (MAFRG) approach, which learns a unique personalised cognitive style from the target human listener's previous facial behaviours and represents it as a set of network weight shifts. These personalised weight shifts are then applied to edit the weights of a pre-trained generic MAFRG model, allowing the obtained personalised model to naturally mimic the target human listener's cognitive process in its reasoning for multiple AFR generation. Experimental results show that our approach not only largely outperforms all existing approaches in generating more appropriate and diverse generic AFRs, but also serves as the first reliable personalised MAFRG solution. | en_GB
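The abstract's core mechanism, representing a listener's personalised style as weight shifts and adding them to a pre-trained generic model, can be illustrated with a minimal sketch. This is a hypothetical, simplified illustration, not the authors' implementation (which is available at the GitHub URL in this record); the function name `apply_weight_shifts` and the flat per-layer weight representation are assumptions made for clarity, with the shifts treated as simple additive per-parameter deltas.

```python
import copy

def apply_weight_shifts(generic_weights, weight_shifts):
    """Produce personalised model weights by adding learned per-layer
    shifts to a copy of a pre-trained generic model's weights.

    Both arguments map layer names to flat lists of parameter values;
    layers without a learned shift are left unchanged, and the generic
    model is never modified in place.
    """
    personalised = copy.deepcopy(generic_weights)
    for layer, shift in weight_shifts.items():
        if layer in personalised:
            personalised[layer] = [w + d for w, d in zip(personalised[layer], shift)]
    return personalised
```

In this framing, a new listener only requires estimating a small set of shifts from their previous facial behaviours, rather than retraining or storing a full model per person.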
dc.description.sponsorship | Engineering and Physical Sciences Research Council | en_GB |
dc.description.sponsorship | National Natural Science Foundation of China | en_GB |
dc.description.sponsorship | Guangdong Basic and Applied Basic Research Foundation | en_GB |
dc.description.sponsorship | Guangdong Provincial Key Laboratory | en_GB |
dc.description.sponsorship | National Natural Science Foundation of China | en_GB |
dc.format.extent | 9495-9504 | |
dc.identifier.citation | 32nd ACM International Conference on Multimedia (MM '24), 28 October-1 November 2024, Melbourne, Victoria, pp. 9495-9504 | en_GB |
dc.identifier.doi | https://doi.org/10.1145/3664647.3680752 | |
dc.identifier.grantnumber | EP/Y018281/1 | en_GB |
dc.identifier.grantnumber | 82261138629 | en_GB |
dc.identifier.grantnumber | 2023A1515010688 | en_GB |
dc.identifier.grantnumber | 2023B1212060076 | en_GB |
dc.identifier.grantnumber | 62001173 | en_GB |
dc.identifier.grantnumber | 62171188 | en_GB |
dc.identifier.uri | http://hdl.handle.net/10871/137834 | |
dc.language.iso | en_US | en_GB |
dc.publisher | Association for Computing Machinery | en_GB |
dc.relation.url | https://github.com/xk0720/PerFRDiff | en_GB |
dc.rights | ©2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. | en_GB |
dc.subject | Facial Reaction | en_GB |
dc.subject | Personalisation | en_GB |
dc.subject | Weight Editing | en_GB |
dc.title | PerFRDiff: Personalised weight editing for multiple appropriate facial reaction generation | en_GB |
dc.type | Conference paper | en_GB |
dc.date.available | 2024-10-30T10:15:08Z | |
dc.identifier.isbn | 979-8-4007-0686-8/24/10 | |
dc.description | This is the final version. Available from the Association for Computing Machinery via the DOI in this record. | en_GB |
dc.description | Our code is made available at https://github.com/xk0720/PerFRDiff. | en_GB |
dc.relation.ispartof | Proceedings of the 32nd ACM International Conference on Multimedia | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | en_GB |
dcterms.dateAccepted | 2024 | |
rioxxterms.version | VoR | en_GB |
rioxxterms.licenseref.startdate | 2024-10-28 | |
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_GB |
refterms.dateFCD | 2024-10-30T09:57:32Z | |
refterms.versionFCD | AM | |
refterms.dateFOA | 2024-10-30T10:16:23Z | |
refterms.panel | B | en_GB |
refterms.dateFirstOnline | 2024-10-28 | |
pubs.name-of-conference | MM '24: The 32nd ACM International Conference on Multimedia |