Algorithmic and human prediction of success in human collaboration from visual features
dc.contributor.author | Saveski, M | |
dc.contributor.author | Awad, E | |
dc.contributor.author | Rahwan, I | |
dc.contributor.author | Cebrian, M | |
dc.date.accessioned | 2021-02-25T14:23:34Z | |
dc.date.issued | 2021-02-02 | |
dc.description.abstract | As groups increasingly take over from individual experts in many tasks, it is ever more important to understand the determinants of group success. In this paper, we study the patterns of group success in Escape The Room, a physical adventure game in which a group is tasked with escaping a maze by collectively solving a series of puzzles. We investigate (1) the characteristics of successful groups, and (2) how accurately humans and machines can spot them from a group photo. The relationship between these two questions is based on the hypothesis that the characteristics of successful groups are encoded by features that can be spotted in their photo. We analyze >43K group photos (one photo per group) taken after groups have completed the game—from which all explicit performance-signaling information has been removed. First, we find that groups that are larger, older, and more gender-diverse but less age-diverse are significantly more likely to escape. Second, we compare humans and off-the-shelf machine learning algorithms at predicting whether a group escaped or not based on the completion photo. We find that individual guesses by humans achieve 58.3% accuracy, better than random but worse than machines, which achieve 71.6% accuracy. When humans are trained to guess by observing only four labeled photos, their accuracy increases to 64%. However, training humans on more labeled examples (eight or twelve) leads to a slight but statistically insignificant improvement in accuracy (67.4%). Humans in the best training condition perform on par with two, but worse than three, of the five machine learning algorithms we evaluated. Our work illustrates the potential and the limitations of machine learning systems in evaluating group performance and identifying success factors based on sparse visual cues. | en_GB |
dc.identifier.citation | Vol. 11, article 2756 | en_GB |
dc.identifier.doi | 10.1038/s41598-021-81145-3 | |
dc.identifier.uri | http://hdl.handle.net/10871/124921 | |
dc.language.iso | en | en_GB |
dc.publisher | Nature Research | en_GB |
dc.relation.url | https://doi.org/10.7910/DVN/HDT2RN | en_GB |
dc.relation.url | http://hdl.handle.net/10871/124927 | |
dc.rights | © The Author(s) 2021. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ | en_GB |
dc.title | Algorithmic and human prediction of success in human collaboration from visual features | en_GB |
dc.type | Article | en_GB |
dc.date.available | 2021-02-25T14:23:34Z | |
dc.description | This is the final version. Available on open access from Nature Research via the DOI in this record | en_GB |
dc.description | Data availability: The full dataset including aggregated features of each group of the >43K groups used in our analyses is available at the following link: https://doi.org/10.7910/DVN/HDT2RN. All photos used in this work are publicly available, posted on public Facebook pages. However, we do not release the raw images, or the individual-level raw features extracted using the Face++ API. More details are provided at the link above. | en_GB |
dc.description | The publisher correction to this article is available in ORE at http://hdl.handle.net/10871/124927 | |
dc.identifier.eissn | 2045-2322 | |
dc.identifier.journal | Scientific Reports | en_GB |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_GB |
dcterms.dateAccepted | 2020-12-07 | |
rioxxterms.version | VoR | en_GB |
rioxxterms.licenseref.startdate | 2021-02-02 | |
rioxxterms.type | Journal Article/Review | en_GB |
refterms.dateFCD | 2021-02-25T14:21:52Z | |
refterms.versionFCD | VoR | |
refterms.dateFOA | 2021-02-25T14:23:41Z | |
refterms.panel | C | en_GB |