dc.description.abstract | Narratives surrounding algorithmic surveillance typically emphasise negativity and concerns about privacy. In contrast, we argue that current research underestimates the potentially positive consequences of algorithmic surveillance in the form of group-based recognition. Specifically, we test whether accurate algorithmic surveillance (i.e., surveillance that those surveilled believe mirrors their own self-concept) provides a vehicle for group-based recognition in two contexts: (1) surveillance by an outgroup and (2) surveillance from the perspective of stigmatised and misrecognised groups. In turn, we test whether this recognition can lead to more positive (and less negative) feelings towards surveillance. We also test whether a countervailing negative pathway exists, whereby more accurate surveillance is associated with greater privacy concern and, in turn, more negative (and less positive) feelings towards surveillance. The final study tests whether positive perceptions of accurate surveillance arising through group-based recognition are limited to misrecognised groups, or whether they hold for people more generally. Across seven studies, we test the core hypothesis that group-based recognition from accurate surveillance provides a basis for positive reactions to algorithmic surveillance that countervails the negative pathway through privacy concern. Overall, we found support for the positive pathway: more accurate surveillance was associated with more positive feelings towards surveillance through group-based recognition, and this held for both typically recognised and misrecognised groups. We also found partial support for the negative pathway, whereby privacy concern was associated with less positive feelings towards surveillance.
However, we did not find that surveillance accuracy was associated with privacy concern; one implication is that the presence of surveillance per se overwhelms any additional effect of surveillance accuracy. Additionally, the surveiller's social identity (ingroup vs. outgroup) influenced both the positive and negative pathways: surveillance from an outgroup was considered less trustworthy than ingroup surveillance, which in turn predicted less positive outcomes in the form of greater privacy concern and less group-based recognition. This thesis challenges the prevailing techno-pessimistic view that algorithms are inherently negative and contributes to research that seeks a greater understanding of society's relationship with algorithms and artificial intelligence. | en_GB |