
dc.contributor.author: Hutt, H
dc.contributor.author: Everson, R
dc.contributor.author: Grant, M
dc.contributor.author: Love, J
dc.contributor.author: Littlejohn, G
dc.date.accessioned: 2016-03-30T13:43:03Z
dc.date.issued: 2013-10-31
dc.description.abstract: The use of citizen science to obtain annotations from multiple annotators has been shown to be an effective method for annotating datasets for which computational methods alone are not feasible. The way in which the annotations are obtained is an important consideration that affects the quality of the resulting consensus estimates. In this paper, we examine three separate approaches to obtaining scores for instances rather than merely classifications. To obtain a consensus score, annotators were asked to make annotations in one of three paradigms: classification, scoring and ranking. A web-based citizen science experiment is described which implements the three approaches as crowdsourced annotation tasks. The tasks are evaluated in terms of the accuracy and agreement among the participants, using both simulated and real-world data from the experiment. The results show a clear difference in performance between the three tasks, with the ranking task obtaining the highest accuracy and agreement among the participants. We show how a simple evolutionary optimiser may be used to improve performance by reweighting the importance of annotators. [en_GB]
dc.identifier.citation: 2013 13th UK Workshop on Computational Intelligence (UKCI), Guildford, UK, 9-11 September 2013, pp. 136-143 [en_GB]
dc.identifier.doi: 10.1109/UKCI.2013.6651298
dc.identifier.uri: http://hdl.handle.net/10871/20875
dc.language.iso: en [en_GB]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_GB]
dc.rights: Copyright © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. [en_GB]
dc.subject: Internet [en_GB]
dc.subject: evolutionary computation [en_GB]
dc.subject: groupware [en_GB]
dc.subject: image classification [en_GB]
dc.subject: pattern clustering [en_GB]
dc.subject: Web-based citizen science [en_GB]
dc.subject: classification [en_GB]
dc.subject: consensus score [en_GB]
dc.subject: crowdsourced annotation tasks [en_GB]
dc.subject: evolutionary optimiser [en_GB]
dc.subject: image clump [en_GB]
dc.subject: ranking [en_GB]
dc.subject: scoring [en_GB]
dc.title: How clumpy is my image? Evaluating crowdsourced annotation tasks [en_GB]
dc.type: Conference paper [en_GB]
dc.date.available: 2016-03-30T13:43:03Z
dc.identifier.isbn: 9781479915668
dc.description: This is the author accepted manuscript. The final version is available from the publisher via the DOI in this record. [en_GB]
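
Note on the abstract above: its final sentence refers to improving performance by reweighting annotators with a simple evolutionary optimiser. The Python sketch below illustrates that general idea on synthetic data using a basic (1+1) evolution strategy; the data generation, the gold-standard fitness function and all variable names are assumptions made purely for illustration and do not reproduce the authors' implementation.

# Minimal sketch: reweighting annotators with a (1+1) evolution strategy.
# Synthetic data and the gold-standard fitness are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic setting: each annotator scores every image, with annotator-specific noise.
n_annotators, n_items = 6, 50
true_scores = rng.uniform(0.0, 10.0, n_items)          # hypothetical "true clumpiness" per image
noise = rng.uniform(0.2, 3.0, n_annotators)            # some annotators are much noisier than others
scores = true_scores + rng.normal(size=(n_annotators, n_items)) * noise[:, None]
gold = np.arange(10)                                   # small gold-standard subset (an assumption)

def consensus(weights):
    """Weighted-average consensus score for each item."""
    w = np.clip(weights, 1e-9, None)
    return (w / w.sum()) @ scores

def fitness(weights):
    """Negative squared error of the weighted consensus on the gold-standard items."""
    return -np.mean((consensus(weights)[gold] - true_scores[gold]) ** 2)

# (1+1) evolution strategy: perturb the weights and keep the child if it is no worse.
weights = np.ones(n_annotators)
best = fitness(weights)
for _ in range(3000):
    child = weights + rng.normal(scale=0.05, size=n_annotators)
    f = fitness(child)
    if f >= best:
        weights, best = child, f

w = np.clip(weights, 1e-9, None)
print("learned annotator weights:", np.round(w / w.sum(), 3))
print("annotator noise levels:   ", np.round(noise, 2))

On this toy data the optimiser tends to assign larger weights to the less noisy annotators; the synthetic gold standard stands in for ground truth only to keep the sketch self-contained.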

