Show simple item record

dc.contributor.author: Hutt, Hugo
dc.contributor.author: Everson, Richard M.
dc.contributor.author: Grant, Murray
dc.contributor.author: Love, John
dc.contributor.author: Littlejohn, George
dc.date.accessioned: 2014-09-19T12:55:14Z
dc.date.issued: 2014-05-18
dc.description.abstract: The use of citizen science to obtain annotations from multiple annotators has been shown to be an effective method for annotating datasets in which computational methods alone are not feasible. The way in which the annotations are obtained is an important consideration which affects the quality of the resulting consensus annotation. In this paper, we examine three separate approaches to obtaining consensus scores for instances rather than merely binary classifications. To obtain a consensus score, annotators were asked to make annotations in one of three paradigms: classification, scoring and ranking. A web-based citizen science experiment is described which implements the three approaches as crowdsourced annotation tasks. The tasks are evaluated in relation to the accuracy and agreement among the participants using both simulated and real-world data from the experiment. The results show a clear difference in performance between the three tasks, with the ranking task obtaining the highest accuracy and agreement among the participants. We show how a simple evolutionary optimiser may be used to improve the performance by reweighting the importance of annotators. [en_GB]
dc.identifier.citation: Vol. 19, pp. 1541–1552 [en_GB]
dc.identifier.doi: 10.1007/s00500-014-1303-z
dc.identifier.uri: http://hdl.handle.net/10871/15617
dc.language.iso: en [en_GB]
dc.publisher: Springer Verlag [en_GB]
dc.relation.url: http://hdl.handle.net/10871/34988
dc.rights.embargoreason: Publisher policy [en_GB]
dc.subject: Web-based citizen science [en_GB]
dc.subject: Classification [en_GB]
dc.subject: Consensus score [en_GB]
dc.subject: Crowdsourced annotation tasks [en_GB]
dc.subject: Evolutionary optimiser [en_GB]
dc.subject: Image clump [en_GB]
dc.subject: Ranking [en_GB]
dc.subject: Scoring [en_GB]
dc.subject: Internet [en_GB]
dc.subject: Evolutionary computation [en_GB]
dc.subject: Image classification [en_GB]
dc.subject: Pattern clustering [en_GB]
dc.subject: Microscopy [en_GB]
dc.subject: Correlation [en_GB]
dc.title: How clumpy is my image? Scoring in crowdsourced annotation tasks [en_GB]
dc.type: Article [en_GB]
dc.identifier.issn: 1432-7643
dc.description: The final publication is available at Springer via http://dx.doi.org/10.1007/s00500-014-1303-z [en_GB]
dc.description: There is another record in ORE for this publication: http://hdl.handle.net/10871/34988
dc.identifier.eissn: 1433-7479
dc.identifier.journal: Soft Computing [en_GB]
refterms.dateFOA: 2015-04-30T23:00:00Z


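The abstract above mentions reweighting the importance of annotators with a simple evolutionary optimiser so that the consensus score improves. The paper itself is not included in this record, so the following Python sketch is purely illustrative and not the authors' implementation: it assumes a weighted-mean consensus over per-annotator scores and a basic (1+λ) evolution strategy that adjusts annotator weights to maximise rank correlation with a small gold-standard subset. The function names (e.g. evolve_weights) and the choice of Spearman correlation as the fitness measure are assumptions, not details taken from the record.

```python
# Hypothetical sketch: weighted consensus scoring with annotator reweighting
# via a simple (1+lambda) evolution strategy. Not the authors' code.
import numpy as np

rng = np.random.default_rng(0)


def consensus(scores, weights):
    """Weighted mean score per instance; scores has shape (annotators, instances)."""
    w = np.clip(weights, 0.0, None)
    w = w / w.sum()
    return w @ scores


def spearman(a, b):
    """Spearman rank correlation (ties ignored for simplicity), numpy only."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]


def evolve_weights(scores, gold, generations=200, offspring=10, sigma=0.1):
    """(1+lambda) evolution strategy over per-annotator weights."""
    n_annotators = scores.shape[0]
    best_w = np.ones(n_annotators)
    best_fit = spearman(consensus(scores, best_w), gold)
    for _ in range(generations):
        for _ in range(offspring):
            # Mutate the current best weights with Gaussian noise, keep positive.
            cand = np.clip(best_w + rng.normal(0.0, sigma, n_annotators), 1e-6, None)
            fit = spearman(consensus(scores, cand), gold)
            if fit > best_fit:
                best_fit, best_w = fit, cand
    return best_w, best_fit


if __name__ == "__main__":
    # Synthetic example: 8 annotators score 50 images for "clumpiness";
    # some annotators are noisier than others, so they should get lower weight.
    true_scores = rng.uniform(0, 1, 50)
    noise_levels = rng.uniform(0.05, 0.6, 8)
    scores = np.array([true_scores + rng.normal(0, s, 50) for s in noise_levels])
    weights, fitness = evolve_weights(scores, true_scores)
    print("learned annotator weights:", np.round(weights / weights.sum(), 3))
    print("rank correlation of weighted consensus:", round(fitness, 3))
```

Run as a script, the sketch prints the learned (normalised) annotator weights and the rank correlation of the weighted consensus with the synthetic ground truth; noisier annotators should receive smaller weights than in the unweighted starting point.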