Show simple item record

dc.contributor.author    Hutt, H
dc.contributor.author    Everson, R
dc.contributor.author    Grant, M
dc.contributor.author    Love, J
dc.contributor.author    Littlejohn, G
dc.date.accessioned    2018-12-04T15:05:40Z
dc.date.issued    2014-05-18
dc.description.abstract    The use of citizen science to obtain annotations from multiple annotators has been shown to be an effective method for annotating datasets in which computational methods alone are not feasible. The way in which the annotations are obtained is an important consideration which affects the quality of the resulting consensus annotation. In this paper, we examine three separate approaches to obtaining consensus scores for instances rather than merely binary classifications. To obtain a consensus score, annotators were asked to make annotations in one of three paradigms: classification, scoring and ranking. A web-based citizen science experiment is described which implements the three approaches as crowdsourced annotation tasks. The tasks are evaluated in relation to the accuracy and agreement among the participants using both simulated and real-world data from the experiment. The results show a clear difference in performance between the three tasks, with the ranking task obtaining the highest accuracy and agreement among the participants. We show how a simple evolutionary optimiser may be used to improve the performance by reweighting the importance of annotators.    en_GB
dc.identifier.citation    Vol. 19 (6), pp. 1541-1552    en_GB
dc.identifier.doi    10.1007/s00500-014-1303-z
dc.identifier.uri    http://hdl.handle.net/10871/34988
dc.language.iso    en    en_GB
dc.publisher    Springer    en_GB
dc.relation.url    http://hdl.handle.net/10871/15617    en_GB
dc.rights    © 2014, Springer-Verlag Berlin Heidelberg.    en_GB
dc.subject    Web-based citizen science    en_GB
dc.subject    Classification    en_GB
dc.subject    Consensus score    en_GB
dc.subject    Crowdsourced annotation tasks    en_GB
dc.subject    Evolutionary optimiser    en_GB
dc.subject    Image clump    en_GB
dc.subject    Ranking    en_GB
dc.subject    Scoring    en_GB
dc.subject    Internet    en_GB
dc.subject    Evolutionary computation    en_GB
dc.subject    Image classification    en_GB
dc.subject    Pattern clustering    en_GB
dc.subject    Microscopy    en_GB
dc.subject    Correlation    en_GB
dc.title    How clumpy is my image? Scoring in crowdsourced annotation tasks    en_GB
dc.type    Article    en_GB
dc.date.available    2018-12-04T15:05:40Z
dc.identifier.issn    1432-7643
dc.description    This is the author accepted manuscript. The final version is available from Springer via the DOI in this record.    en_GB
dc.description    There is another record in ORE for this publication: http://hdl.handle.net/10871/15617
dc.identifier.journal    Soft Computing    en_GB
dc.rights.uri    http://www.rioxx.net/licenses/all-rights-reserved    en_GB
dcterms.dateAccepted    2014-04-01
rioxxterms.version    AM    en_GB
rioxxterms.licenseref.startdate    2014-05-18
rioxxterms.type    Journal Article/Review    en_GB
refterms.dateFCD    2018-12-04T15:02:56Z
refterms.versionFCD    AM
refterms.dateFOA    2018-12-04T15:05:42Z
refterms.panel    B    en_GB
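
The abstract above describes combining annotations from multiple participants into a weighted consensus score and tuning the annotator weights with a simple evolutionary optimiser. The Python sketch below is an illustrative reconstruction only, not the authors' implementation: the weighted rank average, the mean Spearman agreement used as a fitness proxy, and the (1+1)-style mutation loop are all assumptions made for the example.

# Hypothetical sketch (not the paper's code): build a consensus from per-annotator
# rankings via a weighted rank average, then tune annotator weights with a simple
# (1+1)-style evolutionary optimiser. Fitness proxy and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def consensus_scores(rankings, weights):
    """Weighted average of per-annotator ranks.

    rankings: (n_annotators, n_items) array of ranks per annotator.
    weights:  (n_annotators,) non-negative importance weights.
    """
    w = np.clip(weights, 0.0, None)
    w = w / w.sum()
    return w @ rankings                      # (n_items,) consensus rank score

def mean_spearman(rankings, consensus):
    """Mean Spearman correlation between each annotator and the consensus,
    used here as a stand-in fitness for inter-annotator agreement."""
    order = consensus.argsort().argsort()    # consensus scores converted back to ranks
    corrs = [np.corrcoef(r, order)[0, 1] for r in rankings]
    return float(np.mean(corrs))

def evolve_weights(rankings, generations=200, sigma=0.1):
    """(1+1)-style optimiser: perturb the weights with Gaussian noise and keep
    the mutant whenever agreement with the consensus improves."""
    n = rankings.shape[0]
    weights = np.ones(n)
    best = mean_spearman(rankings, consensus_scores(rankings, weights))
    for _ in range(generations):
        cand = np.clip(weights + rng.normal(0.0, sigma, size=n), 1e-6, None)
        fit = mean_spearman(rankings, consensus_scores(rankings, cand))
        if fit > best:
            weights, best = cand, fit
    return weights, best

if __name__ == "__main__":
    # Toy data: 5 annotators rank 20 items; one annotator is pure noise.
    true_order = np.arange(20)
    rankings = np.array(
        [np.argsort(true_order + rng.normal(0, 2, 20)).argsort() for _ in range(4)]
        + [rng.permutation(20)]
    )
    w, fitness = evolve_weights(rankings)
    print("learned weights:", np.round(w / w.sum(), 3))
    print("mean Spearman with consensus:", round(fitness, 3))

In the toy run, annotators that agree with the emerging consensus should end up with larger weights than the purely random one, which is the qualitative effect of reweighting that the abstract refers to.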

