dc.contributor.author	Montgomery, AA
dc.contributor.author	Graham, A
dc.contributor.author	Evans, PH
dc.contributor.author	Fahey, T
dc.date.accessioned	2019-03-13T16:58:07Z
dc.date.issued	2002-03-26
dc.description.abstract	BACKGROUND: Checklists for peer review aim to guide referees when assessing the quality of papers, but little evidence exists on the extent to which referees agree when evaluating the same paper. The aim of this study was to investigate agreement between two referees on the dimensions of a checklist when evaluating abstracts submitted for a primary care conference.
	METHODS: Anonymised abstracts were scored using a structured assessment comprising seven categories. Between one (poor) and four (excellent) marks were awarded for each category, giving a maximum possible score of 28 marks. Every abstract was assessed independently by two referees, and agreement was measured using intraclass correlation coefficients. Mean total scores of abstracts accepted and rejected for the meeting were compared using an unpaired t test.
	RESULTS: Across the 52 abstracts, agreement between reviewers was greater for the three components relating to study design (adjusted intraclass correlation coefficients 0.40 to 0.45) than for the four components relating to more subjective elements, such as the importance of the study and its likelihood of provoking discussion (0.01 to 0.25). The mean score for accepted abstracts was significantly greater than that for rejected abstracts (17.4 versus 14.6; 95% CI for difference 1.3 to 4.1; p = 0.0003).
	CONCLUSIONS: The findings suggest that including subjective components in a review checklist may result in greater disagreement between reviewers. In terms of overall quality scores, however, abstracts accepted for the meeting were rated significantly higher than those that were rejected.	en_GB
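The analyses described in the abstract can be illustrated with a short sketch. The Python code below (NumPy and SciPy) computes a one-way random-effects ICC(1,1) from the ANOVA decomposition and an unpaired t test on total scores. Note the assumptions: the abstract says only "adjusted intraclass correlation coefficients" without naming a variant, so ICC(1,1) is a stand-in; all data are simulated, with only the group means (17.4 vs 14.6) taken from the abstract, while group sizes and standard deviations are invented for illustration.

    import numpy as np
    from scipy import stats

    def icc_oneway(scores):
        # One-way random-effects ICC(1,1) for an (n_targets, n_raters) array:
        # (MSB - MSW) / (MSB + (k - 1) * MSW), from the one-way ANOVA decomposition.
        n, k = scores.shape
        row_means = scores.mean(axis=1)
        msb = k * np.sum((row_means - scores.mean()) ** 2) / (n - 1)
        msw = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(0)

    # Simulated data: 52 abstracts, each scored 1 (poor) to 4 (excellent)
    # by two independent referees on a single checklist category.
    quality = rng.integers(1, 5, size=52)
    noise = rng.integers(-1, 2, size=(52, 2))
    scores = np.clip(quality[:, None] + noise, 1, 4).astype(float)
    print(f"ICC(1,1) = {icc_oneway(scores):.2f}")

    # Unpaired t test on total scores (maximum 28) of accepted versus rejected
    # abstracts; means follow the abstract, sizes and SDs are assumptions.
    accepted = rng.normal(17.4, 3.0, size=26)
    rejected = rng.normal(14.6, 3.0, size=26)
    result = stats.ttest_ind(accepted, rejected)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

ICC(1,1) treats the raters of each abstract as a random sample; if the same two referees scored every abstract, a two-way model such as ICC(A,1) would be an equally defensible choice.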
dc.identifier.citation	Vol. 2 (8)	en_GB
dc.identifier.doi	https://doi.org/10.1186/1472-6963-2-8
dc.identifier.uri	http://hdl.handle.net/10871/36454
dc.language.iso	en	en_GB
dc.publisher	BioMed Central	en_GB
dc.relation.url	https://www.ncbi.nlm.nih.gov/pubmed/11914164	en_GB
dc.rights	© 2002 Montgomery et al; licensee BioMed Central Ltd. Verbatim copying and redistribution of this article are permitted in any medium for any purpose, provided this notice is preserved along with the article's original URL.	en_GB
dc.subject	Abstracting and Indexing as Topic	en_GB
dc.subject	Congresses as Topic	en_GB
dc.subject	Consensus	en_GB
dc.subject	Data Interpretation, Statistical	en_GB
dc.subject	Humans	en_GB
dc.subject	Judgment	en_GB
dc.subject	Manuscripts, Medical	en_GB
dc.subject	Observer Variation	en_GB
dc.subject	Peer Review, Research	en_GB
dc.subject	Primary Health Care	en_GB
dc.subject	United Kingdom	en_GB
dc.title	Inter-rater agreement in the scoring of abstracts submitted to a primary care research conference	en_GB
dc.type	Article	en_GB
dc.date.available	2019-03-13T16:58:07Z
dc.identifier.issn	1472-6963
exeter.place-of-publication	England	en_GB
dc.description	Published online	en_GB
dc.description	Journal Article	en_GB
dc.description	Research Support, Non-U.S. Gov't	en_GB
dc.description	This is the final version. Available from BioMed Central via the DOI in this record.	en_GB
dc.identifier.journal	BMC Health Services Research	en_GB
dc.rights.uri	http://www.rioxx.net/licenses/all-rights-reserved	en_GB
dcterms.dateAccepted	2002-03-26
rioxxterms.version	VoR	en_GB
rioxxterms.licenseref.startdate	2002-03-26
rioxxterms.type	Journal Article/Review	en_GB
refterms.dateFCD	2019-03-13T16:54:47Z
refterms.versionFCD	VoR
refterms.dateFOA	2019-03-13T16:58:10Z

