
dc.contributor.author: Campbell, JL
dc.contributor.author: Abel, G
dc.date.accessioned: 2016-06-24T12:34:27Z
dc.date.issued: 2016-06-02
dc.description.abstract [en_GB]: OBJECTIVES: To inform the rational deployment of assessor resource in the evaluation of applications to the UK Advisory Committee on Clinical Excellence Awards (ACCEA). SETTING: ACCEA are responsible for a scheme to financially reward senior doctors in England and Wales who are assessed to be working over and above the standard expected of their role. PARTICIPANTS: Anonymised applications of consultants and senior academic GPs for awards were considered by members of 14 regional subcommittees and 2 national assessing committees during the 2014-2015 round of applications. DESIGN: Secondary analysis of a complete anonymised national data set. PRIMARY AND SECONDARY OUTCOME MEASURES: We analysed scores for each of 1916 applications for a clinical excellence award across 4 levels of award. Scores were provided by members of 16 subcommittees. We assessed the reliability of assessments and described the variance in assessors' scores. RESULTS: Members of regional subcommittees assessed 1529 new applications and 387 renewal applications. Average scores increased with the level of application being made. On average, applications were assessed by 9.5 assessors. The largest contributions to the variance in individual assessors' assessments of applications were attributable to assessors themselves or to residual variance. The applicant accounted for around a quarter of the variance in scores for new bronze applications, with this proportion decreasing for higher award levels. Reliability in excess of 0.7 can be attained where 4 assessors score bronze applications, with twice as many assessors being required for higher levels of application. CONCLUSIONS: Assessment processes pertaining to the competitive allocation of public funds need to be credible and efficient. The present arrangements for assessing and scoring applications are defensible, depending on the level of reliability judged to be required in the assessment process. Some relatively minor reconfiguration of approaches to scoring might usefully be considered in future rounds of assessment.
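As an illustrative sketch (not part of the record's metadata), the abstract's link between the number of assessors and reliability follows the standard Spearman-Brown projection for the mean score of n assessors; the single-assessor reliability rho_1 used below is an assumed value, not a figure reproduced from the paper:

    % Spearman-Brown projection: R_n is the reliability of the mean of
    % n assessor scores, \rho_1 the single-assessor reliability.
    R_n = \frac{n \, \rho_1}{1 + (n - 1)\,\rho_1}
    % With an assumed \rho_1 = 0.37 (hypothetical; the paper's exact
    % variance components are not given in this record):
    % R_4 = \frac{4 \times 0.37}{1 + 3 \times 0.37} \approx 0.70,
    % consistent with 4 assessors sufficing for bronze applications;
    % a lower \rho_1 at higher award levels would require
    % correspondingly more assessors to reach the same threshold.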
dc.description.sponsorship [en_GB]: The study was supported by a research award from the UK National Institute for Health Research Policy Research Programme (Award: PR-ST-0915-10014). The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
dc.identifier.citation [en_GB]: Vol. 6, e011958
dc.identifier.doi: 10.1136/bmjopen-2016-011958
dc.identifier.other: bmjopen-2016-011958
dc.identifier.uri: http://hdl.handle.net/10871/22256
dc.language.iso [en_GB]: en
dc.publisher [en_GB]: BMJ Publishing Group
dc.relation.url [en_GB]: http://www.ncbi.nlm.nih.gov/pubmed/27256095
dc.relation.url [en_GB]: http://bmjopen.bmj.com/content/6/6/e011958
dc.rights [en_GB]: This is the final version of the article. Available from BMJ Publishing Group via the DOI in this record.
dc.subject [en_GB]: clinical excellence
dc.subject [en_GB]: quality
dc.subject [en_GB]: reliability
dc.title [en_GB]: Clinical excellence: evidence on the assessment of senior doctors' applications to the UK Advisory Committee on Clinical Excellence Awards. Analysis of complete national data set.
dc.type [en_GB]: Article
dc.date.available: 2016-06-24T12:34:27Z
dc.identifier.issn: 2044-6055
exeter.place-of-publication [en_GB]: England
dc.description [en_GB]: Published online
dc.description [en_GB]: Journal Article
dc.identifier.journal [en_GB]: BMJ Open

