
dc.contributor.author: Scheinert Idodo, L
dc.date.accessioned: 2024-10-07T11:58:18Z
dc.date.issued: 2024-10-07
dc.date.updated: 2024-10-06T10:08:48Z
dc.description.abstract: Evaluating judicial training at the system level has been a long-standing challenge. To tackle it, this thesis devises a consolidated framework for evaluation at all levels: the approach provides accountability and feedback for training improvement while addressing potential concerns about judicial independence, which have often acted as a barrier or “bullet-proof vest” against evaluation. The thesis starts by offering a clear definition of judicial training, conceptualising it as non-formal learning; this acknowledges the absence of fixed syllabi or external accreditation and implies independence-maintaining methods for system-level evaluation. From there, the study develops a context-specific, purpose-driven, mixed-methods framework for evaluating training effectiveness at all levels. Finally, it implements the framework via a case study of the United Kingdom’s understudied First-tier Tribunal Immigration and Asylum Chamber (FtTIAC), an administrative tribunal deciding complex immigration and refugee status cases in often controversial political contexts. The approach proposes a novel, hypothesis-led application of methods from the Artificial Intelligence and Law / judicial analytics fields to support system-level evaluation based on judicial decision analysis, and benefits from rare access to judges, Judicial College staff, and training courses. Findings demonstrate 1) a mismatch between the levels at which current training aims are formulated and evaluated, 2) mixed outcomes (judges learn and they do not; they change their practice and they do not), and 3) persistent overturn rates and error types among the small overall percentage of appealed FtTIAC decisions. The study makes three key empirically based recommendations: training could 1) better leverage social interactions to support learning, 2) differentiate settings, methods, and contents for more targeted provision, and 3) draw on the full range of activities and materials to effect learning and practice change. While reflecting critically on the ability to evaluate training at the system level, the proposed approach, which should be tested in other contexts, turns the “bullet-proof vest” into a toolkit – after all, it is effective training that “bullet-proofs” judges for their independent adjudication task. [en_GB]
dc.description.sponsorship: Economic and Social Research Council (ESRC) [en_GB]
dc.identifier.uri: http://hdl.handle.net/10871/137626
dc.language.iso: en [en_GB]
dc.publisher: University of Exeter [en_GB]
dc.rights.embargoreason: This thesis is embargoed until 07/Apr/2026 as the author plans to publish papers using material that is substantially drawn from the thesis. [en_GB]
dc.subject: judicial training evaluation [en_GB]
dc.subject: machine learning [en_GB]
dc.subject: natural language processing [en_GB]
dc.subject: judicial decision classification [en_GB]
dc.subject: mixed methods research [en_GB]
dc.subject: Kirkpatrick evaluation model [en_GB]
dc.subject: First-tier Tribunal Immigration and Asylum Chamber [en_GB]
dc.subject: impact evaluation [en_GB]
dc.subject: judicial analytics [en_GB]
dc.title: The ‘bullet-proof vest’: remits and limits of judicial training and its evaluation. An exploration of the United Kingdom’s First-tier Tribunal Immigration and Asylum Chamber [en_GB]
dc.type: Thesis or dissertation [en_GB]
dc.date.available: 2024-10-07T11:58:18Z
dc.contributor.advisor: Gill, Nick
dc.contributor.advisor: Tonkin, Emma
dc.contributor.advisor: Beduschi, Ana
dc.publisher.department: Geography
dc.rights.uri: http://www.rioxx.net/licenses/all-rights-reserved [en_GB]
dc.type.degreetitle: Doctor of Philosophy in Advanced Quantitative Methods in Social Sciences
dc.type.qualificationlevel: Doctoral
dc.type.qualificationname: Doctoral Thesis
rioxxterms.version: NA [en_GB]
rioxxterms.licenseref.startdate: 2024-10-07
rioxxterms.type: Thesis [en_GB]

