
dc.contributor.author: Pierce, RL
dc.contributor.author: Van Biesen, W
dc.contributor.author: Van Cauwenberge, D
dc.contributor.author: Decruyenaere, J
dc.contributor.author: Sterckx, S
dc.date.accessioned: 2022-11-01T14:20:12Z
dc.date.issued: 2022-09-19
dc.date.updated: 2022-11-01T13:18:10Z
dc.description.abstract: The combination of "Big Data" and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal level. One of the features that has given rise to particular concern is the issue of explainability, since, if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This "opacity" problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary in order to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself. We argue that, in day-to-day clinical practice, accuracy is sufficient as an "epistemic warrant" for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as being to deliver the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today's patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients. (en_GB)
dc.description.sponsorship: Research Foundation Flanders (FWO) (en_GB)
dc.format.extent: 903600
dc.format.medium: Electronic-eCollection
dc.identifier.citation: Vol. 13, article 903600 (en_GB)
dc.identifier.doi: https://doi.org/10.3389/fgene.2022.903600
dc.identifier.grantnumber: 3G068619 (en_GB)
dc.identifier.uri: http://hdl.handle.net/10871/131542
dc.language.iso: en (en_GB)
dc.publisher: Frontiers Media (en_GB)
dc.relation.url: https://www.ncbi.nlm.nih.gov/pubmed/36199569 (en_GB)
dc.rights: © 2022 Pierce, Van Biesen, Van Cauwenberge, Decruyenaere and Sterckx. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. (en_GB)
dc.subject: artificial intelligence in medicine (en_GB)
dc.subject: causality (en_GB)
dc.subject: clinical decision support (en_GB)
dc.subject: explainability (en_GB)
dc.subject: semantic transparency (en_GB)
dc.subject: transparency (en_GB)
dc.title: Explainability in medicine in an era of AI-based clinical decision support systems (en_GB)
dc.type: Article (en_GB)
dc.date.available: 2022-11-01T14:20:12Z
dc.identifier.issn: 1664-8021
exeter.article-number: ARTN 903600
exeter.place-of-publication: Switzerland
dc.description: This is the final version. Available on open access from Frontiers Media via the DOI in this record. (en_GB)
dc.description: Data availability statement: The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. (en_GB)
dc.identifier.eissn: 1664-8021
dc.identifier.journal: Frontiers in Genetics (en_GB)
dc.relation.ispartof: Front Genet, 13
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_GB)
dcterms.dateAccepted: 2022-08-19
dc.rights.license: CC BY
rioxxterms.version: VoR (en_GB)
rioxxterms.licenseref.startdate: 2022-09-19
rioxxterms.type: Journal Article/Review (en_GB)
refterms.dateFCD: 2022-11-01T14:17:14Z
refterms.versionFCD: VoR
refterms.dateFOA: 2022-11-01T14:20:17Z
refterms.panel: C (en_GB)
refterms.dateFirstOnline: 2022-09-19

