Show simple item record

dc.contributor.author: Partridge, Derek
dc.contributor.author: Fieldsend, Jonathan E.
dc.contributor.author: Krzanowski, Wojtek J.
dc.contributor.author: Bailey, Trevor C.
dc.contributor.author: Everson, Richard M.
dc.contributor.author: Schetinin, Vitaly
dc.date.accessioned: 2013-07-09T09:55:50Z
dc.date.issued: 2006
dc.description.abstract: Bayes' rule is introduced as a coherent strategy for multiple recomputations of classifier system output, and thus as a basis for assessing the uncertainty associated with a particular system's results, i.e. a basis for confidence in the accuracy of each computed result. We use a Markov chain Monte Carlo method for efficient selection of recomputations to approximate the computationally intractable elements of the Bayesian approach. The estimate of the confidence to be placed in any classification result provides a sound basis for rejecting some classification results. We present uncertainty envelopes as one way to derive these confidence estimates from the population of recomputed results. We show that a coarse SURE or UNSURE confidence rating, based on a threshold of agreed classifications, works well, not only in pinpointing reliable results but also in indicating input data problems, such as corrupted or incomplete data, or the application of an inadequate classifier model.
dc.identifier.citation: Internet, Processing, Systems, and Interdisciplinary (Research) Conference (IPSI-2006 FRANCE), Carcassonne, France, April 27-30, 2006
dc.identifier.uri: http://hdl.handle.net/10871/11586
dc.language.iso: en
dc.title: Computing with confidence: a Bayesian approach
dc.type: Conference paper
dc.date.available: 2013-07-09T09:55:50Z
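The abstract's coarse SURE/UNSURE rating can be illustrated with a small sketch. The paper's MCMC sampling and uncertainty-envelope details are not given in this record, so the sketch simply assumes a population of recomputed labels for one input and an agreement threshold (the `threshold=0.9` value is an illustrative assumption, not the paper's figure):

```python
from collections import Counter

def confidence_rating(predictions, threshold=0.9):
    """Rate one input SURE/UNSURE from many recomputed classifier outputs.

    predictions: labels from repeated recomputations of the classifier
        (e.g. samples of its parameters drawn by MCMC) for a single input.
    threshold: minimum fraction of agreeing predictions (assumed value).
    Returns (majority label, agreement fraction, rating).
    """
    label, count = Counter(predictions).most_common(1)[0]
    agreement = count / len(predictions)
    rating = "SURE" if agreement >= threshold else "UNSURE"
    return label, agreement, rating

# A well-agreed input versus a contested one.
print(confidence_rating(["A"] * 95 + ["B"] * 5))   # high agreement -> SURE
print(confidence_rating(["A"] * 55 + ["B"] * 45))  # low agreement -> UNSURE
```

Inputs rated UNSURE would be candidates for rejection, or flags for corrupted/incomplete data or an inadequate classifier model, as the abstract describes.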

