Optimising diversity in classifier ensembles of classification trees
Ivaşcu, C; Everson, RM; Fieldsend, JE
Date: 1 April 2021
Journal: Lecture Notes in Computer Science
Publisher: Springer Verlag
Abstract
Ensembles of predictors have generally been found to perform better than single predictors. Although diversity is widely thought to be an important factor in building successful ensembles, there have been contradictory results in the literature regarding the influence of diversity on the generalisation error. Fundamental to this may be the way diversity itself is defined. We present two new diversity measures, based on the idea of ambiguity, obtained from the bias-variance decomposition using the cross-entropy error or the hinge loss. If random sampling is used to select patterns on which ensemble members are trained, we find that generalisation error is negatively correlated with diversity at high sampling rates; conversely, generalisation error is positively correlated with diversity when the sampling rate is low and the diversity is high. We use evolutionary optimisers to select the subsets of patterns for predictor training by maximising these diversity measures on training data. Evaluation of their generalisation performance on a range of classification datasets from the literature shows that the ensembles obtained by maximising the cross-entropy diversity measure generalise well, enhancing the performance of small ensembles. Contrary to expectation, we find that there is no correlation between whether a pattern is selected and its proximity to the decision boundary.
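The abstract does not give the measures explicitly, so the following is a minimal illustrative sketch only, not the authors' definition. It assumes an ambiguity-style diversity in the spirit of the cross-entropy/KL decomposition, where member distributions are combined by a normalised geometric mean and diversity is the mean KL divergence from the ensemble prediction to each member. The function name, array shapes, and combiner choice are assumptions introduced here for illustration.

```python
import numpy as np

def cross_entropy_ambiguity(member_probs, eps=1e-12):
    """Hypothetical ambiguity-style diversity measure (illustrative sketch).

    member_probs: array of shape (M, N, C) holding predicted class
    probabilities from M ensemble members on N patterns with C classes.
    Combines the members by a normalised geometric mean and returns the
    KL divergence from the ensemble prediction to each member,
    averaged over members and patterns.
    """
    member_probs = np.clip(member_probs, eps, 1.0)
    # Normalised geometric mean of the member distributions.
    log_mean = np.mean(np.log(member_probs), axis=0)        # shape (N, C)
    ens = np.exp(log_mean)
    ens /= ens.sum(axis=1, keepdims=True)
    # KL(ensemble || member), summed over classes, then averaged.
    kl = np.sum(ens[None] * (np.log(ens)[None] - np.log(member_probs)), axis=2)
    return kl.mean()

if __name__ == "__main__":
    # Toy usage: 5 members, 100 patterns, 3 classes, random predictions.
    rng = np.random.default_rng(0)
    p = rng.dirichlet(np.ones(3), size=(5, 100))            # shape (5, 100, 3)
    print(cross_entropy_ambiguity(p))
```

Such a quantity is zero when all members agree exactly and grows as their predicted distributions diverge, which is the sense in which an evolutionary optimiser could maximise it over training-pattern subsets; the actual measures and optimiser are specified in the full paper.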
Computer Science
Faculty of Environment, Science and Economy