Abstract
To construct a strong classifier ensemble, base classifiers should be both accurate and diverse. However, there is no uniform standard for defining and measuring diversity. This work proposes a learners’ interpretability diversity (LID) measure to quantify the diversity of interpretable machine learners, and then builds a LID-based classifier ensemble. This ensemble concept is novel because: 1) interpretability is used as an important basis for diversity measurement and 2) the difference between two interpretable base learners can be measured before training. To verify the proposed method’s effectiveness, we choose a decision-tree-initialized dendritic neuron model (DDNM) as the base learner for ensemble design and apply it to seven benchmark datasets. The results show that the DDNM ensemble combined with LID achieves superior accuracy and computational efficiency compared to some popular classifier ensembles. A random-forest-initialized dendritic neuron model (RDNM) combined with LID is an outstanding representative of the DDNM ensemble.
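The core idea of measuring diversity between interpretable learners before ensemble training can be illustrated with a small sketch. The paper's actual LID formula for dendritic neuron models is not reproduced here; as a hypothetical stand-in, the example below compares two tree-structured learners through the normalized histograms of the features they split on, using one minus cosine similarity as the diversity score (0 means structurally identical, 1 means they rely on disjoint features).

```python
from collections import Counter
from math import sqrt

def usage_profile(split_features, n_features):
    """Normalized histogram of which features a tree splits on."""
    counts = Counter(split_features)
    total = sum(counts.values())
    return [counts.get(f, 0) / total for f in range(n_features)]

def interpretability_diversity(splits_a, splits_b, n_features):
    """1 - cosine similarity of two usage profiles (illustrative, not the paper's LID)."""
    u = usage_profile(splits_a, n_features)
    v = usage_profile(splits_b, n_features)
    dot = sum(x * y for x, y in zip(u, v))
    norm = sqrt(sum(x * x for x in u)) * sqrt(sum(x * x for x in v))
    return 1.0 - dot / norm

# Two hypothetical trees: one splits mostly on features 0/1, the other on 2/3.
tree_a = [0, 0, 1]
tree_b = [2, 3, 3]
print(round(interpretability_diversity(tree_a, tree_a, 4), 6))  # 0.0
print(interpretability_diversity(tree_a, tree_b, 4))            # 1.0
```

Because such a score depends only on model structure, not on predictions, it can be evaluated before the (computationally expensive) ensemble training step, which is the practical advantage the abstract highlights.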
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 1-14
Number of pages | 14
Journal | IEEE Transactions on Neural Networks and Learning Systems
State | Accepted/In press - 2023
All Science Journal Classification (ASJC) codes
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence
Keywords
- Classification
- Computational modeling
- Dendrites (neurons)
- Ensemble learning
- Machine learning
- Media
- Neurons
- Training
- dendritic neuron model (DNM)
- ensemble learning
- interpretability diversity
- random forest (RF)