TY - JOUR
T1 - Dendritic Neuron Model with Effective Learning Algorithms for Classification, Approximation, and Prediction
AU - Gao, Shangce
AU - Zhou, Mengchu
AU - Wang, Yirui
AU - Cheng, Jiujun
AU - Yachi, Hanaki
AU - Wang, Jiahai
N1 - Funding Information:
Manuscript received November 8, 2017; revised March 3, 2018; accepted June 7, 2018. Date of publication July 10, 2018; date of current version January 21, 2019. This work was supported in part by the National Natural Science Foundation of China under Grant 61673403, Grant U1611262, and Grant 61472284, in part by the Opening Project of Guangdong High-Performance Computing Society under Grant 2017060109, and in part by JSPS KAKENHI under Grant JP17K12751. (Corresponding authors: MengChu Zhou; Jiujun Cheng.) S. Gao, Y. Wang, and H. Yachi are with the Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan (e-mail: gaosc@eng.u-toyama.ac.jp; wyr607@foxmail.com; hanaki.fatidll@gmail.com).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/2
Y1 - 2019/2
N2 - An artificial neural network (ANN) that mimics the information processing mechanisms and procedures of neurons in human brains has achieved great success in many fields, e.g., classification, prediction, and control. However, traditional ANNs suffer from many problems: they are hard to understand, slow and difficult to train, and difficult to scale up. These problems motivate us to develop a new dendritic neuron model (DNM) that considers the nonlinearity of synapses, not only to better understand a biological neuronal system, but also to provide a more useful method for solving practical problems. To improve its problem-solving performance, six learning algorithms, including biogeography-based optimization, particle swarm optimization, genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning, are for the first time used to train it. The best combination of its user-defined parameters is systematically investigated by using the Taguchi experimental design method. Experiments on 14 different problems involving classification, approximation, and prediction are conducted by using a multilayer perceptron and the proposed DNM. The results suggest that the proposed learning algorithms are effective and promising for training DNM and thus make DNM more powerful in solving classification, approximation, and prediction problems.
AB - An artificial neural network (ANN) that mimics the information processing mechanisms and procedures of neurons in human brains has achieved great success in many fields, e.g., classification, prediction, and control. However, traditional ANNs suffer from many problems: they are hard to understand, slow and difficult to train, and difficult to scale up. These problems motivate us to develop a new dendritic neuron model (DNM) that considers the nonlinearity of synapses, not only to better understand a biological neuronal system, but also to provide a more useful method for solving practical problems. To improve its problem-solving performance, six learning algorithms, including biogeography-based optimization, particle swarm optimization, genetic algorithm, ant colony optimization, evolutionary strategy, and population-based incremental learning, are for the first time used to train it. The best combination of its user-defined parameters is systematically investigated by using the Taguchi experimental design method. Experiments on 14 different problems involving classification, approximation, and prediction are conducted by using a multilayer perceptron and the proposed DNM. The results suggest that the proposed learning algorithms are effective and promising for training DNM and thus make DNM more powerful in solving classification, approximation, and prediction problems.
KW - Approximation
KW - brain
KW - classification
KW - dendritic neuron model (DNM)
KW - global learning algorithms
KW - prediction
UR - http://www.scopus.com/inward/record.url?scp=85059111349&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85059111349&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2018.2846646
DO - 10.1109/TNNLS.2018.2846646
M3 - Article
C2 - 30004892
AN - SCOPUS:85059111349
SN - 2162-237X
VL - 30
SP - 601
EP - 614
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 2
M1 - 8409490
ER -