TY - JOUR

T1 - Pattern Retrieval and Learning in Nets of Asynchronous Binary Threshold Elements

AU - Kam, Moshe

AU - Cheng, Roger

AU - Guez, Allon

N1 - Funding Information:
This work was supported by the National Science Foundation under Grant IRI-88101186 and under Grant DMCE-8505235. This paper was recommended by Associate Editor M. Ilic.

PY - 1989/3

Y1 - 1989/3

N2 - We study the state space of a popular network of asynchronous multi-connected linear threshold elements. The properties of the state space are analyzed during a learning process: the network learns a set of patterns which appear in its environment in a random sequence. The patterns influence the network's weights and thresholds through an adaptive algorithm, which is based on the Hebbian hypothesis. The algorithm tries to install the patterns as fixed points in the network's state space, and to guarantee that a large region of attraction surrounds each fixed point. We obtain the stabilization probabilities of each pattern in the learned set, as well as the stabilization rate, as a function of the training time. In addition, we obtain a lower bound on the probability of convergence to any stored pattern, from an initial state at a given Hamming distance from it. A specific case of our training algorithm is the widely used, nonadaptive sum-of-outer-products parameter assignment. Properties of networks with this assignment can, therefore, be evaluated and compared to properties that are obtained under the adaptive training, which is more suited to the pattern environment. Our derivation allows the evaluation of the quality of information storage for a given set of patterns, and the comparison of different information-coding schemes for items that need to be stored and retrieved from the network. Also evaluated are the steady-state values of the network's parameters, following a long training with a stationary set of patterns. Finally, we study the differences between networks that were trained by “hard” and “soft” limiter learning curves.

AB - We study the state space of a popular network of asynchronous multi-connected linear threshold elements. The properties of the state space are analyzed during a learning process: the network learns a set of patterns which appear in its environment in a random sequence. The patterns influence the network's weights and thresholds through an adaptive algorithm, which is based on the Hebbian hypothesis. The algorithm tries to install the patterns as fixed points in the network's state space, and to guarantee that a large region of attraction surrounds each fixed point. We obtain the stabilization probabilities of each pattern in the learned set, as well as the stabilization rate, as a function of the training time. In addition, we obtain a lower bound on the probability of convergence to any stored pattern, from an initial state at a given Hamming distance from it. A specific case of our training algorithm is the widely used, nonadaptive sum-of-outer-products parameter assignment. Properties of networks with this assignment can, therefore, be evaluated and compared to properties that are obtained under the adaptive training, which is more suited to the pattern environment. Our derivation allows the evaluation of the quality of information storage for a given set of patterns, and the comparison of different information-coding schemes for items that need to be stored and retrieved from the network. Also evaluated are the steady-state values of the network's parameters, following a long training with a stationary set of patterns. Finally, we study the differences between networks that were trained by “hard” and “soft” limiter learning curves.

UR - http://www.scopus.com/inward/record.url?scp=0024628097&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0024628097&partnerID=8YFLogxK

U2 - 10.1109/31.17581

DO - 10.1109/31.17581

M3 - Article

AN - SCOPUS:0024628097

SN - 0098-4094

VL - 36

SP - 353

EP - 364

JO - IEEE Transactions on Circuits and Systems

JF - IEEE Transactions on Circuits and Systems

IS - 3

ER -