Abstract
The adaptive-resonance-theory (ART) architectures comprise a family of neural networks that efficiently self-organize stable recognition codes in response to arbitrary sequences of input patterns. This paper focuses on the ART 2 network architecture, which is designed to process both binary- and analog-valued input patterns. The problems encountered in implementing the primary ART 2 architecture as originally presented by Gail Carpenter and Stephen Grossberg are discussed. An enhanced ART 2 architecture that receives its input from a functional-link preprocessor is proposed, and experimental results demonstrating its superior performance over the original ART 2 architecture are provided. Finally, a text-free speaker recognition system that employs the enhanced ART 2 architecture as its classifier is described.
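To give a rough sense of the two ideas named in the abstract, the sketch below shows (1) a functional-link expansion of an input vector and (2) an ART-style category search with a vigilance test. This is a minimal illustration under assumed simplifications: the expansion uses pairwise products (the paper's exact expansion functions are not specified here), and the classifier replaces the full ART 2 field dynamics with cosine-similarity matching; names such as `functional_link_expand` and `SimpleART2Classifier` are hypothetical.

```python
import numpy as np

def functional_link_expand(x):
    """Expand an input vector with pairwise product terms.

    One common form of functional-link expansion; chosen here only
    for illustration, not taken from the paper.
    """
    x = np.asarray(x, dtype=float)
    cross = np.outer(x, x)[np.triu_indices(len(x), k=1)]  # x_i * x_j for i < j
    return np.concatenate([x, cross])

class SimpleART2Classifier:
    """Toy ART-like classifier: cosine matching plus a vigilance test.

    Omits the full ART 2 dynamics (F1 field, slow-learning equations);
    it only illustrates the resonance/reset idea.
    """

    def __init__(self, vigilance=0.9, learning_rate=0.5):
        self.vigilance = vigilance
        self.learning_rate = learning_rate
        self.prototypes = []  # one unit-length prototype per category

    def classify(self, pattern):
        p = np.asarray(pattern, dtype=float)
        p = p / (np.linalg.norm(p) + 1e-12)
        scores = [float(np.dot(p, w)) for w in self.prototypes]
        # Search categories from best to worst match.
        for j in sorted(range(len(scores)), key=lambda j: -scores[j]):
            if scores[j] >= self.vigilance:           # resonance: accept and adapt
                w = self.prototypes[j] + self.learning_rate * (p - self.prototypes[j])
                self.prototypes[j] = w / (np.linalg.norm(w) + 1e-12)
                return j
        self.prototypes.append(p.copy())              # no match above vigilance: new category
        return len(self.prototypes) - 1

# Example: preprocess a feature vector, then assign it to a category.
net = SimpleART2Classifier(vigilance=0.95)
category = net.classify(functional_link_expand([0.2, 0.7, 0.1]))
```

Raising the vigilance parameter makes the match test stricter, so more categories are created; this is the trade-off the ART literature uses to control code granularity.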
| Original language | English (US) |
|---|---|
| Pages (from-to) | 233-253 |
| Number of pages | 21 |
| Journal | Information Sciences |
| Volume | 76 |
| Issue number | 3-4 |
| State | Published - 1994 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Theoretical Computer Science
- Computer Science Applications
- Information Systems and Management
- Artificial Intelligence