Abstract
Design of parallel binary decision fusion systems is often performed under the assumption that the decision integrator (the data fusion center, DFC) possesses perfect knowledge of the local-detector (LD) statistics. In most studies, other statistical parameters are also assumed to be known, namely the a priori probabilities of the hypotheses and the transition probabilities of the DFC-LD channels. Under these circumstances, the DFC's sufficient statistic is a weighted sum of the local decisions. When these statistics are unknown, we propose to tune the weights on-line, guided by correct examples or by past experience. We develop a supervised training scheme that employs correct input-output examples to train the DFC. This scheme is then converted into an unsupervised learning technique by replacing the examples with a self-assessment of the DFC, based on its own past decisions. In both cases the DFC minimizes the squared error between the actual and the desired values of its discriminant function. When supervised, the DFC obtains the desired value from the supervisor. When unsupervised, the DFC estimates the desired value from its last decision. This estimation includes rejection of data deemed unreliable.
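The supervised scheme described above can be sketched as a least-mean-squares (LMS) style update on the fusion weights: the DFC's discriminant is a weighted sum of the local decisions, and each labeled example nudges the weights to reduce the squared error between the discriminant and the desired value. The code below is a minimal illustration under assumed conventions (decisions and labels in {-1, +1}, a bias term, and simulated detector accuracies); the function and parameter names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dfc_supervised(decisions, labels, lr=0.05, epochs=20):
    """LMS-style supervised training of DFC fusion weights.

    decisions: (T, n) array of local-detector decisions in {-1, +1}
    labels:    (T,)   desired global decisions in {-1, +1}
    Minimizes the squared error between the discriminant w.u + b
    and the desired value supplied by the supervisor.
    """
    T, n = decisions.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(epochs):
        for u, d in zip(decisions, labels):
            y = w @ u + b      # discriminant function (weighted sum)
            e = d - y          # error vs. desired value
            w += lr * e * u    # gradient step on the squared error
            b += lr * e
    return w, b

# Simulated example: 3 local detectors with assumed accuracies
n, T = 3, 2000
truth = rng.choice([-1, 1], size=T)
p_correct = np.array([0.9, 0.7, 0.6])          # hypothetical LD accuracies
flips = rng.random((T, n)) < (1 - p_correct)   # independent decision errors
decisions = truth[:, None] * np.where(flips, -1, 1)

w, b = train_dfc_supervised(decisions, truth)
fused = np.sign(decisions @ w + b)
print("weights:", np.round(w, 2), "fused accuracy:", (fused == truth).mean())
```

As the abstract notes, the unsupervised variant would replace the supervisor's label `d` with an estimate derived from the DFC's own last decision (e.g. `np.sign(y)`), skipping updates when the discriminant magnitude suggests the decision is unreliable.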
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1304-1308 |
| Number of pages | 5 |
| Journal | Proceedings of the American Control Conference |
| Volume | 2 |
| State | Published - Dec 1 1994 |
| Externally published | Yes |
| Event | Proceedings of the 1994 American Control Conference. Part 1 (of 3) - Baltimore, MD, USA. Duration: Jun 29 1994 → Jul 1 1994 |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering