Neural Networks for Pattern Recognition takes the pioneering work in artificial neural networks by Stephen Grossberg and his colleagues to a new level. In a simple and accessible way, it extends embedding field theory into areas of machine intelligence that have not been clearly dealt with before. Following a tutorial on existing neural networks for pattern classification, Nigrin expands on these networks to present fundamentally new architectures that perform real-time pattern classification of embedded and synonymous patterns and that will aid in tasks such as vision, speech recognition, sensor fusion, and constraint satisfaction.

Nigrin presents the new architectures in two stages. First he presents a network called Sonnet 1 that already achieves important properties such as the ability to learn and segment continuously varying input patterns in real time, to process patterns in a context-sensitive fashion, and to learn new patterns without degrading existing categories. He then removes simplifications inherent in Sonnet 1 and introduces radically new architectures. These architectures have the power to classify patterns that may have similar meanings but different external appearances (synonyms). They have also been designed to represent patterns in a distributed fashion, in both short-term and long-term memory.

Albert Nigrin is Assistant Professor in the Department of Computer Science and Information Systems at American University.
Includes bibliographical references (p. [399]-405) and index.