Stephen Grossberg
| Stephen Grossberg | |
|---|---|
| Grossberg in July 2016 | |
| Born | December 31, 1939, New York City, NY |
| Nationality | United States |
Stephen Grossberg (born December 31, 1939) is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor of Mathematics, Psychology, and Biomedical Engineering at Boston University.[1]
Education and early research
Grossberg first lived in Woodside, Queens, in New York City. His father died from Hodgkin’s lymphoma when Grossberg was one year old. He moved with his mother and older brother, Mitchell, to Jackson Heights, Queens, when he was five years old, after his mother remarried. After passing its competitive entrance exam, he rode the New York City subway, along with thousands of other students, to attend Stuyvesant High School in lower Manhattan. He graduated first in his class from Stuyvesant in 1957.
His work on developing models that link brains to minds began unexpectedly when he took the introductory psychology course as a freshman at Dartmouth College in 1957. Exposure there to classical human and animal learning data, and to the philosophical paradoxes implicit in those data, triggered an intellectual inquiry that led him, during his freshman year, to introduce the modern paradigm of using nonlinear differential equations to describe neural networks that model brain dynamics, as well as the basic equations that many scientists use for this purpose today (see Research).
Grossberg knew no neuroscience when he derived his first neural models in 1957–58 from a real-time analysis of behavioral learning data. This behavioral derivation led to neural network models, often called the Additive and Shunting models today (see Research), that include cell bodies, axons, and synapses in which short-term memory (STM) and long-term memory (LTM) traces have a natural interpretation in terms of neural potentials, signals, and the regulation of chemical transmitters. This derivation showed, for the first time, that brain mechanisms could be derived by analyzing how behavior adapts autonomously in real time to a changing world. This discovery led Grossberg to study both psychology and neuroscience intensely from that time on, and to develop a theoretical method for discovering models capable of linking brain to mind.[2]
Artificial Intelligence was just being introduced at Dartmouth when Grossberg began this pioneering work. It is an interesting historical coincidence that the first major conference on AI occurred in 1956 during the Dartmouth Summer Research Project on Artificial Intelligence, a year before Grossberg came to Dartmouth as a freshman.
Grossberg received support for his undergraduate research from the Dartmouth chairman of psychology, Albert Hastorf, who went on to become a popular Dean, Provost, and Vice President at Stanford, and from the chairman of mathematics, John Kemeny, who with Thomas Kurtz invented the computer language BASIC and introduced the first time-sharing computer center, before becoming President of Dartmouth. Dartmouth had a Senior Fellow program that enabled a small number of students to do research, instead of taking regular classes, during their senior year. Grossberg extended his early discoveries as a Senior Fellow and summarized them in his Senior Fellow thesis. He received a B.A. in 1961 from Dartmouth as its first joint major in mathematics and psychology.
Grossberg then sought to continue his training and research in graduate school, and went to Stanford University to be close to the leading theoretical psychology institute at that time, The Institute for Mathematical Studies in the Social Sciences, whose faculty included many of the most distinguished researchers in the then nascent field of mathematical psychology, including William Estes, Richard Atkinson, Gordon Bower, and Patrick Suppes. Grossberg also went to Stanford to become a graduate student in mathematics in order to acquire the mathematical tools that his differential equation models indicated would be needed, and to learn the mathematical skills that could help him to read fluently the theoretical literatures of the multiple sciences that are relevant to understanding mind and brain. At Stanford, the psychologists were using finite Markov chains to analyze group learning data ("stimulus sampling theory"), and were unaccustomed to the idea of deriving properties of individual behavior from real-time adaptive neural networks. The mathematicians were perplexed by a mathematics student who was committed to doing theoretical psychology and neuroscience.
After taking 90 credits of graduate mathematics and reading extensively in multiple fields, Grossberg therefore left Stanford in 1964 with an MS in mathematics and transferred to The Rockefeller Institute for Medical Research (now The Rockefeller University) in Manhattan, which had a number of famous neuroscientists on its faculty as well as mathematicians and physicists who might be interested in behavioral and neural modeling, notably the famous probability theorist and statistical physicist, Mark Kac. In his first year at Rockefeller, Grossberg wrote a 440-page student monograph called The Theory of Embedding Fields with Applications to Psychology and Neurophysiology[3] that summarized his discoveries over the past decade. The monograph was distributed by Rockefeller to 125 of the leading labs in psychology and neuroscience at that time. Grossberg received a PhD in mathematics from Rockefeller in 1967 for a thesis that proved the first global content addressable memory theorems about the neural learning models that he had discovered at Dartmouth. His PhD thesis advisor was Gian-Carlo Rota, whose unusual breadth as a mathematician and philosopher enabled him to provide personal and political support for Grossberg’s unusual research interests.
Grossberg was then hired as an assistant professor of applied mathematics at MIT on the strength of his PhD thesis and strong recommendations from Kac and Rota. At MIT, Grossberg was kindly received by Norman Levinson, at that time the most famous MIT mathematician and an Institute Professor, and by Levinson’s wife Zipporah ("Fagi"), who treated him like a scientific son. Levinson and Rota (the latter returned to MIT when Grossberg arrived there) each submitted some of Grossberg’s early articles in 1967–1971 on the foundational concepts and equations of neural networks, global content addressable memory theorems, and constructions of specialized networks for spatial and spatio-temporal pattern learning, for publication in prestigious scientific and mathematical journals, notably the Proceedings of the National Academy of Sciences.[4] In 1969, Grossberg was promoted to associate professor after publishing a stream of conceptual and mathematical results about many aspects of neural networks.
Grossberg was hired as a full professor at Boston University in 1975, where he is still on the faculty today. While at Boston University, he received a great deal of support from the BU President, John Silber, and the BU Dean and Provost, Dennis Berkey, which enabled him to found the Department of Cognitive and Neural Systems, several interdisciplinary research centers, and various international institutions. See Career and Infrastructure Development.
Research
Grossberg is a founder of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. His work focuses upon the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of vision and image processing; object, scene, and event learning, pattern recognition, and search; audition, speech, and language; cognitive information processing and planning; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; and mental disorders. Grossberg also collaborates with experimentalists to design experiments that test theoretical predictions and fill in conceptually important gaps in the experimental literature, carries out analyses of the mathematical dynamics of neural systems, and transfers biological neural models to applications in engineering and technology. He has published seventeen books or journal special issues and over 500 research articles, and holds seven patents.
As noted in the section on Education and Early Research, Grossberg has studied how brains give rise to minds since he took the introductory psychology course as a freshman at Dartmouth College in 1957. At that time, Grossberg introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions.[5] This paradigm is helping to solve the classical mind/body problem, and is the basic mathematical formalism that is used in biological neural network research today. In particular, in 1957-1958, Grossberg discovered widely used equations for (1) short-term memory (STM), or neuronal activation (often called the Additive and Shunting models, or the Hopfield model after John Hopfield's 1984 application of the Additive model equation); (2) medium-term memory (MTM), or activity-dependent habituation (often called habituative transmitter gates, or depressing synapses after Larry Abbott's 1997 introduction of this term); and (3) long-term memory (LTM), or neuronal learning (often called gated steepest descent learning). One variant of these learning equations, called Instar Learning, was introduced by Grossberg in 1976 into Adaptive Resonance Theory and Self-Organizing Maps for the learning of adaptive filters in these models. This learning equation was also used by Kohonen in his applications of Self-Organizing Maps starting in 1984. Another variant of these learning equations, called Outstar Learning, was used by Grossberg starting in 1967 for spatial pattern learning. Outstar and Instar learning were combined by Grossberg in 1976 in a three-layer network for the learning of multi-dimensional maps from any m-dimensional input space to any n-dimensional output space. This application was called Counter-propagation by Hecht-Nielsen in 1987.
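A schematic rendering of these equations, in generic notation, helps to make the distinctions concrete. In the sketch below, x_i denotes a cell activity (STM trace), y_i a habituative transmitter gate (MTM trace), z_{ij} an adaptive weight (LTM trace), I_i and J_i external inputs, f, g, and h signal functions, and A through E rate constants; the exact coefficients and signal functions vary across Grossberg's papers, so this is an illustrative rendering rather than a canonical statement of any single model:

```latex
% Additive model (STM): activity decays toward rest while summing weighted signals and inputs
\[ \frac{dx_i}{dt} = -A_i x_i + \sum_j f_j(x_j)\, z_{ji} + I_i \]

% Shunting model (STM): excitatory and inhibitory inputs are gated by automatic gain terms
% (B_i - x_i) and (x_i + C_i), which keep x_i within the interval [-C_i, B_i]
\[ \frac{dx_i}{dt} = -A_i x_i
   + (B_i - x_i)\Big[\textstyle\sum_j f_j(x_j)\, z_{ji} + I_i\Big]
   - (x_i + C_i)\Big[\textstyle\sum_j g_j(x_j)\, Z_{ji} + J_i\Big] \]

% Habituative transmitter gate (MTM): the gate y_i recovers toward 1 and is depleted
% in an activity-dependent way
\[ \frac{dy_i}{dt} = D\,(1 - y_i) - E\, f(x_i)\, y_i \]

% Gated steepest descent learning (LTM), outstar form: learning is gated by a presynaptic
% sampling signal and tracks the postsynaptic activity pattern
\[ \frac{dz_{ij}}{dt} = f(x_i)\,\big[ -z_{ij} + h(x_j) \big] \]

% Instar form: learning is gated by the postsynaptic (category) cell and tracks the
% presynaptic input pattern, as in competitive learning, self-organizing maps, and ART
\[ \frac{dz_{ij}}{dt} = f(x_j)\,\big[ -z_{ij} + h(x_i) \big] \]
```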
Building on his 1967 Rockefeller PhD thesis, in the 1960s and 1970s, Grossberg generalized the Additive and Shunting models to a class of dynamical systems that included these models as well as non-neural biological models, and proved content addressable memory theorems for this more general class of models. As part of this analysis, he introduced a Liapunov functional method to help classify the limiting and oscillatory dynamics of competitive systems by keeping track of which population is winning through time. This Liapunov method led him and Michael Cohen to discover in 1981, and publish in 1982 and 1983, a Liapunov function that they used to prove that global limits exist in a class of dynamical systems with symmetric interaction coefficients that includes the Additive and Shunting models.[6] John Hopfield published this Liapunov function for the Additive model in 1984. Some scientists started to call Hopfield’s contribution the Hopfield model. In an attempt to correct this historical error, other scientists called the more general model and Liapunov function the Cohen-Grossberg model. Still other scientists call it the Cohen-Grossberg-Hopfield model.[7] In 1987, Bart Kosko adapted the Cohen-Grossberg model and Liapunov function, which proved global convergence of STM, to define an Adaptive Bidirectional Associative Memory that combines STM and LTM and which also globally converges to a limit.
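In the notation commonly used for the Cohen-Grossberg model, the class of symmetric systems and the associated Liapunov function referred to above can be written schematically as follows; the precise technical hypotheses on the amplification functions a_i, the self-signal functions b_i, and the signal functions d_j are stated in the original papers and are only summarized in the comments here:

```latex
% Cohen-Grossberg system: amplification functions a_i >= 0, self-signal functions b_i,
% symmetric interaction coefficients c_{ij} = c_{ji}, nondecreasing signal functions d_j
\[ \frac{dx_i}{dt} = a_i(x_i)\Big[\, b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, d_j(x_j) \Big],
   \qquad i = 1, \ldots, n \]

% Liapunov function: nonincreasing along trajectories under the stated hypotheses, which
% is the key step in proving global convergence to equilibria (content addressable memory)
\[ V(x) = -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(s)\, d_i'(s)\, ds
   + \frac{1}{2} \sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k) \]
```

Choosing a_i, b_i, c_{ij}, and d_j appropriately recovers the Additive and Shunting models as special cases, which is why the global convergence theorem covers both.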
Grossberg has introduced, and developed with his colleagues, fundamental concepts, mechanisms, models, and architectures across a wide spectrum of topics about brain and behavior. He has collaborated with over 100 PhD students and postdoctoral fellows.[8]
Models that Grossberg introduced and helped to develop include, grouped by topic:
the foundations of neural network research: competitive learning, self-organizing maps, instars, and masking fields (for classification), outstars (for spatial pattern learning), avalanches (for serial order learning and performance), gated dipoles (for opponent processing);
perceptual and cognitive development, social cognition, working memory, cognitive information processing, planning, numerical estimation, and attention: Adaptive Resonance Theory (ART), ARTMAP, STORE, CORT-X, SpaN, LIST PARSE, lisTELOS, SMART, CRIB;
visual perception, attention, object and scene learning, recognition, predictive remapping, and search: BCS/FCS, FACADE, 3D LAMINART, aFILM, LIGHTSHAFT, Motion BCS, 3D FORMOTION, MODE, VIEWNET, dARTEX, ARTSCAN, pARTSCAN, dARTSCAN, 3D ARTSCAN, ARTSCAN Search, ARTSCENE, ARTSCENE Search;
auditory streaming, perception, speech, and language processing: SPINET, ARTSTREAM, NormNet, PHONET, ARTPHONE, ARTWORD;
cognitive-emotional dynamics, reinforcement learning, motivated attention, and adaptively timed behavior: CogEM, START, MOTIVATOR; Spectral Timing;
visual and spatial navigation: SOVEREIGN, STARS, ViSTARS, GRIDSmap, GridPlaceMap, Spectral Spacing;
adaptive sensory-motor control of eye, arm, and leg movements: VITE, FLETE, VITEWRITE, DIRECT, VAM, CPG, SACCART, TELOS, SAC-SPEM;
autism: iSTART.
Career and Infrastructure Development
Given that there was little or no infrastructure to support the fields that he and other modeling pioneers were advancing, Grossberg founded several institutions aimed at providing interdisciplinary training, research, and publication outlets in the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. In 1981, he founded the Center for Adaptive Systems at Boston University and remains its Director. In 1991, he founded the Department of Cognitive and Neural Systems at Boston University and served as its Chairman until 2007. In 2004, he founded the NSF Center of Excellence for Learning in Education, Science, and Technology (CELEST)[9] and served as its Director until 2009.[10] All of these institutions were aimed at answering two related questions: How does the brain control behavior? How can technology emulate biological intelligence? In addition, Grossberg founded and was the first President of the International Neural Network Society (INNS), which grew to 3700 members from 49 states of the United States and 38 countries during the fourteen months of his presidency. The formation of INNS soon led to the formation of the European Neural Network Society (ENNS) and the Japanese Neural Network Society (JNNS). Grossberg also founded the INNS official journal, Neural Networks,[11] and was its Editor-in-Chief from 1988 to 2010.[12] Neural Networks is also the archival journal of ENNS and JNNS.
Grossberg’s lecture series at MIT Lincoln Laboratory triggered the national DARPA Neural Network Study in 1987–88, which led to heightened government interest in neural network research. He was General Chairman of the first IEEE International Conference on Neural Networks (ICNN) in 1987 and played a key role in organizing the first INNS annual meeting in 1988; the fusion of these two meetings in 1989 created the International Joint Conference on Neural Networks (IJCNN), which remains the largest annual meeting devoted to neural network research. Grossberg has also organized and chaired the annual International Conference on Cognitive and Neural Systems (ICCNS) since 1997, as well as many other conferences in the neural networks field.[13]
Grossberg has served on the editorial board of 30 journals, including Journal of Cognitive Neuroscience, Behavioral and Brain Sciences, Cognitive Brain Research, Cognitive Science, Neural Computation, IEEE Transactions on Neural Networks, IEEE Expert, and the International Journal of Humanoid Robotics.
Awards
Grossberg won the first IEEE Neural Network Pioneer Award in 1991, the 1992 INNS Leadership Award, the 1992 Boston Computer Society Thinking Technology Award, the 2000 Information Science Award of the Association for Intelligent Machinery, the 2002 Charles River Laboratories prize of the Society for Behavioral Toxicology, and the 2003 INNS Helmholtz Award. He is a 1990 member of the Memory Disorders Research Society, a 1994 Fellow of the American Psychological Association, a 1996 Fellow of the Society of Experimental Psychologists, a 2002 Fellow of the American Psychological Society, a 2005 IEEE Fellow, a 2008 Inaugural Fellow of the American Educational Research Association, and a 2011 INNS Fellow. Grossberg received the 2015 Norman Anderson Lifetime Achievement Award of the Society of Experimental Psychologists "for his pioneering theoretical research on how brains give rise to minds and his foundational contributions to computational neuroscience and connectionist cognitive science".[14] His acceptance speech is available online.[15]
He received the 2017 Institute of Electrical and Electronics Engineers (IEEE) Frank Rosenblatt Award with the following citation: "For contributions to understanding brain cognition and behavior and their emulation by technology".
ART theory
With Gail Carpenter, Grossberg developed adaptive resonance theory (ART). ART is a cognitive and neural theory of how the brain can quickly learn, and stably remember and recognize, objects and events in a changing world. ART proposes a solution to the stability-plasticity dilemma; namely, how a brain or machine can learn quickly about new objects and events without just as quickly being forced to forget previously learned, but still useful, memories. ART predicts how learned top-down expectations focus attention on expected combinations of features, leading to a synchronous resonance that can drive fast learning. ART also predicts how sufficiently large mismatches between bottom-up feature patterns and top-down expectations can drive a memory search, or hypothesis testing, for recognition categories with which to better learn to classify the world. ART thus defines a type of self-organizing production system. ART has been demonstrated in practice through the ART family of classifiers (e.g., ART 1, ART 2, ART 2A, ART 3, ARTMAP, fuzzy ARTMAP, ART eMAP, distributed ARTMAP), developed with Gail Carpenter, which have been used in large-scale applications in engineering and technology where fast, yet stable, incrementally learned classification and prediction are needed.
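The match/reset search cycle described above can be illustrated with a minimal, self-contained sketch in the spirit of ART 1/fuzzy ART with fast learning. This is a simplified illustration rather than the published Carpenter-Grossberg algorithm: complement coding and many refinements are omitted, the function and parameter names used here (art_cluster, rho for vigilance, alpha for the choice parameter) are chosen for readability, and inputs are assumed to be nonzero binary or normalized vectors of equal length.

```python
import numpy as np

def art_cluster(inputs, rho=0.75, alpha=0.001):
    """Minimal ART-style clustering sketch (fast learning, no complement coding).

    inputs: iterable of nonzero binary/normalized vectors of equal length.
    rho:    vigilance parameter; higher values force finer, more selective categories.
    alpha:  small choice parameter that biases the search toward more specific categories.
    """
    prototypes = []              # learned category weight vectors (the LTM traces)
    labels = []
    for pattern in inputs:
        pattern = np.asarray(pattern, dtype=float)
        # Bottom-up choice function: how strongly each existing category codes this input.
        scores = [np.minimum(pattern, w).sum() / (alpha + w.sum()) for w in prototypes]
        winner = None
        for j in np.argsort(scores)[::-1]:           # test candidate categories best-first
            w = prototypes[j]
            match = np.minimum(pattern, w).sum() / pattern.sum()
            if match < rho:                          # mismatch too large: reset, try next category
                continue
            prototypes[j] = np.minimum(pattern, w)   # vigilance passed: resonance and fast learning
            winner = int(j)
            break
        if winner is None:                           # no existing category resonates:
            prototypes.append(pattern.copy())        # commit a new category coding this input
            winner = len(prototypes) - 1
        labels.append(winner)
    return labels, prototypes
```

For example, with vigilance rho = 0.9, art_cluster([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]], rho=0.9) assigns the first two patterns to one category and commits a new category for the third, because the third pattern fails the vigilance test against the first category's prototype.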
New computational paradigms
Grossberg has introduced and led the development of two computational paradigms that are relevant to biological intelligence and its applications:
Complementary Computing
What is the nature of brain specialization? Many scientists have proposed that our brains possess independent modules, as in a digital computer. The brain’s organization into distinct anatomical areas and processing streams shows that brain processing is indeed specialized. However, truly independent modules should be able to compute their particular processes fully on their own, and a large body of behavioral data argues against this possibility.
Complementary Computing (Grossberg, 2000,[16] 2012[17]) concerns the discovery that pairs of parallel cortical processing streams compute complementary properties in the brain. Each stream has complementary computational strengths and weaknesses, much as in physical principles like the Heisenberg Uncertainty Principle. Each cortical stream can also possess multiple processing stages. These stages realize a hierarchical resolution of uncertainty. "Uncertainty" here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage.
Complementary Computing proposes that the computational unit of brain processing that has behavioral significance consists of parallel interactions between complementary cortical processing streams with multiple processing stages to compute complete information about a particular type of biological intelligence.
Laminar Computing
The cerebral cortex, the seat of higher intelligence in all modalities, is organized into layered circuits (often six main layers) that undergo characteristic bottom-up, top-down, and horizontal interactions. How do specializations of this shared laminar design embody different types of biological intelligence, including vision, speech and language, and cognition? Laminar Computing proposes how this can happen (Grossberg, 1999,[18] 2012[17]).
Laminar Computing explains how the laminar design of neocortex may realize the best properties of feedforward and feedback processing, digital and analog processing, and bottom-up data-driven processing and top-down attentive hypothesis-driven processing. Embodying such designs into VLSI chips promises to enable the development of increasingly general-purpose adaptive autonomous algorithms for multiple applications.
References
1. Faculty page at Boston University
2. Grossberg Interests
3. Grossberg Embedding Fields
4. Profile
5. Towards building a neural networks community
6. Cohen-Grossberg theorem
7. Recurrent neural networks
8. Grossberg's PhD students and postdocs
9. CELEST at Boston University
10. "$36.5 Million for Three Centers to Explore How Humans, Animals, and Machines Learn", National Science Foundation, cited at Newswise, September 30, 2004
11. Neural Networks journal
12. "Elsevier Announces New Co-Editor-In-Chief for Neural Networks", Elsevier, December 23, 2010
13. Grossberg conferences
14. SEP Lifetime Achievement Award
15. SEP Lifetime Achievement Award Acceptance Speech
16. The complementary brain: Unifying brain dynamics and modularity.
17. Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world.
18. How does the cerebral cortex work? Learning, attention and grouping by the laminar circuits of visual cortex.