Artificial Intelligence that is Inspired by Neuroscience
Department of Biology
Lake Forest College
Lake Forest, Illinois 60045
The histories of neuroscience and artificial intelligence (AI) are intertwined; however, the two fields are not as collaborative as they once were. A better grasp of biological brains could play an important role in building AI. Surveying the historical interactions between AI and neuroscience can help make the two fields more compatible, as can analyzing current advances in AI that have been influenced by the study of human and animal neural computation. The shared themes that emerge may be key for future research in both fields.
Both AI and neuroscience have progressed rapidly; however, recent interaction between the two fields has become less common and more narrow. Hassabis et al. (2017) argue that neuroscience should play a central role in accelerating and guiding AI research. They begin from the premise that building human-level AI is a daunting task, which underscores the need to examine the inner workings of the human brain. The study of animal cognition can likewise provide a window into a range of higher-level general intelligence.
Hassabis et al. (2017) identify two distinct benefits of using neuroscience to develop AI. First, neuroscience provides a source of inspiration for new algorithms, independent of and complementary to the mathematical and logic-based ideas that have dominated AI. Second, neuroscience can provide validation for existing AI techniques: if an algorithm is later found to be implemented in the brain, that supports its plausibility as a component of general intelligence.
Hassabis et al. (2017) are primarily interested in a systems-level understanding of the brain: the representations, algorithms, functions, and architectures it uses. This understanding corresponds to the levels of analysis that Marr (Marr and Poggio, 1976) argued are required to understand complex biological systems. Two of Marr's three levels are most relevant here: the computational level, which describes the goals of the system, and the algorithmic level, which describes the computations that realize those goals. By focusing on these levels, Hassabis et al. (2017) have gained insight into the mechanisms underlying brain function. They then unpack these points by examining the past, present, and future of AI and neuroscience. They note that by "neuroscience" they mean all of the fields involved in the study of the brain, and by "AI" they mean work spanning statistics, machine learning, and related research.
The Past – Deep Learning
The investigation of neural computation began in the 1940s with artificial neural networks that could compute logical functions (McCulloch and Pitts, 1943). Soon after, researchers proposed mechanisms by which networks could learn incrementally from feedback (Rosenblatt, 1958) or encode environmental statistics in a self-organizing fashion (Hebb, 1949). When parallel distributed processing (PDP) emerged (Rumelhart et al., 1986), most AI research instead followed the symbolic approach, inspired by the notion that human intelligence consists of manipulating symbolic representations (Haugeland, 1985). Although the PDP approach was originally applied only to small-scale problems, it showed notable success in capturing a wide range of human behaviors (Hinton et al., 1986). Overall, neuroscience provided the initial guidance toward the algorithmic and architectural constraints that could lead to successful neural network applications in AI.
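The kind of incremental, feedback-driven learning Rosenblatt proposed can be illustrated with a minimal perceptron sketch. The AND task, learning rate, and epoch count below are illustrative choices, not taken from the sources cited here:

```python
def predict(x, w, b):
    # Threshold unit: fires (1) when the weighted input exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def perceptron_train(samples, epochs=10, lr=0.1):
    # Weights start at zero and are nudged after each feedback signal.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(x, w, b)  # feedback: +1, 0, or -1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn logical AND, one of the simple functions early networks computed.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(data)
```

After training, the unit classifies all four inputs correctly; the point is simply that repeated error-driven corrections suffice to learn a linearly separable function.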
The second pillar of AI research is the field of reinforcement learning (RL). RL methods address how to maximize reward by learning a mapping from environmental states to actions, and they are among the most widely used tools in AI research (Sutton and Barto, 1998). RL methods were originally inspired by temporal-difference (TD) methods developed in the study of animal learning. As with deep learning, investigations prompted by neuroscience led to developments that have decisively shaped AI research.
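The core TD idea can be sketched in a few lines: a state's value estimate is nudged toward the observed reward plus the discounted value of the next state. The three-state chain, rewards, and parameters below are toy choices for illustration:

```python
def td0(episodes, alpha=0.1, gamma=0.9):
    # Value table for a tiny chain: A -> B -> end.
    V = {"A": 0.0, "B": 0.0, "end": 0.0}
    for _ in range(episodes):
        # Fixed episode: A -> B yields reward 0, B -> end yields reward 1.
        for s, s_next, r in [("A", "B", 0.0), ("B", "end", 1.0)]:
            # TD error: how much the observed transition surprises us.
            td_error = r + gamma * V[s_next] - V[s]
            V[s] += alpha * td_error
    return V

V = td0(episodes=500)
# V["B"] converges toward 1.0 and V["A"] toward gamma * V["B"] = 0.9.
```

The TD error term in this update is the same quantity that dopamine neuron firing was later found to resemble, which is part of why TD methods bridged animal learning and AI.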
The Present – Attention
Biological brains are modular, with interacting subsystems supporting key functions such as cognitive control, language, and memory (Anderson et al., 2004; Shallice, 1988). This insight from neuroscience has been imported into many areas of present-day AI. Until recently, most CNN models operated directly on entire images at the earliest stage of processing, but the primate visual system works differently. Rather than processing all input in parallel, visual attention shifts among locations and objects, concentrating processing resources and representational coordinates on the most relevant parts of a scene.
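The attentional idea of concentrating processing on relevant locations can be sketched as a soft attention step: each location is scored by similarity to a query, the scores are softmax-normalized, and the readout is a weighted sum. The feature vectors below are toy values, not drawn from any of the cited models:

```python
import math

def soft_attention(query, locations):
    # Score each location by dot-product similarity to the query.
    scores = [sum(q * v for q, v in zip(query, loc)) for loc in locations]
    # Softmax (shifted by the max score for numerical stability):
    # processing weight concentrates on the best-matching locations
    # instead of being spread uniformly across the image.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Attended readout: weighted sum of the location features.
    readout = [sum(w * loc[i] for w, loc in zip(weights, locations))
               for i in range(len(query))]
    return weights, readout

# Toy feature vectors for three image locations; the first best matches
# the query, so it receives most of the attention weight.
weights, readout = soft_attention([1.0, 0.0],
                                  [[2.0, 0.0], [0.0, 2.0], [0.5, 0.5]])
```

This is the "soft" (differentiable) variant used in many modern networks; biologically inspired models often use sequential, hard shifts of attention instead.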
Recent AI research has drawn on these ideas to overcome the slow, gradual learning of deep RL systems, developing architectures that implement episodic control (Blundell et al., 2016). These networks store experiences (e.g., the actions taken and reward outcomes obtained in a game) and choose new actions based on the similarity between the current situation and previous situations stored in memory, taking the rewards associated with those past events into account (Figure 1B, Hassabis et al., 2017).
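A minimal sketch of that lookup step, loosely after Blundell et al. (2016): a memory of (state, action, return) tuples is queried by state similarity, and the action with the highest remembered return is reused. The states, distance measure, and data below are illustrative, not taken from the paper:

```python
def similarity(s1, s2):
    # Negative squared Euclidean distance: larger means more similar.
    return -sum((a - b) ** 2 for a, b in zip(s1, s2))

def episodic_act(memory, state, actions):
    # For each action, estimate value as the return stored with the
    # most similar past state, then pick the best-valued action.
    best_action, best_value = None, float("-inf")
    for action in actions:
        entries = [(s, ret) for s, a, ret in memory if a == action]
        if not entries:
            continue
        _, value = max(entries, key=lambda e: similarity(e[0], state))
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# Two remembered experiences; the current state resembles the second,
# so its high-reward action is chosen.
memory = [((0.0, 0.0), "left", 1.0), ((1.0, 1.0), "right", 5.0)]
action = episodic_act(memory, (0.9, 1.1), ["left", "right"])
```

Because the table is filled from single experiences, this kind of controller can exploit a rewarding episode immediately, without the many gradient steps a deep RL network would need.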
Human intelligence is also characterized by an exceptional ability to maintain and manipulate information in an active store, known as working memory, which is thought to be supported by the prefrontal cortex and interconnected areas (Goldman-Rakic, 1990).
Both biological and artificial agents must be capable of continual learning: the ability to master new tasks without forgetting how to perform earlier ones (Thrun and Mitchell, 1995). While animals appear relatively proficient at continual learning, neural networks are afflicted by the problem of catastrophic forgetting (French, 1999; McClelland et al., 1995).
The pace of recent progress in AI has been remarkable. However, much work remains to bridge the gap between machine and human-level intelligence. In working to close that gap, Hassabis et al. (2017) believe that concepts from neuroscience will become progressively more important. Neuroscience matters both as a source of algorithmic tools and as a blueprint for the AI research agenda, particularly in the following key areas.
Intuitive Understanding of the Physical World
Several key ingredients of human intelligence that are currently missing from AI are already prominent in human infants (Gilmore et al., 2007; Gopnik and Schulz, 2004; Lake et al., 2016). Core concepts concerning space, objects, and number allow people to build mental models that support predictions about the future (Battaglia et al., 2013; Spelke and Kinzler, 2007).
Human understanding is characterized by the ability to learn new concepts quickly and to bring prior knowledge to bear on a task. Lake et al. (2016) tested the notion that this human ability is difficult for AI by asking an observer to distinguish a novel handwritten character after seeing only a single example.
Progress is certainly being made in this direction: AI systems can now perform zero-shot inference about novel shapes (Higgins et al., 2016; Figure 2C).
Imagination and Planning
Computationally inexpensive, model-free RL methods suffer from two substantial downsides: they require large amounts of experience, and they are inflexible (Daw et al., 2005).
Virtual Brain Analytics
Hassabis et al. (2017) note that we have only a limited understanding of the information processing that occurs while these systems learn intricate tasks. However, by adapting analysis tools from neuroscience to AI systems, researchers can gain an appreciation of the key drivers of AI performance and increase the interpretability of these networks.
From AI to Neuroscience
AI and neuroscience are compatible on multiple fronts. First, neural networks have been extended with external memory (Sukhbaatar et al., 2015). Second, recent work has emphasized the potential advantages of "meta-reinforcement learning" (Duan et al., 2016; Wang et al., 2016). Both strands of AI research should motivate future research in neuroscience.
Hassabis et al. (2017) reviewed some of the many ways in which neuroscience has made foundational contributions to advancing AI research. Neuroscience has been instrumental in sparking AI researchers' interest in questions of animal learning and intelligence. The authors believe that the quest to create a new type of AI will ultimately lead to a better understanding of our own minds and thought processes. Comparing AI to the human brain may yield insights into enduring mysteries of the mind such as dreams, creativity, and consciousness.
Marr, D., and Poggio, T. (1976). From understanding computation to understanding neural circuitry. A.I. Memo 357, 1–22.
McCulloch, W., and Pitts, W. (1943). A logical calculus of ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133.
Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychol. Rev. 65, 386–408.
Hebb, D.O. (1949). The Organization of Behavior (John Wiley & Sons).
Rumelhart, D.E., McClelland, J.L., and the PDP Research Group (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1 (MIT Press).
Haugeland, J. (1985). Artificial Intelligence: The Very Idea (MIT Press).
Hinton, G.E., McClelland, J.L., and Rumelhart, D.E. (1986). Distributed Representations. In Explorations in the Microstructure of Cognition (MIT Press), pp. 77–109.
Sutton, R., and Barto, A. (1998). Reinforcement Learning (MIT Press).
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., and Qin, Y. (2004). An integrated theory of the mind. Psychol. Rev. 111, 1036–1060.
Shallice, T. (1988). From Neuropsychology to Mental Structure (Cambridge University Press).
Blundell, C., Uria, B., Pritzel, A., Li, Y., Ruderman, A., Leibo, J.Z., Rae, J., Wierstra, D., and Hassabis, D. (2016). Model-free episodic control. arXiv, arXiv:1606.04460.
Goldman-Rakic, P.S. (1990). Cellular and circuit basis of working memory in prefrontal cortex of nonhuman primates. Prog. Brain Res. 85, 325–335.
Thrun, S., and Mitchell, T.M. (1995). Lifelong robot learning. Robot. Auton. Syst. 15, 25–46.
French, R.M. (1999). Catastrophic forgetting in connectionist networks. Trends Cogn. Sci. 3, 128–135.
McClelland, J.L., McNaughton, B.L., and O’Reilly, R.C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol. Rev. 102, 419–457.
Gilmore, C.K., McCarthy, S.E., and Spelke, E.S. (2007). Symbolic arithmetic knowledge without instruction. Nature 447, 589–591.
Gopnik, A., and Schulz, L. (2004). Mechanisms of theory formation in young children. Trends Cogn. Sci. 8, 371–377.
Lake, B.M., Ullman, T.D., Tenenbaum, J.B., and Gershman, S.J. (2016). Building machines that learn and think like people. arXiv, arXiv:1604.00289.
Battaglia, P.W., Hamrick, J.B., and Tenenbaum, J.B. (2013). Simulation as an engine of physical scene understanding. Proc. Natl. Acad. Sci. USA 110, 18327–18332.
Spelke, E.S., and Kinzler, K.D. (2007). Core knowledge. Dev. Sci. 10, 89–96.
Higgins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A. (2016). Early visual concept learning with unsupervised deep learning. arXiv, arXiv:1606.05579.
Daw, N.D., Niv, Y., and Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 8, 1704–1711.
Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks. arXiv, arXiv:1503.08895.
Duan, Y., Schulman, J., Chen, X., Bartlett, P.L., Sutskever, I., and Abbeel, P. (2016). RL^2: fast reinforcement learning via slow reinforcement learning. arXiv, arXiv:1611.02779.
Wang, J., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J.Z., Munos, R., Blundell, C., Kumaran, D., and Botvinick, M.M. (2016). Learning to reinforcement learn. arXiv, arXiv:1611.05763.
Eukaryon is published by students at Lake Forest College, who are solely responsible for its content. The views expressed in Eukaryon do not necessarily reflect those of the College.
Articles published within Eukaryon should not be cited in bibliographies. Material contained herein should be treated as personal communication and should be cited as such only with the consent of the author.