Foreword
Recently, the atmosphere surrounding Si Wu’s online course was akin to that of a concert. As soon as registration opened, all 400 spots were snapped up instantly.
He said, visibly excited as he reached for a deep-blue textbook from the shelf behind him, “Actually, this course on ‘Practical Neural Computation Modeling’ is at the forefront of academic research.” The cover bore a faint inscription: “Making Complex Neural Computation Models Accessible to All.”
“Considering many couldn’t enroll in the first session, I plan to organize a second, even a third, to give more people the opportunity to learn,” he added.
Modeling the brain using mathematical and computational methods is a challenging yet fascinating approach. Its most renowned success story is that of deep learning networks, inspired by the brain’s biological neural networks.
However, Wu remains unswayed by external hype. He believes AI is merely a byproduct of computational neuroscience: a pathway, not the destination. In his view, issues more critical and fundamental than AI remain unresolved, namely understanding the essence of life and the nature of consciousness.
This sober outlook stems from experience: after completing his Ph.D. in physics in 1995, he shifted to AI research and witnessed the field’s ebbs and flows, its changes of season. Wu is well aware that the fundamental principles of intelligence are yet to be unearthed, with numerous gaps in areas such as abstract concepts, memory, and emotion. Interpreting the brain, a living example of intelligence in the universe, through mathematical models lays the groundwork for AI’s development.
“Only after we have deciphered the mechanisms of brain representation can we build intelligent systems that truly resemble humans. Future developments in artificial neural networks should consider incorporating these principles.”
Rather than blindly following the current AI craze spearheaded by “godfather” figures like Geoffrey Hinton, Wu hopes the new generation of researchers will learn from their spirit of quiet dedication and persistence through the field’s neglected “winters”.
He often uses Hinton’s words to encourage young researchers: “The brain is such a network; there is no reason it shouldn’t work.”
The following is the interview with Professor Wu, condensed for readability.
Question: In keeping with the tradition of “NextQuestion”, could you please introduce your area of research?
Professor Wu: My research area lies in the fields of computational neuroscience and brain-like computing. The focus is on using mathematical methods to elucidate the fundamental principles of brain information processing. Building on this, I develop various artificial intelligence algorithms and models.
Question: Coming from a background in physics, you eventually became a neuroscientist. Based on your personal experience, how do quantitative sciences intertwine with neuroscience? Nowadays, more scholars with computer science or mathematics backgrounds are entering this interdisciplinary field. Is this a natural trend for the development of neuroscience?
Professor Wu: It is indeed a natural trend. We often draw parallels between neuroscience and physics. Initially, physics was dominated by experimentation and observation. Later, theoretical physics emerged, represented by figures such as Newton and Einstein.
Neuroscience has followed a similar trajectory to physics. In its early days, researchers primarily relied on data and experimental observations to understand the brain. Subsequently, like in the study of physics, they began using quantitative methods to deduce the brain’s fundamental working principles. Such an approach deepens our understanding of brain science.
Question: When did your interest in the field of neuroscience begin?
Professor Wu: My interest in neuroscience developed quite naturally. In my generation, there was a saying: “Master mathematics and science, and you’ll be fearless anywhere in the world.” Physics was arguably the most significant discipline of the last century. During my master’s and doctoral studies, however, I gradually realized that physics was reaching maturity, with relatively few theoretical directions left to explore, so I was drawn to a field that was both nascent and more intriguing to me. That’s how I initially ventured into artificial intelligence, which at the time was still in its winter. I began to wonder whether insights from brain function could spur developments in AI, and so I naturally transitioned toward brain science.
Some may find it surprising to switch from physics to neuroscience, but it’s quite a natural progression. Physics uses mathematical methods to study the laws of nature, while computational neuroscience applies mathematical methods to understand the workings of the brain. Methodologically, it’s more about changing the focus rather than the approach. In the early days of computational neuroscience, approximately two-thirds of the researchers had backgrounds in physics. Perhaps their rationale for changing their research focus was similar to mine.
Question: What was the “AI winter” like?
Professor Wu: I earned my PhD in 1995, having studied theoretical physics and general relativity during my graduate studies. During this time, I serendipitously came across books on artificial intelligence and found AI more fascinating than physics, which sparked my transition to working in AI. That period was the AI winter, a time so bleak that it’s hard for many today to comprehend. Back then, mentioning that I worked in AI often led people to think I was a ‘fraudster’. Consequently, I never included ‘artificial intelligence’ on my resume at that time.
The winter primarily resulted from the era’s lack of big data and computational power. Back then, unlike today, AI couldn’t produce anything particularly impressive, leading to a widespread lack of faith in its development and a prevailing belief that it had no future.
However, I retained my interest in intelligence, not deterred by the AI winter. It seemed natural to me to consider the brain as a living example of intelligence in the universe. Why not learn from the brain and then apply it to AI? Around 2000, while at the RIKEN Institute in Japan, I consciously shifted from artificial intelligence to computational neuroscience. The goal was to understand how the brain generates intelligence, laying a foundation for my later research in AI.
Question: Did convolutional neural networks exist at that time?
Professor Wu: Yes. The earliest version of convolutional neural networks was proposed in the 1980s by the Japanese scientist Kunihiko Fukushima, called the Neocognitron1. Around the same time, Yann LeCun was conducting research on convolutional neural networks, but his work was largely disregarded by the scientific community: his findings were not well received, and he struggled to publish his papers. Therefore, I deeply admire AI scientists like Geoffrey Hinton and Yann LeCun, who persevered on this challenging path with tremendous determination.
Question: At that time, did you have any academic interactions with them?
Professor Wu: Back then, they weren’t yet in the spotlight or considered academic celebrities. I encountered them at some academic conferences, but they were relatively unknown. The real stars of that era were researchers working on the SVM (Support Vector Machine), emerging talents like Bernhard Schölkopf and Alex Smola. At that time, SVMs were somewhat akin to today’s deep learning networks, having surpassed artificial neural networks in numerous practical applications. So the academic luminaries of that period were those working on Support Vector Machines.
Question: Will the future development of artificial intelligence tend to be independent, or will it become more like human intelligence?
Professor Wu: AI is a vast field. Convolutional neural networks, deep learning networks, and the newer Transformer models represent just one developmental path in AI. For instance, convolutional neural networks were initially inspired by the brain but gradually diverged towards engineering applications. In engineering applications, we are primarily guided by performance and do not shy away from using methods that diverge from the brain’s mechanisms.
Thus, one path to realizing AI applications, as seen in current convolutional neural networks, deep learning networks, or GPT models, is to gradually move away from their biological origins, emphasizing operational performance (how well they work).
However, this doesn’t mean they encompass all of AI. If we return to AI’s founding goal of creating human-like intelligence, there’s another path: learning from the brain. AI as currently represented by deep learning networks is still unable to replicate many of the higher cognitive functions of humans, so this path remains worth exploring. It may be more complex, and breakthroughs may be delayed, but it certainly exists.
Question: How do the methods of deciphering the black boxes of AI and the brain compare and contrast?
Professor Wu: This question is complex, and my understanding of it has been evolving.
The training mode of artificial neural networks determines their black-box nature: for instance, inputting big data and setting the output as object labels, then using a backpropagation algorithm to optimize the parameters. Once trained, the network operates, but the exact mechanisms of its functioning remain unclear, leading to the concept of a ‘black box’. Consequently, this has spurred the development of explainable AI, where scientists, primarily with mathematics or computer science backgrounds, strive to demystify this black box.
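The training mode described here, big data in, object labels out, and backpropagation adjusting the parameters, can be sketched in a few lines. This is a toy illustration only, not any particular system: the dataset, network size, and learning rate below are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: inputs X and binary object labels y.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A one-layer network. After training, the learned weights are the
# "black box": they work, but carry no explicit explanation.
w = np.zeros(2)
b = 0.0

def forward(X):
    # Sigmoid output: probability of the positive label.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

losses = []
for _ in range(200):
    p = forward(X)
    # Cross-entropy loss between predictions and labels.
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                       # gradient of the loss w.r.t. the logits
    w -= 0.5 * (X.T @ grad) / len(y)   # backpropagate into the parameters
    b -= 0.5 * grad.mean()

print(losses[-1] < losses[0])  # training reduced the loss
```

The point of the sketch is the structure of the procedure, not the model: once the loop finishes, nothing in `w` and `b` explains *why* the network works, which is exactly the opacity that explainable AI tries to address.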
In contrast, the human brain is also a black box. Isn’t computational neuroscience all about studying how the brain works? We’re deciphering our brain’s black box. How? One approach is through animal experiments. Neuroscientists have developed a variety of model organisms, like nematodes, fruit flies, zebrafish, mice, and monkeys, each with its strengths and weaknesses. For example, zebrafish are transparent, allowing clearer observation of neural activity. Experimental neuroscientists conduct various experiments, such as presenting visual stimuli to an organism and recording neuronal activity in corresponding brain areas, to infer the principles of information encoding – essentially, discerning what information the neural activity contains about the stimuli. This is more of a data-driven research method, using experimental observations to understand principles and open the black box.
I believe there’s a complementary relationship between AI and computational neuroscience or neurobiology. For example, from neurobiological research, we understand how the brain’s visual system operates. Does this suggest that trained artificial neural networks have similar mechanisms? Some scientists use multi-layer convolutional networks to mimic the visual system. After training through supervised classification tasks, they find the neural activity closely resembles that in our visual system. This suggests that to achieve multi-object recognition, a common representation might be necessary, applicable in both biological brains and artificial neural networks. This is a kind of reciprocal research methodology.
The above represents a more traditional viewpoint. As I mentioned, my understanding is constantly changing, especially after the advent of ChatGPT. While its functioning is not entirely clear, it exhibits remarkably high capabilities. This has made me question whether our pursuit of complete explainability, of fully opening the black box, is unrealistic. Like ChatGPT, the brain is a highly complex system: given an input, it generates sentences or conversation following certain rules, seemingly displaying intelligence, right?
Our aspiration to open the black box and explain its behavior reflects a fundamental human quest for understanding. Whether we can achieve this fully is uncertain. There may not always be explainability, or if we truly pursue it, we might need to develop new mathematical tools and make significant conceptual breakthroughs. I haven’t fully figured this out, as my understanding is also constantly updating. The latest developments in AI have challenged many of my long-held beliefs, prompting me to rethink aspects I once took for granted.
Question: Some scholars believe that ChatGPT might be a product of qualitative change triggered by quantitative accumulation. Similarly, if we could draw a sufficiently detailed brain map and collect enough data, could we possibly see the emergence of an intelligence more powerful than previous AI? What is your opinion on this?
Professor Wu: I don’t quite agree with that notion. To categorize all developments as mere emergence is, in a way, to evade the core issue: we cannot explain a phenomenon we cannot even describe by simply invoking the concept of emergence. It is clear, however, that a system must attain a certain level of complexity to exhibit emergent behaviors. As for the reasons behind this emergence, we still lack the mathematical tools or concepts to describe them adequately, which leads to our current predicament: many things remain unexplained. With the gradual advancement of science, we have created a super-complex system that may not be describable with today’s simplistic mathematical tools. This calls for new mathematical tools or novel ways of understanding, which might be key to unraveling these mysteries.
Ancient Chinese observers noted the complexity of celestial movements but didn’t attempt to explain them with a common law. Instead, they conjured up complex mythological stories, like the heavenly palace of the Jade Emperor. However, Newton realized that perhaps the celestial bodies and objects on Earth share a simple law, marking a conceptual and intellectual breakthrough. Following this line of thought, he developed the concept of universal gravitation.
To describe the concept of force, he developed a classical mathematical tool: calculus. In his era, if Newton had merely presented the concept of ‘force’ without predicting or explaining any phenomena, people would have thought he was talking nonsense, and no one would have believed him. But he not only developed a theoretical framework; he also explained a great deal with it, leading people to accept his ideas. From then on, people set about learning the concept of force and calculus, and gradually came to feel that such explanations were natural. I believe that understanding biological intelligence and super-complex AI systems may require similar breakthroughs in thought and concepts.
Question: This year, you co-authored a paper with several scholars titled “AI of Brain and Cognitive Sciences: From the Perspective of First Principles”2, where you proposed six primary principles of the brain and cognitive science in relation to artificial intelligence. Why do you think these are the primary principles?
Professor Wu: Firstly, it’s important to emphasize that I was not the sole proposer of these six principles. In Beijing, there’s an institute called the Beijing Academy of Artificial Intelligence (BAAI). The article was co-authored by BAAI scholars working on the cognitive-neuroscience foundations of AI, who guided postdoctoral researchers in writing it. I was responsible for the section on “Attractor Networks,” as this has been a long-term focus of my research. I would like to explain why attractor networks are considered a fundamental principle, an aspect currently overlooked in contemporary AI frameworks.
The brain, composed of a vast network of neurons, is the most complex dynamical system known in the universe. All our perceptions and behaviors arise from external stimuli or internal brain activity, which drive the evolution of this dynamical system, culminating in observable behavior. Driven by neuronal interactions, the network’s state evolves over time; a state toward which all neighboring states converge is called an attractor. Attractors correspond to local minima in the network’s energy landscape: all neighboring states have higher energy and are therefore ‘attracted’ toward them.
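The energy-landscape picture can be made concrete with a classic Hopfield-style toy network (a minimal sketch for intuition, not one of Professor Wu’s research models; the network size and noise level are arbitrary): a stored pattern becomes a local energy minimum, and a noisy state slides downhill into it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one binary pattern via the Hebbian rule; the pattern becomes
# a local minimum (attractor) of the network's energy landscape.
N = 64
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

def energy(s):
    # Hopfield energy: E(s) = -1/2 * s^T W s
    return -0.5 * s @ W @ s

# Start from a noisy copy of the pattern (flip some of the units).
state = pattern.copy()
flip = rng.choice(N, size=N // 5, replace=False)
state[flip] *= -1

# Asynchronous updates: each step can only lower (or keep) the energy,
# so the state slides downhill until it reaches the attractor.
energies = [energy(state)]
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1
    energies.append(energy(state))

print(np.array_equal(state, pattern))  # the noisy state recalled the stored pattern
print(all(e2 <= e1 for e1, e2 in zip(energies, energies[1:])))  # energy never increases
```

The “attraction” is visible in the energy trace: every update moves the state to an equal- or lower-energy neighbor, so the stored pattern, a local minimum, pulls nearby states in.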
I believe the brain, as an extremely complex dynamical system, processes information in the time-space domain, fundamentally different from current artificial neural networks computationally. If we aim to develop a brain-like intelligent system, attractor networks are unavoidable. In the paper, I highlighted the importance of attractor networks, considering them a basic computational principle of the brain. Increasingly, neuroscience experiments are proving the existence of attractor networks in the brain.
Question: What insights does the theory of attractor networks offer for the development of AI?
Professor Wu: In AI, attractor networks could help address the issue of “abstract concepts”. We know that the brain has the capacity for abstract concepts. As children, we might learn tangible knowledge, but once we enter formal education, what we learn in classrooms is primarily abstract, like mathematics. Thus, representing abstract concepts is central. Current AI is data-driven, but there’s also a knowledge-driven aspect. We need AI systems that learn through abstract concepts, akin to human learning, to accelerate knowledge acquisition.
Moreover, attractor networks form an efficient memory system in the brain. For instance, this is our first online meeting, but perhaps the next time we meet offline, you might recognize me instantly. Even if there are significant changes in my appearance, hair, or clothing from online to offline, you can still recognize me. This is because you extract some abstract concepts and representations of my face, connecting them with my voice and name, forming the concept of ‘Si Wu’ in your brain, represented by neuronal activity. If we can thoroughly understand how the brain represents abstract concepts and emotions, the resulting intelligent systems could become more human-like. Therefore, it is crucial for the future development of artificial neural networks to consider incorporating attractor networks.
Aside from experiments, to my knowledge, theoretical models expressing abstract concepts haven’t yet yielded results, but some might be attempting it. Our research group is also exploring this direction.
Question: Could you explain what “attractor networks” are?
Professor Wu: Imagine the brain as a super-complex network of countless neurons. When the brain receives an input, the network’s state evolves; this evolution is the brain’s computation, or its memory-retrieval process. In this setting, an attractor represents the stable state reached once noise has been filtered out. Say, for example, I learn your name and hear your voice. The next time I see you, there’s a high probability I’ll recognize you (though I might not, as I tend to forget things with age). Once your image is memorized, a stable network state that identifies you exists in my brain. A trigger input can make the network evolve back into that state: my memory is restored, and I recognize you. That is, when I next see your image, the brain’s network evolves into the state corresponding to the concept of you. The brain’s memory system is therefore computationally different from a computer’s. An attractor network is the mathematical framework that describes such a network.
Attractor networks are also involved in decision-making. This is work done by Professor Xiao-Jing Wang. In decision-making, when choosing between two options, the options can be likened to two attractors. We are essentially gathering evidence to aid our judgment, and the evidence determines which attractor the system ultimately falls into, culminating in a decision.
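A minimal caricature of this two-attractor picture (a sketch for intuition only, not Professor Xiao-Jing Wang’s spiking-network model; the double-well dynamics and noise level below are invented) shows accumulated evidence tilting the landscape so that the state falls into one attractor or the other.

```python
import numpy as np

def decide(evidence, steps=2000, dt=0.01, seed=0):
    """Toy two-choice decision dynamics: the decision variable x evolves
    in a double-well landscape with attractors near x = -1 and x = +1,
    and the evidence term tilts the landscape toward one of them."""
    rng = np.random.default_rng(seed)
    x = 0.0  # undecided initial state between the two attractors
    for _ in range(steps):
        drift = x - x**3 + evidence  # double-well dynamics plus evidence
        x += dt * drift + np.sqrt(dt) * 0.05 * rng.standard_normal()
    return x

print(decide(+0.3) > 0)  # positive evidence: the state falls into the x = +1 attractor
print(decide(-0.3) < 0)  # negative evidence: the state falls into the x = -1 attractor
```

With weak or zero evidence, noise alone picks the winner, which is one way such models capture variable decisions under ambiguous input.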
Attractor networks can also be applied to explain our understanding of language. The basic grammatical structure of human language is innately stored in the brain, akin to a series of attractors. When I listen to you speak, or I speak myself, the attractor network begins to evolve and unfold, allowing me to understand what you are saying. The concept of attractors was introduced by physicists. Coming from a physics background, I naturally embraced and liked this concept, and I use it to describe some of my scientific work.
Question: Beyond abstract concepts, what do you think are the fundamental differences between current AI and human intelligence?
Professor Wu: I believe there’s often a confusion between ‘intelligence’ and ‘wisdom’ when critiquing AI or comparing it to humans. Intelligence refers to the ability to perform specific tasks, such as learning and reasoning – essentially, it’s a problem-solving capacity. But what we humans possess at a higher level is wisdom, which includes emotional and social cognitive abilities. In particularly complex life scenarios, it is wisdom, not merely intelligence, that is required. For instance, how do we solve global warming? The simplest solution might be to eliminate humanity entirely, cutting off the root cause. But we know that would be absurd. In tackling such complex tasks, we need to consider wisdom, which encompasses capabilities beyond mere intelligence, like considering the collective interests of humanity. In such complex scenarios, AI is still far from matching human intelligence, let alone possessing wisdom. Even though today’s ChatGPT can reason, it cannot perform operations on abstract concepts like humans, so there’s still a vast distance between the two.
Another important aspect not to be overlooked is that ChatGPT and other current AI systems mainly perform tasks akin to those handled by the human brain’s neocortex, such as cognitive reasoning in language. But biological organisms also exhibit instinctual behaviors shaped by a billion years of evolution, and reproducing these is in some sense even harder for current AI; this is the province of ’embodied intelligence’. Even a sophisticated system like AlphaGo requires a human to physically place the chess pieces. Actions as simple as picking up a piece are not easy for robots, yet they are effortless for us, the result of millions of years of evolution. How we acquire such diverse instinctual behaviors, like running or hand-eye coordination, is still unclear; understanding this could accelerate the development of robotics. Such robots, combined with large models capable of some level of thinking (like ChatGPT), could produce robots that truly serve society. Otherwise, we have a machine that can process language but lacks physical capabilities. Without embodied intelligence, wisdom remains distant, and there is still a long journey ahead.
* Embodied intelligence (‘Embodied AI’) is a concept in artificial intelligence describing intelligent systems that have a physical body or entity. Such systems not only possess cognitive intelligence but can also interact with their environment and execute tasks in it, much as humans and other biological beings experience the world.
Question: Nowadays, using the ‘Turing Test’ to gauge a machine’s intelligence seems somewhat simplistic. How do you think we should test AI’s ‘wisdom’ in the current era?
Professor Wu: Regarding this question, I haven’t thought systematically yet. As I mentioned earlier, examples of human cognitive abilities in daily life include common sense, social interaction skills (such as being considerate of others’ emotions), and the ability to consider a variety of factors in complex scenarios. I believe that current AI is lacking in these aspects. Cognitive scientists and psychologists should develop datasets akin to ImageNet or a series of standard tasks. These would go beyond the simplistic, language-based Turing Test, offering a more comprehensive assessment of whether AI truly possesses human-like intelligence and wisdom. However, at present, there is minimal work being done in this area, and I believe it’s time to initiate such efforts.
Question: AI may still experience cycles of winter and boom in the future. How do you think contemporary scholars should respond to this?
Professor Wu: On one hand, despite having witnessed these cycles of highs and lows, I continue to be surprised, even a bit astonished, by the achievements AI has made so far. On the other hand, I am not easily swayed by external hype, knowing well that some fundamental scientific principles underlying AI are yet to be uncovered. Therefore, in recruiting for my research group, I primarily select students driven by interest, as it is only with sustained interest that one can navigate the ebb and flow of external trends without losing direction. Simultaneously, I advise them against chasing the latest trends. Peking University’s students are exceptional, representing the hope of Chinese science, and I believe that chasing trends just for publishing papers is “too low”. With such excellent platforms provided by the nation, we shouldn’t just follow in the footsteps of others who build large models (though I don’t oppose others pursuing large models out of interest). I hope the students in my group will steadfastly adhere to their convictions and persist in their long-term work, as original results stem from such dedicated pursuit.
In my view, the current trajectory of AI development represents just one path, not the ultimate destination. Fundamental issues such as understanding the essence of life and the nature of consciousness remain unresolved and are paramount. Therefore, I hope our students will maintain long-term ideals and goals. We should not merely chase the trends set by luminaries like Geoffrey Hinton and others who have ignited the current AI craze. Instead, we should draw inspiration from their resilience during challenging times and infuse our research and development with that same spirit of perseverance, aiming to contribute significantly to this interdisciplinary field.
Question: How would you evaluate the development of computational neuroscience in China over the past decade or so?
Professor Wu: The field is still evolving, and there are too few people engaged in computational neuroscience research at present, but the number is gradually increasing. The growth is not limited to computational neuroscience alone; for instance, researchers in brain-inspired intelligence, as distinct from those working on deep learning networks, are concentrating on spiking neural networks. These individuals are also learning computational neuroscience, so the field is expanding. In this comparison, computational neuroscience plays a role akin to that of theoretical physics.
A month ago, my research group launched an online course titled “Neural Computation Modeling.” This course is academically oriented and very cutting-edge. As soon as it opened for enrollment, all 400 spots were instantly filled, akin to the rapid sale of concert tickets today. Influenced by the experience of organizing summer schools at Cold Spring Harbor on computational and cognitive neuroscience, we believe in either doing things with commitment or not doing them at all. We approached this course with utmost seriousness, ensured rigorous quality control, and prioritized student interaction. At least from what I’ve heard, the students’ feedback on this course has been exceptionally positive. Considering many couldn’t enroll in the first session, we are planning to organize a second, even a third iteration of the course to give more people the opportunity to learn.
Question: What breakthroughs and advancements have been made in the field of computational neuroscience over the past decade or so?
Professor Wu: Being deeply immersed in this field, I might be overly critical of my own domain, or perhaps I set the bar too high for what I consider breakthroughs and advancements. In reality, there has been considerable progress. A decade ago, we were primarily focused on modeling simple neurons and synapses. Present-day researchers increasingly focus on modeling large-scale networks, shifting from merely explaining simple, experimentally observable behaviors to attempting to interpret more complex higher cognitive functions. Objectively, a significant challenge we face is the modeling of advanced cognitive functions. Despite the widespread belief that we can derive insights from the brain, computational neuroscience hasn’t yet yielded a groundbreaking application comparable to AlphaGo or AlphaFold. Achieving a neural-disease model that could replace animal experimentation, a goal of the EU’s Human Brain Project, remains unfulfilled. So, while I believe our field is moving toward the right scientific objectives, it’s true that we haven’t made revolutionary breakthroughs yet.
During the AI winter, Geoffrey Hinton once said: “The brain is such a network; there’s no reason it shouldn’t work.” This simple and earnest belief drove him to persistently research, eventually leading to the development of deep learning networks. I often use Hinton’s words to encourage my students. The brain operates daily, exhibiting various advanced cognitive functions. Despite its complexity, if we remain focused and collectively endeavor in this field, accumulating knowledge bit by bit, I remain optimistic that within the next decade, we will witness significant breakthroughs, driving our field forward.
Question: LeCun has said AI is not biomimetics. How do you think brain-like computing draws inspiration from the brain, and why do you believe it’s a pathway to breaking through AI’s limitations?
Professor Wu: First, I’d like to emphasize a point in line with LeCun’s view: to what extent should biomimetics replicate the original? I oppose the idea of reconstructing all the intricate neural structures of the brain, as that would delve into the implementation layer of biological intelligence. Drawing a parallel with the three levels proposed by the British psychologist and neuroscientist David Marr: at the top is a behavioral level that represents cognitive functions and various psychological behaviors; in the middle is an algorithmic level, which can be described with mathematical equations or network models; and below that is the neurobiological implementation level, such as how neurons and synapses connect. In reality, the brain is highly complex and challenging to study thoroughly. At least for me, focusing solely on the implementation level isn’t particularly interesting.
Ultimately, the goal is to abstract a theory of the information processing at work within the brain. This theory will be mathematical in form, grounded in the neuronal implementation level. Therefore, in brain-inspired research, we cannot merely rely on advancements in neurobiology and build models up from neurons and synapses. We need to draw inspiration from neurobiology, supplemented by our rational thinking and mathematical tools. It is this combined approach that we aim to elucidate in our brain-inspired intelligence research, and it forms the cornerstone of my research group’s pursuit.
Question: In recent years, what work in neuroscience do you believe has provided inspiration for AI?
Professor Wu: I believe aspects such as the brain’s overall framework, cognitive architecture, efficient memory representation systems, breakthroughs in modeling certain advanced cognitive functions, and the expression of abstract concepts could provide substantial inspiration for the further development of AI.
Question: Speaking of memory, is there a significant gap between human memory and AI memory?
Professor Wu: AI, in fact, doesn’t possess memory in the traditional sense. Rather, what is often called AI’s ‘memory’ is actually the connection weights of a network trained on large datasets. Our brain’s memory, by contrast, functions as a continuum of attractor network systems, where each network corresponds to the expression of an abstract concept, and the interaction and communication between different states within the network culminate in the brain’s efficient memory system. Throughout my scientific career, I’ve worked on many topics, but continuous attractor networks have been my unwavering focus for over 20 years. If you’re interested, I can discuss the specifics in more detail later. The hippocampus, a crucial component of the brain’s memory system, and its collaboration with the entorhinal cortex likely play a pivotal role in the representation of abstract concepts. This aspect is immensely significant. If we can thoroughly understand this system, we might achieve a breakthrough in expressing abstract concepts, which could potentially be applied in AI.
Question: Will the future development of AI converge towards mimicking human likeness, or will it evolve independently?
Professor Wu: From an engineering perspective, there’s no necessity for AI to mimic the human brain when it comes to specialized tasks. In fact, the astonishment at AI surpassing human capabilities seems a bit exaggerated. For instance, early calculations were done using abacuses, but now computers far exceed human speed and accuracy in numerical computations. What’s so surprising about that? In my view, once we decipher the mathematical mechanisms of a specific task, humans are removed from the equation, allowing machines to undoubtedly surpass our capabilities. However, the reason we emphasize AI’s human-like qualities is that we seek not just intelligence but also wisdom in AI, aligning it with human society to serve it better. We hope AI can possess common sense, empathy, emotions, moral understanding, and the ability to handle complex situations wisely. In this sense, AI needs to align with us, drawing strong inspiration from humanity, or else it could become completely uncontrollable.
I’d like to add a point about a current issue I’m facing: many people believe studying computational neuroscience and brain mechanisms is for the sake of AI. But that’s not the case. AI is merely a byproduct of research in computational neuroscience. Personally, my primary goal lies in unraveling the mysteries of the brain and understanding the essence of life. I can forgo material comforts for a life of intellectual fulfillment. And frankly, understanding how the brain works can benefit society beyond just AI, such as in medicine, education (studying how the brain learns through plasticity during development), and more. In summary, computational neuroscience encompasses a broader horizon, despite having intersections with brain-like intelligence and AI.
# Response to Wang Xiaojing’s Follow-up Question:
Question: How do we define and quantify human thought and intelligence? The answer to this question reveals the gap between today’s AI and human capabilities. Specifically, it involves understanding the neural mechanisms of thought and intelligence through neuroscience methods.
Professor Wu: As a cross-disciplinary researcher currently engaged in cognitive neuroscience, I’ve observed notable barriers between fields. AI experts specialize in completing specific tasks through mathematical approaches, whereas psychologists focus on behavioral-level studies. For instance, in psychology there’s a field called comparative psychology. Decades ago, it attempted to answer whether animals possess intelligence through numerous experiments, like the ‘Crow and Water’ story, where crows raise water levels by dropping stones. Is this a display of intelligence? Researchers have also used other animals, such as rats, to probe similar questions: do animals possess the same kind of intelligence as humans? I feel that these studies have largely been overlooked by the AI community. AI researchers would benefit from delving into comparative psychology and its behavioral-level research on cognition; this might reveal how to proceed in emulating human-like thought and intelligence in AI.
We in computational neuroscience play a pivotal role in bridging these disciplines, effectively narrowing the gap between them. AI emphasizes mathematical models, while psychology studies behavioral levels. What’s needed now is the development of mathematical models to truly explain human cognitive behavior. Once this is achieved, it could significantly contribute to understanding the differences in thought and intelligence and the nature of advanced human cognitive functions.
A question from @Si Wu:
What role do emotions play in our intelligent computing?
Professor Wu: The question of emotion is intriguing. The AI luminary Marvin Minsky once said, ‘The current question isn’t whether robots need emotions, but whether a robot without emotions can at all develop intelligence.’ I believe this touches on something fundamental: wisdom, with emotions being at the core of it.
In response to Professor Si Wu’s question, we will invite more guests to explore this answer. Stay tuned for further insights.