Foreword
In September 2023, the decade-long Human Brain Project (HBP) officially concluded. This initiative aimed to combine multidisciplinary efforts to decode and simulate the complex structure and functions of the human brain. However, it faced controversy from the start.
Professor Moritz Helmstaedter, director of the Max Planck Institute for Brain Research, grew serious when reflecting on the project. "We should not have made those unrealistic promises," he admitted, critiquing the project's initial objectives. He believes the approach of 'assembling knowledge, simulating it, and then solving brain diseases' was fundamentally flawed.
Expressing a sense of regret, he noted, “Sometimes, I envy the physics community for their ability to develop successful large-scale accelerators, which serve as an ideal model for big science. In contrast, neuroscience lacks consensus on methodology, leaving us uncertain about the best approach or even which could lead to a true understanding of the brain.”
Despite his critiques of the Human Brain Project, Professor Helmstaedter remains optimistic about the future of neuroscience. He highlighted his early adoption of machine learning in neuroscience research and anticipates that AI will significantly advance the field, potentially replacing human annotators.
He expressed little interest in the inflated expectations surrounding artificial intelligence, focusing instead on understanding the brain's actual mechanisms and the practical applications of AI.
The following is our interview with Professor Moritz Helmstaedter; the text has been condensed for readability.
Question: Hello, Professor Helmstaedter! Thank you for accepting our interview with “NextQuestion,” supported by Tianqiao and Chrissy Chen Institute. Could you start by introducing your research area?
Professor Helmstaedter: I’m Moritz Helmstaedter, the current director at the Max Planck Institute for Brain Research in Frankfurt, Germany. My research focuses on understanding neural circuits within the brain, primarily through advanced microscopy and mapping neural networks—a field known as connectomics. I lead the department of connectomics at our institute.
Question: In 2015, you critiqued the "Blue Brain Project," a precursor to the Human Brain Project, by stating that merely aggregating data does not equate to creating new science. With the European Commission ceasing funding for the HBP after September, does this outcome match your expectations? Given its controversial nature and perceived failure by many, what do you believe are the key factors that hindered its success?
Professor Helmstaedter: When the initial Human Brain Project was proposed and conceived, it heavily emphasized using all the data available at the time to create models. Many colleagues and I were skeptical of this claim. As scientists, we must communicate with the public very responsibly. If the promises we make are clearly unachievable, or obviously incorrect, that approach is not viable; we have to communicate openly. Just as I am doing now: I am telling you that connectomics could perhaps be of great help to us, but we must acknowledge the uncertainty, and we should not make unrealistic promises.
A decade has passed, and it’s clear that these promises have not been fulfilled. Of course, this large project still produced many interesting related scientific outcomes. But the specific critique I raised at the time, the idea of ‘assembling what we know, simulating it, and then solving brain diseases,’ I believe was wrong.
*Human Brain Project (HBP): The HBP was a 10-year large-scale scientific research initiative funded by the European Union, involving nearly 500 scientists and costing about 600 million euros. Its goal was to create a computer-based model of the brain to serve as a research infrastructure for brain research, cognitive neuroscience, and brain-inspired computing, accessible to researchers worldwide. The project started on October 1, 2013, and ended in September 2023.
Over ten years, the HBP produced numerous significant and useful scientific results, such as creating and integrating about 200 three-dimensional maps of cortical and deeper brain structures into a human brain atlas accessible through EBRAINS. However, from the beginning, the HBP was highly controversial, with several changes in its management structure and criticisms that its research was fragmented and failed to achieve multiscale integration, lacked prioritization, and involved limited cooperation. With the cessation of HBP funding at the end of September, discussions about whether the HBP was a failure abound. For more details, see: Because the Mountain is There – A Decade of Human Brain Projects.
I believe that if you create a large scientific project like the "Human Brain Project" and invest a billion euros or a comparable amount of funding, then the issue is not whether some cool results will emerge. Funding on that scale should produce many captivating and crucial discoveries, and indeed, collaborators on the "Human Brain Project" have conducted a lot of relevant and very appealing research. But what if that funding had been divided into thousands of portions of one million or five million euros each and distributed to research teams across Europe? Would the outcome be any different? That is a harder judgment to make, and a higher standard, because producing impressive results alone is not enough to claim a great scientific achievement.
For this, you also need to show that these outcomes could not have been achieved through numerous smaller projects. But the "Human Brain Project" claimed from the start that it could use existing data to build a massive brain model and solve major problems. That original goal was not achieved, something many colleagues and I had doubted from the start. We must admit that it certainly did not turn out as the project had envisioned.
Question: Have you seen the movie “Oppenheimer”? In physics, we’ve seen successful large-scale science projects like the Manhattan Project, LIGO (Laser Interferometer Gravitational-Wave Observatory), and CERN (European Organization for Nuclear Research). Such examples seem rarer in the life sciences. Do you think the approach of large-scale science is unsuitable for current brain science research, or was the Human Brain Project (HBP) missing a leader like Oppenheimer?
Professor Helmstaedter: I haven't watched "Oppenheimer," but I am aware of it. I must say, I sometimes envy the physics community for being able to develop successful large-scale accelerators, which are hallmarks of successful big science. Why are they so successful? It's because their goals are very clear, measurable, and quantifiable. Physics has spent decades developing a coherent model that enables physicists to predict accurately: "If I do X, I can test Y." And if someone is willing to pay for it, they can achieve these tasks. This is the ideal model for big science.
However, this is much more challenging in the field of neuroscience, although I wouldn’t say it’s impossible. We lack consensus on methodology, unsure of which approach is best, or even which could successfully lead to an understanding of the brain. Every researcher believes their method is the optimal one. Within the field, we have not reached a consensus, which is quite different from physics.
Currently, a good example of a large-scale science project in biology is the Human Genome Project. The project had a clear and achievable goal, but its impact and direct consequences were almost entirely misjudged. People hoped it would identify the key genes for treating diseases, including psychiatric disorders, but that clearly did not happen. Nonetheless, it facilitated the development of faster sequencing technologies, making genome-wide association studies (GWAS) possible. We do our best to pursue our goals, and sometimes achieve them even amid competition.
However, when we don't find the expected results, we are surprised, but that is exactly why we do it. If we knew what we were looking for in advance, we wouldn't need to explore new lands, new stars, new parts of the universe. This is also why I work in connectomics: to this day, we are still amazed by the precision and depth of the data. But sometimes we don't find what we hoped to find, or the wonders we thought we would discover never materialize. At worst, we overturn a hypothesis, say goodbye to it, and move on to the next one. That is the process of science.
*Genome-Wide Association Study (GWAS): A strategy that uses millions of single nucleotide polymorphisms (SNPs) across the genome as molecular genetic markers, comparing their frequencies between case and control groups at the genome-wide level to discover genetic variants associated with complex traits.
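To make the idea concrete, here is a minimal, hypothetical sketch of the core statistical step in a GWAS: comparing allele counts between cases and controls for a single SNP. The counts, the function name, and the use of scipy's chi-squared test are illustrative assumptions, not taken from any actual study.

```python
# Minimal GWAS-style sketch: test one SNP for association with a trait
# by comparing allele counts between cases and controls (hypothetical data).
from scipy.stats import chi2_contingency

def snp_association(case_alleles, control_alleles):
    """Each argument is (reference allele count, alternate allele count)."""
    table = [list(case_alleles), list(control_alleles)]
    chi2, p_value, _, _ = chi2_contingency(table)
    return chi2, p_value

# Hypothetical allele counts for a single SNP.
chi2, p = snp_association(case_alleles=(420, 580), control_alleles=(530, 470))
print(f"chi2={chi2:.2f}, p={p:.3g}")
# In a real GWAS this test is repeated for millions of SNPs, so the
# significance threshold is corrected for multiple testing (e.g. p < 5e-8).
```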
*Competition: Refers to the competition in the United States between the publicly funded Human Genome Project and private enterprises (such as Celera Genomics), which also sequenced the genomes of the fruit fly, mouse, and human, with significant scientific, economic, and social impact. Despite some wasted resources on both sides, the competition between the "official" and "private" efforts eventually reached a resolution.
Question: Many believed the HBP lacked a clear and achievable goal. If you were to set a specific goal for the HBP, what would it be?
Professor Helmstaedter: I must admit, I am striving towards the goals I have set for my own project. These goals are set not because I think others should adopt them too, but because I firmly believe in their importance and can argue for their significance. The synaptic-level connectome is where two kinds of knowledge converge. One part is our evolutionary learning, shaped by adaptation and selection: the appearance of certain predators, why striped beasts are dangerous, and what an approaching eagle signifies might all be encoded evolutionarily. The other part is the individual knowledge learned over a lifetime. It is not a connectome based on cell types, nor is it like whole-brain diffusion tensor imaging (DTI) maps of projection "highways," because those do not store individual knowledge. This is why I believe this perspective holds great value. It is not an arbitrary description or choice but the right step forward in understanding these brain networks.
Question: Hypothetically, if the HBP were initiated today, with access to contemporary technology and artificial intelligence, do you believe the outcomes would differ?
Professor Helmstaedter: The outcomes would undoubtedly differ, but articulating a clear, definitive goal remains a challenge. A goal must be quantifiable; simply stating a desire to “understand the brain” does not constitute a measurable objective. Similarly, aiming to cure diseases is too broad. Our goals must be specific and attainable, ambitious yet clearly defined.
Question: Having studied physics and medicine at Heidelberg University, how has an interdisciplinary approach and a physics background enriched your neuroscience research?
Professor Helmstaedter: My physics education has been invaluable, providing a solid foundation that has enabled me to approach complex problems without intimidation. Physics teaches how to simplify complex phenomena and develop quantitative methods for solving intricate issues. This training has emboldened me to tackle challenging problems, including understanding neural networks, without hesitation. Although some problems may remain elusive, my background in physics has equipped me with a fearless approach to exploring the complexities of the brain.
Question: Your laboratory deals with vast amounts of data. How do you incorporate artificial intelligence into this process?
Professor Helmstaedter: Tracing the tiny fibers connecting neurons, specifically axons, necessitates high-resolution electron microscopy. However, reconstructing these connections in three-dimensional space requires acquiring very large datasets. Over 20 years ago, storage was predominantly measured in gigabytes, making it challenging to analyze the terabyte-level datasets we collected with the technology available at that time. Given the dense packing of axons and synapses in the brain, their three-dimensional structure is incredibly complex and vast, rendering manual tracing of neurons impractical due to the time required. Consequently, we began exploring automated methods early on. Around 2005 or 2006, we delved into applying machine learning to image analysis, which was not yet widely adopted. Initially, we relied on standard image processing techniques, designing filters to apply to images for generating results.
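As a rough illustration of what such hand-designed filtering might have looked like (a sketch under assumptions, not the lab's actual pipeline), membrane-like structures in a 2D EM section could be highlighted by smoothing the image and thresholding the gradient magnitude:

```python
# Hand-designed filter sketch: smooth an EM section, then use the gradient
# magnitude to emphasize thin, membrane-like edges (illustrative parameters).
import numpy as np
from scipy import ndimage

def membrane_filter(em_image, sigma=1.5, threshold=0.2):
    smoothed = ndimage.gaussian_filter(em_image.astype(float), sigma=sigma)
    gx, gy = np.gradient(smoothed)
    edge_strength = np.sqrt(gx**2 + gy**2)
    edge_strength /= edge_strength.max() + 1e-9   # normalize to [0, 1]
    return edge_strength > threshold               # crude membrane mask

# Usage on a random stand-in for a 2D EM section:
em_section = np.random.rand(512, 512)
mask = membrane_filter(em_section)
```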
My collaboration with my colleague, Winfried Denk, marked our first foray into using machine learning methods, starting with the K-nearest neighbors algorithm—a traditional approach for visual image problems. This attempt was quite pioneering at the time and led to significant advances. Later, in collaboration with Sebastian Seung, we adopted convolutional neural networks developed by Yann LeCun. By 2007, we were applying this technique to image data analysis, making it one of the earliest uses of this method, which proved to be very successful. The first mammalian connectome map we published in 2013 utilized the contemporary AI methods available then.
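In the same spirit, a minimal sketch of a convolutional approach to EM segmentation is shown below, assuming PyTorch; it is an illustrative toy, not the networks actually used in the published reconstructions.

```python
# Toy 3D CNN that classifies every voxel of an EM sub-volume as
# membrane vs. non-membrane (illustrative architecture only).
import torch
import torch.nn as nn

class VoxelClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),   # per-voxel membrane logit
        )

    def forward(self, volume):                 # volume: (batch, 1, D, H, W)
        return self.net(volume)

model = VoxelClassifier()
em_block = torch.randn(1, 1, 64, 64, 64)       # stand-in for an EM sub-volume
membrane_prob = torch.sigmoid(model(em_block)) # per-voxel membrane probability
```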
Yet, we still relied significantly on human effort. We recruited hundreds of students for data labeling, combining their efficient work with the tools we developed and the artificial intelligence technologies of that time to complete the first mammalian retinal connectome. This process revealed an interesting interplay between artificial intelligence and neuroscience: we use AI to gain a better understanding of the brain, and conversely, insights from the visual system inform the AI we use to analyze brain structures.
Over the last decade, we have increasingly leveraged artificial intelligence to diminish the reliance on human labor. Considering the scalability limits of human resources—like the challenge of hiring hundreds or even thousands of students—we pushed for advancements in AI to reduce the need for manual labor. This strategy has proven highly efficient.
By 2019, we introduced the first cortical connectome, achieving an efficiency 20 to 40 times greater than that of the previous retinal connectome. We have now entered a new era where AI can largely supplant manual annotators, propelling our field forward at an unprecedented pace.
Question: ChatGPT has become quite popular recently. I’m curious, could such large language models be of assistance in your research?
Professor Helmstaedter: We have yet to directly explore the application of transformer models like ChatGPT in our research. Our experience suggests that we achieve the most success when we develop targeted solutions for our specific challenges. Therefore, while I acknowledge that large language models like ChatGPT possess significant potential for processing language and other complex data types, our data is not complex in the same way. It doesn’t feature a vast range of configurations. Instead, our challenge lies in identifying and understanding the continuation positions of axons locally—a highly specific problem.
Our approach treats this as a three-dimensional navigation challenge. We work with three-dimensional data and have developed an AI-driven “flight engine” that has learned to follow axons through this space. The initial findings of this research have been published on bioRxiv, under the title “RoboEM.” This system functions like a miniature robot navigating through electron microscopy data, proving to be highly effective for our needs. As a bespoke solution, it offers significant advantages for our specific challenge.
*Schmidt, Martin, et al. describe "RoboEM" in their paper "RoboEM: automated 3D flight tracing for synaptic-resolution connectomics," bioRxiv (2022). "RoboEM" is an AI-powered automated 3D navigation system designed to traverse neuronal pathways using only three-dimensional electron microscopy data. It substantially reduces the computational cost of annotating cortical connectomes, about 400 times lower than the cost of manual error correction, improves automated segmentation beyond the current state of the art, and removes the need for manual proofreading in more complex connectomics analyses.
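To convey the flight-tracing idea in schematic form (a toy sketch, not the published RoboEM implementation), an agent can be stepped through the volume while a learned model, represented here by the hypothetical callable steer_model, corrects its heading at every step:

```python
# Toy "flight engine" sketch: follow an axon through an EM volume by
# repeatedly asking a learned model for a small steering correction.
import numpy as np

def extract_subvolume(volume, center, size):
    """Crop a cube of side `size` around `center`, clipped to the volume."""
    lo = np.clip(np.round(center).astype(int) - size // 2, 0, None)
    hi = np.minimum(lo + size, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

def trace_axon(volume, start_pos, start_dir, steer_model,
               step_size=1.0, n_steps=200):
    """Return the path flown from start_pos along start_dir."""
    pos = np.asarray(start_pos, dtype=float)
    direction = np.asarray(start_dir, dtype=float)
    direction /= np.linalg.norm(direction)
    path = [pos.copy()]
    for _ in range(n_steps):
        # steer_model is a stand-in for the trained steering network: it sees
        # a local view of the data and returns a small correction vector.
        local_view = extract_subvolume(volume, pos, size=32)
        direction = direction + steer_model(local_view, direction)
        direction /= np.linalg.norm(direction)
        pos = pos + step_size * direction
        path.append(pos.copy())
    return np.array(path)
```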
Question: In your early work, you examined the differences between the human brain and those of other primates. How do you view the differences in operation between artificial intelligence and human intelligence?
Professor Helmstaedter: When discussing the operational differences between artificial intelligence (AI) and human intelligence, the foremost point is learning efficiency. Modern AI, although advancing rapidly and excelling in various domains, still lags behind human intelligence in learning and adapting from minimal instruction. Consider how infants learn to recognize their surroundings (trees, cars, parents, people, and cats) with minimal direct teaching, relying instead on vast amounts of unlabeled data. This capability remains a challenge for current AI technologies, highlighting a significant gap in learning methodologies.
Another critical distinction lies in functionality and efficiency, particularly in understanding physical laws and the world’s operational model. Humans intuitively grasp physical concepts — for example, the behavior of cars, planes, and falling objects — through a learning process that remains enigmatic. In contrast, AI systems require extensive data to learn what occurrences are possible or impossible, struggling to grasp these underlying principles merely through statistical learning. This gap in predictive modeling and understanding of physical laws marks a significant divergence between human and artificial intelligence.
Structurally, AI predominantly uses feedforward networks, characterized by unidirectional processing layers without lateral or feedback connections. This contrasts with the human brain's architecture, which incorporates numerous feedback loops, suggesting a fundamental difference in information processing. While AI has made strides in image processing, the underlying mechanisms differ substantially from brain function. The difficulty of training and controlling recurrent neural networks in AI underscores this difference, whereas our brains handle such recurrent processing naturally.
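The structural contrast can be made concrete with a schematic code sketch (assuming PyTorch; it is not a model of any cortical circuit): a feedforward network maps input to output in a single pass, while a recurrent network feeds its own previous activity back into the computation.

```python
# Feedforward vs. recurrent processing, in schematic form.
import torch
import torch.nn as nn

feedforward = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
recurrent = nn.RNN(input_size=10, hidden_size=32, batch_first=True)

x_single = torch.randn(1, 10)             # one input, one pass
y = feedforward(x_single)                 # no internal state is carried over

x_sequence = torch.randn(1, 20, 10)       # a sequence of 20 inputs
outputs, hidden = recurrent(x_sequence)   # hidden state is fed back each step
```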
Moreover, the role of inhibition in neural processing, an aspect underexplored in AI, warrants closer attention. Recent findings highlight the human brain’s extensive network of inhibitory synapses, even compared to other species like mice. This discovery suggests that incorporating concepts of inhibition could provide valuable insights for advancing AI, underscoring the importance of drawing inspiration from human brain architecture and function to overcome current limitations in artificial intelligence systems.
Question: Neuroscience has been a foundational inspiration for the early development of artificial intelligence. Looking ahead, do you foresee AI becoming more akin to human intelligence, or will it embark on a distinctly separate evolutionary path?
Professor Helmstaedter: The prospect of artificial intelligence merging with human intelligence or taking a distinct path is still an open question. Early developments in AI were indeed profoundly influenced by neuroscience, and I anticipate this reciprocal inspiration will persist. Nevertheless, AI might also progress independently by capitalizing on existing methods and accelerating through technological innovations, such as quantum computing, which may not require an intricate understanding of human brain mechanics. Despite this, I remain convinced that the human brain holds invaluable lessons for AI, especially in areas like predictive learning, world model encoding, and memory processing.
Question: Considering how quantitative advancements can lead to qualitative leaps, as exemplified by ChatGPT and what some might call emergent intelligence, do you believe a similar breakthrough could occur in brain science? Specifically, as you and your colleagues map more whole-brain neurons and their interconnections, could this endeavor help decode the brain?
Professor Helmstaedter: My commitment to exploring the cortex and other brain functions via connectomics is driven by a strong belief that this approach will critically advance our understanding of brain operations. By analyzing the variability in neural circuits, we aim to gain insights into sensory experiences, developmental processes, individual and species-specific development, and even pathological alterations. Focusing on neural circuits therefore emerges as a powerful strategy, not only for comprehending the human brain but also for understanding how those circuits change in disease.
The feedback we’ve received thus far is encouraging, although the true value of our efforts will be revealed with time. We are proactive in our scientific pursuits, navigating an open field brimming with uncertainties. The future alone will validate the accuracy of my hypotheses. The significance of inhibition, a topic of keen interest within the AI community, exemplifies a promising avenue for exploration, suggesting that our findings could illuminate new directions for AI development.
Understanding the mechanisms and principles of learning remains a pivotal challenge. Data on neural networks could elucidate the storage rules utilized in the human brain on a large scale, potentially informing advancements in AI. I am hopeful that these discoveries will reciprocate, enriching AI research with insights drawn from biology. Yet, whether this exchange will materialize remains to be seen.
Question: The Turing Test is commonly used as a benchmark to evaluate artificial intelligence. From a neuroscience perspective, do you think there’s a need for a new way to test AI’s performance, particularly in terms of its similarity to human intelligence?
Professor Helmstaedter: As an observer and participant in the AI field, it’s apparent that there’s a prevailing belief we’ve unlocked the fundamental principles of AI, reducing remaining challenges to technical hurdles. Figures like Geoffrey Hinton suggest models such as backpropagation reflect the brain’s actual mechanisms—a view I do not share. The learning processes of current AI differ significantly from human learning during childhood. Yet, I argue that mimicking the exact processes of the brain isn’t necessary to achieve human-like intelligence, much as we don’t replicate bird flight to create aircraft.
Understanding the underlying principles is crucial. We can draw inspiration from the brain’s functionality and adapt these insights in novel ways, similar to aircraft design. Observing brain operations and attempting to replicate similar networks can be beneficial, as demonstrated by the early successes of multi-layer perceptrons, even without a comprehensive grasp of nonlinearity.
The importance of distinguishing AI from human intelligence (HI) through tests like the Turing Test is diminishing. AI's capabilities in problem-solving are evident and continually improving. Current language models display impressive intelligence, casting doubt on the relevance of such tests. My interest lies more in understanding AI's mechanisms and its potential applications than in its ability to mimic human behavior. Ensuring that AI remains controllable and does not supplant human roles matters to many people, but it is not a concern I currently share.
Question: Considering your deep engagement with AI and your training across several disciplines, how has interdisciplinary collaboration propelled scientific progress?
Professor Helmstaedter: Interdisciplinary collaboration’s primary advantage is the exchange of knowledge and perspectives among experts from various fields. Our lab benefits from the contributions of electrical engineers, physicists, biologists, medical students, and chemists. This diversity fosters a rich environment for collaborative work.
Some team members initially had a basic understanding of the brain but have since deepened their neuroscience knowledge, complementing their expertise in data analysis. This reciprocal learning embodies the essence of interdisciplinary collaboration, united by a shared objective. Tackling challenges together is not only rewarding but also illuminates our collective progress and scientific advancements.
Question: The public often perceives progress in brain science as slow, particularly in terms of curing brain diseases. If you had to highlight the most significant discovery in brain science this century, what would it be?
Professor Helmstaedter: I understand and share the public's frustration with the slow progress of neuroscience, especially in neurology and psychiatry, in combating diseases. It's important to recognize the complexity of these challenges. Despite the advances made during the "Decade of the Brain" in the 1990s, neurodegenerative diseases continue to pose a significant challenge and are anticipated to burden society heavily in the future.
The scarcity of breakthroughs raises important questions about our research methods and objectives. My engagement in this field stems from a belief that neural circuits may play a crucial role in disease manifestation. This hypothesis, long-standing but previously unexplored, is now within our reach to investigate. We aim to determine whether major psychiatric disorders correlate with specific neural circuits, identifying which circuits are implicated, how they are affected, and which signals are involved. This understanding is vital for tackling neurodegenerative diseases.
Though this area is fraught with complexity and ongoing debate about causality, I can speak most confidently about my own research. We've uncovered significant differences between the mouse and human cerebral cortices, a surprising finding that underscores the value of exploring new areas and observing networks in novel ways. This discovery exemplifies the potential for uncovering unexpected insights through innovative research approaches.
A question from Moritz Helmstaedter:
How is knowledge about the world encoded in our brains? Are there neural circuits in the brain dedicated to predictive coding?
Professor Helmstaedter: A fundamental question that captivates me is how our brains encode knowledge of the world, enabling us to comprehend current happenings, particularly through the lens of predictive coding. Are there specific neural circuits in the brain tasked with predictive coding? If so, how are they structured? I believe our cognitive processes heavily rely on predictions, based on a wide array of assumptions. Thus, deciphering the architecture of these processes is crucial, not just for advancing our understanding of cognitive function but also for shedding light on neuropathology. Our ongoing research in this area is driven by its profound implications for understanding and treating brain diseases.
In response to Professor Helmstaedter's question, we will invite more guests to explore it in future pieces. Stay tuned for further insights.