Sam Altman’s recent declaration that OpenAI now knows how to build artificial general intelligence (AGI) has reignited debates about AI’s future. While such claims generate headlines, as a neuroscientist who has spent over a decade studying how the brain works, I find myself focused on a different paradox: One of the most common criticisms of modern AI systems—that we do not fully understand how they work—might actually be one of their most brain-like features.
The current AI hype cycle has led to varied, ambiguous definitions of AGI. But looking at AI through the lens of neuroscience offers a valuable reality check about both its capabilities and limitations.
The reality is that despite centuries of scientific inquiry, we still do not fully understand how the human brain works. As researchers, we can observe that certain neurons perform specific functions, but this knowledge offers limited explanatory power about how our cognitive processes actually function. Yet this has not stopped us from being productive members of society or making important decisions.
Similarly, we understand the mathematics behind AI, but there is a mysterious leap between these relatively simple mathematical operations and the remarkable intelligence these systems display. This parallel between biological and artificial neural networks is not a flaw—it is a signature of complex intelligence systems.
Consider a recent experience my friend and I had with OpenAI o1, one of the most advanced AI models available. We presented it with a visual puzzle of spliced license plates from different states and asked it to identify the origin state of each individual letter or number. After thinking for several minutes, it provided a beautifully articulated and confident analysis that was almost entirely incorrect.
Even after detailed feedback, it produced another confident, but equally wrong, answer. This reveals one of AI’s crucial limitations: Despite impressive capabilities in many areas, today’s AI can lack the self-awareness to recognize when it might be wrong, which is a valuable element of human intelligence.
Business implications
This points to a broader truth about intelligence: It is not a monolithic capability but rather a tapestry of specialized learning systems. Our brains have distinct mechanisms—from semantic memory for factual knowledge (like knowing that 2+2=4) to episodic memory for recalling personal experiences (like remembering the moment you first learned arithmetic), and implicit probabilistic learning (like improving at tennis without consciously understanding why). While AI benchmarks become increasingly comprehensive and rigorous, they still do not capture this full diversity of human intelligence.
For business leaders, this has important implications. The current wave of AI is not about replacing human intelligence wholesale, but about understanding where these tools can complement human capabilities—and where they cannot. In creative fields, for instance, generative AI is not yet capable of producing better images or videos than human professionals. It is a tool that works best with a human “in the loop” providing oversight.
The stakes of this understanding grow higher as AI capabilities grow. Tesla’s self-driving technology demonstrates both the promise and the peril: While it may perform impressively 99.9% of the time, humans often have difficulty discerning the difference between 99.9% and 99.9999% accuracy. As a result, we risk developing an undue level of trust that overlooks how vital those extra nines are for ensuring true safety. The occasional unpredictable failure thus serves as a stark reminder that these systems are not yet fully aligned with the complexities of human expectations and real-world unpredictability.
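The difference between 99.9% and 99.9999% is easier to appreciate as absolute failure counts. A minimal sketch of that arithmetic (the event count is a hypothetical illustration, not an actual Tesla figure):

```python
def expected_failures(accuracy: float, events: int) -> float:
    """Expected number of failures given a per-event success rate."""
    return (1.0 - accuracy) * events

# Hypothetical scale: one million independent driving decisions.
events = 1_000_000

# At 99.9% accuracy, roughly 1,000 of those decisions go wrong;
# at 99.9999%, roughly 1 does. The systems feel identical to a
# casual observer, but differ a thousandfold in failure rate.
print(round(expected_failures(0.999, events)))      # ~1000
print(round(expected_failures(0.999999, events)))   # ~1
```

This is why "those extra nines" matter so much for safety: each additional nine cuts the absolute number of failures by a factor of ten, even though the headline percentage barely moves.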
AI limitations
While machines surpassed human capabilities in long-term memory and processing speed long ago and now appear poised to exceed us in other domains, replicating the full breadth of human intelligence remains a far more elusive goal than some industry leaders suggest. It is worth noting that most AI benchmarks compare machine performance to individual humans. Yet humans do not generally operate in isolation. Our species’ most significant achievements—from building civilizations to decoding the human genome—are products of collective effort and collaboration. Shared knowledge, teamwork, and culturally transmitted expertise allow us to transcend our individual limitations. So while an AI model might outperform a lone human on certain tasks, it does not replicate the kind of collaborative intelligence that emerges when groups of people work together. This capacity for dynamic, collective problem-solving—fueled by language, culture, and social interaction—is a key aspect of human intelligence that current AI systems do not fully capture.
Understanding this more nuanced reality is crucial for executives navigating the AI landscape—we need to move beyond the binary question of whether AI has reached human-level intelligence and instead focus on understanding the specific dimensions along which AI systems excel or fall short. The key to successful AI implementation is not blind trust or wholesale skepticism, but rather a nuanced understanding of these systems’ capabilities and limitations. Just as we have learned to work productively with human intelligence despite not fully understanding it, we need to develop frameworks for working with artificial intelligence that acknowledge both its remarkable capabilities and its inherent unpredictability.
This does not mean slowing down AI development—the progress is indeed amazing and should be celebrated. But as these systems become more sophisticated and as debates about AGI continue, we need to maintain a balanced perspective that recognizes both their transformative potential and their fundamental limitations. The future of AI is not about achieving perfect understanding or control, but about learning to work effectively with systems that, like our own brains, may always retain an element of mystery.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.