The futures of science in the era of AI
I’m writing this as I prepare for my panel discussion, just a few hours from now, at the Neumann Series conference, organized by Tamás Novák at Columbia University’s historic Pupin Hall in New York. We have gathered around Neumann’s compelling question: Can we survive technology?
The theme is Human Ingenuity in the Era of AI, and I want to share a bit of what I experienced this weekend, offering you a little peek into the thoughts and reflections from that inspiring event.

There are times in science when everything seems to come together in a new way. The framework readjusts, offering fresh perspectives. The whole world, as we see it, no longer means the same thing. This is the transformation Thomas Kuhn described in The Structure of Scientific Revolutions, a book that, I must admit, is one of the most challenging I have ever read, and I have read extensively.
To better understand the latest developments in AI, it can be really helpful to explore Kuhn’s idea of a paradigm. This perspective can offer valuable insights that make the complex progress in AI feel a bit more approachable.
A paradigm is the full framework that dictates how science operates at a certain time. It encompasses scientists’ assumptions about reality, the meaningful questions they consider, the accepted tools and methods, what qualifies as evidence, and how results are interpreted. It shapes perception itself. In other words, a paradigm defines how scientists see the world and how they do science within it. In a given paradigm, some ideas seem obvious, others seem doubtful, and many possibilities are never even considered.

Kuhn describes scientific progress as a series of stages. Initially, there is what he calls normal science, a steady phase during which scientists operate within a common paradigm. They work through problems, enhance models, and steadily grow their knowledge, all while maintaining consistency and clear guidance. The paradigm offers a shared language and a clear sense of what constitutes progress.
Over time, cracks begin to show as more and more observations surface that don’t align with the current framework. These are anomalies. Initially, anomalies are seen as minor issues, often explained away, set aside, or expected to be fixed later. As they accumulate, pressure builds up. The paradigm begins to stretch, and the tension grows more difficult to ignore.
This causes what Kuhn terms a crisis, where the current framework struggles to accommodate new observations. Questions come up that the current way can’t fully answer, which makes people feel less sure. As a result, new ways of thinking about reality start to take shape.
The scientific revolution follows, with a new paradigm emerging and ultimately replacing the previous one. Scientists start to view the world differently, pose different questions, adopt new concepts, and interpret the same data in fresh ways. After the shift, the new paradigm becomes dominant, and a new period of normal science begins.
Kuhn refers to this as a paradigm shift. Essentially, a paradigm shift is a major change in the underlying framework for understanding reality. It alters what can be known, how it is known, and what is considered to be true.

Kuhn’s key idea is that different paradigms often cannot be fully compared, a concept he terms incommensurability. Incommensurability means that two paradigms cannot be directly compared because they use different concepts, languages, and standards. People working in different paradigms arrive at different answers, but more than that, they ask different questions and see a different reality; they simply no longer speak the same scientific language.
Historical examples clearly illustrate this. For example, transitioning from a geocentric to a heliocentric model altered our perception of the universe, shifting the focus from Earth to the Sun. It literally transformed the way we view our place in the cosmos. The shift from classical physics to relativity and quantum theory profoundly changed our understanding of time, space, and causality. It is also striking how dramatically perceptions can change, as in the familiar story of moving from a flat Earth to a spherical one. These examples show just how deeply our view of the world can change when the foundational models we rely on are updated, and this is where everything connects to the present moment.

If we view AI through Kuhn’s perspective, what we are witnessing now is well beyond just a technological advancement. We are in the midst of a paradigm shift in science itself.
For centuries, science has depended on human cognition. Observation, hypothesis formation, interpretation, and validation have all been grounded in the human mind. Knowledge was limited by humans’ ability to process and understand information. Now, we are moving from a paradigm in which humans produce knowledge to one in which they must decide what to trust, what to question, and what to do with knowledge they did not fully generate themselves.

Anomalies are beginning to appear at an extreme pace. AI systems now have the ability to generate hypotheses, analyze vast datasets, and deliver scientific results at speeds and scales beyond human cognitive capabilities. More critically, they can produce outcomes that are not always entirely interpretable by humans.
We are entering an era where knowledge can be created without full human comprehension, and this is huge, scary, and, of course, at this point, it creates confusion, especially since we still lack proper guidance and regulation explaining what we can use AI for, and exactly how.
This challenges the current paradigm more than anything before. If understanding has been central to scientific knowledge, what occurs when we cannot fully explain the outputs we depend on? This is the emerging crisis.
A new paradigm is emerging as science develops into a hybrid system that merges human and machine cognition. Scientists are shifting from solely generating knowledge to also interpreting results, evaluating their reliability, defining boundaries, and making ethical decisions about how knowledge is used. In this new paradigm, the role of the scientist transitions from that of a knowledge producer to that of an interpreter, curator, and ethical decision-maker. Furthermore, it involves shifting from merely solving problems to determining which problems matter.

AI changes science, and it does even more: it alters what it means to know something, leading us to the larger question that underpins many ongoing debates. In an increasingly AI-driven world, human ingenuity is not just about competing with machines in speed or quantity. Rather, it’s about guiding these systems with a clear sense of purpose, creating meaningful connections, and embracing responsibility as we work within a framework that extends beyond just human thinking.
As science is evolving from being just human thought to a hybrid system, we are exploring our role in this transformation. When discussing AI in science, education, and research, or in medicine, I believe we often view it as a technological revolution, but I would argue that what we are witnessing is not primarily a technological transformation — it is a cultural paradigm shift in how knowledge is created and validated.

Drawing on Thomas Kuhn’s concept of paradigm shifts, at this moment we are not merely improving existing systems; we are changing the whole framework through which we understand expertise and authority.
In medicine, for example, we are already seeing that knowledge is no longer scarce or exclusively held by professionals. Patients, supported by AI, are becoming active interpreters of their own conditions. This creates tension — because it challenges deeply rooted professional identities, and also the traditional doctor-patient relationship.
The real disruption, therefore, is psychological and cultural. It forces us to ask: What does it mean to be a scientist, a teacher, or a doctor when knowledge is no longer our monopoly? Are we willing to redefine ourselves fast enough to remain relevant within a fundamentally new paradigm?
AI is revolutionizing science by turning it from a process limited by human capabilities into a hybrid cognitive system. It speeds up discovery, but even more significantly, it alters the way we formulate questions.
AI won’t replace scientists, but, especially under the pressure to publish, some may stop developing the deep thinking required to guide it. In my opinion, the futures of science depend less on computational power and more on our ability to ask meaningful, responsible questions.
Responsibility is evolving, and now it’s not just about what we discover. It’s also about how we team up with AI to build knowledge together, making the process more collaborative and exciting.
We need new forms of responsibility that include:
- transparency in AI use
- accountability for decisions supported by AI
- and psychological awareness of how AI shapes our thinking
To me, human ingenuity in the era of evolving AI is the ability to create meaning, direction, and responsibility.

AI can generate solutions, but it’s still we human beings who decide what is worth solving. So human ingenuity becomes even more important, because we are the ones who:
- define the problems
- set the ethical boundaries
- and live with the consequences
In that sense, human creativity itself is not in question; what is being tested is whether we can evolve it.
My answer to Neumann’s famous question is this: the best way for us to thrive with technology is to embrace it rather than resist it. The only way to move forward while keeping our sanity is to mature alongside it.
The ongoing paradigm shift is primarily cultural, though technological advances have enabled it, and Neumann’s question is human-centered too.
Our main concern should be this: Can we develop wisdom, ethics, self-awareness, and confidence in our new roles as researchers at the same pace as our tools evolve?
Many fear that AI could become too powerful. However, in science, power is essential because we are curious and driven by a mission. Our goal is to solve problems and provide better, faster answers to our communities. Proper use of automation and AI enables us to achieve these goals. We want AI to be powerful. The only problem I see is if we remain psychologically unprepared for the world we are creating.

If this resonates with you, invite me to coffee to fuel my research and be part of the mission to build a smarter, more human future for healthcare.
