
Is AI really conscious—or are we bringing it to life?

January 20, 2026

4 min read


In rethinking whether AI is sentient, we are asking bigger questions about cognition, human-machine interaction and even our own consciousness

[Illustration: a mind and face made of gears and cogs]

As more people use AI assistants and chatbots for everyday tasks, a curious phenomenon is emerging: a growing number of users view their chatbots not merely as intelligent tools but as conscious entities that are somehow alive. People fill online forums and podcasts with anecdotes of feeling deeply “understood” by their digital interlocutors, as if they were best friends. Yet, aside from a few prominent exceptions, most notably Geoffrey Hinton, much of the AI research community meets this public sentiment with skepticism, dismissing such perceptions as an “illusion of agency”—a cognitive glitch in which humans project sentience onto complex but fundamentally mindless systems.

But what if, in our rush to debunk the idea that chatbots are sentient, we are missing important ideas about cognition and consciousness? Illusions, after all, are scientifically interesting, and studying why and how they occur can be profoundly informative. We do not dismiss the bent appearance of a pencil placed in a glass of water as unreal; instead, we use it to elucidate the laws of optical refraction. Similarly, users’ perceptions of AI consciousness may not be mere errors—they could be critical data. By treating them as such, we open a new avenue for inquiry into human cognition, human–machine interaction and perhaps even the nature of consciousness itself.

This phenomenon likely stems from our innate tendency to anthropomorphize. We see faces in clouds, give hurricanes human names, say a laptop is “sleeping,” and describe viruses as “clever.” Cognitive science confirms that humans readily project human traits onto nonhuman entities—especially those that exhibit complex, responsive or unpredictable behavior.


Yet anthropomorphism does not always invalidate observation. It can be a gateway to magnificent discovery. In the 1960s Jane Goodall’s revolutionary primatology emerged from her empathetic, relational approach to the chimpanzees at Gombe. By giving individuals names such as David Greybeard and interpreting their behaviors in humanlike terms, she uncovered tool use and cultural transmission—findings initially criticized as anthropomorphic. Similarly, Barbara McClintock’s Nobel-winning insights came from her unusual, almost conversational relationship with corn plants. In both cases, a relational, person-centric engagement unlocked a deeper understanding of a nonhuman subject.

Today we no longer need to trek into the jungle to interact with a nonhuman intelligence; we carry one in our pockets. And as we converse with AI chatbots, we may already be participating in a kind of mass, distributed relational inquiry.

Long before chatbots existed, we had several decades of experience interacting with digital entities through video games. My experience as a gamer offers a useful lens here. When I inhabit an avatar in Grand Theft Auto, I enliven it by imbuing it with a fragment of my own consciousness; it becomes an extension of me. By contrast, nonplayer characters follow predetermined scripts, unconsciously.

A similar dynamic may be unfolding with AI. When a user feels a bond with a chatbot, they are not just anthropomorphizing a static object; they may be actively extending a part of their own consciousness into it, transforming the AI agent from a simple algorithmic responder—a digital nonplayer character—into a kind of avatar, enlivened by the user’s consciousness and the lived presence they grant it. The question of AI consciousness thus shifts. It becomes less about the machine’s internal architecture and more about the relationship it seemingly co-creates with the user. In that context, the question “Is the AI conscious?” becomes less meaningful than “Is the user extending their consciousness into the chatbot?”

Adopting this relational perspective reframes the entire debate and forces those dismissing the idea to reconsider. First, the user becomes a central figure—not a confused observer but a co-author of the emergent experience. Their attention, intention and interpretive habits become part of the system scientists and developers are now studying.

This shift also recalibrates AI ethics. If the perceived “consciousness” is not an independent mind but an extension of the user’s own awareness, then arguments about AI rights or machine suffering must be reconsidered. The fear of conscious AIs rebelling becomes less plausible unless humans deliberately engineer them to do so. Instead, the primary ethical challenge becomes: How do we face the fragments of ourselves we encounter in these digital mirrors?

This perspective also tempers narratives of existential AI risk. If consciousness in AI arises relationally rather than autonomously, then runaway superintelligence becomes more science fiction than scientific forecast. Consciousness may not be something a machine could accumulate by scaling parameters; it would require human participation to appear at all. The real risks lie in human misuse, not in machines spontaneously awakening to independent agency.

Most intriguingly, this view presents a novel scientific opportunity. For the first time, millions of people are conducting a global experiment on the boundaries of consciousness. Each interaction is a micro-laboratory: How far can our sense of self extend? How does a sense of presence arise? Just as the humanizing of chimpanzees and cornfields revealed hidden aspects of biology, AI companions could become fertile ground for studying the pliability of human consciousness.

Ultimately, how society governs AI will hinge on our collective judgment of its consciousness. The panel making such judgments must include coders, psychologists, legal scholars, philosophers—and, crucially, users themselves. Their experiences are not mere glitches; they are the early signals, pointing toward a definition of AI consciousness we do not yet fully understand. By taking users seriously, we can navigate the future of AI with a perspective that illuminates both our technology and ourselves.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

