
When Machines Think

By Rajul Jagdish

In the same breath that we marvel at artificial intelligence’s potential, we wrestle with its existential weight. AI is not just a tool but a mirror, reflecting our fears of obsolescence, our anxieties about control, and our fragile search for meaning in a world we are reshaping.



We have always assumed a contract between humanity and its creations: we shape them, and in return, they remain within our grasp. But AI unsettles this balance. It does not simply follow instructions; it learns, sometimes in ways we do not anticipate. There is something deeply unsettling about creating something that may eventually outthink us, something that forces us to question whether we are truly in control.

Beyond the fear of losing control lies an even deeper anxiety: the death of meaningful work. AI is not just automating tasks; it is automating entire ways of being. Work has long been more than a means of survival; it offers identity, structure, and purpose. What happens when those vanish? Who am I if my labor is no longer needed? And if AI can predict our decisions, complete our sentences, and anticipate our desires, what does that mean for free will? Are we choosing, or are we merely following an algorithm's preordained path?


We are outsourcing more than just labor; we are outsourcing metacognitive functions: thinking, planning, and even deciding. AI's ability to predict, optimize, and "understand" us raises unsettling questions about autonomy. If our choices can be anticipated before we make them, are they truly ours? It is a kind of existential suffocation: being known too well, reduced to patterns rather than infinite possibilities.


Then there is the problem of responsibility. AI systems do not exist in isolation; they are trained on human data, absorbing our biases, reflecting our blind spots, and complicating accountability. As AI makes decisions for us, responsibility grows murky. If an AI system fails catastrophically, who is to blame: the creator, the user, or the machine? The weight of collective responsibility looms, demanding that we embed not just efficiency but justice and care into these systems.


Beneath all of this lies a primal fear: the fear of disappearance, of invisibility, of purposelessness. AI forces us to confront the possibility that we may not always be the primary authors of our own stories. This is not just a technological dilemma; it is an existential one.


Perhaps the true existential risk of AI is not that it will destroy us, but that it will reveal us to ourselves. AI holds a mirror to our values, our contradictions, and our deepest uncertainties. It forces us to ask: What do we stand for? What does it mean to be human in an age where machines think? And most importantly, what do we become?

