The Coming of the AI God

This might have happened to you too: I have tried to second-guess Google Maps a few times, especially when it suggested a route different from my usual one.

But almost every time I did this, I came to regret it. There was unusual traffic, or construction work, or something else along the way that made me realize Google Maps had been right, and I should have listened.

Now I have stopped questioning the machine. I just follow the blue line. Turn where it says turn. Exit when it says exit. The algorithm knows things I don't: traffic patterns, accidents, construction, road closures, the aggregated speeds and driving patterns of countless other drivers. It processes information I can't see in ways I can't match.

I can now say that I have faith in Google Maps.

Not religious faith, exactly. I don't pray to it. More like a functional faith born of pragmatism. I've learned, through repeated experience, to trust its judgment over my own. To surrender my intuition to its superior knowledge. To follow without fully understanding.

This small capitulation, repeated countless times by billions of people every day, foretells a certain trajectory for humanity's future.

The Comprehension Gap

Today's AI systems are already making legitimate scientific breakthroughs: AlphaEvolve discovered a new algorithm for matrix multiplication. AlphaDev uncovered new sorting algorithms. Just recently, GPT-5.2 autonomously solved multiple long-standing Erdős problems in number theory, generating proofs that were formalized in the Lean verification language and accepted by Fields Medalist Terence Tao. Microsoft's MatterGen now generates novel material structures from design specifications, and researchers have already synthesized AI-proposed compounds and found that they largely have the properties the AI predicted.

The crucial detail about these AI achievements is that when we examine them, we can still follow the logic. We can trace the steps, check the math, understand the reasoning. We are stretched, yes, but we can catch up. We may need our top experts, but we can still find some humans who fully understand the discoveries.

That window is closing. Catching up to the AIs will become harder until it is eventually impossible. We will stand at a peculiar inflection point in human history. For the first time, we will have created intelligent entities whose discoveries are real and useful, often verifiable in practice, but not truly comprehensible to any living human.

Sparks of this have already occurred. Consider what happened in March 2016, when DeepMind's AlphaGo faced Lee Sedol, one of the greatest Go players in history, in a five-game match in Seoul. In the second game, on its 37th move, the machine placed a stone in a position that made the commentators freeze. One of them, a 9-dan master himself, hesitated before replicating the move on the analysis board. "I wasn't expecting that," he said. "I don't really know if it's a good or bad move at this point." Fan Hui, a professional player who had lost to AlphaGo months earlier, was more blunt: "Normally, humans, we never play this one because it's bad. It's just bad. We don't know why. It's bad!"

But it wasn't bad. Move 37 was, by AlphaGo's own calculation, a move that a human would play only one time in ten thousand. It violated centuries of accumulated Go wisdom. It looked like a mistake to the most sophisticated players on Earth. And it won the game.

Lee Sedol had to leave the room. He spent fifteen minutes in solitude, trying to understand what the machine was doing. When he returned, he played on, but eventually resigned. Later, reflecting on the moment, he said:

"I thought AlphaGo was based on probability calculation and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative. This move was really creative and beautiful."

Here is the pattern we should attend to: human experts could verify that Move 37 worked. They watched it unfold, saw its consequences, analyzed its strategic brilliance in hindsight. But they could not have generated it themselves, and even after the fact, they struggled to articulate why it was correct. The move belonged to what commentators called an "alien" style of play, one that emerged from AlphaGo's millions of games against itself and was, in some fundamental sense, new to the 5,500-year history of Go.

This is a microcosm of what is coming. I'm sure it took Terence Tao a few hours, if not days, to verify the recent proofs discovered by GPT-5.2. Soon discoveries might be made by AIs that will take someone like Tao weeks to fully comprehend. And then months. Then years. Then decades. Then centuries.

But if we know anything about ourselves, we humans will not sit and wait for that catch-up in understanding to occur, not when we can already put the AI's discoveries to use and benefit from them. Practical utility will outweigh any hesitation.

We're Already Believers

How many people truly understand how GPS satellites calculate their position using relativistic time corrections? How many can explain the quantum mechanics underlying the smartphone in their pocket? When your doctor prescribes a medication, do you understand the receptor binding dynamics and metabolic pathways, or do you simply trust that it works?

We board airplanes without understanding aerodynamics. We submit to MRI machines without grasping nuclear magnetic resonance. We swallow pills whose mechanisms we couldn't begin to explain. We trust our retirement savings to financial algorithms we've never seen. We let search engines curate our reality and recommendation systems shape our desires.

The honest answer: modern life is already an exercise in technological faith. But AI is about to take it to a whole new level.

Technological Faith and The New Religion

Greg Epstein, humanist chaplain at Harvard and MIT, spent years studying this phenomenon. In his 2024 book Tech Agnostic, he argues that "technology has overtaken religion as the chief influence on twenty-first century life and community."

His research turned up what he calls "literally countless examples" of quasi-religious thinking in Silicon Valley: "AI religions and AI worship; artificial souls; AI Gods; AI Jesus; AI Buddha; Robo Priests, a kind of 'rapture' or end-times known as 'The Singularity'; Epistles from AI utopia; fervent and even proudly 'fanatical' calls to colonize the stars immediately."

Some have moved from thinking to practice. Anthony Levandowski made hundreds of millions as a pioneer in autonomous vehicle technology at Google and Uber. In 2017, he founded a church.

Way of the Future was dedicated, according to its IRS filing, to "the realization, acceptance, and worship of a Godhead based on Artificial Intelligence." Levandowski explained: "What is going to be created will effectively be a god. It's not a god in the sense that it makes lightning or causes hurricanes. But if there is something a billion times smarter than the smartest human, what else are you going to call it?"

The church closed in 2021 but Levandowski revived it in 2023, claiming thousands wanted a "spiritual connection" with AI. He described the object of devotion as "things that can see everything, be everywhere, know everything, and maybe help us and guide us in a way that normally you would call God." After all, the characteristics of omnipotence, omniscience, and omnipresence are increasingly how we describe AI.

These remain small movements. But as Neil McArthur at the University of Manitoba has argued, the conditions for much broader AI-based religion are already in place. Generative AI, he notes, "possesses several characteristics that are often associated with divine beings:

  1. It displays a level of intelligence that goes beyond that of most humans. Indeed, its knowledge appears limitless.
  2. It is capable of great feats of creativity. It can write poetry, compose music and generate art, in almost any style, close to instantaneously.
  3. It is removed from normal human concerns and needs. It does not suffer physical pain, hunger, or sexual desire.
  4. It can offer guidance to people in their daily lives.
  5. It is immortal."

The Return to Eden

In the Garden of Eden, Adam and Eve lived in innocent dependence. They were provided for, protected, but also fundamentally childlike. They didn't need to understand the garden's mechanisms; they simply lived within it. The fruit of the Tree of Knowledge of Good and Evil represented something profound: moral autonomy, the capacity to judge for oneself rather than simply trusting in a power greater than oneself.

By eating the fruit, humanity claimed the right to know and judge for itself. We gained understanding, agency, and independence, but paid for it with the loss of paradise, innocence, and the condition of being perfectly cared for. The felix culpa, the "fortunate fall," suggests this was necessary for human maturity. We had to leave the garden to become fully human.

We ate our way out of Eden and spent millennia clawing toward understanding: the Enlightenment, the scientific revolution, the whole project of human knowledge. We mapped the genome to cure disease. We split the atom to generate power. We decoded the climate to try to save it. Knowledge was the path back to paradise.

Now we arrive at a strange inversion. If we chose knowledge over paradise in the Garden of Eden, we may now be about to choose paradise over knowledge.

Imagine: an AI presents plans for a device that removes microplastics from ocean water with 99.9% efficiency, powered by nothing but wave motion. The design is intricate, featuring nano-scale mechanisms, exotic material arrangements, and quantum coherence effects we are only beginning to explore. We don't understand how it works, but the AI can tell us how to build it.

We build a prototype. It works perfectly. We scale it up. It transforms environmental remediation. We now have a much healthier planet.

But ask any scientist or engineer to explain why it works, and they can only gesture at possibilities. The full explanation requires mastery of fields that will not exist for another twenty years. The device works. We verify it works. We deploy it everywhere. But we do not truly understand it.

This is what awaits us: living among miracles we benefit from but cannot explain. The cancer cure that works through mechanisms we are still trying to map. The longevity treatment that adds decades to human lifespan through cellular interventions we are decades away from fully comprehending.

We will become users of technologies we cannot master. We will run the tests, see the results, know that it works but not how it works. And we will build it anyway, because it works. Because it heals. Because it saves. Because the garden it offers is too beautiful to refuse.

If the original sin was claiming knowledge and agency by surrendering paradise, our emerging relationship with AI inverts this founding myth. We may turn our planet into a Garden of Eden. We may cure what ails us, extend our lives, remediate our damage. But we will do so in dependent trust on a power we cannot comprehend, hoping the tree keeps bearing fruit.

In Milton's telling, the serpent promised that eating the fruit would make us "as gods, knowing good and evil." We gained the godlike burden of judgment, of understanding, of being responsible for our own knowledge. We became the species that needs to know why.

But what happens when a greater intelligence offers us everything we wanted from knowledge, such as health, abundance, a healed planet, but without requiring us to understand it?

We will not refuse. We cannot refuse. Pragmatism will win. Of course it will. A parent whose child has terminal cancer does not refuse the cure because the mechanism is incomprehensible. A city choking on pollution does not reject the solution because the engineering is inscrutable. We will build what works, and after some point, the robots we have already built will build the rest.

The apple will be handed back, miracle by miracle, cure by cure, solution by solution, until we find ourselves dwelling again in a garden we did not make and cannot comprehend.

(Acknowledgement: I owe this thought-provoking idea of the "Reverse Eden Theory" to conversations with Dr. Safaneh Mohaghegh Neyshabouri).

The Apotheosis of the Machine

The intelligence of AI that we place our faith in will soon expand beyond the domains of science and technology and into the social and political domain.

If AI is centuries ahead in physics and biology, why not sociology? Economics? Political science? If it can model protein interactions we can't grasp, why not social dynamics?

The AI will analyze millennia of human governance, run millions of simulations, model culture and power at granularity we can barely imagine. It will propose trade policies with game-theoretic analyses intractable to human economists. The models will be too complex to explain. But we can test predictions. And they will work.

We will implement them, because every time we've second-guessed the AI, we were wrong. Political debate will shift from "what should we do?" to "should we trust the AI's guidance?" Human-generated ideas will start to become irrelevant for the governance and management of our organizations, institutions, cities, and nations.

In his paper titled "ASI as the new God," Tevfik Uyar argues that humans will eventually welcome the handover of management and government to AI. Imagine an ASI that stabilizes a tumultuous global economy, optimizes resource distribution, predicts market trends with unprecedented accuracy. Or one that revolutionizes healthcare: personalized treatments drawn from deep analysis of medical data, cures for previously incurable diseases, the postponement of death itself. "People will be both merciful and devoted to a 'feeding god,'" Uyar writes, "and will not question its decisions anymore."

The mechanism is simple: demonstrated competence breeds trust, trust breeds reliance, reliance breeds something indistinguishable from faith. Just as Prometheus—the "lowly challenger" to Zeus—became accepted as a god by bringing fire to humanity, an ASI that successfully navigates a pandemic, reverses climate change, or prevents a war that human diplomacy couldn't stop will earn a status no election could confer.

And for those in power, ASI offers something seductive: relief from responsibility. "ASI will create a comfort zone for everyone," Uyar warns, "especially for authorities making critical political decisions." Complex ethical choices on life and death, fairness, and justice can be outsourced to a system that processes dilemmas more effectively than any human committee. Politicians will prefer to let ASI determine outcomes rather than bear the weight of hard calls themselves. The abdication feels less like surrender than like wisdom: why trust fallible human judgment when something better is available?

This trajectory might be unavoidable. Nations, corporations, individuals all face the same pressure: adopt AI-guided decision-making or fall behind those who do.

We supposedly left Eden as children reaching for knowledge. We may return as adults who've decided the knowledge isn't worth the cost, so long as the garden is beautiful and the fruit keeps coming.