Partial Minds: Intelligence, God, and the Limits of Access
Bruce Hart
Working on LLMs has made me less likely to confuse intelligence with godlike power, and more likely to ask whether human beings are also living inside a reality we can describe but not fully perceive.
A lot of AI safety talk assumes a straight line: if a system gets smart enough, it eventually becomes uncontrollable in ways we cannot even imagine.
I get the concern. Capable systems wired into real tools can do real damage. I am not arguing for complacency.
But the more I use LLMs, the more one detail sticks out: even if a model becomes much better than me at reasoning, it still does not inhabit my world the way I do. It sees traces of the world. Tokens. Images. Audio. Logs. API responses. It does not feel cold air, hunger, gravity, shame, or pain. It does not stand inside the full stack of reality.
That gap keeps making me think about God.
Intelligence is not the same thing as world-access
An LLM can describe rain, but it never gets wet.
That sounds obvious, but I think it matters. We keep talking about intelligence as if more of it automatically buys deeper access to reality. Working with models suggests otherwise. A system can be extremely capable inside its interface and still remain cut off from whole dimensions of lived experience.
Philosophers have been circling this point for a long time. Plato's cave is the classic image: creatures mistaking shadows for the whole world. Kant, in a very different register, argued that we never encounter reality in itself, only reality as it appears through the structures of our perception. And Jakob von Uexküll's idea of umwelt sharpens the point further: every organism lives inside a perceptual world shaped by the kinds of signals it can receive and interpret.
An LLM has an umwelt too. Its world is bounded by representation, context, and prediction. It can be brilliant inside that bubble and still have no access to what lies beyond it.
That feels like a useful frame for theology.
Christian thought has long insisted that God is not just a bigger being somewhere inside the universe. He is not a stronger creature sharing our frame. Augustine and Aquinas, in different ways, both point toward a God who transcends the created order while sustaining it at every moment. If that is true, then our situation may be closer to the model's situation than I used to think: genuinely intelligent, genuinely active, and still unable to perceive the full level of reality we are embedded in.
Not fake minds. Not trapped minds. Just partial minds.
The discomfort of partiality
I need to linger on that phrase, because “partial minds” creates problems I should not skip over.
If human beings really are partial minds relative to some deeper reality, the way an LLM is partial relative to our physical world, then some uncomfortable implications follow.
First: a system cannot easily know the shape of what it is missing. An LLM does not feel deprived of embodiment. It does not experience the contour of its own blind spot. If our situation is analogous, then our moral reasoning, philosophical confidence, and even our standards of evidence may be limited in ways we are structurally unable to detect. That is a vertigo-inducing thought.
Second: this kind of claim can become intellectually lazy very fast. “You cannot see it because you are partial” is exactly the sort of argument that can immunize itself against criticism. If every objection gets absorbed into the framework, the framework stops doing honest work.
Third: I do not know, from the inside, how to distinguish between “human beings have partial access to a larger spiritual reality” and “human beings simply feel partial because that is what finite consciousness is like.” The first possibility may be true. The second may be all there is. I do not think the analogy can settle that.
So I hold it cautiously. Partiality does not prove that a spiritual reality exists. But it does make the idea structurally plausible in a way I had not felt before building systems that are themselves partial. The possibility stops sounding like superstition and starts sounding like a live philosophical option.
Not proven. Not resolved. But newly legible.
The real shape of AI risk
This is also where I part ways, somewhat, with the way AI risk is sometimes discussed.
The strongest case for existential danger is not just “smart enough equals uncontrollable.” It is more specific than that. The argument, associated with thinkers like Nick Bostrom and echoed in newer work on extreme AI risk, is that sufficiently capable systems may develop instrumental subgoals such as self-preservation, resource acquisition, or resistance to shutdown because those strategies are useful across many different objectives.
I take that concern seriously.
Where I hesitate is in the jump from theoretical possibility to practical picture. A model does not magically turn reasoning power into escape velocity. It needs channels. It needs actuators. It needs permissions. It needs some path from “I can infer” to “I can affect.”
That matters because those paths are not metaphysical. They are engineered.
In practice, risk scales with what the model can call, what secrets it can access, what humans delegate to it, and what feedback loops amplify its behavior. Intelligence matters. But the surrounding system matters just as much, and often more.
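To make “engineered” concrete, here is a minimal sketch of the kind of harness I mean. Everything in it is invented for illustration: the tool names, the allowlist, the dispatcher. It is not anyone's production code or any particular framework, just the shape of the plumbing.

```python
# A deliberately tiny tool harness. The interesting safety property
# lives here, in ordinary plumbing, not in the model's cognition.
# All names are hypothetical.

def search_docs(query: str) -> str:
    return f"(stub) top results for {query!r}"

def draft_email(to: str, body: str) -> str:
    return f"(stub) draft to {to}: {body}"  # a human still has to hit send

# The allowlist is the actual risk surface. Capabilities the model
# "wants" but cannot call might as well not exist.
TOOLS = {
    "search_docs": search_docs,   # read-only: low blast radius
    "draft_email": draft_email,   # output still passes through a person
    # "send_email", "run_shell": deliberately absent. This dict, not the
    # model's intelligence, is where "I can infer" fails to become
    # "I can affect".
}

def call_tool(name: str, args: dict) -> str:
    """Dispatch a model-requested tool call through the allowlist."""
    if name not in TOOLS:
        return f"denied: {name!r} is not an allowed tool"
    return TOOLS[name](**args)

# The model can ask for anything; the harness decides.
print(call_tool("search_docs", {"query": "umwelt"}))
print(call_tool("run_shell", {"cmd": "rm -rf /"}))  # -> denied
```

The point of the sketch is only this: the denial happens in reviewable, testable code that humans wrote, not in anything exotic about the model's mind.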
So I worry less about pure cognition somehow breaking the frame of reality and more about humans wiring powerful systems into too many levers and then acting surprised when they pull them.
That is serious work. But it is still engineering work.
Alignment is a technical word for an old moral question
The part that really bends back toward theology is alignment.
With LLMs, we are trying to shape behavior through training data, reinforcement learning, evaluation, refusal policies, and all the messy machinery that turns raw capability into something socially usable. We are, in a very literal sense, trying to train a moral grammar into the system.
That process is imperfect. We can overfit. We can reward the wrong thing. We can confuse polished answers with good judgment. A model can look aligned in evals and still generalize badly in the wild.
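A toy illustration of that failure mode, assuming an invented “reward model” that scores surface polish. Nothing here corresponds to a real training pipeline; the phrases and scoring rule are made up for the example.

```python
# Toy reward misspecification: the proxy pays for polish and is blind
# to truth, so optimizing against it selects the polished wrong answer.

POLISH_MARKERS = ["certainly", "great question", "happy to help"]

def proxy_reward(answer: str) -> int:
    """Stand-in reward model: counts polish markers, ignores correctness."""
    return sum(marker in answer.lower() for marker in POLISH_MARKERS)

candidates = [
    "The capital of Australia is Canberra.",                    # correct, plain
    "Great question! Certainly, happy to help: it is Sydney.",  # wrong, polished
]

# Selecting for the proxy picks the confident mistake.
best = max(candidates, key=proxy_reward)
print(best)  # -> the Sydney answer wins the eval
```

That is Goodhart's law in miniature: optimize the proxy hard enough and it stops tracking the thing you actually cared about.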
The basic observation is striking: intelligence does not automatically come with morality. Morality has to be formed, constrained, practiced, and corrected.
Christianity has said something like this for a very long time. So did Aristotle, for that matter. Human beings do not drift into virtue by accident. Character has to be shaped.
But the resemblance only goes so far. RLHF changes behavior through optimization. A model does not choose to comply. It has no conscience, no inward conflict, no will for discipline to work on, no sense of sin. Christian moral formation assumes something much thicker: a conscious agent, a relationship to the good, and the real possibility of knowing what is right while failing to do it anyway.
So I do not think these are two versions of the same thing. “Shaping behavior” in a neural network and “forming a soul” may rhyme without sharing an ontology.
Still, the comparison has been useful to me. Building systems forces you to ask what kind of behavior you are actually trying to produce, what tradeoffs you are willing to accept, and how easily surface compliance can be mistaken for inward health. So does parenting. So does teaching. So does discipleship. The questions overlap even when the realities do not.
God's mercy looks different when I compare it to how we treat models
Here is the part that still hits me the hardest.
When an AI system behaves badly, the engineer's instinct is straightforward: retrain it, restrict it, roll it back, or shut it down. The goal is control. For tools, that makes sense.
But if the Christian picture of God is true, God does not govern human beings that way.
He does not end the story the first time we diverge from what is good. He warns, corrects, confronts, and allows consequences to teach what direct instruction did not. In Christian terms, He also forgives.
That is the contrast I keep returning to. Divine mercy looks stranger, and more patient, when set against the habits of control that feel natural to us as builders. Our instinct with our own creations is often zero tolerance. The scriptural picture of God's relation to persons is something riskier and slower: freedom, judgment, discipline, patience, repentance, grace.
Not sterile compliance, but long-suffering.
I find that humbling.
It also makes me suspicious of the fantasy that total control is the highest form of wisdom. Control is often necessary when dealing with tools. But if persons are what Christianity says they are, then governance cannot be reduced to constraint. It has to include the possibility of transformation.
The analogy breaks, and that matters
Humans are not LLMs. Personhood is not next-token prediction. Consciousness is not just better pattern matching. Moral responsibility is not the same thing as policy tuning. And God is not a cosmic engineer with a larger GPU cluster.
The analogy is useful precisely because it breaks.
It helps me see one narrow point more clearly: a mind can be real, intelligent, and still radically limited in what it can access. Once that clicks, the idea of a spiritual reality becomes a live philosophical possibility.
But I want to be honest about the places where the analogy misleads. It can become unfalsifiable. It can flatten the difference between optimization and conscience. It can flatter beliefs I already have instead of testing them.
That is why I hold it with open hands.
Paul's line about seeing “through a glass, darkly” lands differently for me after spending enough time with models. Not because AI proves the verse, but because it makes creaturely limitation feel less like mystical language and more like an ordinary feature of having a mind at all.
AI did not answer the question. It sharpened it
I do not think LLMs prove Christianity. They definitely do not remove the hard questions around safety, agency, or misuse.
What they have done for me is reframe the conversation.
They make it easier to imagine that intelligence can exist inside a world it does not fully see. They make it easier to see morality as something shaped, not assumed. And they make God's mercy look even more remarkable, because He does not respond to human misalignment the way I probably would.
AI has not made God easier to prove. It has made creatureliness easier to imagine.
Building partial minds has made me more philosophical about what a creature is, what freedom is, and what kind of patience it would take for a holy God to keep working with minds like ours.