Bruce Hart

Most of Life Is Pattern Matching Until It Isn't


The older I get, the less random the world looks.

Maybe I just have an analytical mind, but I keep running into the same feeling in completely different places: what first looks haphazard usually turns out to have structure. Not perfect structure. Not a clean formula. But enough pattern that, once you see it, the chaos becomes a lot more legible.

I notice this with stories. I notice it with people. I notice it with math. And lately I notice it every time I spend serious time working with LLMs.

That is part of why AI feels so interesting to me right now. It is not just that the models are useful. It is that they keep pressing on a question I already cared about: how much of the world is pattern, and how much of it only looks like pattern until you hit the edge?

Most things feel messy before they feel predictable

Good screenplays and novels do not come out of nowhere.

You can see this pretty quickly if you read about structures like the hero's journey or any of the other well-worn storytelling frameworks people love to argue about. There are beats that tend to work. There are rhythms audiences respond to. There are arcs that keep showing up because they are emotionally legible.

That does not make art fake. It makes art more interesting.

A weak writer treats structure like a paint-by-numbers kit. A strong writer uses structure the way a jazz musician uses a standard: as shared ground to build on, bend, and occasionally ignore. The pattern is not the point. The pattern is what lets the deviation matter.

I think people work the same way.

Human beings love to imagine that everyone around them is mysterious and unpredictable, but a lot of behavior starts making sense once you understand what someone wants, what they are afraid of, and what has worked for them before. Experiences leave grooves. Desires create incentives. Incentives create habits. From a distance it can look like randomness; up close it often looks like memory plus pressure.

Math has this feeling too, which is part of why it is so satisfying. Once you have solved enough problems, whole classes of them start to feel familiar. You recognize the shape before you know the answer. You know when to substitute, when to bound, when to reframe, when to stop pushing down one path and try a different one.

Pattern recognition is not a side effect of expertise. It is a big part of what expertise is.

The edge cases are where taste and originality show up

Still, I do not think the lesson here is that everything important can be reduced to a formula.

The opposite, really.

The moment a pattern becomes clear, the interesting question changes. It is no longer "what usually works?" It becomes "what do you do when the usual thing is not enough?"

That is where taste shows up.

A great novelist is not great because they discovered that stories need structure. They are great because they know which structural rules to respect, which ones to hide, and which ones to violate. A great manager is not great because they know that people respond to incentives. They are great because they can spot the tiny signals that tell them this particular person is drifting away from the usual pattern. A great mathematician is not great because they memorize the standard moves. They are great because they sense when the standard moves are exhausted and something stranger is required.

This is the part I keep coming back to.

Patterns get you competence. Judgment at the edge of the pattern is what gets you distinction.

That is also why mastery often looks so simple from the outside. Experts are not just carrying more facts around in their heads. They are compressing large parts of a domain into reusable shapes, then noticing the places where the shape no longer fits.

Not pattern worship, but pattern fluency.

LLMs are weird because they are so good at the first part

This is why LLMs feel both impressive and incomplete.

They are astonishing pattern machines.

Give them enough examples and they become very good at absorbing the statistical regularities of language, code, argument structure, musical phrasing, and a surprising amount of everyday reasoning. They are often good at seeing the groove of a domain and then articulating it clearly. Sometimes they do that better than people who technically know the material but cannot explain the pattern underneath it.
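The idea of "absorbing statistical regularities" can feel abstract, so here is a deliberately tiny sketch of pattern completion: a bigram model that predicts the next word purely from counts. This is a toy illustration of the general principle, not how modern LLMs actually work, and the corpus is made up for the example.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus with a deliberately strong pattern.
corpus = ("the hero leaves home the hero meets a mentor "
          "the hero returns home").split()

# Count which word follows which: the "statistical regularities".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

# The model has locked onto the groove: "the" is followed by "hero".
print(most_likely_next("the"))
```

Scale this idea up by many orders of magnitude, replace counting with learned representations, and you get something like the fluency described above. What the toy version makes obvious is that the model can only ever complete toward the center of the pattern it was shown.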

That matters. A lot of useful work is exactly that.

It is easy to dismiss pattern work until a machine starts doing it at scale. Then you realize how much of professional life depends on being able to summarize the obvious-to-an-expert structure of a problem in a way that someone else can act on. LLMs are very good at that kind of translation.

But working with them also sharpens the real question.

Where do they break down, and why?

When a model fails, is it because the pattern exists but the model has not seen enough examples to lock onto it? Or is it because some parts of the task are not just hidden regularities waiting to be discovered, but genuinely depend on things like lived context, taste, embodiment, intention, or a leap that cannot be reconstructed from prior examples alone?

I do not think we know yet, and I do not trust anyone who sounds too certain.

The real debate is not intelligence, but where the pattern stops

This is where music and writing get interesting.

AI-generated music is better than it used to be. AI-generated writing is better than it used to be. Sometimes much better. The floor has risen fast. If you want something competent, passable, or even occasionally striking in a familiar style, the tools are already here.

And yet, at least today, I can still usually feel the difference between generated output and the work of someone exceptional.

Not always. Not in every sentence. Not in every melody.

But over the course of a whole piece, I still notice it.

The great human version usually has a kind of pressure in it. It feels like someone made a sequence of choices for reasons, not just because those choices fit. There is often a sharper sense of what is being withheld, what is being risked, what is being smuggled in under the surface. The work is not just coherent. It feels committed.

Maybe that gap shrinks a lot. I would bet on that.

Models will get better. Training will get broader. Inference will get cheaper. Tooling will improve. Feedback loops will get tighter. Some things that currently look like real creative boundaries may turn out to be data problems or evaluation problems or product problems.

But even if that happens, I doubt the story ends with machines simply catching up to a fixed human target.

More likely, the target moves.

If models become excellent at reproducing established patterns, then human originality becomes even more concentrated in the decision to break from those patterns in a way that still works. The machine learns the center of gravity. The human keeps searching for the next place gravity can be cheated.

It means the frontier keeps relocating.

This is why the next few years will be fun to watch

What excites me is not some neat answer to whether AI can or cannot be creative.

I think that question is too blunt.

The better question is: which kinds of creativity are mostly pattern completion, and which kinds depend on the ability to notice when pattern completion is the wrong move? Which parts of taste can be learned from enough examples, and which parts only emerge from having skin in the game? Which parts of originality are deep structure, and which parts are acts of defiance against structure?

Those are much better questions.

They are also more honest questions, because they leave room for surprise in both directions. Some things we currently defend as uniquely human may turn out to be more compressible than we thought. Some things we assume will fall to scale and compute may turn out to depend on forms of grounding that are harder to fake.

Either way, we get to find out.

That is the part I like most.

I do think life is full of patterns. More than most people notice. Probably more than I notice. But I also think the best work, the memorable work, and maybe the most human work tend to happen right where the pattern starts to fray.

That is not a reason to fear LLMs. It is a reason to pay close attention to them.

They are teaching us something about the world by showing us how much of it can be learned as pattern, and by making the remaining question impossible to ignore.

Where, exactly, does that stop?