
Bruce Hart


Seedance 2.0 and the Future of Infinite TV


Seedance 2.0 feels like a glimpse of how entertainment is going to change.

I have this funny memory from 20-plus years ago: I wrote a post on my old blog arguing that computers would eventually model voice and imagery well enough that we could bring old TV shows back to life.

At the time, I had no idea if that would happen in my lifetime, or even what the path would look like. It was pure extrapolation: computers keep getting better at processing images and sound, and eventually the trend line crosses the stuff we can already see in our heads.

I was also a big Seinfeld fan in high school, and I was genuinely sad when it ended. I remember wishing there were a way for creative people to write new episodes and bring them to life without needing the original actors, a studio schedule, and the full machine of 90s TV.

Watching the Seedance 2.0 demos that have been floating around this week, that old blog post stopped feeling like sci-fi and started feeling like a product roadmap.

The cost curve is the real special effect

The models themselves are impressive, sure.

But the bigger story is the cost curve.

When a capability is expensive, it stays centralized. Big studios. Big budgets. Big gatekeepers.

When a capability becomes cheap, it turns into a tool. Then a workflow. Then a genre.

Video generation right now still has that "this is amazing but I cannot afford to do this all day" vibe for an individual creator. That is exactly the kind of thing that tends to change faster than people expect.

If you buy the premise that costs drop and quality climbs, then the question is not "will people make more videos?" The question is "what do you build when you can make video the way you make software?"

Entertainment is turning into software

One mental model I keep coming back to: entertainment becomes a compiler target.

You have an idea, a script, a vibe, a set of characters, and a style guide. Then you compile that into video.

That sounds abstract until you remember what software did to every other medium:

  • Editing became non-destructive.
  • Iteration got fast.
  • Distribution got free.
  • Collaboration went from "room" to "network".

Generated video pushes those same dynamics into the part of the stack that used to be hardest to scale.
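To make the "compiler target" metaphor concrete, here is a toy sketch in Python. Everything in it is hypothetical (the class names, the prompt format, the idea of per-scene "shot jobs") and stands in for whatever a real video model's API would accept; the point is only that a show spec can be mechanically lowered into render jobs, the way source code is lowered into instructions.

```python
from dataclasses import dataclass

# Hypothetical sketch: "compiling" a show spec into per-scene render jobs.
# None of these names correspond to a real Seedance (or any other) API.

@dataclass
class Scene:
    description: str
    characters: list
    seconds: int

@dataclass
class ShowSpec:
    title: str
    style_guide: str
    scenes: list

def compile_to_shots(spec: ShowSpec) -> list:
    """Lower a show spec into a list of render jobs a video model could run."""
    jobs = []
    for i, scene in enumerate(spec.scenes):
        jobs.append({
            "shot": i,
            "prompt": f"{spec.style_guide}. {scene.description} "
                      f"Featuring: {', '.join(scene.characters)}.",
            "duration_s": scene.seconds,
        })
    return jobs

spec = ShowSpec(
    title="Diner Talk",
    style_guide="90s sitcom, multi-camera, warm lighting",
    scenes=[
        Scene("Two friends argue about nothing.", ["A", "B"], 30),
        Scene("A third friend bursts in with news.", ["A", "B", "C"], 20),
    ],
)
jobs = compile_to_shots(spec)
```

The useful property is the same one compilers give you: edit the spec, recompile, and every downstream shot updates. That is what non-destructive iteration looks like for video.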

The obvious first wave is novelty. Then parody. Then "look what this model can do" clips.

The second wave is workflows. The moment people have reliable pipelines, shared assets, and predictable quality, you get sustained output. You get series.

And once you can make a series cheaply, you can make a lot of them.

Licensing and consent become the real product layer

There is an awkward reality here.

A lot of the open models people are excited about are not especially respectful of copyright, consent, or provenance. That is part of why the demos feel a little like watching the future through a cracked window.

Still, I can easily imagine a near future where the most valuable models are commercial, licensed, and boring in the best way.

The real unlock is not "anyone can imitate anyone".

The unlock is "actors can license their likeness and voice, with clear terms".

That changes the incentive landscape:

  • Actors get paid when their digital self gets used.
  • Studios can keep franchises coherent without impossible scheduling.
  • Creators can build with real, legal building blocks instead of vague imitation.

I would not be surprised if we end up with something like app stores for people. Approved bundles: actor A, style pack B, set kit C.

It is not hard to imagine big IP owners pushing this too. If the alternative is uncontrolled imitation, a well-run licensing system starts to look like the compromise everyone can live with.
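What would "clear terms" actually look like in software? Here is a minimal sketch, assuming a license is just a data structure with permitted uses, a usage cap, and a royalty rate. Every field name is invented for illustration; no such standard exists yet.

```python
from dataclasses import dataclass

# Hypothetical likeness license as a data structure -- the "app store for
# people" idea. Field names are invented, not any real licensing standard.

@dataclass(frozen=True)
class LikenessLicense:
    actor: str
    permitted_uses: frozenset        # e.g. {"series", "parody"}
    max_minutes_per_month: int
    royalty_per_minute_usd: float

def authorize(lic: LikenessLicense, use: str, minutes: int):
    """Return (allowed, royalty_owed) for a proposed generation job."""
    if use not in lic.permitted_uses:
        return False, 0.0
    if minutes > lic.max_minutes_per_month:
        return False, 0.0
    return True, minutes * lic.royalty_per_minute_usd

lic = LikenessLicense(
    actor="Actor A",
    permitted_uses=frozenset({"series", "parody"}),
    max_minutes_per_month=120,
    royalty_per_minute_usd=2.50,
)
ok, owed = authorize(lic, "series", 30)   # allowed, royalty accrues
denied, _ = authorize(lic, "ads", 5)      # "ads" was never licensed
```

The interesting design choice is that consent becomes a check a pipeline runs before generating a single frame, rather than a lawsuit that happens after.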

The weird (and exciting) endgame is personal TV

Here is the part that keeps me up at night, in a good way.

Once generation is cheap and licensing is normalized, you can create entertainment that is personalized at the channel level.

Imagine a YouTube experience that is not just recommending videos you might like. Imagine a channel that is continuously generated to fit your exact interests:

  • Your favorite comedy rhythm.
  • Your favorite kinds of stories.
  • Your preferred episode length.
  • Your tolerance for plot vs. vibes.

The content stops being a catalog you browse and becomes a stream that adapts.
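A personal channel, in this framing, is just a preferences spec feeding a generator. The sketch below is hypothetical end to end (the keys, the scheduling logic, the idea of a "brief"); it only shows the shape of the thing: preferences plus watch history in, a generation brief for the next episode out.

```python
# Hypothetical "channel spec" -- what a continuously generated personal
# channel might take as input. All keys are invented for illustration.

channel = {
    "comedy_rhythm": "fast, dialogue-heavy",
    "story_kinds": ["workplace", "absurdist"],
    "episode_minutes": 12,
    "plot_vs_vibes": 0.3,   # 0 = pure vibes, 1 = tight plot
}

def next_episode_brief(prefs: dict, history: list) -> dict:
    """Turn channel preferences plus watch history into a generation brief.

    Here the scheduler just rotates through story kinds; a real system
    would presumably learn from what you finished versus abandoned.
    """
    kind = prefs["story_kinds"][len(history) % len(prefs["story_kinds"])]
    return {
        "kind": kind,
        "minutes": prefs["episode_minutes"],
        "tone": prefs["comedy_rhythm"],
        "plot_weight": prefs["plot_vs_vibes"],
    }

brief = next_episode_brief(channel, history=["ep1"])
```

Notice there is no catalog anywhere in this loop: the next episode does not exist until the spec asks for it.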

That is also where the hard questions show up.

Personalization can be delightful, but it can also become isolating. It can turn into a comfort loop. It can make shared culture rarer.

There is a world where we all watch completely different "shows" and have less and less common ground.

So the job is not just to make the tech work. The job is to decide what kind of media ecosystem we actually want.

What I think Seedance 2.0 is really showing

Seedance 2.0 is not the finish line.

It is a signal that the "make video" button is getting closer to feeling normal.

And once it feels normal, all the interesting questions shift from model benchmarks to human stuff:

Who owns what?

Who gets paid?

Who consents?

How do we label what is generated?

What do we protect, and what do we open up?

I do not have perfect answers. I just know we are going to get them the same way we get every other answer on the internet: by shipping, arguing, iterating, and slowly backing into norms.

If you have takes on how licensing should work here, or you are building tools in this space, I would love to hear from you.

Some Seedance 2.0 examples