I Turned the 2026 NFL Schedule Into JSON in Minutes
Bruce Hart
I redid an old two-hour chore in 2 minutes and 34 seconds.
I do not have a real need for a 2026 NFL schedule JSON file right now.
That is almost the point.
On May 14, 2026, the NFL released the full regular-season schedule at 8 p.m. ET. A little after that, I tried a small experiment with the new Chrome plugin for Codex: could it do the annoying data extraction job I used to do by hand every football season?
It could.
The result is a public GitHub Gist with the 2026 NFL regular-season schedule as JSON. It has 272 games, week structure, kickoff times, teams, venues, broadcast info, and source URLs. The file says it was generated at 2026-05-15T00:51:38Z, which was still schedule-release night here on the East Coast.
The whole thing took 2 minutes and 34 seconds.
That number is funny to me because I know exactly how long this used to take.
The old job was not hard, but it was irritating
Back in the early 2000s, I ran a homemade football picks website for my family.
It was a very specific artifact of that era: Microsoft Access database, classic ASP, VBScript, and a lot of code that probably made perfect sense to me at the time.
Every season, I had to load the new NFL schedule.
That meant finding a schedule page, studying the HTML, writing a parser, handling edge cases, cleaning team names, dealing with time formatting, catching weird week structures, and making sure the generated rows actually matched what my Access database expected.
It was not a grand engineering challenge. It was chores all the way down.
And it still took a couple of hours, because the task was mostly reconnaissance. Where is the data? Is the HTML table consistent? Did the site use special formatting for Monday night? Did the schedule page change this year? Did I accidentally skip a game? Did I get home and away reversed?
The parsing code was the easy part. Trusting the result was the slow part.
The new workflow starts closer to the answer
This time, I did not start by writing a scraper.
I started with the browser.
With Codex connected through Chrome, the agent can inspect the same live work surface I am using. It can look at a page, notice useful network calls, try small bits of code, compare output against what is visible, and shape the data into a file that a program can use.
In this case, the generated JSON references ESPN NFL scoreboard API endpoints, one per regular-season week. That is exactly the kind of source I would have hoped to eventually find if I were doing the job manually.
The difference is that Codex got there before I had time to settle into the old routine.
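To make that concrete, here is a minimal sketch of flattening one week's scoreboard response into game rows. The ESPN endpoint is unofficial and undocumented, so the payload shape below (`events`, `competitions`, `competitors`, `homeAway`) is my assumption about that API, and the sample response is a made-up single game, not real schedule data:

```python
# Assumed shape of an ESPN-scoreboard-style response; this is a
# guess at the unofficial API's structure, not a verified contract.
sample_response = {
    "events": [
        {
            "date": "2026-09-10T00:20Z",
            "competitions": [
                {
                    "competitors": [
                        {"homeAway": "home", "team": {"abbreviation": "AAA"}},
                        {"homeAway": "away", "team": {"abbreviation": "BBB"}},
                    ]
                }
            ],
        }
    ]
}

def to_rows(response, week):
    """Flatten one week's scoreboard response into simple game dicts."""
    rows = []
    for event in response["events"]:
        comp = event["competitions"][0]
        # Map "home"/"away" to team abbreviations.
        teams = {c["homeAway"]: c["team"]["abbreviation"]
                 for c in comp["competitors"]}
        rows.append({
            "week": week,
            "kickoff_utc": event["date"],
            "home": teams["home"],
            "away": teams["away"],
        })
    return rows
```

Run that over all eighteen weekly responses and you have the raw material for the final file; the fiddly part, as ever, is trusting that the mapping is right.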
The output is not just a flat list of games either. It includes a season object, calendar metadata, week groups, team details, kickoff times in UTC and Eastern time, broadcast names, venues, status fields, and game links.
That is the difference between "I scraped a page" and "I now have a useful data artifact."
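For illustration, one game record in an artifact like that might look like the dict below. The field names and the matchup are hypothetical, my sketch rather than the gist's actual schema:

```python
# One game record, illustrating a plausible shape for the gist's
# JSON. Field names and this matchup are hypothetical, not the
# real schema or the real 2026 schedule.
game = {
    "week": 1,
    "kickoff_utc": "2026-09-13T17:00:00Z",
    "kickoff_eastern": "2026-09-13T13:00:00-04:00",
    "home": {"abbreviation": "BUF", "name": "Buffalo Bills"},
    "away": {"abbreviation": "MIA", "name": "Miami Dolphins"},
    "venue": "Highmark Stadium",
    "broadcast": "CBS",
    "status": "scheduled",
}

# With 272 such records grouped under week metadata, filtering
# becomes a one-liner:
week_one = [g for g in [game] if g["week"] == 1]
```

Carrying kickoff times in both UTC and Eastern means every downstream script is spared redoing the time-zone math.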
The real win is collapsed setup time
A lot of agent demos focus on replacing the moment where a developer types code.
That is useful, but it misses some of the texture of real work.
For jobs like this, the expensive part is not always the final script. It is building enough context to write the right script.
You have to inspect the page. You have to find the data source. You have to decide whether to parse HTML, call an endpoint, copy from a table, or use a hidden JSON blob. Then you need to validate the output against something you trust.
A browser-connected coding agent compresses that loop.
It does not remove judgment. I still want to know where the data came from. I still want a game count. I still want to spot-check kickoff times. I still would not treat a random gist as an official NFL record.
But for a practical scripting reference, the speed changes the feel of the task.
It turns "I should block off part of the evening" into "let's see if this works while I take out the trash."
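Those trust checks are themselves cheap to script. Here is a minimal sketch, assuming flat game dicts with `home`, `away`, and `kickoff_utc` keys (my naming, not necessarily the gist's). The arithmetic behind the defaults: 32 teams times 17 games each, divided by two teams per game, is 272.

```python
from collections import Counter
from datetime import datetime

def sanity_check(games, expected_total=272, games_per_team=17):
    """Cheap trust checks: right total game count, right per-team
    count, parseable kickoff timestamps. `games` is a list of dicts
    with 'home', 'away', and 'kickoff_utc' keys (assumed names)."""
    assert len(games) == expected_total, (
        f"expected {expected_total} games, got {len(games)}")
    appearances = Counter()
    for g in games:
        appearances[g["home"]] += 1
        appearances[g["away"]] += 1
        # The 'Z' suffix trips fromisoformat before Python 3.11,
        # so normalize it to an explicit UTC offset first.
        datetime.fromisoformat(g["kickoff_utc"].replace("Z", "+00:00"))
    bad = {t: n for t, n in appearances.items() if n != games_per_team}
    assert not bad, f"teams with wrong game counts: {bad}"
```

A check like this does not make the gist an official record, but it catches exactly the mistakes the old manual workflow was slow to rule out: a skipped game, a duplicated team, a malformed time.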
One-off data work is becoming cheap enough to be casual
This is the part that feels new.
In the old world, I would only automate this if I had a reason. A family picks site was enough reason. A database import was enough reason. A yearly repeatable task was enough reason.
But "I am curious if I can make this useful JSON file" probably was not enough reason.
Now it is.
That matters because a lot of useful software work lives below the threshold where we traditionally build proper tools. Small imports. One-time migrations. Personal dashboards. Sports schedules. Vendor exports. Weird CSV cleanup. Data from a page that nobody bothered to make downloadable.
None of those tasks deserve a full product.
Many of them do deserve a clean artifact.
Codex with Chrome seems especially strong in that middle zone: not permanent infrastructure, but not manual copy-paste either.
Access is probably still waiting somewhere
I put the JSON up as a gist in case anyone else wants to reference the schedule from a script.
I am not planning to resurrect the old picks site. I do not even know where that Access database ended up. Somewhere there may still be a .mdb file with a table design that remembers my early-2000s opinions about primary keys.
But if someone does still have Microsoft Access installed, I am pretty sure Codex could load the schedule into it too.
That sentence would have sounded ridiculous to me twenty years ago.
Now it sounds like a five-minute side quest.
Small file, useful signal
The JSON file itself is not the important thing.
The important thing is what it says about the shape of these tools.
A task that used to require custom parsing, browser inspection, database-loading scripts, and a careful little verification process became a supervised extraction job. The human supplied the goal and the judgment. The agent handled the tedious pathfinding.
That is a good division of labor.
Not because it is flashy. Because it meets the work where it actually is: messy, small, specific, and not quite worth doing the old way.
If you need the schedule, the gist is here: 2026 NFL regular-season schedule JSON.
And if you are still running a classic ASP football picks site, first of all, respect.
Second, I think your import script just got a lot easier.