The ‘A.I Domini’ calendar system isn’t real. But maybe it should be.
The joke went something like this: if we count years from significant historical turning points – Christ’s birth for Anno Domini, the founding of Rome for ab urbe condita – then November 30, 2022 deserves the same treatment. ChatGPT dropped, one million people signed up in five days, and suddenly everyone had an AI coworker they didn’t ask for. Within two months, ChatGPT hit 100 million users – at the time, the fastest adoption of any internet app in history.
So the gag writes itself: 2022 is Year Zero. Everything after is A.I Domini. Years of the Machine.
Funny? Sure. But here’s what no one’s talking about: this joke is also a mental framework you can actually use. Not for giggles. For work. Because pre-ChatGPT and post-ChatGPT aren’t just different eras in meme history – they’re different in how you prompt AI, how you version projects, and how you evaluate what a model can or can’t do.
This guide shows you three ways to apply the A.I Domini framing that aren’t about posting screenshots. They’re about getting better outputs, avoiding version confusion, and staying honest about what AI actually knows.
What ‘A.I Domini’ Really Means (and Why It Caught On)
Anno Domini – Latin for “in the year of the Lord” – has been marking time since the 6th-century monk Dionysius Exiguus decided Christ’s birth was a better anchor than Roman emperors. He devised the system in 525 CE, though it took centuries to stick.
The A.I Domini joke does the same thing: it picks a moment – ChatGPT’s November 30, 2022 launch – and says “this is when everything changed.” It’s cheeky. It’s also… not entirely wrong.
Before that date, AI tools existed. GPT-3 came out in 2020. DALL-E in 2021. GitHub Copilot started autocompleting code the same year. But ChatGPT was the one that escaped the lab. Suddenly your aunt was asking it for recipes and your manager was pasting marketing copy into it. It democratized access in a way previous models never did.
Did ChatGPT invent AI? No. Did it change when regular people started using AI daily? Absolutely – 800 million weekly users by late 2025 say as much.
Method A: Referencing A.I Domini in Prompts (Wins for Knowledge Cutoff Clarity)
Most people don’t think about when an AI learned what it knows. That’s a mistake.
Every model has a knowledge cutoff – a date after which it hasn’t seen new information. If you’re asking GPT-4 about “recent political events” without specifying what “recent” means, you’re rolling dice. The model might think “recent” is six months ago. Or two years ago. You don’t know.
Here’s where the A.I Domini framing helps: instead of vague time references, anchor your prompts to the era you care about.
For example:
- Weak: “What are the best practices for using ChatGPT?”
- Better: “What were the best practices for using ChatGPT in Year 1 A.I Domini (2022) versus now in Year 4 (2025)?”
The second version forces the model to differentiate between early adoption advice (when plugins didn’t exist, when GPT-4 wasn’t out) and current strategies (custom GPTs, vision, voice mode). You get historical and current context, not a mush of the two.
Another use: troubleshooting old tutorials. Someone wrote a guide in “Year 1” that no longer works. If you prompt “Why doesn’t this 2022 ChatGPT method work anymore?”, the model has to reconcile timeline changes. You’re explicitly asking it to think in versions.
Does the AI understand “A.I Domini” natively? No. You’re using it as a prompt anchor – a way to structure your question so the model separates pre-release, early-era, and current-era information. It’s semantic scaffolding.
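The scaffolding can be reduced to a template. A minimal sketch, assuming the era labels and date ranges from this article’s convention – the model knows nothing about “A.I Domini,” so the template spells out the calendar years it stands for:

```python
# Era labels are this article's informal convention; the parenthetical
# calendar years are what actually anchor the model's answer.
ERAS = {
    "Y1": "Year 1 A.I Domini (2022, GPT-3.5, text-only)",
    "Y2": "Year 2 (2023, GPT-4, plugins, custom instructions)",
    "Y3": "Year 3 (2024, GPT-4o, memory, custom GPTs)",
}

def era_anchored_prompt(question: str, then: str, now: str) -> str:
    """Frame a question as an explicit comparison between two labeled eras."""
    return (
        f"Compare {ERAS[then]} with {ERAS[now]}. "
        f"Answer the following separately for each era, and flag any advice "
        f"from the earlier era that is now obsolete: {question}"
    )

prompt = era_anchored_prompt(
    "What are best practices for using ChatGPT?", "Y1", "Y3"
)
print(prompt)
```

Paste the resulting string into any chat model; the point is the structure, not the wording.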
Method B: Dating Project Files by AI Era (Loses to ISO 8601, but Wins for Human Context)
Standard date formats exist for a reason. ISO 8601 keeps file systems happy. Your operating system, your backup scripts, your CI/CD pipelines – they all expect 2025-03-21, not Year-3-AI-Domini.
So don’t replace real dates. Supplement them.
I’ve started tagging project documentation with both. A file might be named:
2022-12-15_prompt-library_Y1-AID.md
The first part is machine-readable. The second part – Y1-AID (Year 1, A.I Domini) – is a human context flag. When I revisit this file two years later, I instantly know: this was written in the ChatGPT early days, before GPT-4, before custom instructions, before the November 2023 Dev Day features dropped.
It tells me to read it with skepticism. Techniques that worked in Y1 might be obsolete. Workarounds that were necessary might now be built-in. The A.I Domini tag is a smell test – it warns me the content may be stale.
You can do the same with:
- Internal wikis (“This workflow was designed in Y2, pre-GPT-5”)
- Code comments (“Prompt written Y1-AID, may need refactor”)
- Client deliverables (“Report generated using Y3 models”)
The goal isn’t to be cute. It’s to timestamp by capability era, not just calendar date. Because “June 2023” doesn’t tell you if GPT-4 was available yet. “Y1 A.I Domini” does.
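The naming scheme is simple enough to script. A sketch, assuming the calendar-year mapping used throughout this article (Y1 = 2022) – the helper names are mine, not a standard:

```python
import re
from datetime import date

# Assumed epoch: Year 1 A.I Domini = calendar year 2022, per the era list.
AID_EPOCH_YEAR = 2022

def aid_year(d: date) -> int:
    """Map a calendar date to its A.I Domini year (Y1 = 2022, Y2 = 2023...)."""
    return d.year - AID_EPOCH_YEAR + 1

def tagged_filename(d: date, slug: str, ext: str = "md") -> str:
    """ISO date first (machine-readable), era tag as a human-context suffix."""
    return f"{d.isoformat()}_{slug}_Y{aid_year(d)}-AID.{ext}"

def parse_tag(name: str):
    """Recover both the ISO date and the era year from a tagged filename."""
    m = re.match(r"(\d{4}-\d{2}-\d{2})_(.+)_Y(\d+)-AID\.", name)
    return (date.fromisoformat(m.group(1)), int(m.group(3))) if m else None

print(tagged_filename(date(2023, 6, 15), "prompt-library"))
# 2023-06-15_prompt-library_Y2-AID.md
```

Because the ISO date stays first, plain lexicographic sorting still orders files chronologically; the era suffix is ignorable noise to any tool that only reads the prefix.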
Pro tip: If you’re documenting AI-generated content for clients or compliance, adding an era tag alongside the model version (e.g., “GPT-4, Y2-AID”) gives auditors context. They can trace not just what model you used, but when in the tool’s maturity you used it – which matters for understanding quality and limitations.
When This Breaks
Two warnings.
First: automated systems won’t parse this. If your build script expects ISO dates in filenames, Y1-AID will confuse it. Keep the human-readable tag after the machine-readable date, as a suffix, not a replacement.
Second: not everyone will get the joke. If you’re collaborating with people outside the AI-fluent bubble, they’ll see “Y3-AID” and ask what it means. You’ll spend time explaining instead of working. Use it in personal projects or teams that already speak the language. Don’t force it into client-facing work unless they’re in on it.
Method C: Evaluating AI Capabilities by Era (the One That Actually Matters for Accuracy)
This is the use case that justifies the whole framework.
When someone says “ChatGPT can’t do X,” the first question should be: which ChatGPT?
Year 1 ChatGPT (late 2022, GPT-3.5) couldn’t browse the web. Couldn’t see images. Couldn’t remember conversations across sessions. It hallucinated constantly and had no plugins.
Year 3 ChatGPT (2024, GPT-4 with updates) could do all of those things. Plus custom instructions, voice mode, DALL-E integration, and code interpreter.
If you’re reading a tutorial from Y1 that says “ChatGPT can’t access real-time data,” that was true then. It’s false now. But people share old advice without timestamps, and suddenly you’re operating on outdated assumptions.
The A.I Domini framing gives you a quick heuristic:
- Y1 (2022): GPT-3.5, text-only, no plugins, knowledge cutoff ~September 2021, heavy hallucination
- Y2 (2023): GPT-4 release (March), plugins (April–May), browsing mode (beta), custom instructions (July), DALL-E 3 integration (Oct), GPT-4 Turbo (Nov)
- Y3 (2024): GPT-4o (omni-model with vision/voice), memory features, custom GPTs, GPT Store, better reasoning, lower cost
- Y4 (2025): still unfolding as of this writing – track new releases yourself
When you see advice, check the era. “This prompt doesn’t work anymore” – was it written in Y1? Then yeah, probably broken. “ChatGPT is terrible at math” – true in Y1, less true in Y3 with code interpreter.
You’re not memorizing release notes. You’re binning capabilities by time so you can filter outdated info fast.
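The binning can be made mechanical. A hedged sketch – the capability sets below are rough and worth checking against OpenAI’s actual release notes, and the era mapping is this article’s convention:

```python
# Rough capability bins per era; approximate, not authoritative.
CAPABILITIES = {
    1: {"text"},                                   # 2022: bare-bones GPT-3.5
    2: {"text", "plugins", "browsing", "vision"},  # 2023: GPT-4 era
    3: {"text", "plugins", "browsing", "vision",
        "voice", "memory", "custom_gpts"},         # 2024: GPT-4o era
}

def advice_is_stale(written_in_era: int, capability: str,
                    current_era: int = 3) -> bool:
    """True if advice written in one era assumes a capability is missing
    that a later era actually has."""
    had_then = capability in CAPABILITIES.get(written_in_era, set())
    has_now = capability in CAPABILITIES.get(current_era, set())
    return (not had_then) and has_now

# "ChatGPT can't browse" from a Y1 tutorial: true then, stale now.
print(advice_is_stale(1, "browsing"))  # True
```

A tutorial’s “can’t do X” claims only need the era they were written in to be checkable.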
The Gotcha No One Mentions
Here’s the edge case that ruins the whole system if you’re not careful:
ChatGPT’s release date (Nov 30, 2022) does NOT align with its training data cutoff.
Early GPT-3.5 models were trained on data through roughly September 2021 – more than a year before ChatGPT launched. So “Year 1 A.I Domini” (2022) is chronologically after the training data ends. If you prompt about “late 2022 events” and expect the model to know them because they happened in “Year 1,” you’ll be disappointed. The model’s knowledge precedes its public release.
Later updates shifted cutoffs forward (GPT-4’s cutoff varies by version – some go to April 2023, others to December 2023, and by 2025 many models have more recent data). But the release year ≠ knowledge year mismatch persists.
Translation: Don’t assume “Y1 A.I Domini” means the model knows everything from 2022. It doesn’t. Check the actual cutoff for the version you’re using. The era label is a rough guide, not a spec sheet.
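The release-year vs. knowledge-year gap is easy to encode as a sanity check. A sketch – the example cutoff dates are illustrative approximations, so verify them against the model card for the version you actually use:

```python
from datetime import date

# Illustrative cutoffs only -- check the official model card, not this table.
KNOWN_CUTOFFS = {
    "gpt-3.5-turbo (early)": date(2021, 9, 1),  # roughly Sep 2021
    "gpt-4 (original)": date(2023, 4, 1),       # roughly Apr 2023
}

def model_should_know(model: str, event_date: date) -> bool:
    """Era label aside, a model only 'knows' events before its cutoff."""
    return event_date < KNOWN_CUTOFFS[model]

# ChatGPT's own launch (Nov 30, 2022) sits inside "Year 1" --
# but after the early model's training cutoff.
print(model_should_know("gpt-3.5-turbo (early)", date(2022, 11, 30)))  # False
```

The era label tells you when the tool existed; the cutoff tells you what the tool can know. Keep both.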
Why This Framework Works (Even Though It’s Ridiculous)
Absolute dates are hard to remember. “Was browsing mode available in June 2023 or August?” Who knows.
But “Was browsing available in Y1?” Easy. No. Y1 was bare-bones ChatGPT. Browsing came in Y2.
The A.I Domini system chunks a fast-moving timeline into discrete eras you can reason about. It’s like saying “the iPhone 3G era” instead of “June 2008” – you immediately know what features existed and what didn’t.
It also forces a useful habit: thinking in versions, not absolutes. “ChatGPT can’t do X” is lazy. “Y1 ChatGPT couldn’t do X, but Y3 can” is precise. It makes you a better prompter, a better debugger, and a better evaluator of what advice to trust.
Is it silly? Yes. Does it work? Also yes.
Frequently Asked Questions
If I use ‘A.I Domini’ in my prompts, will the AI understand it?
Not natively, but that doesn’t matter. You’re using it as a framing device – a way to structure your question so the model separates time periods. If you say “Compare best practices from Year 1 A.I Domini (2022) to Year 3 (2024),” the model will interpret the years you specify and differentiate the eras. You’re giving it a timeline to work with. The label itself is arbitrary; the temporal structure is what counts.
Why not just use the model version (GPT-3.5, GPT-4, etc.) instead of years?
Because model versions don’t tell you when features launched. GPT-4 came out in March 2023, but plugins didn’t arrive until May, and GPT-4 Turbo wasn’t until November. If you just say “GPT-4,” you don’t know which feature set you’re talking about. The A.I Domini year gives you a rough feature era – Y2 had early GPT-4 plus plugins, Y3 had mature GPT-4 with vision and custom GPTs. It’s a shorthand for “what was possible then,” not just “what model existed.” Also, casual users don’t track version numbers. They remember when they started using ChatGPT. Years are more intuitive than version strings for non-technical people.
Is there any official recognition of A.I Domini as a real calendar system?
No, and there won’t be. It’s a meme, not a standard. Anno Domini took centuries to become official after Dionysius Exiguus proposed it in 525 CE – and it had the backing of the Catholic Church. A.I Domini has Reddit threads and niche tutorials. Use it as a personal mental model and a team convention if your collaborators are on board, but don’t expect it to show up in ISO standards or Wikipedia’s calendar page. The value is in the framework, not the formality. It’s a thinking tool, not a petition to rewrite the Gregorian calendar.
What to Do Next
Pick one method and try it this week.
If you write a lot of prompts: add an era reference to your next ChatGPT query. “What were the limitations of Y1 models that Y3 models solved?” See if the output gets more specific.
If you manage projects: tag one documentation file with both ISO date and A.I Domini year. Revisit it in six months and see if the era tag helped you assess staleness faster than the calendar date alone.
If you evaluate AI tools: make a rough capability timeline. Y1 = basic text. Y2 = plugins + early multimodal. Y3 = mature multimodal + memory. When you read old advice, check which era it’s from. Ignore anything that’s been obsoleted by new features.
The joke is that we’re living in A.I Domini now. The useful part is treating it like we actually are – dating our work, structuring our questions, and versioning our expectations by the era that produced them.
Because if 2022 really was Year Zero, then we’re only in Year Four. And everything you think you know about AI might be Year One thinking.