As I warned at the outset, I’m not an AI accelerationist.
At least, not the kind that meets most definitions of AI accelerationism: pursuing the development of AI at all costs.
There are very real dark sides to genAI (and to AI in the form it takes today more generally) that we must contend with.
The Calendar of Necrubel
(or Five Psalms for the AI Apocalypse)
1. job loss
Massive job loss is incoming. I count myself among the soon-to-be casualties of the hollowed-out working class. On a daily basis, I use LLMs to write code in minutes that would ordinarily take me hours. Contrary to what you may have read, this code often works perfectly with a single prompt. I believe it’s only a matter of time before everything I do on the development end of things is automated by agentic workflows.
You might say, “Well then you must be a shitty developer!” and you’re probably right, but I would argue that your lack of foresight will also be your undoing. The technology is developing so rapidly that I cannot keep up with it, and neither can you.
The damage has already been done to many whose livelihoods genAI is capable of automating. We are obsessed with visual artists as if they are sacred cows, but there are whole swaths of working-class creatives whose economic value is headed toward the slaughterhouse as generative tools reduce the manpower required to do their work: freelance writers, proofreaders, translators, voice actors, sound designers, concept designers, and content marketers, to name a few. Creatives are just the first casualties. The entire professional managerial class (PMC)—executive assistants, data-entry clerks, paralegal support, bookkeepers, tutors, clinical and non-clinical support staff, market research analysts, human resources ops, customer support—is next.
White-collar work is front of mind because that's the narrative US media caters to. But what about the jobs that undergird our material economy? Line workers and assemblers? Skilled machinists? Warehouse workers? Fewer humans per shift in factories, especially in high-wage countries, means the remaining roles skew technical and supervisory. Once enough warehouses are automated near ports and major hubs, there’s less incentive to hire manual labor in those regions at all.
And even manual labor is not safe from automation. Field laborers, smallholder farmers, fishing crews… Autonomous tractors, harvesters, and sprayers driven by AI are not the stuff of science fiction when entire fleets can be managed remotely by a handful of people. It’s a matter of scale and training data. It’s not difficult to foresee entire rural economies hollowed out as capital-heavy, AI-guided megafarms scale upward.
Then replace even the smallest human cogs in the capitalist apparatus—like long-haul truckers and gig couriers—with autonomous systems, and every human driver’s bargaining power dwindles to zero. The transition doesn’t even need 100% autonomy: partial auto-dispatch and AI pricing have already squeezed them into submission. This is devastating for people whose entire skill stack is licensed road knowledge or physical endurance. Will privileged PMCs even notice the disappearance of cashiers and fast-food kitchen staff? It’s already happening incrementally without AI, but we can dream up successfully integrated LLM-based voice ordering systems that replace phone staff and drive-through attendants, or AI-driven kiosks that operate entirely self-sufficient mini-food-factories with robotic fryers, grills, and drink dispensers, guided by cameras and predictive ordering systems. In this future, a single kitchen wrangler manages machines instead of five to ten line cooks. Entry-level jobs are summarily decimated, and teenagers, migrants, and people re-entering the workforce are displaced.
And we haven’t even touched on healthcare, policing, or the military.
AI is coming for all our jobs. And I don’t know what we’re going to do about that.
Decelerationists propose putting a “pause” on AI development to prevent the creation of a “misaligned” superintelligence, or AGI (artificial general intelligence). The fantastic predictions of AI 2027 (a production of the AI Futures Project) are an exercise in futurist horror fiction. I strongly recommend reading and watching. Here is a breathtaking video summary by AI in Context:
At the end of the day, I’m just some guy.
I’m along for the ride, just like you. But what I do know, despite all this, is that sticking my head in the sand is worse than at least trying to learn how these technologies work, even if there is no way to be prepared for their arrival.
2. Dead Internet Theory
The Internet has been rotting for a long time.
I think there is some credence to the notion of “Dead Internet theory.” SEO marketers have been sloppifying the web with content farms for more than a decade. The analytical technology behind this enshittification is more complex and insidious than any of us can comprehend, and I say this as someone who regularly has to attach the disgusting tentacles of those technologies to the teats of websites I develop in my day job.
It is only going to get worse as these things become empowered by AI.
3. the end times for privacy
Hosted tools may log prompts and outputs, creating data trails creators don’t expect. Training or fine-tuning on internal documents can leak sensitive details if not handled correctly. For individuals, the loss of privacy becomes a foregone conclusion, as corporations receive unprecedented access to our inner lives. With AI, it doesn’t matter if you stay off social media or are careful with your digital hygiene. Do you use technology of any kind? Well, then the vectors for access to you are virtually unlimited, and genAI is creating new ones daily, including vectors we have yet to imagine. This is without even considering what is happening behind the scenes in our governments, whose surveillance capabilities are greatly expanded by this technology.
4. ai-powered cybercrime
Deepfakes, digital scams, identity and biometric theft, and cybercriminality of all flavors are a new frontier for anyone with the will and a lack of scruples. The sword of open-source genAI cuts both ways: it’s trivial to generate a deepfake or replicate a voice on a personal computer, even from a single image (or clip of audio) of a victim. We’re already living in a society where the erosion of trust in media and evidence has led to political polarization. Adding AI to the mix only exacerbates the problem.
5. automated cognition
What happens to education in a world where we have a Wikipedia that can talk to us and reason on our behalf (but also hallucinates falsehoods 5–30% of the time, depending on the subject matter and training)? I loved the movie WALL-E. Is that the sort of future we’re headed towards?
It’s difficult for me to separate alarmism from genuine concern on this point.
I don’t think the leap here is like abacus to calculator, or card catalog to Internet. It’s more explosive and more profound, because we’re not talking about better access to information; we’re talking about cheap, automated cognition.
- Abacus → calculator meant reduced cost of arithmetic. Humans still decided what to calculate, why, and what the results meant.
- Card catalog → Internet meant reduced cost of finding information and people. Humans still had to read, synthesize, and act: the Internet as communication and memory layer.
- But Internet → LLMs means reduced cost of the thinking operations over that information: summarizing, drafting, translating, pattern-spotting, basic reasoning. It’s qualitatively different from what we’re used to. Moving from the Internet to LLMs changes who does the thinking, and when we offload that sort of thinking to something other than ourselves, that may be where we start to lose a little bit of what (we think) makes us human.
This is why it’s so important for creatives to use genAI as a tool, expressly for the purpose of executing our creative visions more efficiently. But the question remains: how do we develop creative visions in the first place, if our “training” as human beings dovetails with that of the almighty LLM?
It’s All About Capital
“…as I’ve said many times, the future is already here—it’s just not very evenly distributed.”
—William Gibson, 1999 NPR Talk of the Nation “The Science in Science Fiction”
Even with all these apocalyptic AI outcomes on the horizon, I can’t side with the decelerationists.

While OSR+ is a fantasy TTRPG, I grew up reading science fiction, and I hope to one day produce a science fiction version of OSR+. My earliest memories of learning about outer space are of reading slim non-fiction volumes about comets and individual planets by Isaac Asimov (the Library of the Universe series). The idea that there are things out there so incomprehensibly vast and unknown blew my mind as a kid, substituting for what I imagine religious people feel when they think about God. And in a way, this wonder is similar to the sort of wonder we seek to experience when we play fantasy TTRPGs—a hope that there is a numinous quality to reality that the human spirit yearns to connect with.
So part of the reason why I can’t say “Pencils down, folks, AI is a net poison!” has to do with the naïve hope that the other side of the eschatological coin is our salvation: the idea that if genAI can give the means of production back to the people, then the road to something like a Marxist utopia may be possible.
I understand that’s a crazy thing to say, when all of this started as “an expanded position statement on the use of generative AI (genAI) in OSR+,” which is a tabletop roleplaying game.
But here we are.
Today, AI feels like science fiction, but its effects are now unfolding in real time.
- DeepMind’s AlphaFold cracked a 50-year-old protein structure challenge. The company’s GNoME model predicted 2.2 million new inorganic crystal structures, with ~380,000 estimated to be stable, equivalent to roughly 800 years of prior human materials discovery.
- LLMs are helping to solve 60-year-old math problems and increasingly translating human mathematical proofs into formal language.
- Reinforcement-learning controllers have been used on real tokamak devices to control plasma rotation and magnetic configurations, stabilizing conditions that humans struggle to tune in real time.
- Deep generative models are routinely designing novel molecules and proteins from scratch.
- AI is involved in designing drug candidates that are entering human trials.
I think if we do nothing, then we fall behind.
If we refuse to learn how AI works, we forfeit any say in whether it’s used ethically, and others will shape how it impacts society.
If democratic societies don’t develop AGI first, then authoritarian regimes will (and there's some irony in writing this, as the West descends into authoritarianism). Complete silence on our part means there will be no ethical discussions at all. We’ll simply be left out of the larger conversation, and left behind by those who don't care about ethics. AI is part of a geopolitical race, whether we like it or not.
Massive frontier models currently require data centers with vast compute, but that may not always be the case. The open source community is like a tiny lighthouse on a dark island, and we are lost at sea. Individuals are already adapting AI models to run on smartphones. What begins as inaccessible technology eventually becomes ubiquitous.
All of this has happened before, and it will all happen again.
To quote a popular refrain of Bernie Sanders (who is not a fan of AI): Maybe, just maybe… we have a chance, if we try.
I don’t consider myself a particularly intelligent person, or a particularly creative one.
I just want to make a game that inspires people. And I think genAI is a powerful tool I can use to do just that.
So let me try.