

FAQs

What follows is a concise Q&A regarding the use of AI in OSR+ products.

You can read our full AI Statement of Use to better understand our perspective.


Why We Use AI

Why does OSR+ use generative AI in its books and tools instead of hiring human artists and writers for everything?

In short: we believe genAI is a remarkable tool that allows us complete control over the creative process in the production of our game system. GenAI also enables us to produce creative material at a scale that would not be possible without tremendous budgets and time we don’t have. You can read more about how we use AI in OSR+ in our AI Statement of Use.

If AI is dangerous and destabilizing, why not simply refuse to use it in OSR+ products?

The premise of this question assumes that refusing to use AI meaningfully reduces its danger, which we don’t believe is true. We believe AI’s true destabilizing effects are driven by capital and state power. Refusing to engage with the technology only limits our own agency while changing nothing about the trajectory of its development. In our full AI Statement of Use, we discuss the dark side of AI and why we think it’s worth using the technology despite that dark side.

Other publishers (Wizards of the Coast, Paizo, Chaosium, some small presses) proudly advertise “no AI.” Why is OSR+ going the opposite way?

We don’t accept the premise that AI is something ethically suspect by default, nor do we believe its environmental or social harms are meaningfully reduced by symbolic abstention at the level of small creative studios.

For OSR+, genAI is simply a tool that gives us direct control over our production pipeline, from conception to execution. Larger publishers have the privilege of relying on decades of accumulated capital and staff, as well as access to physical distribution networks, to achieve high production values through traditional means, and many indie presses do remarkable work without AI as well. But we don’t view AI use as a binary ethical choice, and for a small team building a highly integrated system at scale, genAI enables creative possibilities that would otherwise be unattainable for us.

How central is AI to OSR+? If AI tools vanished tomorrow, could this game line realistically continue?

AI is central to how OSR+ currently realizes its visual and digital ambitions, but it’s not foundational to the existence of the game itself. If genAI tools vanished tomorrow, OSR+ could continue, but not at the scale, level of integration, or pace you see today. As a three-person team, we’d produce far less artwork, our digital tooling would slow down significantly, and we’d be forced to narrow our creative vision and crowdfund to replace that lost capacity with human labor.

Scope of Our Use

Are there any parts of OSR+ (books, PDFs, tools, website) that are guaranteed 100% AI-free? If not, why not?

If by “guaranteed 100% AI-free” you mean “genAI was not involved in any part of the process,” then no. We will likely offer a quick start rules PDF, without illustrations, to accompany the forthcoming print books, but even in future planned books where one or more artists may be responsible for the majority (or even all) of the art, genAI-assisted processes may go into the book’s layout and design, separate from the art itself.

Why don’t we produce 100% AI-free alternatives to our products? We don’t believe it’s unethical to use AI in the first place, so it doesn’t make sense for us to focus our efforts on producing material that’s 100% “AI-free.” We think of the technology as just another tool in our arsenal for producing books. When we can afford it, we would like to work with specific artists, for example on smaller campaign setting books (the Worlds of OSR+), and feature their work as the predominant visual medium in the text. That’s probably the closest we would come to a fully “AI-free” book.

Exactly which visual assets in OSR+ are AI-generated, and which are traditionally created or heavily hand-edited?

The visual assets used in OSR+ fall into a few categories according to how they are produced:

  • Regarding the website itself, all structural assets of the site’s design are either handcrafted by me or licensed from existing icon and graphic element sets, then compiled by hand through digital editing into the design you see when browsing. You can see a list of attributions for freely licensed assets here: https://osrplus.com/support/credits/. I do not list attributions for licensed assets I purchased, where no attribution is required.
  • World, character, and landscape artwork is all produced with genAI assistance: either wholly generated by local models (or, in rarer cases, generated in a commercial model and then re-styled by a local model), or generated and then edited by hand with digital tools post-generation. For example, each World of OSR+ has its art produced by a different set of LoRAs (low-rank adaptation fine-tunes) or custom checkpoints that we either made or sourced, in conjunction with tailored prompts and other post-generation enhancements (such as through local IPAdapters, ControlNets, and other generative tools, both local and commercial); see the sketch after this list.
  • Video assets we record of live footage may incorporate genAI in the same ways as above, or be produced entirely with genAI tools.
  • As of the date of this writing, we haven’t produced print materials, but our process for print would be the same as on the web. We’ll likely incorporate some traditionally produced art (expect about 10%), depending on the needs of the individual book and whatever budget we have to pay an artist for that sort of work. If we had the budget to hire an individual artist to do all the art for a specific book, we’d love to do that, and they would “own” the visual direction of it. But in that scenario, we wouldn’t be averse to incorporating AI assets to flesh out the book’s layout, or less important aspects of its visual design.
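To make the LoRA workflow above concrete, here is a minimal sketch of how style LoRAs can be stacked over a base checkpoint, written against the Hugging Face diffusers library. This is an illustration under stated assumptions, not our actual pipeline (much of our work runs through node-based tools like ComfyUI instead); the model ID, LoRA file names, prompts, and weights are hypothetical placeholders.

```python
# Minimal sketch: stacking style LoRAs over a local checkpoint (diffusers).
# All model paths, LoRA files, and weights below are hypothetical placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # or a custom local checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Each World gets its own set of style LoRAs, blended at different strengths.
pipe.load_lora_weights("loras/ink-wash.safetensors", adapter_name="ink_wash")
pipe.load_lora_weights("loras/world-palette.safetensors", adapter_name="palette")
pipe.set_adapters(["ink_wash", "palette"], adapter_weights=[0.7, 0.4])

image = pipe(
    prompt="a ruined watchtower at dusk, ink wash, muted palette",
    negative_prompt="photorealistic, text, watermark",
    num_inference_steps=30,
    guidance_scale=6.0,
).images[0]
image.save("watchtower.png")
```

The point of the sketch is the blend: the final style comes from the combination of checkpoint, LoRA set, and weights, not from any single prompt.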

How much of the art in a typical OSR+ book is generated by AI versus hand-drawn or painted?

As of the date of this writing, we haven’t produced print materials, but our process for producing artwork would be the same as what we’re doing for digital assets on the web. We’ll also likely incorporate some traditionally produced art, depending on the needs of the individual book and whatever budget we have to pay an artist for that sort of work; you can expect roughly a 90% generative to 10% traditional split. If we had the budget to hire an individual artist to do all the art for a specific book, we’d love to do that, and they would “own” the visual direction of it. But in that scenario, we wouldn’t be averse to incorporating AI assets to flesh out the book’s layout, or less important aspects of its visual design.

Do you disclose, at the page or piece level, when a specific illustration was AI-assisted? If not, why should players trust you about your process? Will OSR+ clearly label AI-assisted art and text in each product so players can make informed purchasing decisions? If not, why not?

We’re a pro-genAI publisher, so our default position is that the art will be generative, rather than the other way around. We attribute traditionally produced art so that the individual artist behind the piece is recognized, whether it appears in print or on the web. We haven’t produced print books as of the date of this writing, but you can expect a general disclosure on the copyright page that the book uses genAI art throughout, and that traditionally produced art will be captioned on a per-piece basis.

If some people feel AI output is inherently soulless, why try to persuade them otherwise instead of simply respecting their boundary and labeling OSR+ as “AI-heavy”?

We’re not trying to reach consensus with anyone via our AI Statement of Use or this FAQ. We’re trying to educate the public about our processes and share our perspective on the subject, so you can make an informed decision if the use of AI is a concern for you in gaming. We also don’t believe in labeling creative products on the basis of how they’re created. We think that as AI becomes more ubiquitous and integrated with our society and creative processes in general, the pro/anti-AI divide will dissolve, and these distinctions will become meaningless.

How We Use AI

Do you use LLMs (like ChatGPT, Claude, Gemini, etc.) for OSR+ rule text, flavor text, or setting prose, or only for brainstorming?

We only use LLMs for brainstorming, initial drafting, or analysis, like sifting through large swaths of text or coming up with kernels of ideas that we can refine. In our experience, LLMs are simply not good enough to match human writers yet. I (personally, as the system creator) enjoy writing rules and flavor text myself. I’m not opposed to using LLMs to refine or draft more technical content (e.g., generating a glossary from the core rules), if there’s a need for it.

Regarding drafting, we have used large-context LLMs (such as NotebookLM) to parse dozens of bestiaries in the production of our own bestiary, in order to perform the research necessary to write about all the monsters. This involves having the AI review thousands of pages and then produce summaries we can reference in order to write our own material.

It may sound hypocritical to hold writing to a higher standard than art, but I am a creative with a background in both writing and art, and it’s simply my experience that genAI is (currently) better at producing evocative art than evocative writing. Moreover, in some cases our team physically cannot produce art by traditional means at the scale we need in our system, even if some of us personally love drawing and painting.

What concrete steps do you take to ensure that OSR+ AI art doesn’t just look like generic Midjourney/“same-face” output?

The short answer is creative vision and technical skill. You can read more about this in the full AI Statement of Use. You see a lot of “AI slop” on the web (just like there’s a lot of traditionally produced slop on sites like DeviantArt) because the stuff has no creative vision behind it, and relies on commercial tools to do the difficult technical work.

When you sit down to use genAI to produce art, you can’t just type some text and hope for the best. If you take raw output from Midjourney and call it a day, you’ll end up with the sort of homogeneously overstyled “slop” you’re describing. Engaging with genAI to produce quality output for larger creative work is very much like engaging in a branding exercise, because the art is just one small (but very important) component of the whole that conveys the experience of play the game intends. You also need the technical knowledge to make full use of the generative tools available (such as ComfyUI). In layman’s terms, such tools not only produce an output based on a prompt, but take inputs that encode generalized styles and then let you further manipulate the image with still more inputs after it’s produced.
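As one concrete illustration of that post-generation manipulation, here is a minimal image-to-image sketch in diffusers, a stand-in for what a ComfyUI graph would do; the model ID, file names, and prompt are hypothetical. A low strength value re-renders surfaces and style while preserving the original composition.

```python
# Minimal sketch: a post-generation refinement pass (image-to-image).
# File names and parameters are hypothetical placeholders.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
).to("cuda")

init = load_image("watchtower.png")  # output from an earlier generation pass
refined = pipe(
    prompt="weathered stone, dramatic rim lighting, painterly brushwork",
    image=init,
    strength=0.35,  # low strength: adjust style, preserve composition
).images[0]
refined.save("watchtower-refined.png")
```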

Do you use “style prompts” calling out specific artists by name (living illustrators, RPG artists, etc.) when generating OSR+ art?

Yes. We use tokens for many artists together, both living and dead, to derive aspects of their specific styles as part of our process of creating new styles specific to each World of OSR+.

In other words, artist tokens are a shorthand for adding one or more aspects of an artist’s individual approach to rendering art to the new style we’re trying to develop. While it is possible to replicate many such aspects without directly referencing the artist’s token, the end result is both inefficient (because it dilutes the attention of the prompt with an excess of tokens) and usually wildly inaccurate (for the same reason). We do not, however, try to replicate any specific artist’s style through this process.

Do you refuse to use certain models or LoRAs because of how they were trained, or is your position that training data provenance doesn’t matter morally?

It depends on the LoRA. For example, there are many LoRAs that are trained specifically on an individual artist’s work. These LoRAs are often too dominating to use in our processes, because their designed purpose is to produce work that looks exactly like the artist they were trained on. We’re not opposed to using them if we can dial down their influence on the overall output, such that only the specific aspect of the artist’s style we’re trying to adopt comes through (say, medium, color palette, or compositional style).
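In practice, “dialing down” is an adapter-weight knob. Continuing the hypothetical diffusers sketch from earlier (file names and weights are placeholders, and `pipe` is the pipeline built above):

```python
# Hypothetical sketch: blend a dominant artist-trained LoRA at low weight,
# so only broad traits (medium, palette, composition) survive rather than
# the artist's signature look. `pipe` is the SDXL pipeline from the
# earlier sketch; file names and weights are placeholders.
pipe.load_lora_weights("loras/artist-style.safetensors", adapter_name="artist")
pipe.load_lora_weights("loras/world-palette.safetensors", adapter_name="palette")

# A weight well below 1.0 reduces the artist LoRA to an accent.
pipe.set_adapters(["artist", "palette"], adapter_weights=[0.25, 0.6])
```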

With regard to the provenance of the training data: it’s impossible to know for sure whether a model’s training materials were obtained legally. As of the date of this writing, the outcome of the Anthropic case suggests that acquiring training materials illegally is still infringement, even where the training itself is found to be fair use. Whether it is illegal to produce output via inference from a model trained on illegally obtained materials remains untested in the courts.

Morally, we don’t agree with the view that there is a chain of responsibility that starts with how a model was trained, and ends with every output from that model being morally compromised. We think it’s more practicable to evaluate the market harm caused by individual outputs, rather than the models themselves. You can read our full AI Statement of Use for more insight into our perspective on this point.

How do you document your AI workflows so you can show “substantial human authorship” if a dispute ever arises?

Because 95% of the work we generate is done locally, we can readily demonstrate human authorship using the records our local tools produce. We’re under no obligation to share that technical legwork with the public, but we’d do so if we had to provide evidence in a dispute.

Copyright & Ownership

How do you ensure AI-assisted text isn’t plagiarizing or paraphrasing someone else’s work from its training data?

It’s an incredibly unlikely occurrence, but the answer is simple: we don’t use any text it produces verbatim. At the moment, most genAI text is dry and full of clichés out of the box. We have to rewrite anything it produces anyway, so our process only leverages AI to produce outlines or drafts as reference material.

Given that AI models are trained on huge corpora of copyrighted work, why do you not consider AI art “theft”?

We believe AI art is “theft” only when it tries to replicate a specific artist’s work and compete with them in the market. You can read our full AI Statement of Use for more details.

How do you reconcile your use of models that were probably trained on unlicensed images with respect for fellow TTRPG artists trying to make a living?

We do not believe training on unlicensed material is theft, either on a legal basis (as reinforced by the recent Anthropic court case) or on a moral basis, given the way that training works. If fellow TTRPG artists are offended by this position, we encourage them to read our full AI Statement of Use to better understand our perspective.

If a future court declares certain training practices infringing, will you commit to retiring art produced with those models from OSR+ products?

No, because such a ruling would pertain to training practices, not to inference from the resulting models.

Are you concerned that using AI-generated images might expose OSR+ to copyright lawsuits from artists or rights-holders?

Actually, we’re less concerned about being sued for infringement over genAI output than over traditionally or freely licensed materials, such as work in the Creative Commons or the public domain. Both local and commercial models are statistically unlikely to produce exact replicas of rights-protected materials unless guided to do so through adversarial prompting. By comparison, genAI output starts at a baseline of not being copyrightable at all, whereas licensed work always has an underlying rights holder who can sue you, regardless of what the license terms say. And in my personal experience (speaking as the creator), I’ve had a copyright troll firm try to sue me for using copyrighted photography that (I strongly suspect) was intentionally deposited into the Creative Commons as a honeypot. You can read about that harrowing experience on TechDirt.

Do you believe OSR+ owns full copyright in its AI-assisted images and text, or do you treat them as partially uncopyrightable under current U.S. guidance?

We assert that we own the copyright to materials we created where substantial human input was involved. For example, if we put an image on the website that was just a wholesale one-shot output from a genAI model, that image would not be copyrightable. But if we put up an image that underwent significant manipulation after being generated, it is copyrightable. This is consistent with the U.S. Copyright Office’s views on the matter.

Economic Impact

If a human artist produced the same final images you’re getting from AI, would you still choose the AI route purely because it’s cheaper and faster?

For smaller, self-contained products, such as a World of OSR+ campaign setting, if we have the budget to hire an individual artist to produce most of the art, that’s often our preferred approach. In that scenario, a single artist can imprint a cohesive visual vision on the book, and we can then attribute that work directly to them. But even in those cases, we’re not opposed to selectively using genAI-assisted assets to support or extend that vision where it makes sense.

For the core books (Core Rules, Bestiary, Game Master’s Guide), the issue is not only scale, but creative control and sustainability. These books require many illustrations, diagrams, spot art, and UI-style visuals, all of which must remain stylistically cohesive across a long production timeline. As a small team of three, producing that volume entirely through traditional commissions would require us to curtail our creative vision, pursue crowdfunding, and inflate our timeline to a multi-year scope.

Suggestions like “use public-domain art” or “use no art at all” don’t address this constraint. There is not enough public-domain material to achieve the singular, cohesive visual identity we are intentionally designing, and we don’t accept the premise that avoiding genAI is ethically necessary in the first place, as explained in the AI Statement of Use. Sacrificing visual coherence to satisfy objections we don’t share would undermine the final product for no reason.

Where scale becomes truly prohibitive, however, is in tooling, not books. Our Character Creator already includes 14,000+ avatar images. Producing that library through traditional illustration alone would require hundreds of artists and years of work. We also intend to offer a premium feature that allows users to generate avatars on the fly. That capability inherently requires generative technology.

So the decision isn’t “AI instead of human artists because it’s cheaper.” It’s about using appropriate production methods for different kinds of creative work. Where traditional human authorship is feasible and strengthens the product, we use it. Where the function or scale makes that impractical, we use genAI-mediated processes.

We don’t see these approaches as being in conflict. Integrating generative tools into traditional workflows lets us retain creative control, maintain efficiency, and keep the project sustainable, while still producing a cohesive, visually rich game.

AI may destroy white-collar and creative jobs, including your own as a developer. Why is OSR+ leaning into a technology that undermines the livelihoods of artists and writers?

Because refusing to use AI does not prevent job displacement, and because the forces driving automation are structural rather than the result of individual creators’ choices. Speaking as the creator, I fully expect AI to undermine my own livelihood as a developer, and I do not believe creatives should be uniquely insulated from technological change while other classes of labor are hollowed out without discussion.

OSR+ uses genAI to create work that would not exist at all under traditional production models, and engaging with these tools critically is the only way small creators retain any agency instead of ceding the field entirely to large corporations. You can read more in the full AI Statement of Use.

Have you turned down human freelancers (artists, editors, layout, etc.) in favor of AI to save money, and if so, how do you justify that ethically?

Using generative tools necessarily means we’re not commissioning every asset through traditional freelance pipelines, particularly in cases where we did not (and do not) have the budget to do so. That’s true in a straightforward sense.

However, the more relevant question is not whether a specific freelancer was “turned down,” but whether a small publisher has an obligation to pursue the most expensive production method available when a less costly and more scalable approach exists that meets the project’s needs.

To use a concrete example: our Character Creator includes over 14,000 character portraits. Producing that volume of work through traditional illustration alone would require hundreds of artists and multiple years of production, at a cost that would make the tool unfeasible. Using generative tooling allowed a small team of two—myself and an artist-consultant—to produce the art in a matter of weeks. The alternative would not have been “the same product, but human-made”; it would have been no product at all, or a drastically reduced version that abandoned its intended functionality.

We don’t believe it’s reasonable to require creators to abandon their creative ambitions purely because they employ cutting-edge technology. Creative work isn’t uniquely impacted by these changes, even if it’s often discussed as though it were.

Speaking as the creator, I’m not anti-labor. In my day job, I work as a contractor alongside other creatives, and my livelihood depends on taking on the sort of work that’s liable to be automated away by genAI. So the same generative tools that make OSR+ viable at scale are also likely to affect parts of the work I do professionally. But refusing to use tools that clearly expand what small teams can build wouldn’t meaningfully protect creative labor; it would only limit what independent creators like me are able to create, while larger organizations adopt those tools regardless. You can read more about this tension between labor and technology in the longer AI Statement of Use.

If OSR+ becomes financially successful using AI pipelines, do you see a moral obligation to share profits with the human creative community you might be displacing?

It would depend on what “sharing profits with the human creative community” means. For one, we don’t believe in the utilitarian idea that moral responsibility starts with training materials and ends with outputs. That sort of reasoning is too fuzzy to be practicable. You can read our full AI Statement of Use to better understand why.

We do already offer the game system and digital tools online for free, and we aspire only to make a living by selling print materials and premium access to those digital tools. We think that level of access is pretty generous, and a way of giving back to the creative community whose input helped create some of the tools we used to produce the system.

Would you still use AI if it were not cheaper or faster, purely on the basis of “creative possibility”? Or is cost-saving the real driver?

Beyond “cheaper and faster,” genAI tools also give us direct control over the creative vision we’re trying to implement.

But “cheaper and faster” are key to how that control works: with a genAI tool, we can rapidly iterate hundreds of times in pursuit of a specific look. In the hypothetical scenario where humans were somehow faster at producing the materials than AI, we’d likely hand off genAI drafts to the human artist, because all that prior iteration via the AI would let them understand our creative vision precisely.

Is OSR+ effectively asking its audience to accept near-term harm to creative workers in exchange for a speculative future where AI might “liberate” the means of production?

In the broadest sense, sure, you can certainly extrapolate from our perspective to make that claim.

But you should really read our full AI Statement of Use to understand the nuance.

Cultural & Ethical Impact

How do you respond to the claim that AI “democratizes” the creation of slop and accelerates cultural trash, and that OSR+ art is just part of that flood? Isn’t OSR+ indirectly helping to normalize a world where corporations flood the culture with cheap, homogenized AI content, making it harder for experimental, slower, human-made work to be seen?

As we explain at length in the full AI Statement of Use, we believe AI absolutely does democratize the creation of “slop.”

But we also believe there is a difference between AI slop and art produced wholly by genAI and/or with genAI assistance. We don’t consider the art we’re producing for OSR+ to be slop, because its quality is comparable to art produced by traditional means.

If I personally believe any AI use in creative work is unethical, why should I consider buying or playing OSR+ at all?

If your view is inflexible in that regard, you probably shouldn’t consider buying OSR+ products, because nothing we can say will make you feel better, and honestly, changing the minds of those who have closely held (ideological) beliefs is not our goal.

You’re welcome to read our full AI Statement of Use if your view is flexible, however.

How do you respond to people who agree AI might have upsides in science or medicine but still insist it should be kept out of art and entertainment entirely?

We think that view unduly privileges the arts and entertainment industries above others, and it isn’t actually practicable in either the short or long term. Drawing a hard line between “acceptable” uses of generative technology in science or medicine and “unacceptable” uses in creative fields assumes that creative labor is uniquely deserving of insulation from technological change, while other forms of labor are expected to adapt.

In practice, the same underlying models, tools, and workflows are used across all domains. The techniques that help discover drugs, analyze medical images, or accelerate research are not meaningfully separable from those that enable visual design or generative drafting of text. Attempting to ban or cordon off genAI from art and entertainment would require restrictions so broad that they’d inevitably affect non-creative uses as well, or be selectively enforced in ways that favor large corporations over independent creators.

You argue that refusing to engage with AI cedes the field to bad actors and authoritarian regimes. Isn’t that the same logic used to justify every arms race in history?

Absolutely. Speaking as the creator here: I have said on our retired sister podcast WorldBuild with Us that the development of AGI is akin to the development of nuclear weapons. I’m serious about that. If the possibility of AGI is not a fiction, then I believe whoever controls it first will effectively control the world. You should read more about this from the AI Futures Project.

How do you mitigate the potential for harmful representation of stereotypes with genAI in OSR+ products, such as generations that reinforce racist, sexist, or otherwise harmful stereotypes?

In our experience, genAI models tend to collapse into visual stereotypes if you’re not particular in what you ask for. That is, if I start with a “female knight wearing plate armor,” your average model (open source or commercial) will likely generate a conventionally beautiful White, thin, blonde woman wearing silvery mail. It might not be skimpy out of the gate, but she’ll probably be wearing boob-guards. This tendency to generate such a baseline is, in our opinion, a reflection of Western society’s internalization of White hegemony.

To counteract this, we build representation into our generation process. For example, we use scripts that sit behind our token wildcards to add weighted randomization for race, body type, sex, and gender (among other things) into our generations. In the race example, instead of letting the AI assume “White” when race is not mentioned in a prompt, our script injects “Pacific Islander” or “Nigerian” tokens on a randomized basis. In turn, this prevents us from choosing generations on the basis of our internalized bias when we review a swath of outputs for inclusion in the final product.
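As a sketch of what such a script might look like (the wildcard names, option lists, and weights below are illustrative placeholders, not our production values), weighted randomization behind token wildcards can be as simple as:

```python
# Minimal sketch of weighted wildcard injection for prompt generation.
# Wildcard names, options, and weights are illustrative placeholders.
import random

WILDCARDS = {
    "__race__": (
        ["Pacific Islander", "Nigerian", "Han Chinese", "Quechua", "White", "Romani"],
        [1, 1, 1, 1, 1, 1],  # uniform here; tune weights to counter model bias
    ),
    "__body__": (["thin", "muscular", "heavyset", "wiry"], [1, 2, 2, 1]),
}

def expand(prompt: str) -> str:
    """Replace each wildcard token with an independent weighted random pick."""
    for token, (options, weights) in WILDCARDS.items():
        while token in prompt:
            pick = random.choices(options, weights=weights, k=1)[0]
            prompt = prompt.replace(token, pick, 1)
    return prompt

print(expand("a __race__ __body__ knight wearing practical plate armor"))
```

Because the injection happens before generation, the reviewer sees a pool of outputs whose demographic spread was fixed up front, rather than one shaped by the model’s defaults.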

How does relying on AI for art and ideation affect your own growth as a human artist, designer, or writer—does it atrophy skills you’d otherwise develop?

This is a good question, and it touches on our brief discussion, in the full AI Statement of Use, of the impact AI may have on education in our society.

We believe that, like any tool that can do work for you, AI must not be allowed to do the thinking for you: you ought to turn to it as a tool, not as a substitute for direct engagement with creative work that contains human authorship. Speaking as the creator: my engagement with genAI thus far has taught me a lot about art and artists I would otherwise never have researched, and has sharpened my technical skills as a developer. On the LLM side, it’s made me a far more efficient researcher and ideator, and forced me to become even more adept at sourcing primary citations, since these models often hallucinate. I don’t privilege the tool above other tools, even if it’s more powerful than most of them. I view it as another technology to add to my toolset.

You say even meme-y, low-effort AI content can count as “art” if it makes someone laugh. Does that flatten the distinction between deeply crafted works and disposable attention-hacks?

No, not any more so than it flattens the distinction between deeply crafted works and low-effort content that is not genAI-assisted.

Environmental Impact

You argue that AI’s environmental impact is “orders of magnitude” smaller than many everyday activities. Why should we trust your numbers, and what sources are you relying on?

Don’t trust our numbers; they’re not ours in the first place. Look at the numbers for yourself. We discuss a particular source at length in the full AI Statement of Use as a convenience to you, because it is a comprehensive meta-analysis of other sources, painstakingly reviewed by someone with a better understanding of math and physics than we have, and it cites the source for every graph and data point it references. We also include a list of additional sources to explore that are cited by that source or offer a similar perspective.

Even if AI’s share of emissions is small now, isn’t OSR+ still contributing to an infrastructure that will grow massively and may become a serious climate problem?

Not yet, nor in the near term, based on our reading.

While the reality is that nobody can predict the future, even the most aggressive projections for the environmental impact of AI through 2030 are not meaningful compared to the harm caused by most other human activities. See the full AI Statement of Use for more information.

Why not take an explicitly precautionary stance—“we don’t need AI to make a TTRPG,” therefore we avoid contributing even marginally to data-center demand?

AI use in creative tooling is not a meaningful driver of data center demand compared to far larger contributors like streaming, crypto, advertising tech, or industrial compute, so abstention here is largely symbolic rather than precautionary. We think learning to use the technology thoughtfully is more responsible than pretending non-use meaningfully alters its trajectory.

Moreover, while you don’t need AI to make any TTRPG, you do need it to make the kind of TTRPG we’re making, at the scale and level of integration we’re pursuing. 

You can read more about our thoughts on AI’s impact on the environment in the full OSR+ AI Statement of Use, if you’re curious. 

Community & Moderation

Do you allow AI-generated content (adventures, classes, NPCs) in official OSR+ releases, or is AI limited to internal design and iteration?

GenAI-assisted or wholly generated art is part of official OSR+ releases. No text wholly generated by LLMs makes it into official releases, however; we use those tools for internal iteration because they (currently) do not produce output that meets our standards of quality. Also, as creators, some of us like to write more than draw, and some of us like to draw more than write. Because our position is that genAI tools are not inherently ethically compromised (read more in our AI Statement of Use), we feel we have the wherewithal to use the tools selectively in whatever way best benefits the projects we’re working on.

When OSR+ fans publish third-party content, will you allow or restrict AI-assisted writing and art under your licenses?

We haven’t developed licenses for OSR+ yet, but we would not restrict the use of genAI-assisted writing or art in third-party content.

Are OSR+ contributors required to disclose AI use in their submissions, and how do you verify that?

If we were commissioning traditionally produced artwork, specifically without genAI assistance, we’d ask for draft materials that demonstrate how the work was produced over time, to prove human provenance.

How do you plan to handle community members who refuse to play in, stream, or promote games that visibly include AI-generated assets?

The community is entitled to reject genAI-assisted creative work and not engage with it, and we respect the decision of anyone who doesn’t want to engage with AI in OSR+. But at the same time, we are not obligated to make room for people who hold such views, and we do not condone harassing creators or coordinating attacks against them for the tools they use to create their work.

If a convention or store bans products using AI art, will you alter OSR+ editions to meet those requirements, or simply skip those venues?

We will respect their decision and not sell at, support, or attend those venues.

Do you moderate the OSR+ community (Discord, forums, etc.) differently for AI-related conflicts, pile-ons, or harassment, given how heated this topic is?

Rule #10 in our Discord community is: “We are proponents of the use of generative AI. You can read our statement RE: our use of AI on our website. We welcome debate on this topic, but it must be done in good faith, in a thread (not main channels). Bad faith actors will be banned.”

Acceptable Use Policies

If, five or ten years from now, solid evidence shows that generative AI has broadly damaged creative labor markets and worsened inequality, would OSR+ commit to revisiting or reversing its AI use?

Speaking as the creator, I would almost certainly be among the creatives negatively affected in that hypothetical scenario. At that point, the question of whether to “reverse course” would be largely academic: my primary concern would be earning a living, not making symbolic production choices that don’t materially affect the trajectory of the technology. Individual abstention (in the case of genAI) does not meaningfully alter systemic adoption, particularly when large institutions continue to deploy these tools at scale.

For a fuller discussion of why we don’t view individual refusal as an effective or coherent response to structural technological change, see the full AI Statement of Use.

Do you have a policy about not training any custom models on user-submitted art, character sheets, or campaign logs without explicit opt-in?

We don’t train AI models on user-generated content, and if we had a reason to do this, we would make it opt-in on a per-user basis.

Are any OSR+ tools capable of generating or manipulating player likenesses (e.g., portraits based on real people), and if so, how do you guard against deepfake-style abuse?

Many modern AI tools are capable of this; it’s up to human operators not to use them that way. We don’t. Should we implement user-facing genAI tools in our digital suite in the future, we’d implement whatever technical safeguards are available to us to conform to the terms of our web host and local laws (for example, to curtail NSFW generations or prevent the replication of private individuals’ likenesses). The closest we come to this (with public figures) as of the date of this writing is parody (such as in our show Dungeons & Flagons), or our “Hall of Legends” archive, a collection of fictional characters based on popular media that demonstrates how the Character Creator can produce a wide variety of heroes.

If an AI-generated image in an OSR+ product is later found to infringe a specific work, what is your remediation plan (errata, reprints, refunds)?

If an AI-generated image in an OSR+ product were found to infringe a specific work, we would comply fully with applicable law, including removing or replacing the image in digital products and issuing errata for print editions. Future printings would correct the issue, and if legally required, we’d follow whatever remediation process applies (including refunds or takedowns). This is no different from how we’d handle infringement involving traditionally produced, licensed, or public-domain materials, which in practice have posed greater legal risk based on prior experience.

If an OSR+ image came out looking very similar to a known artist’s work, what is your policy for detecting that and responding—would you pull or replace it?

We’d definitely pull it. OSR+ is a registered agent under the DMCA, and we follow DMCA processes for responding to all forms of infringement. There is no automated or technical means (that we know of) by which one can preemptively detect if genAI-produced artwork is substantially similar (per the tests for infringement under U.S. copyright law) to any particular artist’s expression on the market. If there were, we’d use it.
