Wikipedia:LLM use disclosure


Every edit that incorporates the output of a large language model (LLM) should be marked as LLM-assisted by naming the model and, if possible, its version in the edit summary. This applies to all namespaces. Falsely denying LLM use when asked is likely to be met with sanctions.

A policy requiring disclosure of LLM use has been discussed extensively, but as of 2026 no consensus has formed, for a variety of reasons: disagreement over the format of disclosures (what information to include), over how to facilitate them (a checkbox, for example, has been both suggested and rejected), over what actions, if any, should typically follow a disclosure, and other uncertainties about the specifics. Nevertheless, most editors clearly prefer that users who use LLMs on Wikipedia disclose that use, and as of 2025 many users have been blocked for misusing LLMs while systematically failing to disclose, including after being asked or warned about it, which made constructive dialogue with them impossible.

Some users assume their LLM use will not be detected because the results look good enough to them, and they avoid disclosure to escape scrutiny. They are often mistaken: output that strikes most readers as superficially "good enough" can still be obviously LLM-generated. This pattern of using LLMs while avoiding scrutiny and recklessly disregarding the consequences has repeatedly been interpreted as evidence that the editor is not here to build an encyclopedia but is instead pursuing an incompatible personal or commercial agenda. Conversely, an editor who uses an LLM clumsily but transparently, promptly receives relevant feedback, and responds to it reasonably generally shows that they can take the message on board; from there, they are simply expected to improve their editing, guided by Wikipedia's best interests.

In light of these practical considerations, it is best to treat disclosure as highly encouraged. Editors can regard it as a way of collaboratively reaching out, on a voluntary but strongly recommended basis, to the many editors interested in reviewing LLM-assisted edits.
