Wikipedia:Help, I've been accused of AI!
From Wikipedia, the free encyclopedia
This is an essay. It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article or a Wikipedia policy, as it has not been reviewed by the community.

Suppose someone posts a comment on your talk page accusing you of using AI, or tags an article you edited as likely containing AI-generated text.
Take a moment to look over the comment or tags and think from the accuser's perspective. They are pointing to things you may be uniquely equipped to explain. There are many ways to respond, some of which are helpful and some of which will make you look bad.
What to say
Yes
If you used AI to write the content you added, then admit it. Most people will be much more willing to engage constructively with you if you tell the truth.
However, an even better approach is to disclose that you used AI in your edit summary. Disclosure at the time of publication is the minimum standard required by most reputable publications and scientific journals; it's not an unreasonable thing to ask.
No
If you didn't use AI, you can deny the accusation. There is literally no reason not to tell the truth.
Consider, however, whether you used AI without being aware of it. Grammarly, for instance, makes calls to OpenAI under the hood for most of its rewriting functionality, as well as its "writing suggestions." Microsoft has integrated Copilot into many of its applications, including Microsoft Word and Notepad.
If you didn't use AI and you tell the truth, but the person accuses you of lying anyway, see Wikipedia's guides to resolving disputes and responding to personal attacks.
What not to say
"You have no proof!"
Correct. Unless there are blatant giveaways that would make the article eligible for speedy deletion, the only person with proof of whether you did or did not use AI is you. Wikipedia is not a court of law, and editors are not prosecution attorneys. If you want proof, provide it yourself.
"If there is any AI..."
Saying this makes you sound like OJ Simpson. Either there "is AI" or there isn't.
"What parts of this sound like AI?"
This doesn't answer the question of whether you used AI; whether or not this is your intention, it makes you sound like you are mostly concerned with covering your tracks.
"This AI detector says this isn't AI!"
Even the best AI detection software cannot divine with perfect accuracy whether your writing was generated by AI. However, you can. Saying this just raises the question of why you need to use an app to tell you what you, yourself, did, and why you are hiding behind this secondhand information.
"AI detectors don't work!"
While AI detection software is not perfect, it is much more reliable than laypeople think; the best ones are accurate more than 99% of the time.[1] When they do get it wrong, the error is more likely to be a false negative—i.e., labeling AI-generated text as human-written, not the other way around.
More to the point, though, this is only relevant if someone actually used one of those detectors. Many people, including the author of this essay, don't.
And even more to the point, this doesn't answer the question of whether you used AI.
"Humans can write like this!"
Yet again, this doesn't answer the question of whether you, specifically, used AI (you may sense a running theme).
It's theoretically possible for humans to write like AI; anything is possible. A monkey randomly mashing keys could, statistically speaking, produce the complete works of William Shakespeare. However, in the entire evolutionary history of monkeys, none of them actually did. Similarly, the linguistic characteristics of AI-generated text are things that simply did not appear very often in the many centuries' worth of text that humans actually wrote, and then, almost immediately, started to appear everywhere after 2023.[2]
This holds true even when comparing the same kinds of text, e.g., formal academic writing by humans versus formal academic writing by AI. It even holds true when comparing AI-generated text from the base language models—i.e., based only on the training data humans provided, with no changes after the fact—to text generated by chatbots available to the public. These things especially did not show up en masse, repeatedly and formulaically, in the same piece of writing. Think of the Fermi paradox: if there are so many people out there writing like AI, then where are they?
This is also the case on Wikipedia. Editors have made more than 1 billion edits over more than 25 years, and yet very few of those edits show the linguistic characteristics of AI-generated Wikipedia text that became ubiquitous after 2023.
Any AI-generated response
There are established guidelines on Wikipedia that strongly discourage AI-generated comments on talk pages. This is because people don't want to hear from an AI chatbot. They want to hear from you.
It's also generally a bad idea to try to hide your AI use by using more AI. Most chatbots have a strong and recognizable "speaking pattern"; just as people can recognize the voice of Gilbert Gottfried, Fran Drescher, or Miss Piggy, people can recognize the voice of ChatGPT, Claude, Gemini, and other large language models. Chatbots also tend to produce many of the poor responses detailed above, along with other things.
If you are accusing someone of AI usage
If you are accusing someone of AI usage, or asking whether they used AI to write content you came across, point to a specific diff or section as evidence. Understand that asking whether an edit is AI-generated may be taken as an insult or attack even when it is not intended that way, and link to the relevant guidelines and information about AI usage.
Even in clear-cut cases of AI usage, respond with empathy, especially if the editor is inexperienced. Many editors feel insecure about their writing skills. Assure the editor that their own words are far more valuable than AI-polished ones.