Wikipedia:Village pump (idea lab)

Section of the village pump where new ideas are discussed

The idea lab section of the village pump is a place where new ideas or suggestions on general Wikipedia issues can be incubated, for later submission for consensus discussion at Village pump (proposals). Try to be creative and positive when commenting on ideas.
Before commenting, note:

  • This page is not for consensus polling. Stalwart "Oppose" and "Support" comments generally have no place here. Instead, discuss ideas and suggest variations on them.
  • Wondering whether someone already had this idea? Search the archives below, and look through Wikipedia:Perennial proposals.

Discussions are automatically archived after remaining inactive for 10 days.


Age verification protest

There should be a banner (at least) about age verification laws, why they are bad, why they would harm free software, why digital wallets will force everyone to have Android or iOS, why some variants (like the New York one) would even ban root access to your own computer! Gugalcrom123 (talk) 17:16, 10 March 2026 (UTC)

I personally am not a fan of age-verification software either, but I don't understand what that has to do with Wikipedia? Sentimental Dork (talk) 19:37, 10 March 2026 (UTC)
They might be what we need to raise awareness. Gugalcrom123 (talk) 19:42, 10 March 2026 (UTC)
If it was directly restricting access to Wikipedia, I would agree. But Wikipedia doesn't take sides on issues or promote causes, no matter how just. Thebiguglyalien (talk) 21:15, 10 March 2026 (UTC)
True. But we do provide factual, neutral, non-advocacy information in articles such as Age verification system, and the better those articles are, and the more they rely on the WP:BESTSOURCES, the better Wikipedia's readers will understand the subject. I suggest that anyone interested in this find some very good sources and improve the related articles. WhatamIdoing (talk) 03:31, 11 March 2026 (UTC)
It might be genuinely banning free software like GNU/Linux, which Wikipedia runs on. For example, requiring that all OSes have circumvention-proof ID checks kills it. Gugalcrom123 (talk) 14:15, 11 March 2026 (UTC)
Wikipedia has taken sides and promoted causes in similar situations in the past, most notably Stop Online Piracy Act#Wikipedia blackout CauliflowerMoon (talk) 04:12, 13 March 2026 (UTC)
The difference is that was an existential threat to Wikipedia. Age verification, while pointless, moronic, ill-conceived, etc..., isn't. Headbomb {t · c · p · b} 04:33, 13 March 2026 (UTC)
Maybe not now, but age verification laws could affect Wikipedia and other Wikimedia projects (especially the Commons) one day. Some1 (talk) 00:15, 17 March 2026 (UTC)
That hasn’t happened yet. When (not if) it happens, then we can protest it. Dronebogus (talk) 19:08, 22 March 2026 (UTC)
Should we protest it though? Age verification (identity verification, basically) will cut down on like 99% of the sockpuppetry cases. (Granted, the project will probably lose most of its non-socking editors...) Some1 (talk) 02:28, 5 April 2026 (UTC)
It will also ban doing your computing with libre software like GNU/Linux. Gugalcrom123 (talk) 07:40, 5 April 2026 (UTC)
I agree Cayley-Dicksonconstruction (talk) 19:47, 24 April 2026 (UTC)
@Gugalcrom123 I doubt that anything will, or can ever, ban "root" access to your own computer. And nothing can require everyone to have either Android or iOS. Even if such things are proposed, they won't be implemented. And Wikipedia is not the place to raise those alarms, IMO. David10244 (talk) 02:24, 19 March 2026 (UTC)
Naturally, old hardware is replaced with new hardware. Increasingly, people are less interested in building a machine and prefer to buy pre-built ones. These often come with soldered components that make it impossible to replace/upgrade them. Recently, the "old" computers at my work from 2016 needed to be replaced due to a hardware limitation that wasn't compatible with the switch to Windows 11. Nintendo has made rooting the Switch 2 challenging, and bans people from trying in ways that make it effectively bricked. Companies are increasingly looking towards "renting" people devices, rather than selling them outright. All the pieces are there for proprietary hardware that can only work with proprietary software, that is shut down if changes are made by the user. Computers that can run older versions of software might simply die of old age (I recently had a laptop I kept alive since 2010 die of old age, I suspect something on the motherboard shorted out). If the government(s) makes laws about what that proprietary software needs to do, and passes mandates on what features new computer hardware needs to have, there is little we can do to work around that. GeogSage (⚔Chat?⚔) 03:09, 25 March 2026 (UTC)
It can happen if the state requires all OSes to have certain circumvention-proof age verifications like New York requested (but not California). Of course, to make that work, it would be necessary to disallow root access. See the satirical view at https://lists.debian.org/debian-devel/2026/03/msg00110.html, which will probably become real if NY's law passes. Gugalcrom123 (talk) 06:48, 27 March 2026 (UTC)
I'll point out that many Linux distros based on the Android operating system routinely seek to make it nearly impossible to obtain root access to one's own machine (rooting). GrinningIodize (talk) 20:31, 13 April 2026 (UTC)
It is true. They also aim to store data on one's device that the user cannot access manually; only the designated app can. To me, this is outrageous, and "age verification" via "digital wallets" depends on it, and on the "attestation" that this antifeature is provided. Gugalcrom123 (talk) 10:30, 14 April 2026 (UTC)
Agreed. GrinningIodize (talk) 17:18, 14 April 2026 (UTC)
Wikimedia has been fighting against age verification on Wikipedia, see: where it lost, but Wikipedia is not required to implement it at this point. Graeme Bartlett (talk) 07:20, 27 March 2026 (UTC)
From that link I understand that the UK's Ofcom, which classes all large user-to-user websites as Category 1 under the Online Safety Act 2023, may make an exception for Wikipedia. Wikipedia would seem to have an interest in persuading the UK to make an exception, and maybe doing the same for other jurisdictions (countries and US states).
PS please vote on my page move proposal at Age verification system
Imakesapage (talk) 08:06, 5 April 2026 (UTC)
  • No - Wikipedia is not a place to protest. Blueboar (talk) 12:23, 14 April 2026 (UTC)
    nope ~2026-25074-70 (talk) 15:00, 24 April 2026 (UTC)

Uncited birth or death dates

As many of you probably know, a lot of biographical articles have birth or death dates listed but the sourcing is very poor or nonexistent. I've generally erred on the side of caution and not added these uncited birth or death dates to given name/surname pages, due to the risk of violating WP:BLP. However, earlier today, temporary account User:~2026-14944-70 reverted two edits I made, which were in turn reverting the same temporary account which had tried to "normalize" these pages by adding the very uncited birth dates that I wanted to avoid.

Two questions: Should we list people's birth or death dates in lists, given name pages, surname pages, disambiguation pages, or so forth, even if they're not cited in the article about said person? And should we even allow uncited birth or death dates in articles to begin with? Duckmather (talk) 17:36, 7 April 2026 (UTC)

WP:CIRCULAR would seem to apply, especially to BLP information, which is to say that editors can't simply say, "Well, it's cited at that article..." Though in theory it should be easy enough to copy a pertinent citation from the article in question. DonIago (talk) 17:43, 7 April 2026 (UTC)
This is what I do; if I can't find a source in the article, then, great, I can't verify it so I don't get to add it to the SIA. The spirit behind WP:BLPDOB, to a certain extent, applies here too, imo. Dab pages I'm a little unsure about; I think I'd like to have sourced DOBs on all of them, but they are meant to just be navigational pages, more akin to redirects than anything else. GreenLipstickLesbian💌🧸 05:34, 10 April 2026 (UTC)
I've explained this at Talk:Witkoff (surname)#Alignment of index with targets, but when dealing with navigation pages (DABs, indices, lists of lists, etc.), the navigation page aligns to the target. So if the target opens with "Foo Bar (born YYYY) is a Bizian Bazer", that is what the navigation page will read, or if the first sentence is more circumlocutory it will be pulled from the short description. The references are not included on the navigation pages because they should be sourced at the target.
A citation need not follow every word. "Foo[1] Bar[1] (born YYYY)[1] is a Bizian[1] Bazer[1]" is unnecessary. In fact, leads do not need to have refs at all, Donald Trump for example does not, and infoboxes should not normally have them either, short descriptions never.
Essentially one of two things is true. Either the information is cited in the target, very often it will be in the body, in which case it will likely be appropriate both there and on any navigation pages that need to direct to it, or it is not cited on the target in which case it should be removed from the target, and then subsequently all navigation pages pointing there.
The pages in Category:Unreferenced BLPs may be a bit of an edge case. One could argue that those pages should not have any information, and so no navigation pages should point there. Or one could argue navigation pages simply are aids to finding pages based on the information those pages have.
As a practical matter these navigational aids usually exist, so as of the time of this writing, John Davis (pitcher, born 1963) has an entry at John Davis.
The basic standard is MOS:DABPEOPLE, though if no dates, nationality, or professional information is present at the target there may be some exceptions that try to structure themselves around what is known. ~2026-14944-70 (talk) 20:15, 7 April 2026 (UTC)
The basic standard is that this kind of page shouldn't contain information that is unnecessary for deciding which link to click on. This:
Witkoff is a surname. Notable people with the surname include:
is not the goal. Nobody is going to think "Oh, obviously I'm looking for the American businessman who was born 6–18 months before the other American businessman in this list". It's not about what is or could be cited. It's about what the Wikipedia convention is for this type of page, and the Wikipedia convention uses birth years to differentiate between people only when there is no realistic alternative. WhatamIdoing (talk) 23:29, 7 April 2026 (UTC)
That's not what MOS:DABPEOPLE says, it says "For people, include their birth and death years (when known), and only enough descriptive information that the reader can distinguish between different people with the same name" (emphasis in the original text). You could argue that "businessman" is not helpful here, but more likely it is simply not "enough", so maybe other adjectives are in order. ~2026-14944-70 (talk) 03:18, 8 April 2026 (UTC)
Disagree here. Disambiguation pages are not Perl golf. Including more info helps the reader find the relevant article faster, and birth / death years are a standard piece of info to include to help clarify which person is being referred to. It's also harmless to include slightly more info in the name of consistency. Including birth years isn't writing an essay, it's a tiny amount of characters that's a genuine aid in navigation. SnowFire (talk) 03:20, 8 April 2026 (UTC)
Really? How much faster does an approximate birth date help you decide between these particular entries? It can be helpful to have something like First Earl of Whatever b. 1620, Second Earl of Whatever b. 1642, etc. but I don't think that's true for BLPs. WhatamIdoing (talk) 23:23, 9 April 2026 (UTC)
I think it's quite useful, especially with common jobs like authors. PARAKANYAA (talk) 01:48, 10 April 2026 (UTC)
+1, same use case. If I'm reading a book review published in the 80s, then the author is more likely to be the anthropologist born in the 30s or 40s than the one born in the 70s. GreenLipstickLesbian💌🧸 01:56, 10 April 2026 (UTC)
Does it help you in the case listed here? Do you find yourself saying "Oh, obviously I'm looking for the slightly older/younger American businessman"? WhatamIdoing (talk) 01:15, 11 April 2026 (UTC)
Nope, but that's because I'm not looking for the American businessman. But if I was looking through a newspaper, and saw an early 2000s article about a businessman called "Witcoff" doing something, then I'd know to mentally filter out both Messrs. Alex and Zach Witkoff. GreenLipstickLesbian💌🧸 08:27, 11 April 2026 (UTC)
@~2026-14944-70 Pretty much ignoring the actual post here, I do want to quickly re-iterate WP:BLPREMOVE. If somebody removes information, citing a good faith BLP objection, then it's up to you (or the editor wishing to re-instate the material) to adequately source it. BLPREMOVE applies to every content space, including navigational ones. So I think "Please adjust the targets, assuming any adjustment is needed, before adjusting navigation pages" is in the wrong order, considering BLPREMOVE; you (or the other editor) get to source the disputed information, not alter another page before coming back to remove the disputed information from this one. GreenLipstickLesbian💌🧸 08:31, 11 April 2026 (UTC)
User:GreenLipstickLesbian MOS:DABNOLINK is unambiguous "References should not appear on disambiguation pages. Dab pages are not articles; instead, incorporate the references into the target articles". If you believe that should be changed in the case of MOS:DABPEOPLE then take it up at WT:DAB. Maybe use the John Davis (pitcher, born 1963) example where until recently there were no sources even at the target, though admittedly the BLPPROD saw the issue fixed once applied.
Leads do not require sources either; even Donald Trump doesn't have them, and I have trouble believing that one is uncontroversial. Same goes for infoboxes, and short descriptions should never have them, which is where, a good portion of the time, scripts pull the appropriate information from for navigational pages. Sometimes they are even transcluded, though an RFC said not to do that, so they then have to be subst'd later when noticed during other maintenance.
If you want to require leads, infoboxes, and short descriptions to have references then that is fine. But at present they do not. ~2026-14944-70 (talk) 14:00, 11 April 2026 (UTC)
Infobox content isn't exempt from the WP:BLP requirement for sourcing. Where on earth did you get that idea? It is of course permissible to duplicate content in an infobox that is already given and cited in the article body, as with ledes, or short descriptions. AndyTheGrump (talk) 15:22, 11 April 2026 (UTC)
User:AndyTheGrump yes that is correct. MOS:INFOBOXREF just says the sources do not need to be inline. If you read the above, I already explained that one of two things is true. Either the information is sourced in the body, in which case it can be in the lead, infobox, and short description, or it isn't sourced in the body in which case it should be removed. ~2026-14944-70 (talk) 16:48, 11 April 2026 (UTC)
@~2026-14944-70 And if somebody removed a DOB (or any information, really) from the lead, infobox, or SD, with a good faith BLP objection, then I'd still expect the reverting editor to check that the material was, actually, cited elsewhere in the article before reverting it back in. GreenLipstickLesbian💌🧸 18:56, 11 April 2026 (UTC)
User:GreenLipstickLesbian as would I. The present situation is weird because the OP believes that the sourcing is adequate for the information to be included in the biography article, but inadequate for the information to be included in any navigation pages pointing to the article, which makes no sense to me and has no basis in policy or any workflows I am aware of, hence my one of two things is true remark earlier.
They also seem to be saying, though this is not entirely clear, that the lack of inline citation in the lead, infobox, or short descriptions of the biography means that while they can be included there, they cannot be mentioned elsewhere. That makes no sense to me either, and to refer again back to the obvious example, Donald Trump has no such inline citations, yet Donald Trump (disambiguation) has that information nonetheless.
What I was taught some years ago is that navigation pages, and even embedded lists, are simply reflections made by the pages to which they direct. So for example, and this does happen every so often, when someone changes the nationality description of someone to say Barian when the article states Fooian, they should be reverted unless and until the article itself changes, if that means waiting 30 days for an RFC to expire then so be it.
Possibly there have been editing disputes on the targeted pages, however checking for ongoing disputes is not a normal part of the workflow, nor should it be. Even as I write this the dates remain live on the linked pages; if that changes any navigational pages pointing at them will be adjusted accordingly. Very limited reasons to discuss things except on the talk pages of the targeted articles. Very rarely when the list of nationalities, occupations etc. is long, people will dispute a little as to the wording of the entry on the navigation page itself. But in my view even that is needless and any dispute should simply result in the verbatim reuse of the linked article's short description, though there is no formal policy to that effect. ~2026-14944-70 (talk) 19:31, 11 April 2026 (UTC)
We shouldn't have citations on disamb pages. So I don't think citations should go there. Disambs aren't articles, they're navigational tools, so I think it being cited in the article is enough. PARAKANYAA (talk) 21:03, 7 April 2026 (UTC)
And the general rule of thumb has been: If it needs a citation, it really doesn't belong on a disambiguation page. WhatamIdoing (talk) 01:16, 11 April 2026 (UTC)
I think you've made your preference clear for other reasons, but no. If we were being hyper-sticklers, anything other than the article title requires a reference, which would be silly on a navigational page. There's no difference between a birth date vs. an occupation vs. what something is. If the birth / death is cited in the other article, just like the occupation, it's fine. SnowFire (talk) 02:01, 14 April 2026 (UTC)
Why do you say that "anything other than the article title requires a reference"? Where is such a requirement written down? It's not in Wikipedia:Verifiability, which requires inline citations for four kinds of material (and therefore not for anything else). WhatamIdoing (talk) 00:48, 18 April 2026 (UTC)
For clarity, Wikipedia:Disambiguation#References says "Do not include references in disambiguation pages". The obvious way to comply with this prohibition on inline citations, and WP:V's requirement for inline citations for four kinds of material – and that having the citations in the linked article is not sufficient ("When material that needs an inline citation appears in two or more articles, an inline citation is needed in each") – is to not have those four kinds of material on the dab page. Thus: "Alex Witkoff, American businessman" and not "Alex Witkoff, American businessman whose father is a Trump crony" – even though the latter is cited in the linked article. WhatamIdoing (talk) 00:55, 18 April 2026 (UTC)
That is simply sophistry. Nowhere does the string "Trump Crony" appear on that page so it would never be copied across to begin with. It is also not a standard DABPEOPLE type of distinguisher so would be improper for other reasons. Negative descriptors of people of the type Fooian Criminal, Fooian murderer, and even Fooian serial killer do in fact appear on navigation pages, and unlike your example would in fact be libel per se if false. However, since they are true and often used in the linked pages short descriptions and introductory sentences they get copied across anyway. ~2026-14944-70 (talk) 01:45, 18 April 2026 (UTC)
"Libel per se if false" isn't one of the four types of information that's required to have an inline citation. WhatamIdoing (talk) 04:13, 18 April 2026 (UTC)
More sophistry, since the string "Trump Crony" isn't either. ~2026-14944-70 (talk) 13:58, 18 April 2026 (UTC)
Did you mean that 'the string "Trump Crony" isn't' required to have an inline citation? I'd put that in the category of contentious matter about a WP:BLP, which is required to have an inline citation. WhatamIdoing (talk) 00:18, 22 April 2026 (UTC)
And other people would doubtless place "serial killer" in the category of descriptors requiring an inline citation, your point is? ~2026-14944-70 (talk) 02:22, 22 April 2026 (UTC)
My point is that even Wikipedia:Disambiguation pages have to comply with Wikipedia:Biographies of living persons, so when a page shouldn't contain a citation, it also shouldn't contain contentious matter about BLPs. If "serial killer" is actually contentious in a given case, then a different description should be used on the dab page. (Mind the gap between 'widely accepted unpleasant fact' and 'contentious'. Only the latter requires an inline citation.) WhatamIdoing (talk) 05:18, 22 April 2026 (UTC)
So you are saying if a descriptor on a DAB regarding a BLP has been challenged it can never be readded, is that correct? ~2026-14944-70 (talk) 05:24, 22 April 2026 (UTC)
Mind the gap between 'widely accepted unpleasant fact' and 'contentious'. There is not always a gap. I can't remember who it was now, but a while back there was a huge discussion (and RfC?) related to the description of a North American person who had a notable sports career and had also been convicted of a serious sex crime (I think rape) in a high-profile case. It was a "widely accepted unpleasant fact" that he had been convicted of rape, however it was highly contentious whether he should be described as a "(former) sportsperson", "(former) sportsperson and rapist", "rapist and (former) sportsperson" or "rapist". Thryduulf (talk) 09:46, 22 April 2026 (UTC)
  • (de-indent) No, anything can be contested. If someone linked the article on Alex Witkoff, the guy into cryptocurrency, but incorrectly disambiguated the article with "Welsh hamlet" or "Canadian businessman" or "mathematical concept", people would be well within their rights to contest it and demand the inaccurate summary's removal as unreferenced. If two editors disagree on reality, the one with the reference wins, which would be the person finding a reference saying he's actually an American businessman. So yes, everything has to be referenced. It's just that, for reasons of sanity, that reference doesn't have to live on the disambiguation page itself, and can be in the linked article, whether it be "what is this article about" or "what is the subject's birthdate". SnowFire (talk) 20:02, 20 April 2026 (UTC)
    Everything does not have to be cited. The rule in WP:V is that inline citations are required (albeit with no WP:DEADLINE) if the claim is Wikipedia:Likely to be challenged (or already has been, since the likelihood in that case is 100%). There's a big gap between "anything can be contested" and "some things are WP:LIKELY to be contested". WhatamIdoing (talk) 00:17, 22 April 2026 (UTC)

Presumptive deletion for AI-blocked users?

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


As it stands, we have many editors blocked for repeated AI spam, but cleanup even post-block can be quite difficult. An idea I have heard suggested is a variation on presumptive deletion, which allows us to delete major contributions by a user shown to have a history of copyright violations, instead of having to sift through contributions individually and argue them one by one.

The specifics should ideally be hammered out. Obviously, we don't want something as drastic as revision-deletion, while the weaker but more generic WP:BANREVERT only applies to edits made post-block. Something in between could work, such as:

Any major edit by an editor blocked for a history of AI use may be assumed to be unsuitable and unilaterally reverted. An editor choosing to reinstate it takes responsibility for the content of the edit.

Of course, "may" plays an important role here, as our policies and guidelines are to be applied with common sense: an editor suddenly changing their style from clearly human to clearly AI-written shouldn't have their earlier edits reverted. I believe the specifics can reasonably be left to the discretion of the editor choosing to apply it, as these reverts can of course be contested, but will place the onus on the editor opting to reinstate the material.

I am of course open to any feedback or comments before making it a formal proposal! I am mostly thinking of this in terms of edits (rather than page creations) as these are easier to contest in case the presumptive deletion is abused. On the other hand, speedy deletion (perhaps as an expansion of G15?) adds a check to the process by usually involving a second editor processing the nomination. Chaotic Enby (talk · contribs) 06:53, 11 April 2026 (UTC)

Courtesy ping to @Fermiboson who I believe suggested the idea first. Chaotic Enby (talk · contribs) 06:57, 11 April 2026 (UTC)
I fully support implementing presumptive deletion for LLM content. It would be a very useful tool to combat the many new editors blocked for LLM spam and would save a lot of bureaucracy. Some thoughts on this wording: 1. I would prefer to include deletion as well as reversion as an option, since the biggest bureaucratic bottleneck is AfD. 2. Not sure we need to say "assumed to be unsuitable". 3. "An editor" should probably be "Any editor". Toadspike [Talk] 08:28, 11 April 2026 (UTC)
Could you guys give it a different name? PDEL is already controversial and hard enough to institute in practice, often requiring a lot more work than many editors think; an AI-based removal would look very different. For starters, it's inherently a lot more vibes based (whereas PDEL required confirmed "this editor serially copy pasted from in-copyright sources and I can prove it, with diffs"), and it typically poses more of a legal threat to our reusers when a copyright-based PDEL is reverted. Also, just in terms of application, complicated PDELs end up being processed at WP:CPN. That's... not less bureaucracy, that's overworking our poor clerks, and, I mean, if WP:AICNB gets a system to delete articles without going through AFD, go wild, but it's again going to likely have to look very different. Really, all of this is to say, I don't think we need to muddy the waters between the two. (Though, TBH, I've cited LLM abuse as a contributing factor to choosing PDEL, and nobody's reverted me on one of those so far.) GreenLipstickLesbian💌🧸 08:51, 11 April 2026 (UTC)
Yep, "presumptive deletion" was just an analogy, but I don't think it should necessarily have the same name or even share that many similarities. While certainly less black-and-white, I don't think it is necessarily "vibes-based" insofar as we shouldn't block someone for LLM abuse based on vibes alone (as WP:LLM makes clear), and we look for more solid evidence such as hallucinated citations, which can legitimately put in doubt the whole edit history. Chaotic Enby (talk · contribs) 08:59, 11 April 2026 (UTC)
+1 on the name. With regards to page deletions, I think setting up a system for AI-generated articles somewhat similar to the deletions done at WP:CPN is an idea worth considering (although I also agree with GLL that it would end up working very differently to CPN in many respects). So broadly speaking, that could look like a place where AI-generated articles (including those created by users blocked for LLM misuse) could be listed then deleted after seven days if the issues haven't been fixed. This is also somewhat inspired by User:LEvalyn/You don't need AfD to TNT an LLM and the related discussion about using PROD for AI-generated articles at Wikipedia talk:WikiProject AI Cleanup#Advice essay on deleting LLM articles. I think a somewhat PROD-like system similar to CPN could address a lot of the potential concerns about expanding G15 to cover these cases, and would also reduce the potential flooding of PROD/AfD with AI-related nominations (although obviously there's no one-size-fits-all solution here, and there'll always be situations where draftification/stubification/AfD is more suitable). MCE89 (talk) 09:54, 11 April 2026 (UTC)
I agree that there's a whole range, with some needing a one-click wholesale reversion of all edits and others needing more nuanced approaches.
In particular, when the article is garbage but the subject is notable, I'd like editors to remember what LEvalyn says about saving time by reducing it to a single-sentence stub instead of taking it to AFD. WhatamIdoing (talk) 21:06, 11 April 2026 (UTC)
This'd be great for accounts that only use LLMs, but for others wouldn't we have to verify when they started using LLMs and how often to justify this? Could we limit it to the former, allowing for case-by-case consensus on exceptions found at WP:LLMN (thinking of WP:LLMN#Skyerise AI Cleanup?)? Kowal2701 (talk, contribs) 10:47, 11 April 2026 (UTC)
With safeguards, something like this might be useful, but the safeguards are important. My first thought is something like where:
  • An editor has been blocked for AI use,
  • The block is not currently being appealed or discussed,
  • The block was applied long enough ago that we can presume the editor in question has known about it long enough to initiate an appeal if they want to, and
  • At least the significant majority of their significant contributions appear to have been made using AI.
Pages where that editor is the only significant contributor may be AI-prodded when the tagging editor believes, for a stated reason, it to have been created using AI. The stated reason does not have to be as detailed as a full investigation but must be specific to the page and be based on something other than just the author. The goal is to ensure that the tagger has actually looked at this page enough to state that "I think this is AI generated because ...". "Reference 1 is hallucinated", "The legacy section reads like puffery" and similar are sufficient.
AI-prodded pages may be de-prodded only by an editor who takes responsibility for the content and either rewrites it or states they genuinely believe (with explanation at least as detailed as, and which addresses, the prodding reason) that it isn't AI-generated. A de-tagged article may not be prodded or AI-prodded but may be nominated at AfD. It should be heavy-enough weight that if an editor has only created 2-3 pages it's quicker and easier to use the current process, but light-enough weight that it improves the current situation. If it is shown to be useful then it might be expandable to accounts who (almost) exclusively used AI after a given point in time and/or only in some namespaces, if we can devise some way of clearly and unambiguously specifying this in a consistent place and format. Thryduulf (talk) 12:04, 11 April 2026 (UTC)
I agree that safeguards re: open appeals make sense. I'm curious what proportion of editors blocked (in part) for LLM use actually have "LLM" explicitly stated as a block reason. Presumably it's not many. My ideal use-case here is for nuking pages created by editors blocked for UPE or promotion where a clear sockmaster does not exist or is not known. In those cases, I absolutely do want to lower the level of evidence required for deletion from the ironclad G15 criteria to something like "this reads like obvious LLM" or WP:AISIGNS. I'm not sure if that's what you're saying here, but "The Legacy section reads like puffery" sounds like we agree. (G5 reform is another issue I'd like to discuss eventually.)
Separately, a specific LLM PROD, which the creator is not allowed to revert, is something I'd support. Toadspike [Talk] 12:27, 11 April 2026 (UTC)
I think we're discussing two different things? One's making non-G15 deletions more efficient, the other's regarding large mainspace edits? I'm more concerned about lots of large edits to already-existing pages, which take a lot of time to clean up. But if someone rewrites a section, the bytes changed may be minimal, so it doesn't look like a large edit. I'm leaning towards presumptively reverting (incl. rollback) all contribs of the most egregious blocked accounts, who have used LLMs for practically every edit and have clearly done minimal review (regardless of an edit's bytes change). The wording would need to be sufficiently strong to avoid ambiguous cases. Agreed w Thryduulf that enough time needs to have passed to allow for an appeal. Obv if another editor has substantially rewritten the added content and verified it, that's clean Kowal2701 (talk, contribs) 12:50, 11 April 2026 (UTC)
Yes, I think it's probably worth discussing page deletions and content reversions separately as applicability is going to be determined very differently. Thryduulf (talk) 12:56, 11 April 2026 (UTC)
(edit conflict) I'm curious what proportion of editors blocked (in part) for LLM use actually have "LLM" explicitly stated as a block reason. This is a good point, especially considering Wikipedia talk:Blocking policy#RFC: Include LLM usage as a reason to block. My thinking is that this lower standard of proof should only apply to people with multiple other contributions that have met the usual higher standard of proof. Being blocked (in part) for LLM use is a way to determine that, but obviously doesn't capture all cases. Having had multiple contributions deleted under G15 may also be a way, but that would require filtering the deletion log by deletion reason and by page creator and I don't know if that is even possible? Thryduulf (talk) 12:53, 11 April 2026 (UTC)
Special:DeletedContributions, scripts like User:Daniel Quinlan/Scripts/Unfiltered.js, or Xtools' pages created make it quite trivial to find a user's deleted edits or page creations; from there it's one click to see the deletion reason, which pops up in the editnotice when I click on a red link. (Two clicks if you have the relevant setting switched off.) It's not hard to check for this. Toadspike [Talk] 18:40, 11 April 2026 (UTC)
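(Aside for anyone who would rather script that check than click through the UI: below is a minimal sketch against the MediaWiki Action API, not part of any proposal. It assumes an authenticated requests session for an account holding the deletedhistory right, and the function and variable names are purely illustrative; pagination is not handled.)

import requests

API = "https://en.wikipedia.org/w/api.php"
# Assumption: `session` has already been logged in as an account with the
# deletedhistory right; anonymous requests cannot list deleted revisions.
session = requests.Session()

def deleted_page_titles(username):
    """Yield titles that have deleted revisions attributed to `username`."""
    params = {
        "action": "query", "format": "json",
        "list": "alldeletedrevisions",
        "adruser": username, "adrlimit": "max",
    }
    data = session.get(API, params=params).json()
    for page in data.get("query", {}).get("alldeletedrevisions", []):
        yield page["title"]

def deletion_reasons(title):
    """Return the deletion-log comments recorded for `title`."""
    params = {
        "action": "query", "format": "json",
        "list": "logevents", "letype": "delete",
        "letitle": title, "lelimit": "max",
    }
    data = session.get(API, params=params).json()
    return [e.get("comment", "") for e in data.get("query", {}).get("logevents", [])]

# Example: how many of a user's deleted pages mention G15 in the deletion log?
# g15_count = sum(
#     any("G15" in reason for reason in deletion_reasons(title))
#     for title in deleted_page_titles("ExampleUser")
# )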
Having read the comments above, I think that AI-prodding is maybe the better way to implement/name this. I also agree with the safeguards @Thryduulf has proposed above, which seem reasonable, though I would include pblocking from article/draft space in the definition of being blocked. However, I think to be effective, the presumptive in PDEL does need to be retained. Unlike copy-pasting, where the editor still has to find something to copy-paste from "appropriately", LLM users are essentially not rate limited at all. They can generate large swathes of text which look superficially fine to anyone but subject matter experts with immense speed. Some of the cases I've worked on on AINB took more than a day of machine-assisted work to find a single hallucinated citation (but after which the hallucinations started to pop up more and more often), and stylistic AISIGNS weren't at all obvious. If we have already determined that the editor has been a prolific AI user since a certain time, requiring further investigation of every one of their edits, no matter how minuscule, enables bad actors to simply overwhelm us by volume. (To an extent, they already are doing that.)
Less dramatically, I would put the logic as follows: by continuously using an LLM and lacking either the competency or the honesty to properly discuss its use with the community (which presumably would be needed to escalate to a block), the editor has demonstrated that they are unable to fulfill their WP:ONUS to include their content. Therefore, anyone who feels like it can revert any of their major contributions, period, unless another editor disagrees (and hence picks up the ONUS). At some later point anyone with energy may decide to go back through said contributions or un-G13 them and pick out the salvageable bits, but that shouldn't be something the cleanup crew is required to do with the volume of content they are faced with.
I wanted to gather examples of how long prolific AI editors’ contributions stayed in mainspace uncorrected because of their volume before proposing this policy, but I haven’t had the time to do so thus far. As I’m on a train right now, I will only be able to point out that in a recent case on AINB where the user generated a large amount of articles on Polish geographical features and other such things, it took a good few days from when consensus was established to mass draftify to when all the articles were actually mass draftified, despite it being just one button click for each article, and a 100% hit rate of hallucinations on the spot check that people did do. If we were additionally required to state which source was hallucinated in each article, I am confident a majority of those articles would still be in mainspace at the moment. Fermiboson (talk) 15:45, 11 April 2026 (UTC)
Having a hallucinated reference is just one example of a reason, but maybe something like "matches the style of other articles this user has created on the same topic that were determined to be AI at the higher standard of proof" would be sufficient. Basically the goal is to ensure that each article is looked at to make sure that "written by the same author" is not the sole reason for an AI-prod (or LLM-prod may be a better term). Nothing would prevent someone using a normal prod of course, but that would be subject to the usual rules and norms of that process and I'd say that if something has been deprodded it can't then be LLM-prodded - certainly if the prod reason was at all related to LLM-use. LLM-prod shouldn't be available if there has been a discussion about LLM-use on this specific article where there was consensus that it wasn't used (or no consensus?). Thryduulf (talk) 16:12, 11 April 2026 (UTC)
Agreed on the latter points. I remain hesitant on the first point, though you’ll have to allow me some time to gather examples and quantify the sorts of issues I think that might encounter in practice. Fermiboson (talk) 16:47, 11 April 2026 (UTC)
A partial block from mainspace seems like it would pretty clearly fall into that. I also agree that we need it to be presumptive, instead of requiring a separate stated reason for every single page, as again, it is a question of pure volume: once we know that an editor abused AI, having to examine each of their pages to produce a specific argument is purely procedural.
As we're essentially discussing two similar but parallel processes (reverting edits and prodding/deleting pages), I'm thinking of streamlining them by having the same set of requirements apply for both, although the implementation will of course differ. We should also clarify how much "long enough ago" means for the block, as if we wait too long, AI-generated edits can get buried in newer edits and make any reverts/cleanup even more difficult. Maybe three days or one week? Chaotic Enby (talk · contribs) 17:03, 11 April 2026 (UTC)
In terms of "long enough" I'd say one week works as the default, but allow for both longer and shorter times where circumstances make it clear that is appropriate. Reading the user talk page, block log and any relevant discussions highlighted on either should be sufficient to know whether such circumstances apply in any given case. Thryduulf (talk) 17:13, 11 April 2026 (UTC)
For reverting AI edits, I sort of touched on this below with permissions gaming (to clarify, not all AI use is permissions gaming, but at this point a lot of permissions gaming is done with AI), but some suggestions on what might qualify:
  • Edits are on widely different topics, especially if the account is otherwise an obvious WP:SPA. Like if "Mike at BlahBlahBlah.io" does 30 edits expanding random movie and species articles, then gets blocked for trying to spam an AI-generated ad for BlahBlahBlah.io, then those 30 edits are probably AI too and they probably didn't do much review of them.
  • Edits are made in really close succession, proportionate to the amount of text added/changed, and (again) especially if they are on disparate topics.
  • (more contentious but more common) One of the edits has something really blatant, like a ChatGPT param or a WP:OAICITE or a particularly bad "highlighting the importance of the pivotal role" dollop of slop, and that really blatant edit is one of (not an exact number) 500 similar edits they made around the same time that maybe aren't as blatant. The people who rewrote a bunch of leads (here and here) come to mind.
Gnomingstuff (talk) 21:30, 11 April 2026 (UTC)
I've started a very rough first draft at Wikipedia:Presumptive removal of AI content, inspired by suggestions here as well as the formatting of Wikipedia:Proposed deletion of biographies of living people. As a compromise, I have specified that, when nominating an individual article for LLMPROD, "A justification specific to the article is optional but recommended". Chaotic Enby (talk · contribs) 17:36, 11 April 2026 (UTC)
I don't think any change in guidance is needed to enable this? In practice this is already pretty much how I operate - when I notice disruptive editing from an account (AI-related or otherwise) I tend to look through the account's other edits for similar disruption. If I find that most/all of the account's edits are similarly disruptive, then I will start spending less and less time deliberating before reverting, and in the case of a blocked editor with entirely or nearly-entirely disruptive edits, I'm eventually just going to rollback every major edit they have made that isn't obviously constructive with edit summaries like "restore last good version before changes by account with a pattern of unsourced edits that fail verification, please provide an explanation for these changes before reinserting". My thought process there is that reverts are cheap, so if I am accidentally overzealous someone can always revert my reverts on a case-by-case basis, and if the editor is already blocked then we are past the point of worrying about discouraging a new editor by mass reverting. This is about reverting, PROD/speedy deletion is beyond my purview as a non-admin and IMO mass creation of new LLM articles isn't as big of a problem as LLM edits that degrade the quality of existing articles. -- LWG talk (VOPOV) 17:46, 11 April 2026 (UTC)
Fully agree, and this is in fact why it can be helpful to write this down, as policies/guidelines are supposed to reflect existing best practices. It's already something that is done with reverts, and having it clearly written somewhere means we know we're all on the same page regarding that (and that editors undoing these reverts know that they're taking responsibility for the edits). Chaotic Enby (talk · contribs) 18:03, 11 April 2026 (UTC)
I think some people would object to the Requirements section. Imo the reason NOLLM passed was because it was rooted in LLM-use violating PAGs, some may oppose presumptive removal of edits that may improve the article (even if they're a minority of such edits), and this doesn't differentiate between raw output and reviewed output (a sticking point for some). A requirement could be "take a sample of 5 or more of their edits, presumptively revert if there are PAG violations in most of them", or "presumptively revert if it is very likely that most of the editor's edits contain raw LLM-generated output". "Higher standard of proof" may be too vague. Kowal2701 (talk, contribs) 18:34, 11 April 2026 (UTC)
Currently, I believe not differentiating between raw output and reviewed output is a feature, not a bug: WP:NOLLM doesn't make a carveout for reviewed outputs, as these reviews are often found to be lacking, and blocks for LLM abuse usually stem from repeated failure to review outputs, meaning we can't necessarily trust the blocked editor to have made an accurate review in their other edits. This proposal places the WP:ONUS for the review on any editor wishing to preserve the content, which makes more sense given the asymmetry of effort that would be present otherwise. Chaotic Enby (talk · contribs) 18:43, 11 April 2026 (UTC)
Makes sense. Maybe recommend that edit summaries say eg. WP:PRVLLM, see WP:AINB#User:Example. Feel free to take responsibility for the content and rewrite and verify it. The guideline also needs to be predicated on WP:NOLLM (currently not linked in the lead) Kowal2701 (talk, contribs) 18:49, 11 April 2026 (UTC)
Both are great additions, feel free to add them! Chaotic Enby (talk · contribs) 19:20, 11 April 2026 (UTC)
Thanks! Tbh I'm just worried about ploughing ahead while leaving a significant number of people behind which'd cause rifts in the community. Like with how the MOS was written by a small group of people, and it was then enforced on everyone against individual preferences. Also something Lugnuts wrote about how while he was creating content, building an encyclopedia etc., others were unbeknownst to him making PAGs stricter (moreso interested in the perspective, obv still should've been blocked), like we've gotta try to consider the silent majority that just write content and don't participate in project space. But that's much less an issue for this, since it mainly affects blocked users and the NOLLM RfC went ridiculously well Kowal2701 (talk, contribs) 19:44, 11 April 2026 (UTC)
My concern about the Requirements section is the opposite - it creates a situation where people could argue that edits can't be reverted since "you didn't meet the requirements of WP:BLAHBLAHBLAH". IMO all edits are subject to reversion and discussion at all times, and it might be better to leave the decision of whether to mass revert a particular set of contributions to individual case-by-case judgement. -- LWG talk (VOPOV) 19:23, 11 April 2026 (UTC)
I think exploring both policies/essays/guidelines on reverts (for AI-generated rewrites/"improvements") and a separate deletion system (to catch cases which are obviously problematic, but fall short of G15) is a good idea.
Thinking about your post here, @LWG (and @Chaotic Enby and @Kowal2701 and anybody else reading this discussion), may I invite you to contribute to a new essay on WP:NOLLM that I've just sketched out? It's at Wikipedia:AIREVERT. It's in project space, so WP:BOLD editing is encouraged, as is merging, as is taking over the redirect I created so I could easily cite it in edit summaries. I've tried to document both my practice, and what I perceive to be current practice when it comes to cleaning up AI-generated edits, and I've included a whacking great caveat about WP:BLPREMOVE taking precedence over AI reverts. It's not a policy, it's an essay, but these are the arguments I rely on when reverting or rewriting AI generated content edits, unless I can point to something I'm more familiar with like copyright issues. GreenLipstickLesbian💌🧸 18:49, 11 April 2026 (UTC)
Thanks a lot! I'm especially happy to see caveats about how different types of edits may be more or less problematic and that not all should be equally reverted, and that other policies like WP:BLPREMOVE take precedence. As I had it in mind, the reason for reverting edits from AI-using editors was due to the content issues they were likely to introduce, while something like removing disputed content doesn't have the same pressing need for a revert.
I didn't go as deep into the specifics as you did, as I believe guidelines should be kept short, while essays/infopages can have more latitude to explain the "why" and the "when", and I really do like how your essay clarifies it! Chaotic Enby (talk · contribs) 19:25, 11 April 2026 (UTC)
I'm in favor of this in certain cases: one common scenario is people permissions gaming by doing a bunch of AI copyedits, then a bunch of AI article expansions, then a promotional AI draft, which is the point at which they get blocked. But now they have all these AI edits scattered through the encyclopedia, they obviously didn't get reviewed given the context, and since they're usually to articles on a wide variety of topics (thank you Newcomer Tasks, very cool), they often don't get looked into much. I am, and I am not exaggerating here, 100 tabs deep now trying to untangle these cases, because every time I look into the edit history of one article I find someone else doing it. Gnomingstuff (talk) 20:05, 11 April 2026 (UTC)
I wonder if this could be part of the blocking process. The admin is already saying "you're blocked because ____"; could they not add "and all your edits are eligible for blanking, deletion, or stubbing" when that seemed warranted? WhatamIdoing (talk) 20:53, 11 April 2026 (UTC)
Not really. In many cases, we block editors for reasons that won't justify deleting all of their edits (say, behavioral issues, edit-warring, sockpuppetry, etc.), and this shouldn't become part of the standard block process. Chaotic Enby (talk · contribs) 21:27, 11 April 2026 (UTC)
The process I have in mind is something like this:
  1. Admin blocks
  2. Admin posts an explanation of a block
  3. Explanation includes a yes/no switch (answer chosen by the blocking admin) that says whether this admin believes wholesale reversion could be appropriate
  4. The block request goes into a holding cat for a suitable time period to allow for appeals, after which gnomes should feel confident reverting/blanking/stubbing/prodding/AFDing anything they want (if the admin said 'yes').
I'm looking for something that would give the clean-up crews confidence that we think aggressive cleanup would be a good idea. WhatamIdoing (talk) 21:34, 11 April 2026 (UTC)
I think consensus at the relevant WP:AINB thread would give people confidence, then responsibility is distributed among several editors, and the issue becomes "what is 'best practice'" instead of "this editor is being disruptive". Kowal2701 (talk, contribs) 13:29, 12 April 2026 (UTC)
I feel like this is less about convincing the person who got blocked and more about convincing the people who know nothing about the block but have an article on their watchlist and are confused as to why someone's unremarkable-seeming paragraph is getting deleted for AI reasons. Gnomingstuff (talk) 21:31, 11 April 2026 (UTC)
Chaotic Enby and anybody else, are there any changes you'd make to WP:LLMPRV? Imo sticking points might be "edits determined to be AI at a higher standard of proof" in Requirements, and the days until an admin can delete a PRODed article. Re the former, people may want some specifics or examples of what constitutes higher proof. Re the latter, WP:PROD says 7, which seems long given how strict the requirements are for this Kowal2701 (talk, contribs) 20:42, 15 April 2026 (UTC)
The "higher standard of proof" aspect was suggested by @Thryduulf, whom I may ask about that. In my mind, being blocked for AI misuse (with the edits being the relevant evidence for the block) was an ideal standard, although maybe something else could be decided on. The 7-day delay was also from Thryduulf – I personally suggested 3 days, but I don't have a strong preference there. Chaotic Enby (talk · contribs) 20:54, 15 April 2026 (UTC)
The higher standard of proof I was thinking of was that required for G15, but being blocked for LLM-misuse also works, although not everybody who has misused LLMs and is blocked is explicitly blocked for misusing LLMs. If someone is blocked for being disruptive and the evidence of that disruption is LLM-misuse then I don't think it's controversial to say they were blocked for LLM-misuse. However if someone suspected of LLM-misuse was blocked for promotion with no mention of LLMs in the block message or a discussion that immediately preceded the block then that person was not blocked for LLM-misuse (whether or not they were misusing LLMs). There will of course be situations that are not as clear either way.
Regarding the timing, I don't remember explicitly suggesting 7 (not saying I didn't) and the only mention I can immediately spot on this page was made by MCE89. Thinking about it now, I think 7 days is right but I could compromise on 5. Thryduulf (talk) 21:47, 15 April 2026 (UTC)
I think you suggested it here, but 5 days could be a good compromise. Chaotic Enby (talk · contribs) 21:53, 15 April 2026 (UTC)
Ah, that was in the context of how long we should allow for an appeal of a block, not how long to wait between tagging and deletion. Thryduulf (talk) 21:58, 15 April 2026 (UTC)
Thanks, that's on me for missing the context! Whoopsie. Chaotic Enby (talk · contribs) 22:31, 15 April 2026 (UTC)
Tbh I was thinking 1 or 3, but since such articles would likely be on obscure topics, and not indexed in search engines since they (hopefully) wouldn’t have gotten the green light from NPP, 5 is okay w me Kowal2701 (talk, contribs) 21:53, 15 April 2026 (UTC)
What about blocked editors whose contribs predate LLMs? Should we specify a date? Kowal2701 (talk, contribs) 22:34, 15 April 2026 (UTC)
Good question! Presumably, the admins actioning the PROD will do their due diligence and decline these cases. One of our cases mentions the contributions to be removed coincide in timing and pattern with edits determined to be AI at a higher standard of proof, and I wouldn't be opposed to making this the only case (as the other one, [a]t least the majority of their significant contributions appear to have been made using AI, would most plausibly entail it either way). Chaotic Enby (talk · contribs) 22:41, 15 April 2026 (UTC)
Yes that’s much better (maybe high standard of proof). Maybe a note specifying a date would be useful for some, idk, have seen March 2023 (release of GPT-4) used Kowal2701 (talk, contribs) 22:51, 15 April 2026 (UTC)
I've seen November 2022 (release of ChatGPT) as the more common one, but either could work. Chaotic Enby (talk · contribs) 23:08, 15 April 2026 (UTC)
I would agree with Nov 2022 as the best cutoff date; that's when we started to see the boom in AI marked text in other areas, and it seems likely that the same is true with WP contributions. But equally just "2023 and onwards" would work as a convenient round number, since I assume there's not much problematic stuff from those first few months still hanging around. Andrew Gray (talk) 12:38, 17 April 2026 (UTC)
If a specific editor only started using LLMs at a specific point in time after having previously contributed without doing so, then it would make sense to clearly note that date somewhere and use that for the basis of determining whether the PROD applies to subsequent edits. e.g. if an editor had 1000 human contributions before December 2025 and 300 of their 350 since were LLM-based, then (assuming other conditions apply) LLM-prodding their post December 2025 edits should be allowable (but not the earlier ones). Thryduulf (talk) 23:32, 15 April 2026 (UTC)
I worry a little that requiring the editor to be blocked might create an incentive to block users for AI use even when that's not necessary to prevent future disruption. For instance, if a user admits to a pattern of LLM usage and makes a credible promise that they'll stop using LLMs, I wouldn't want an admin to feel like they should block them solely to make it easier for us to clean up the mess that's already been created. Should we potentially allow LLM-prod for certain users who aren't blocked, e.g. by AINB/ANI consensus or with the user's permission as part of a cleanup effort (potentially sometimes as an unblock condition)? FWIW we don't limit copyvio PDELs to blocked users, and we do sometimes PDEL the contributions of CCI subjects who haven’t been blocked. MCE89 (talk) 03:13, 16 April 2026 (UTC)
It also can create issues when they're unblocked (whether that's through a successful unblock request, or they wait out a time-limited block). Some admin discretion, of course, but editors who get their articles G5-ed can request their undeletion upon getting unblocked. Would this be any different? GreenLipstickLesbian💌🧸 03:21, 16 April 2026 (UTC)
Agree with the risk of a perverse incentive, which is why it could be good to not make a block a strict requirement. Regarding unblocks, it depends: if it is a regular unblock that follows credible reassurances that the behavior will not repeat, that shouldn't make previous issues automatically refundable (same as an editor having their copyvios deleted wouldn't get them back after promising to not make new ones). However, if the block was clearly an error, a refund should definitely be on the cards. One more argument for not tying it to a block as a specific condition. Chaotic Enby (talk · contribs) 03:31, 16 April 2026 (UTC)
I've made some changes to incorporate the great points above, how's it looking now? Kowal2701 (talk, contribs) 06:18, 16 April 2026 (UTC)
Courtesy link to the multi-diff: Special:Diff/1348788307/1349202056. You've hit all the points, I think we're ready to submit this as a formal proposal! Chaotic Enby (talk · contribs) 16:12, 16 April 2026 (UTC)
Thanks! Sorry, I've made some more changes. Added a note clarifying a difference from PROD, and reworded a little of the Object section (instructing the order in which to do things may be WP:CREEP?) Kowal2701 (talk, contribs) 17:58, 16 April 2026 (UTC)
@Chaotic Enby@Kowal2701 I've had a look and made some edits. Two parts I worry about:
  1. "Any editor may object to content that has been presumptively removed, though to 're-add' it they must rework it themselves to ensure that it complies with policies and guidelines." – this sentence is difficult to parse. Are they objecting to the content or its removal? If the latter, what is the point of objecting if restoring the content requires extra work? I suggest removing the language about objection and simply stating the requirement that restoring presumptively-removed content requires reworking it. This issue is also visible in the "Objecting" section, where "objecting" and "restoring" are not distinguished until the very end. I would prefer using the term "restoring" across the board, as until the removal has happened there's nothing to object to. This would also allow us to heavily condense the last paragraph.
  2. "at a high standard of proof" is vague, almost meaningless. I suggest simplifying that bullet to e.g. "The contributions to be removed must bear clear similarity to AI-generated edits" or "must plausibly be AI-generated". (The latter option sounds weak, but actually excludes a lot of content, e.g. I was recently able to show at REFUND that a machine-translated article was not plausibly LLM-generated and had been reworked by a human.)
It might be worthwhile to add "community consensus" as an option under requirement #1, e.g. If consensus at ANI or AINB determines that an editor is using AI disruptively, then those edits can be removed under this guideline without specifically finding consensus for removal. Also, maybe we can find some more pronounceable shortcuts for this. Otherwise this looks pretty good, thank you and all the others who worked on it. Toadspike [Talk] 19:47, 16 April 2026 (UTC)
Thank you,
re 1., agreed, but "re-add" or "restore" don't really make sense when the content is reworked. Maybe Any editor can reverse a presumptive removal, though they must rework the content to comply with policies and guidelines. Would "reverse" be better than "object"? I'm not sure I understand re the Object section, but feel free to change it accordingly, I'm sure it can be improved
re 2., idk about this, I think to people who've done loads of LLM clean-up "high standard of proof" is easy to interpret, though probably not to others. I like most of that sentence as (moreso recently) there's only a few 'smoking guns' with other edits much less obvious, maybe ... with edits determined to very likely be AI-generated.? We could add a note giving examples like markdown, hallucinated references, a combination of AISIGNS and WP:V failures etc.
I think if there's consensus someone is abusing LLMs, either they'll get blocked, or they'll come clean and consent to clean up (because of the threat of a block). Idk how to mitigate people being difficult after consenting to clean-up Kowal2701 (talk, contribs) 20:22, 16 April 2026 (UTC)
The issue I have with Any editor can reverse a presumptive removal, though they must rework the content to comply with policies and guidelines. is that it presumes that presumptively removed content always needs reworking. There needs to be some way for someone to assert that actually this bit of content is fine (and they take responsibility for being so). Content is being removed because we (with good reason) believe that is most-likely that it was written using an LLM, but that doesn't mean it actually was. Thryduulf (talk) 23:06, 16 April 2026 (UTC)
Yep, taking full responsibility for the content is what I had in mind too. Maybe Any editor can reverse a presumptive removal, though they must closely review the content and its sources sure it complies with policies and guidelines, which will usually require them to rework the content. Chaotic Enby (talk · contribs) 06:55, 17 April 2026 (UTC)
That's better but needs a couple of tweaks imo (e.g. you've missed a couple of words). Perhaps Any editor can reverse a presumptive removal. When doing so, they must review the content and its sources to ensure that it complies with policies and guidelines, which may require reworking the content.
I chose "may" rather than "usually" as we know reworking will be needed for some cases but we can't know whether it will be required more often than not before the procedure is in place (e.g. most editors may choose to only restore stuff that doesn't require reworking).
I chose "When doing so" as there is no benefit to requiring a specific order for the workflow, e.g. restoring then reworking and restoring after reworking should both be allowed. Thryduulf (talk) 12:02, 17 April 2026 (UTC)
This is perfect imo Kowal2701 (talk, contribs) 13:12, 17 April 2026 (UTC)
@Kowal2701 Re: Your last point – say there's an editor who made a lot of large content edits, but hasn't edited in several months. Someone takes this to AINB, consensus is that they abused LLMs. Since they're not actively editing, we're not gonna block them, and they're also not gonna fess up. In that situation a community consensus option would be helpful. (The community already has the power to impose any remedy it wants, but it'd be convenient to end the conversation at "this person used LLMs" instead of having to additionally propose and gain consensus for a reversion remedy each time.) Toadspike [Talk] 07:38, 17 April 2026 (UTC)
It also appears I misunderstood what the "high standard of proof" applied to. It applies to the edits used as evidence of LLM use forming the basis for removal, not those being presumptively removed. I think it still makes sense to pick less legalistic wording, like "very likely", but I'm much less worried about that now. Toadspike [Talk] 07:55, 17 April 2026 (UTC)
Great point, maybe we could add is inactive, or there is consensus at a noticeboard such as WP:AINB to use presumptive removal. to the first requirement? Idk, I don’t see a consensus requirement as necessary since "block", "comes clean", and "inactive" cover all the ground, but then again I thought the first two were sufficient, so it may be useful for unknown exceptions (though WP:IAR serves that purpose) Kowal2701 (talk, contribs) 10:20, 17 April 2026 (UTC)
I don't think inactivity alone is sufficient. There needs to be a clear decision that the person abused LLMs, which inactivity isn't. If I go inactive today, that isn't grounds to revert all my edits a few months from now. IMO the specific wording should be about community consensus that LLM abuse took place, regardless of why there is no block/confession; the community is smart enough to use this appropriately. Toadspike [Talk] 23:13, 17 April 2026 (UTC)
I think making consensus a requirement might add extra bureaucracy, like if someone reports an account to AINB and gets little response. All the requirements apply at the same time; I think combined w the third one it’s okay, but happy to change it if others think that’s better Kowal2701 (talk, contribs) 05:51, 18 April 2026 (UTC)
I don't think you understand what I'm asking for. I'm not looking for an additional requirement necessary for presumptive removal. I'm looking for an additional pathway sufficient for presumptive removal. Toadspike [Talk] 00:05, 19 April 2026 (UTC)
Mb, yes that’s good Kowal2701 (talk, contribs) 07:16, 19 April 2026 (UTC)
What Toadspike said - it seems like this discussion is conflating presumptive deletion, which is only allowed under certain conditions, with reversion, which is already allowed in all cases as long as it doesn't become WP:OWN behavior. If I'm reviewing a user's contributions and finding that nearly every single one fails source verification, then at some point I'm just going to revert everything they have ever done if I can't verify it as true from the sources cited within a few seconds. I don't think I need a new policy to enable that beyond WP:V, which allows any content that is dubious and unsupported by sources to be removed. If a user has an established track record of inserting content with sources that don't support the content, their edits should be handled as though they did not contain sources at all. -- LWG talk (VOPOV) 00:16, 19 April 2026 (UTC)
The above ideas and concerns have been incorporated, might be ready to go if there are no further concerns Kowal2701 (talk, contribs) 15:04, 27 April 2026 (UTC)
Go ahead! Chaotic Enby (talk · contribs) 16:56, 27 April 2026 (UTC)
Thanks will do! (sometime tomorrow probably) Have you had enough of making proposals for now lol Kowal2701 (talk, contribs) 23:16, 27 April 2026 (UTC)
I fully support codifying this. I tend to work like this already; in cases where editors are blocked for flagrant AI usage my personal approach is typically to presumptively revert all of their major edits with very little scrutiny; my ideological stance here is that we should not spend any more time reviewing their edits than they did.
Adding an LLM PROD to get rid of articles that don't quite meet the G15 standard is a great idea too. Athanelar (talk) 13:03, 25 April 2026 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

AI vandalism detection

I know that using LLMs for content creation is banned. However, inspired by the thread above, which implies that vandalism is a big problem and often lingers in articles without being fixed, I propose that using AI to review edits by unregistered users might be a good idea. I think any general-purpose AI would be quite capable of dividing edits into 'probably not vandalism', 'very likely vandalism' and 'not sure'; it could create a list for review by human editors, who would be responsible for responding to it. Imakesapage (talk) 02:04, 12 April 2026 (UTC)

We do already have an AI that reviews edits by newer editors for vandalism. It's Cluebot NG. Some tools also do use AI to review edits, such as Wikishield. 45dogs (they/them) (talk page) (contributions) 02:11, 12 April 2026 (UTC)
We already are technically using an AI called ORES to detect likely vandalism in the way you describe on recent changes (though it's not generative), plus there are bots that use much more advanced AI as said above. Feeglgeef (talk) 02:13, 12 April 2026 (UTC)
I think any general purpose AI would be quite capable of dividing edits into 'probably not vandalism', 'very likely vandalism' and 'not sure'. How reliably, though? 'Citation needed' (both for any new proposals, and for anything we are currently using), as they say around here. AndyTheGrump (talk) 02:16, 12 April 2026 (UTC)
It wouldn't have to be super-reliable, as it would only be bringing edits to human attention. I will give some thought as to how to provide a 'citation', maybe you could help with suggestions. Imakesapage (talk) 02:26, 12 April 2026 (UTC)
My suggestion is that any proposals to use AI should be accompanied by evidence that it actually works. AndyTheGrump (talk) 02:28, 12 April 2026 (UTC)
This is not a proposal, just an idea :) Anyway I will look for a way to test it. Imakesapage (talk) 02:38, 12 April 2026 (UTC)
So here's a problem: Vandalism detection is usually easy for a human. But if you give all the easy cases to a tool, and pass the uncertain ones along to the human, then the human gets worse at the job and has less fun doing it. A few rounds of classic poop vandalism is a quick and easy win for the humans, and doing the easy ones helps the humans hold on to the definition of vandalism. If the self-chosen vandal fighter instead has their feed full of POV pushing or simple broken wikitext, then it's a lot harder to figure out. It's not what they want to be working on. And over time – slowly, imperceptibly – they'll start calling all of that "vandalism" too. (It made Wikipedia worse, so that's basically vandalism, right?) And the next thing you know, their functional definition of vandalism is so far out of step with the community's rules that they're getting yelled at and even blocked. WhatamIdoing (talk) 06:54, 13 April 2026 (UTC)
My suggestion is to not to let the LLM perform any edits itself, so a vandal fighter could simply look at the 'probably vandalism' ones and revert any necessary. POV pushing and broken wikitext etc still need to be dealt with, right? These might go under 'not sure' for people with more time or whatever to look at. Imakesapage (talk) 08:37, 13 April 2026 (UTC)
If we could have the LLM flag improper claims of vandalism, and suggest a more accurate description, that might help.
I suggest this article, if you can read it: https://www.theatlantic.com/magazine/2026/04/self-driving-car-technology-tesla-crash/686054/ (For eligible editors: It's in Wikipedia:The Wikipedia Library if you search for "My self-driving car crash" in the box at the top of the page.) WhatamIdoing (talk) 18:50, 13 April 2026 (UTC)
I experimented with this a while ago (see User:Ca/Automated RCP), and while it didn't do too badly, I decided it wasn't worth the price. But local models have advanced significantly since then, so it might be worth trying again. Ca talk to me! 00:23, 20 April 2026 (UTC)
From my experience with this, it's really really complex. LLMs are incredibly expensive, which practically forces you to use local models. The issue is that this is both slow and local models really really struggle with understanding wikimarkup and Wikipedia policies. When I played around with it, I saw many false positives for clearly good edits, such as the addition of templates being flagged as gibberish, or removing unsourced sections being flagged as disruptive. LuniZunie(talk) 18:15, 22 April 2026 (UTC)
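For anyone wanting to experiment along the lines discussed above, here is a minimal sketch of what a local-model triage pass might look like. It assumes the ollama Python client and a locally pulled general-purpose model; the model name, prompt wording and labels are illustrative placeholders rather than a tested setup, and, per the comments above, the output would only feed a human review queue, never edit anything itself.

```python
# Sketch only: triage a diff into the three buckets suggested above using a local model.
# Assumes the ollama Python client (pip install ollama) and a locally pulled model;
# "llama3.1" and the prompt wording are placeholders, not a recommendation.
import ollama

LABELS = ("probably not vandalism", "very likely vandalism", "not sure")

def triage_edit(old_text: str, new_text: str, model: str = "llama3.1") -> str:
    prompt = (
        "You are reviewing a change to a Wikipedia article. "
        f"Reply with exactly one of: {', '.join(LABELS)}.\n\n"
        f"--- BEFORE ---\n{old_text}\n\n--- AFTER ---\n{new_text}\n"
    )
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    answer = response["message"]["content"].strip().lower()
    # Anything that doesn't match a known label goes to the human "not sure" queue.
    return answer if answer in LABELS else "not sure"
```

Whether this is accurate enough to be worth the compute is exactly the question raised above; the false positives described for template additions and section removals would show up here too.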

Maps in page preview

Whenever I read an article about a new thing, especially geopolitical events, there's always cities, districts, villages etc. that I don't recognise. It would be helpful if the page preview for links to pages of geographic locations contained a map of the general area with that place highlighted, for example hovering over the link of a state/city would show you where the state/city is in the country. Right now the page previews contain images of the place, but a map would be more informative. Shyamm537 (talk) 15:48, 15 April 2026 (UTC)

That definitely sounds like a neat idea! mw:Page Previews is the feature that enables these, it could be good to look into this. Since they're implementing audio pronunciation snippets for Wiktionary, a map could have some potential for a future implementation too! Chaotic Enby (talk · contribs) 16:58, 15 April 2026 (UTC)
Thanks! I went through mw:Page Previews and it led me to mw:Extension:Popups and mw:Extension:PageImages, and I can see two ways to go about this in the ?action=info page: 1. either replace the Page Image field with a map (if one is available), or 2. add a field called Map and set the API call to show the Map field if one is available. I haven't familiarised myself with the Extension documentation yet, so I might be missing some nuances within scenario 2 here. Shyamm537 (talk) 06:46, 16 April 2026 (UTC)
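As a quick feasibility note on the two options above: much of the data a map-aware preview would need is already exposed through the action API, since GeoData coordinates sit alongside the PageImages thumbnail that previews use today. A minimal client-side sketch of that query (purely illustrative; the actual feature would of course live in the extensions, not in a script like this):

```python
# Sketch only: fetch the existing preview thumbnail plus GeoData coordinates for a page,
# which is roughly the data a map-based preview would need to choose between the two.
import requests

API = "https://en.wikipedia.org/w/api.php"

def preview_data(title: str) -> dict:
    params = {
        "action": "query",
        "format": "json",
        "formatversion": 2,
        "prop": "coordinates|pageimages",
        "piprop": "thumbnail",
        "pithumbsize": 320,
        "titles": title,
    }
    page = requests.get(API, params=params, timeout=10).json()["query"]["pages"][0]
    coords = (page.get("coordinates") or [{}])[0]
    return {
        "photo": page.get("thumbnail", {}).get("source"),  # what previews show today
        "lat": coords.get("lat"),                           # what a map would be drawn from
        "lon": coords.get("lon"),
    }
```

A preview could then fall back to the photo whenever the page has no coordinates, which also sidesteps part of the scaling concern below.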
One issue is that there is a subgroup of users who find maps useful for orientation and another subgroup who prefer photos of a place (probably more common, e.g. SatNav drivers who rely on verbal prompts for the next turning). Both take up space in previews. It might need to be a user-adjusted preference if Page Image field substitution is used. Another issue is that the default scaling of maps may be poor for at-a-glance orientation of the user. ChaseKiwi (talk) 13:20, 16 April 2026 (UTC)
That (your first point) is very true. I am totally lost without a map, but many people I know never even look at them. I'm at a loss to explain how anyone can have an opinion about a place without even knowing where it is or what it is near but many such people certainly exist. Phil Bridger (talk) 14:14, 16 April 2026 (UTC)
Map reading is not the same skill as comprehending English or images, which are more vital for the Wikipedia project...see Zyszkowska, W (2017). "Levels and properties of map perception" (PDF). Polish Cartographical Review. doi:10.1515/pcr-2017-0002. ChaseKiwi (talk) 14:57, 16 April 2026 (UTC)
Map reading as a skill is different from seeing where a city/district/county exists inside a country/state. Page previews offer a small summary about the page, and as such a photo of a place would hold substantially less information than a map showing you where the place is. Map scaling, as you pointed out, is definitely an issue that would need to be tackled. Shyamm537 (talk) 02:36, 17 April 2026 (UTC)
I think personalisation would be useful, but I guess it also depends on what the context is. Suppose you're reading about a war or an election and you see names of cities and districts thrown around that either don't exist anymore or you're simply not aware of. In this case a map would be more useful than when, say, you're reading about a city and a building is referenced somewhere. Shyamm537 (talk) 02:30, 17 April 2026 (UTC)

Manual of Style policy for the usage of the term Linux

See existing discussion at WP:VPP ...

Expand G15 to include content where the LLM usage has been disclosed already

Currently, G15 is mostly about deleting pages where the primary content came from an LLM but it was hidden. I propose an extension to G15 that permits the deletion of any articles that:

  • (a) Had a disclosure of LLM usage in its article body when it was created, and would obviously fail Wikipedia:NEWLLM
  • (b) Have a disclosure of LLM usage in the edit summary of the edit that led to the article's creation, and obviously would fail Wikipedia:NEWLLM

An example of where this might apply is this AfD discussion, which contains several articles that are obviously in violation of the present LLM policies and have an LLM disclosure in their initial edit summaries. GrinningIodize (talk) 12:53, 21 April 2026 (UTC)

Does that meet the requirements at the top of Wikipedia talk:Speedy deletion? Phil Bridger (talk) 13:52, 21 April 2026 (UTC)
Yes, I believe so.
The process is objective (only pages which have a disclosure of LLM usage in the initial edit), uncontestable (there are very few gray areas), frequent because we already have several examples of pages that would be deleted under these rules, and nonredundant (if I thought that another rule applied better here, I would have already used it). GrinningIodize (talk) 16:10, 21 April 2026 (UTC)
In the past, some proposals to expand G15 (such as adding oaicite markers) were rejected on the grounds that, while they were indisputably evidence of AI, they were not evidence of unreviewed AI, and the editor could have simply overlooked them. As WP:NEWLLM has since been enacted, and AI-generated articles are now prohibited in general, it makes sense to reconsider these as a collection (including, but not limited to, the explicit disclosure you mentioned above). Chaotic Enby (talk · contribs) 14:38, 21 April 2026 (UTC)
Possibly, if the article was created after NEWLLM was enacted and the current revision of the article is sufficiently problematic that it needs major work (we don't want to penalise the good, human work that has resulted in a good article just because it was initially created by AI). However, I'm not sure that leaves anything that doesn't already meet G15?
I'll leave a note about this discussion at Wikipedia talk:Speedy deletion if nobody has beaten me to it. Thryduulf (talk) 15:11, 21 April 2026 (UTC)
Missionary linguistics appears to have been written with large portions of unreviewed LLM content and the current revision is still tainted, to the point where it would need a full rewrite to be acceptable. GrinningIodize (talk) 16:12, 21 April 2026 (UTC)
But that was written before the new LLM guideline, unless I'm much mistaken? -- asilvering (talk) 16:53, 21 April 2026 (UTC)
Yes, but that doesn't matter, because we're not penalizing anyone, we're just cleaning stuff up to meet the new policies. GrinningIodize (talk) 16:55, 21 April 2026 (UTC)
Thryduulf said, Possibly, if the article was created after NEWLLM was enacted and the current revision of the article is sufficiently problematic that it needs major work (we don't want to penalise the good, human work that has resulted in a good article just because it was initially created by AI). However, I'm not sure that leaves anything that doesn't already meet G15? I, too, would like to see some examples that meet these conditions. -- asilvering (talk) 16:58, 21 April 2026 (UTC)
The article that I linked does not appear to meet G15 and has very little human text in it. It wasn't created after NEWLLM, but once again, that doesn't really matter. GrinningIodize (talk) 17:04, 21 April 2026 (UTC)
Please see WP:AINB for examples of the huge mountain of LLM-generated article content that was created before WP:NEWLLM that requires cleanup. Most does not meet G15, and some still won’t under any potential expansion. I2Overcome talk 17:05, 21 April 2026 (UTC)
I'm not asking about content that was created before NEWLLM, and neither was Thryduulf. -- asilvering (talk) 20:10, 21 April 2026 (UTC)
It sounded to me like you were both implying that only content created after NEWLLM was enacted could be speedy-deleted under any new WP:G15 criteria. G15 does not have a "recently created" requirement, and I see no reason why any changes could not be applied to older articles. In any case, there are plenty of examples of LLM articles created after NEWLLM that don’t meet the G15 criteria too that can be found at WP:AINB. In general, most LLM-generated articles, even ones with serious issues, do not meet G15. I2Overcome talk 20:37, 21 April 2026 (UTC)
(edit conflict) We don't typically apply policies retroactively, so unless someone presents some very good evidence of a need to do so in this case I would strongly oppose any new or expanded speedy deletion criterion for LLM-written material that predates the policy's adoption (March 2026). Thryduulf (talk) 20:41, 21 April 2026 (UTC)
Do you have any examples of other CSD modifications that explicitly couldn't be used on pages predating a related policy's enactment? GrinningIodize (talk) 20:51, 21 April 2026 (UTC)
WP:NEWLLM for article creation dates back to December 2025. It was only expanded in March 2026 to cover all content additions, but that shouldn't "reset the clock". Chaotic Enby (talk · contribs) 20:51, 21 April 2026 (UTC)
Interesting! I didn't know that. GrinningIodize (talk) 20:54, 21 April 2026 (UTC)
Can the guideline itself be applied to article content that was generated by an LLM before March 2026? In other words, is WP:NEWLLM an acceptable reason for tagging, reverting, stubifying, PRODing, or AfDing AI-written articles from before March (or December)? Because that is how it’s being applied by editors at WikiProject AI cleanup. I2Overcome talk 20:55, 21 April 2026 (UTC)
I would argue yes. Consensus is that LLM-generated content doesn't belong here with a couple of small exceptions, and that consensus should be followed regardless of context. GrinningIodize (talk) 21:00, 21 April 2026 (UTC)
What do you mean by a collection? GrinningIodize (talk) 16:27, 21 April 2026 (UTC)
Considering all the unambiguous criteria for AI-generated articles as a whole, rather than discussing them one by one. That wouldn't include more nuanced WP:AISIGNS like redlinks in "See also" mentioned below, but would definitely include disclosed AI-generated articles as well as AI-exclusive artifacts such as oaicite. Chaotic Enby (talk · contribs) 17:52, 21 April 2026 (UTC)
I see now, thanks. GrinningIodize (talk) 18:37, 21 April 2026 (UTC)
I think redlinks in See also and nonexistent categories should also be added to G15. It is highly unlikely that any human editor would make those mistakes. I2Overcome talk 16:53, 21 April 2026 (UTC)
I disagree. Plenty of humans, including me, often make mistakes like that, and it's always possible that a page or category got deleted later on. GrinningIodize (talk) 16:56, 21 April 2026 (UTC)
That’s a good point. It would have to be combined with other AISIGNS, which would not really be objective. I2Overcome talk 17:12, 21 April 2026 (UTC)
Regarding objectivity, see WP:AINB#Resonance of the Soul: Flowers and Harmonics for a good example of where editors experienced with AI detection disagree about whether something was or wasn't written by an LLM. Thryduulf (talk) 18:20, 21 April 2026 (UTC)
I'm still of the opinion that stubification is usually sufficient for newly-created AI articles. What do we gain by deleting them? The issue is the presence of the AI text, not the presence of the article. LLMs are actually pretty good at digging up obscure sources so I'd rather see a one-sentence stub with 10 references, 3 of which are good, than no article at all. If, after removing the LLM text, there is truly no content left, you will often be able to delete under A3, A7, or A9. So the benefit of expanding G15 here seems small. Also, pragmatically-speaking, this change might encourage people to lie about LLM use, which makes things harder for everyone. -- LWG talk (VOPOV) 15:51, 21 April 2026 (UTC)
By deleting them, we gain the aspect of being written for humans, by humans, which is what separates us from competitors like Grokipedia. If people lie about LLM usage, then we will ban people who are clearly using LLMs, same as sockpuppetry or any other potentially-secretive infraction. GrinningIodize (talk) 16:07, 21 April 2026 (UTC)
By deleting them, we gain the aspect of being written for humans, by humans I agree that maintaining our human-written status is critical, but how does speedy deletion achieve that goal more effectively than simply editing the article to remove all LLM content (leaving behind sources), and then deleting or keeping what remains based on the normal criteria? I agree that people who deliberately lie to other editors to evade accountability for their editing practices are WP:NOTHERE and should be blocked, but it's not trivial to identify these people and I have WP:BEANS-adjacent concerns about certain types of AI policy pages. -- LWG talk (VOPOV) 17:06, 21 April 2026 (UTC)
Rewriting the article from scratch has been our main strategy for years now, and it's incredibly ineffective, as evidenced by my other comments. What would take 2 days to delete might take a year to rewrite. As for not stuffing beans up one's nose with policy pages, that would be better discussed in another topic. GrinningIodize (talk) 17:15, 21 April 2026 (UTC)
If we're being pragmatic, WP:SOCKDELETE has a reason: the goal isn't to punish the sockpuppet, but to take away the reward for violating policy. Thebiguglyalien (talk) 22:36, 24 April 2026 (UTC)
You might have a point about possibly encouraging people to lie about their AI use. However, the ones that know it is prohibited are already doing that anyway.
I don’t see what good a one-sentence stub is to an encyclopedia. We are not a dictionary. Also, if only 3/10 refs are good, why keep the 7 that are bad? I2Overcome talk 17:28, 21 April 2026 (UTC)
Agreed. GrinningIodize (talk) 17:31, 21 April 2026 (UTC)
I don’t see what good a one-sentence stub is to an encyclopedia. Not much, but I don't see that it does much harm either.
if only 3/10 refs are good, why keep the 7 that are bad? Because it saves me the time of checking them to find out which 3 are good. When someone is ready to actually write a non-stub article, they can examine the sources and keep/discard as appropriate.
The point I'm trying to make is that once the LLM text is removed, which post-WP:NOLLM can be boldly done by anyone, there isn't a need to speedy delete anymore. So it feels like we don't gain much by expanding G15. -- LWG talk (VOPOV) 18:33, 21 April 2026 (UTC)
But in many cases, there is no usable article to be had after removing LLM-generated text. GrinningIodize (talk) 18:36, 21 April 2026 (UTC)
If that is objectively true then in most cases it can be speedily deleted under an existing criterion (e.g. no content). If it is not objectively true then speedy deletion isn't appropriate under any criterion. Thryduulf (talk) 18:39, 21 April 2026 (UTC)
But if it has no content because said content was removed in an edit (as it would be here), wouldn't the LLM-generated content have to be restored instead? GrinningIodize (talk) 19:58, 21 April 2026 (UTC)
No, why would it? If you remove all infringing content and nothing remains, we delete. See eg G12. -- asilvering (talk) 20:12, 21 April 2026 (UTC)
I think the main difference here is that the revisions we could have G12-ed are eligible for CSD. Revisions ineligible for G15 (or another criterion) are ineligible for CSD, and thus render the page ineligible. Blanking a page and tagging it for A3/A7 is more akin to how you can't BLAR a page to draftspace, then R2 it, imo. And like how, even though UPE is banned, we don't G11 the non-G11 eligible promotional creations, or even blank the infringing promotion then A7/A3. GreenLipstickLesbian💌🧸 20:17, 21 April 2026 (UTC)
Wikipedia:ALPHABETTISPAGHETTI comes to mind when reading your comment. GrinningIodize (talk) 20:57, 21 April 2026 (UTC)
Anyone having difficulty understanding GreenLipstickLesbian's comment because of the abbreviations, just try prefixing them with "WP:" and linking them: WP:G12 WP:CSD WP:G15 WP:A3 WP:A7 WP:BLAR WP:R2 WP:UPE WP:G11. Phil Bridger (talk) 08:45, 22 April 2026 (UTC)
Thanks! GrinningIodize (talk) 12:00, 22 April 2026 (UTC)
You mentioned G12, but how is that relevant here? LLM usage is not unambiguous copyright infringement; people are still debating about that. GrinningIodize (talk) 20:53, 21 April 2026 (UTC)
People are debating whether LLM text is inherently copyright infringement due to the sourcing of their training data, but they are definitely capable of producing unambiguous copyright infringement, for example when they are asked to write an article based on sources and their output quotes large portions of the sources verbatim or nearly verbatim. -- LWG talk (VOPOV) 15:30, 22 April 2026 (UTC)
True. GrinningIodize (talk) 17:48, 22 April 2026 (UTC)
What Thryduulf said: axe the LLM text, then check what remains against A3, A7, or A9. -- LWG talk (VOPOV) 18:53, 21 April 2026 (UTC)
Even if there is usable content left, and it doesn't meet CSD, that doesn't mean the article is suitable. Cleaned up AI articles still end up at AfD for other issues, especially notability. It seems to me that having to deal with these LLM generated articles is a waste of editor time. It takes seconds to LLM generate an article--then, after however long it takes to spot it, tag it, and clean it up, they end up at AfD, which can take a week or more. StartOkayStop (talk) 22:58, 22 April 2026 (UTC)
The problem is that stubification usually requires the reviewing editor to check each source and claim one by one, which the editor creating the article has not done. This shifts the burden towards reviewing volunteers and creates a major asymmetry of effort, which is why WP:AINB currently has so much of a backlog. Chaotic Enby (talk · contribs) 17:55, 21 April 2026 (UTC)
stubification usually requires the reviewing editor to check each source and claim one by one If the text is known to be LLM-generated, that kind of detailed review is optional - it's a valid choice to just axe it. For example, it took me just a few minutes to clean up this article, and I feel like I was being generous in the extent to which I tried to find salvageable content in the sources. -- LWG talk (VOPOV) 18:48, 21 April 2026 (UTC)
I agree with LWG here. Most content issues unrelated to notability (outside of libel, slander, BLP issues, WP:CHILDPROTECT and other issues that involve people) are resolvable by stubification. There is no reason to expand a speedy deletion criterion and delete notable topics because the first version was WP:IMPERFECT. Katzrockso (talk) 14:36, 22 April 2026 (UTC)
Notified: Wikipedia:WikiProject AI Cleanup/Noticeboard. GrinningIodize (talk) 16:26, 21 April 2026 (UTC)
I have some doubts about how frequently this occurs. Phil Bridger (talk) 17:55, 21 April 2026 (UTC)
So, some users in the past disclosed a use of AI, as a token of transparency and proof of good intentions, even if it was not actually required to do so, and the idea is to punish such users? Cambalachero (talk) 14:16, 22 April 2026 (UTC)
I don't think it should be seen as a punishment, although I can see how it might be interpreted as such. However, the given example, Model specification (artificial intelligence), was written after WP:NEWLLM was expanded and three whole months after it was passed for new articles. Being transparent as to the fact that you're breaking policies and guidelines shouldn't give you a free pass to break them. Chaotic Enby (talk · contribs) 14:24, 22 April 2026 (UTC)
Fully agreed. GrinningIodize (talk) 14:26, 22 April 2026 (UTC)
But the proposal is not about users doing that now, but when policies allowed it. Retroactive enforcement of new policies is usually the source of lots of disputes and acrimony. Cambalachero (talk) 14:37, 22 April 2026 (UTC)
The proposal doesn't seem to specify that anywhere, and, as all the examples were from a time where policies didn't allow it, I'm not sure why it should be interpreted as retroactive. Chaotic Enby (talk · contribs) 14:41, 22 April 2026 (UTC)
My intent was for it to be retroactive. GrinningIodize (talk) 14:47, 22 April 2026 (UTC)
Thanks! I thought your clarification above regarding this was a suggestion separate from the original proposal, but that makes sense too. Are there pre-December 2025 cases of such issues that would make it necessary to have a retroactive policy? Chaotic Enby (talk · contribs) 14:50, 22 April 2026 (UTC)
I am not aware of any at the moment, but I consider those to be just as bad as ones created in violation of the policies at that time, because it would still be going against consensus. GrinningIodize (talk) 14:52, 22 April 2026 (UTC)
So, some users in the past disclosed a use of AI, as a token of transparency and proof of good intentions, even if it was not actually required to do so, and the idea is to punish such users? I don't actually think this G15 expansion is a good idea, but I cannot possibly emphasize enough that reversion of "your" edits/deletion of "your" articles is not a punishment. It's a foundational principle of what we do here. The moment you click "publish" you are releasing your content into a world where it can and should be mercilessly refactored whenever doing so would improve the wiki, or removed if its presence doesn't improve the wiki. -- LWG talk (VOPOV) 15:38, 22 April 2026 (UTC)
Yes, that's the written idea, but not the way it actually works in real life. That's the disadvantage of a human-written encyclopedia: that it has human editors, humans have feelings, and if you push them too hard in the name of arcane regulations, they would simply pack and leave. There's Wikipedia:Please do not bite the newcomers precisely because of that. Cambalachero (talk) 16:05, 22 April 2026 (UTC)
That's fair, and I agree with your core point that this policy might disincentivize people from being transparent about their editing practices. I also agree that we should be gentle in explaining expectations and give grace to new editors who are still learning the ropes, but not to the extent that we allow bad content to stay online just because the person who put it online meant well. -- LWG talk (VOPOV) 16:45, 22 April 2026 (UTC)
WP:BITE is not an excuse to never make any edits ever because it might hurt someone's feelings. Especially in this situation, given that the operative word is "newcomers," and someone who's been around long enough to do something "in the past" is, by definition, not a newcomer anymore. If they're even around anymore at all. Gnomingstuff (talk) 17:21, 22 April 2026 (UTC)
We need to stop thinking of removing material that does not belong on WP as some sort of “punishment”. It is called “editing” and is part of what editors are supposed to do. Blueboar (talk) 23:15, 22 April 2026 (UTC)
I concur. GrinningIodize (talk) 23:19, 22 April 2026 (UTC)
My comment above hasn't attracted a reply yet. Could someone please link to these "frequent" articles that have disclosures of LLM use, because I haven't seen any yet, although I'm perfectly prepared to believe that they exist. Phil Bridger (talk) 20:20, 22 April 2026 (UTC)
State AI laws in the United States is a good example. I nominated nearly a dozen others for deletion as well, but someone took it upon themselves to rewrite those, I presume for the purpose of making my proposal seem less fit. GrinningIodize (talk) 20:23, 22 April 2026 (UTC)
More for the purpose of putting my money where my mouth is on my claim that stubification is an easy way to deal with these. It's always annoying when people say "existing processes handle this AI problem fine" without lifting a finger to actually help, so I wanted to make sure I wasn't doing that. -- LWG talk (VOPOV) 20:35, 22 April 2026 (UTC)
I see; sorry about making that false presumption. GrinningIodize (talk) 20:47, 22 April 2026 (UTC)
OK, I can see one example, but I'm not convinced that this happens frequently enough to justify adding it to a speedy deletion criterion. Remember that we can delete such articles now, just not speedily. Phil Bridger (talk) 22:04, 23 April 2026 (UTC)
Well, that's your opinion. GrinningIodize (talk) 22:57, 23 April 2026 (UTC)
It's an opinion that I share. Remember that WP:NEWCSD point 3 is "Frequent": unless something is actually a frequent occurrence at the relevant XFD, it's not suitable for speedy deletion. Without a track record at XFD it's also very difficult to demonstrate that the proposal meets point 2 (uncontestable). Thryduulf (talk) 23:10, 23 April 2026 (UTC)
You don't need to delete these if the article topic is notable. Just stubify them. SuperPianoMan9167 (talk) 23:46, 23 April 2026 (UTC)
Even if they do get stubified, it's likely for many of the subconscious biases of the LLM to still leak through, whereas a deletion would force editors to start from a clean slate. GrinningIodize (talk) 12:44, 24 April 2026 (UTC)
That's also a concern for me, and I did my best to avoid it while stubifying that recent slate of articles. Let me know any feedback for how I could have done better. I think it's analogous to stubifying an article whose first draft was purely promotional WP:UPE content. It might be better to WP:TNT delete in some cases, but that seems like the kind of judgement call that is firmly outside of speedy deletion territory. -- LWG talk (VOPOV) 15:06, 24 April 2026 (UTC)
Got it. GrinningIodize (talk) 15:07, 24 April 2026 (UTC)
Regarding stubifying, based on what I've seen, I think people are more likely to create articles out of redlinks than to improve one-sentence stubs. InfernoHues (talk) 15:51, 24 April 2026 (UTC)
Agreed. GrinningIodize (talk) 15:53, 24 April 2026 (UTC)
I'm doubtful about this, but it seems like something that we could find an answer for, rather than just guessing based on each individual's own limited experience. WhatamIdoing (talk) 20:10, 24 April 2026 (UTC)

Guided tours!

Help:Guided tours are a very powerful functionality that can allow for easy onboarding of newcomers, or help more experienced users get to grips with tools/workflows that have steep learning curves. However, they are sadly quite underutilized. I've recently worked on one to help administrator election clerks set up SecurePoll, and I would love to hear your ideas for more functionalities that tours could help with! Chaotic Enby (talk · contribs) 23:26, 24 April 2026 (UTC)

  • I'd like to see something like this that helps familiarise new users with some of our policies and guidelines. Many new users aren't familiar with the existence of our policies & guidelines, let alone how to find them, so bumping up against them is an unfortunately common (and often discouraging) occurrence. I'm not sure how this could be done exactly, but guided tours seems like it could be helpful to direct users in the right direction. --Grnrchst (talk) 17:13, 26 April 2026 (UTC)
    One thing we've heard from newcomers in the past is that almost all of the policies and guideline pages look the same. If you don't remember the name, you're stuck. There's no fallback to help your memory or communication, like "Um, I think I was looking at a page with a grumpy cat on it?" or "It was a weird shade of green". WhatamIdoing (talk) 01:48, 29 April 2026 (UTC)
    Solution: put a different memorable animal on each page, like we've done at Wikipedia:Please do not bite the newcomers. Chaotic Enby (talk · contribs) 01:53, 29 April 2026 (UTC)
    A la O'Reilly Books? "Please get out your copy of the cricket book, and we'll both turn to page 123, where it says..." WhatamIdoing (talk) 02:02, 29 April 2026 (UTC)
    Made me think of the dragon book, which was used in a course I took in the early 1980s. Donald Albury 13:02, 29 April 2026 (UTC)
See mw:Article guidance – maybe this could be developed further to a broader scope/application, and something like that may already be planned, but I'm not sure about it...it could also be a broad goal of helpful guidance at the point it's needed/useful, or nothing 'official'. Two other, related ways to do this are some editor assistant tool, as proposed here, which could provide links and maybe help if you ask things in natural language, and videos – see c:Category:Instructional videos on using Wikipedia. Prototyperspective (talk) 14:49, 29 April 2026 (UTC)
These all look very interesting, although a bit separate in functionality from guided tours, as the first one is another extension intended specifically for article creation and the second one is some kind of chatbot, which might not be as well-received by the community. Guided tours are more flexible (as they can be directly written on-wiki without requiring developer work, since the extension already exists) and can provide interface-level guidance, so they are in some way complementary with what you suggest. Chaotic Enby (talk · contribs) 15:14, 29 April 2026 (UTC)
Yes, Article guidance currently is just about article creation, but again maybe that could be expanded. The context was that the extension you linked seems not to be used much, both by people providing tools and especially by users, and it seems to have been developed around 2013, while Article guidance is used by many users and has been developed just recently. So to me it seemed like expanding that is probably the more feasible approach for optimal results. Maybe one could ask about the differences between the two at the Article guidance talk page, where it could perhaps be clarified whether the guided tours extension could be used, or why not. Prototyperspective (talk) 11:53, 30 April 2026 (UTC)
I'm very confused as these two extensions seem to have very different purposes and functionalities, so using one over the other doesn't seem especially meaningful? For example, the SecurePoll setup tours aren't something that could have been done with the Article guidance extension, as the goal is to guide users through navigating an existing interface rather than have them answer a series of questions. Noting that the latter extension is also currently experimental, and hasn't been fully deployed yet, so expanding it would be a bit premature (although it's good to plan it in advance!) Chaotic Enby (talk · contribs) 14:34, 30 April 2026 (UTC)
"Guided tours are interactive tours of a part of Wikipedia. They are meant to complement help pages, by showing users directly how to do something in a step-by-step way" overlaps with Article guidance's "… provides tailored, community-adjustable guidance throughout the creation process" [in this context also other processes and pages], and I don't see why it couldn't be adapted for SecurePoll setup; also, things like SecurePoll setup seemed like an example, not some particular thing you were asking about. Prototyperspective (talk) 16:10, 30 April 2026 (UTC)
The intent is similar, but the actual execution is different, as Article guidance provides a standalone window while Guided tours show up as an overlay. Both are helpful in different use cases – to go back to my SecurePoll example (which I'm referring to as it's the one I've developed recently, so it helps to have a concrete example), the setup requires the clerk to use Special:SecurePoll/create, while article creation is more flexible and a separate window can be used to send preload data (which I don't think is feasible with SecurePoll). In general, Article guidance is still experimental and more tailored to the specific task of article creation, and doesn't yet have options to be adapted by the community for other tasks. Chaotic Enby (talk · contribs) 16:42, 30 April 2026 (UTC)

New overcategorization guideline for fictional elements

Due to recurring issues with original research in categorization of fictional elements by in-universe attributes, I have opened Wikipedia_talk:Overcategorization#Fictional_elements?. I suspect that there is enough precedent to establish one, but I don't know exactly what it would entail. –LaundryPizza03 (d) 14:46, 25 April 2026 (UTC)

The topic as framed is already in-universe, not strictly being about fictional elements :) DMacks (talk) 15:42, 25 April 2026 (UTC)

Idea for a potential bot

Would it be possible to make a bot thingy for the sole purpose of removing double spaces? Wikipedian12512(alt) (talk) 11:55, 27 April 2026 (UTC)

It would be possible for someone to create such a bot, but it wouldn't be approved. See WP:COSMETICBOT. Anomie 12:31, 27 April 2026 (UTC)
Also see MOS:DOUBLESPACE, which implies that they should be just left alone. Graham87 (talk) 06:26, 28 April 2026 (UTC)
I'm talking more about accidental double spaces in between words. I've had to correct that occasionally. Wikipedian12512(alt) (talk) 11:49, 28 April 2026 (UTC)
Such double spaces do not show in reading mode, and so do no harm. It's OK to correct them when performing more substantive edits, but editing an article solely to reduce double spaces is unnecessary, and may be considered a form of gaming the system to increase edit counts. Donald Albury 15:37, 28 April 2026 (UTC)
Thanks! Wikipedian12512(alt) (talk) 21:25, 28 April 2026 (UTC)
While we're all here: I've heard that the double-space actually is visible, but only to the small number of people who use the Wikipedia app. WhatamIdoing (talk) 01:50, 29 April 2026 (UTC)
Sounds like a bug in the app then... Anomie 02:03, 29 April 2026 (UTC)
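For completeness, the substitution such a bot would perform is essentially a one-liner, which is part of why it falls under WP:COSMETICBOT: the parser already collapses runs of spaces in rendered output, so readers would see no difference. A naive sketch (not a proposed bot); a real implementation would also need to skip <pre>, <nowiki>, source blocks and the like.

```python
# Sketch only: collapse runs of spaces between words in wikitext. Leading indentation is
# left alone because it is meaningful in wikitext (e.g. it triggers <pre> formatting).
import re

def collapse_double_spaces(wikitext: str) -> str:
    return re.sub(r"(?<=\S) {2,}(?=\S)", " ", wikitext)

assert collapse_double_spaces("Two  spaces here.") == "Two spaces here."
```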

Is there an article about the practice of addressing people in the third person as something more formal in some languages?

Thryduulf (talk) 21:05, 30 April 2026 (UTC)

Something about WP:GAME

WP:GAME doesn't appear to be an intuitive rule to me, and there are many cases where it restricts a possible way to avoid large reverts. For example, segmented edits are better for controversial topics, as they allow people to easily revert a particular problem caused by a good faith editor. So here's my question: couldn't we remove GAME and support the separation of controversial changes and other, trivial changes? Or we could have editors decide whether to link two changes, and have them count as one for the edit count. Heck, we could even make a system that only counts one edit per page per day or something for the edit count (I feel that this might be preferable to just having warning templates if someone breaks GAME). Wikipedian12512(alt) (talk) 21:38, 28 April 2026 (UTC)

Making several edits to allow for specific reverts wouldn't fall under "gaming the system" in my opinion. The matter is more suited for something like making 10 dummy edits adding one character to a sandbox, rather than making several substantial content edits on the same article. This is one of the reasons why having Wikipedia not being a legal system is useful, as a rigid rule on "don't make multiple edits in a row on the same page" would indeed be much less practical. Chaotic Enby (talk · contribs) 21:48, 28 April 2026 (UTC)
And, as someone often guilty of changing my mind and adding more to one of my comments, I'd be the first one affected by this! Chaotic Enby (talk · contribs) 21:48, 28 April 2026 (UTC)
Yes, I agree that the flexibility is important, but a system that I think would work better is one where you can add many dummy edits to a sandbox, but only the first counts. Maybe one every hour per article in the main space, and one per day in the non main space? (These are all per page, different things on different pages are treated separately, and this isn’t saying you can’t go back and add another tweak, just that the second one doesn’t count for edit counts if it’s too soon.) Wikipedian12512 (talk) 00:11, 29 April 2026 (UTC)
That could be a good idea! In that case, the edit count requirement should be decreased accordingly as not every edit from legitimate editors will count anymore. Given the recent technical changes to autoconfirmed, this is absolutely something that can be taken into consideration! Chaotic Enby (talk · contribs) 00:22, 29 April 2026 (UTC)
Thanks :)! Wikipedian12512 (talk) 00:47, 29 April 2026 (UTC)
My guess would be that the auto confirmed should stay the same, as the 10 edit req is kinda just to make sure someone doesn’t just make the account, wait a bit, then vandalize. Plus, with the linking bot suggestions, it shouldn’t be too hard for anyone. (If necessary, I think seven is a good number, but ten is also quite practical.)
Extended confirmed might need a much larger decrease, as the bot suggestions stop after a while and people start to find their “groove,” hitting things that they really like, therefore lowering their range and leading to doing a lot of work on the same article. My guess would be that 350-400 would be the ideal new range for that.
Anything else needing an edit count should be lowered by 20-40%.
(I’m probably getting ahead of myself here.)
Wikipedian12512 (talk) 00:55, 29 April 2026 (UTC)
  • Note the OP presumably means WP:Gaming the system (WP:GAME) rather than Wikipedia:WikiProject Ghost towns (WP:GTS). Please check that the shortcut goes where you think it does before posting - in this case the link target is enough of a non sequitur to make it clear that there is an error, and there is sufficient context for experienced editors like me to work out the intended meaning. However, when the target is more plausibly relevant, when there is less context, and/or when the reader is less familiar with the English Wikipedia, a mistake like this could cause serious confusion and miscommunication. Thryduulf (talk) 01:05, 29 April 2026 (UTC)
    Sorry. Every single time, I mix those up.
    Note to anyone confused: I meant Wikipedia:Gaming the system. I’ll be changing this now. Thanks! Wikipedian12512 (talk) 01:11, 29 April 2026 (UTC)
    And, please, everyone, check that the WP:UPPERCASE actually says what you think it says. Our rumor mill/telephone game is not a reliable source for what pages like WP:QUO or WP:NOTNEWS actually say. WhatamIdoing (talk) 01:59, 29 April 2026 (UTC)
    …What? Wikipedian12512 (talk) 02:27, 29 April 2026 (UTC)
    Am I missing context? Wikipedian12512 (talk) 02:27, 29 April 2026 (UTC)
Apparently continued at Wikipedia:Village pump (policy) § Proposal: Changing WP:GAME. Graham87 (talk) 07:16, 30 April 2026 (UTC)

Changes to Autoconfirmed and Extended Confirmed

The WMF has recently changed Autoconfirmed to apply from the date the first edit was made, not the date the account was first created.

I'm opening this discussion because there was some discussion in that AN thread about additional changes to autoconfirmed or extended confirmed, with the intent to get a list of changes that the community might want to see made to these rights. These possible changes would then be taken to a multi-part RfC.

How they are presented in the RfC depends on the proposed change. Configuration changes (changing the seniority or number of edits required for autoconfirmed or extended confirmed) are simple to implement and only require a consensus to be formed and a request made through the process listed at meta:Requesting wiki configuration changes, and would be listed in the RfC as being there to determine whether the required consensus exists.

Proposals for changes that are not simple to implement, such as changing extended confirmed to be from the date the first edit was made, not the date the account was first created, would be listed in the RfC as going into an open letter if there is a consensus for them, which would request that the WMF consider implementing them.

Changes to discuss in the RfC could include:

Configuration changes
  1. Change AC to require 7/14/30 days of seniority, not 4
  2. Change AC to require 25/50/100 edits, not 10
  3. Change ECR to require 60/90/180/365 days of seniority, not 30
  4. Change ECR to require 750/1000/2000 edits, not 500
Other changes:
  1. Change AC to require 7 days with edits, not 7 days seniority
  2. Change ECR to apply from the date the first edit was made, not the date the account was first created
  3. Change ECR to apply from the date the account received autoconfirmed, not the date the account was first created
  4. Change ECR to require 30 days with edits, not 30 days seniority
  5. Change ECR to count only the first 10/25/50 edits on any day

Please propose additional ones. BilledMammal (talk) 06:29, 30 April 2026 (UTC)

I think other changes 1, 4, and 5 are too complicated to explain to new editors, but I wanted to include them here to allow for discussion on them.
Regarding configuration change 3, ECR period, according to statistics from Sean.hoyland, most accounts already take more than a year to obtain it, so extending the period is not likely to be disruptive.
BilledMammal (talk) 06:29, 30 April 2026 (UTC)
One easily overlooked effect of increasing the requirements is that we would get more requests for manual granting of these permissions, so more work for admins. Phil Bridger (talk) 08:10, 30 April 2026 (UTC)
I'm not sure how sensitive the number of requests for manual EC grants is to the requirements. Most requests seem to be related to either the content translation tool, alternative accounts, restoration of the grant after it was revoked or the occasional "the rules don't apply to me because..." claims (which can presumably be declined at leisure). It's not obvious to me how or the extent to which requirements changes would impact those sets. Sean.hoyland (talk) 13:18, 30 April 2026 (UTC)
I note that changing ECR to count from first edit would be simple to implement, just swap APCOND_AGE for APCOND_AGE_FROM_EDIT in the configuration. Some wikis already do this. Anomie 11:58, 30 April 2026 (UTC)
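For anyone unfamiliar with the autopromote conditions mentioned above, the difference is simply which timestamp the seniority clock starts from. A small illustration of the effect on a "sleeper" account, written in Python rather than the actual MediaWiki configuration, with the ECR thresholds assumed:

```python
# Illustration only: the real check is MediaWiki's autopromote logic
# (APCOND_AGE vs APCOND_AGE_FROM_EDIT), not this script.
from datetime import datetime, timedelta

def ec_eligible(registered: datetime, first_edit: datetime, now: datetime,
                edits: int, age_from_first_edit: bool,
                min_days: int = 30, min_edits: int = 500) -> bool:
    start = first_edit if age_from_first_edit else registered
    return (now - start) >= timedelta(days=min_days) and edits >= min_edits

# An account registered a year ago that only started editing last week:
registered = datetime(2025, 4, 30)
first_edit = datetime(2026, 4, 23)
today = datetime(2026, 4, 30)

ec_eligible(registered, first_edit, today, 500, age_from_first_edit=False)  # True
ec_eligible(registered, first_edit, today, 500, age_from_first_edit=True)   # False
```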
I have to wonder what the purpose of all these proposed changes is. I find it hard to believe that the sort of vandals that make sleeper accounts would be significantly inconvenienced by the change from "age from creation" to "age from first edit" or by modest increases in the thresholds. Similarly, the "days with edits" might make it easier for vandal fighters to catch obvious patterns (if they have the bandwidth to watch), but other than that I don't see it being much barrier to persistent vandals either. On the other hand, legitimate new editors are probably much more likely to be affected. Anomie 11:58, 30 April 2026 (UTC)
What would be the effects on legitimate new editors in your view? I should clarify that I don't see EC as a barrier for vandals. Vandals can just be reverted (or they won't be able to edit EC protected pages anyway). The EC grant is interesting for me because I think it is effectively our only enforceable restriction for contentious topic areas, and there are interesting statistical relationships between grant acquisition rates and outcomes for accounts (e.g. ). It's possible that the EC requirements could be one of the few dials we can turn to regulate disruption in contentious topic areas. Sean.hoyland (talk) 13:50, 30 April 2026 (UTC)
Having said that, when it comes to contentious topic areas, with their complicated feedback loops, I have very little confidence in our ability to predict the effects of actions. Things done in good faith to improve things may not have the intended effect and may make things worse e.g. AE seems like a good idea in theory, but in a polarized environment with the additional issue of an inability to establish whether accounts are in good standing, the system becomes weaponized and a cause of conflict. Similarly, changing the EC requirements might have the opposite of the intended effect by filtering out the semi-disinterested parties (who can't be bothered to acquire EC) and concentrating the dedicated partisans willing to put in the effort. One contentious topic area, for example, is currently under attack by organized, probably paid, ban/block/lock evading actors, and I doubt that changing the EC requirements would significantly impact these kinds of actors, although it would extract more pre-EC work from them, which probably benefits the project. Sean.hoyland (talk) 15:04, 30 April 2026 (UTC)
Change ECR to require 50 edits, not 500. The current bar is already absurdly high and makes XC protected articles considerably worse. LordCollaboration (talk) 14:03, 30 April 2026 (UTC)
Any time-based limits are moot to me, since waiting out a time requirement doesn't require gaining any experience; you can just go and do something else for a bit.
ECR covers the articles with the most potential for disruption, so #4 in the configuration changes (1k edits) is absolutely my preference. There are plenty of editors with 500 edits at ANI or Teahouse who still don't know how to find reliable sources properly, especially when it comes to contentious topics.
The difference between the number of editors who would manually ask for EC and the potential reduction in disruption from otherwise well-meaning editors (or not-so-well-meaning ones) is going to be pretty significant IMO.
AC editors can still work on any non-ECR article, so this won't impact new editors that much; it's just raising the bar on our definition of an experienced editor.
I also think #4 in the second section is best (within that section), as it has the least potential for exploitation; #5 would be great, but I don't know if that's technically possible? Overall, I prefer #4 at the top. Blue-Sonnet 14:07, 30 April 2026 (UTC)
If you look at the Days and Unique Dates columns in https://gamingcheck.toolforge.org/recent_grants_table with the Verbose option switched on, it's clear that account age and active editing days are very different kinds of time that would presumably have different effects in requirements. Sean.hoyland (talk) 14:18, 30 April 2026 (UTC)
Would it be possible for autoconfirmed to require mainspace edits? Or at the very least non-userspace/draftspace edits, so that somebody can't get 10 edits to their own article draft and then be able to create it in mainspace. ScalarFactor (talk) 17:00, 30 April 2026 (UTC)
I don't think any criteria can give us what we want. The idea of the "confirmed" status is to give us some idea that the user is not just here to vandalise, and "extended confirmed" to show us that the user is experienced and has some idea of policy/guidelines. These are very difficult to automate. For example, a user with 50 edits could know policy/guidelines inside out, but a user with 5000 could know nothing about them. Phil Bridger (talk) 22:08, 30 April 2026 (UTC)
I think a big part of that problem is that "we" don't agree on what "we" want in the first place. You (and I) think of autoconfirmed as a low bar to stop the simplest drive-by vandalism by people who've effectively never edited before. Dennis Brown, just below, wants it to be something much stronger for many different purposes. Anomie 01:53, 1 May 2026 (UTC)
  • I personally think AC should be 50 edits + 30 days. Some may argue this is a high bar, but I would argue that for one of the largest and most heavily trafficked websites on the planet, it is a reasonable limit that allows new users to edit most articles and protects sensitive articles. This isn't just about vandalism; it also protects against sockpuppetry/meatpuppetry, and even against innocent "bad edits" from users who are too new to understand how things work here. 50 edits is also enough to establish patterns for bad-acting accounts, to see if they are gaming the system, if they are essentially SPAs, and if they are actually WP:HERE to build an encyclopedia. I don't think we need to limit it to $x number over $y days, or only count mainspace edits; just keep it simple. 50/30 is easy to understand, makes it easy to determine someone's motivation, and is based on the reality that it takes a month for a good-faith editor to have a clue about things like WP:BRD, how to use talk pages, etc. If anything, it would prevent new, good-faith editors from getting into trouble on sensitive pages simply due to a lack of experience, and improve the experience for all editors. Dennis Brown - 2¢ 23:41, 30 April 2026 (UTC)

Unbolding the donate button

Donate button in Vector 2022

Recently, Vector 2022 has been showing the donate button in bold to logged-out users, making it more prominent than "Create account".

In a discussion at the Fundraising Hub, Toadspike, Barkeep49, and Chaotic Enby have proposed unbolding it, but the WMF doesn't seem receptive. However, it should be possible for us to do that ourselves by editing MediaWiki:Vector-2022.css.

I want to start a discussion on doing so here, to see what the general opinion of the community is, and if the community is supportive then possibly open an RfC if the WMF isn't willing to act. BilledMammal (talk) 07:04, 30 April 2026 (UTC)
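For illustration only, a sketch of the kind of rule that could go into MediaWiki:Vector-2022.css; the selector below assumes the Donate link still carries the long-standing pt-sitesupport id, which would need to be verified against the live Vector 2022 markup before anything is actually added:

    /* Sketch only: unbold the Donate link for logged-out users in Vector 2022.
       The #pt-sitesupport id is an assumption; check the live page HTML
       for the real id/class before deploying. */
    #pt-sitesupport a {
        font-weight: normal;
    }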

Yep, I think giving the "Donate" button more prominence than the "create account" button really gets our priorities backwards. We are in great need of more editors. We are not really in need of more donations. Toadspike [Talk] 08:43, 30 April 2026 (UTC)
Would bolding Create account put a thumb on that scale? CMD (talk) 09:35, 30 April 2026 (UTC)
Agree, but maybe it's styled bold because the button is only there temporarily, unlike the create account button. On a related note, I think the project needs open-source developers / technical contributors too, and this is essentially never highlighted (I've suggested occasional campaigns/banners above IT-related articles). Prototyperspective (talk) 11:58, 30 April 2026 (UTC)
I don't love the message that bolding Donate, and not Create an account, sends. I have consistently pushed for "editing" to be a meaningful part of the "donation" process, because a semi-regular editor provides far more value to the project than someone throwing money equivalent to a few cups of coffee at the foundation once a year. However, I'm not sure I agree that We are not really in need of more donations. First, I don't agree because I don't see myself as part of the WMF, so their need for donations is not something that includes me in a "we" situation. But I also don't agree because, for the first time since I became an active participant in these kinds of discussions, I think the WMF does need donations. Even for those who think all the Foundation should do is keep servers running and defend us legally, the costs of maintaining the project have risen dramatically in the last year, with AI scrapers placing real strain on our servers and the political and legal climates in numerous places causing increased strain there. And, for me, I want the foundation also developing the MediaWiki software, and I think it's done a better, though still highly imperfect, job of orienting that development towards community desires and needs. Together, I think the WMF's objective needs are greater and it's doing a better job of spending the money it does raise. So I am troubled by the decreased number of donors in last year's fundraising drive, both from a financial standpoint and because of what it means for our readership (with readership being something I care about a huge amount). I suggest that instead of finding ways to unbold "Donate", we should go the opposite direction: leave that bold, and instead use our abilities to also bold "Create an account". Best, Barkeep49 (talk) 14:50, 30 April 2026 (UTC)
Co-signing the above, and I'd like to draw out "because of what it means for our readership" a bit more. AI isn't just a threat to our server capacity. It's preventing readers of our information from being readers of Wikipedia. And that's where we get both editors and donations from. We are going to start needing both, much more than usual. -- asilvering (talk) 15:20, 30 April 2026 (UTC)
Seconding Barkeep once again on this. The only worry I'm having with this is that bolding conveys relative importance as well as absolute importance – if everything is bolded, we're giving the reader more or less the same impression as if nothing is, although the buttons will be (on aggregate) slightly more visible. Bolding the Donate and Create account buttons but not the Log in button could be another option, although I'm afraid it might look a little odd. But yes, we do need both more editors and more donors in the grand scheme of things. Chaotic Enby (talk · contribs) 16:55, 30 April 2026 (UTC)
Hi all - A major area of the Wikimedia Foundation’s 2026-2027 annual plan is around deepening contributor engagement, where we’re trying to double the number of retained editors over the next two years. That’s a huge goal, but it’s the right one; our entire mission depends on growing the number of volunteers who create and improve Wikipedia content. It’s also one that has been strongly recommended by editors on the Product and Technology Advisory Council (PTAC). Foundation teams are spending a lot of time thinking about what are the most effective entry points to do that, including things like lowering the barriers to entry for newcomers through structured editing. If you have any thoughts about this, we’d love your feedback on the annual plan talk page. KStineRowe (WMF) (talk) 20:59, 30 April 2026 (UTC)
To @KStineRowe (WMF)'s point, I remember talking about this exact thing with @OVasileva (WMF) a while back after Toadspike mentioned it on the Fundraising Hub page. TL;DR, I think the current status quo of donate having more prominence than creating an account is temporary(?), and the plan is to double down on bringing account creation to the forefront in the next year, to my understanding. Sohom (talk) 21:16, 30 April 2026 (UTC)
For those who don't speak WMF, "next year" refers to the next fiscal year, which starts in July. But future promises do nothing in my mind to diminish actions we might want to take as a community now. Best, Barkeep49 (talk) 21:22, 30 April 2026 (UTC)
@Barkeep49 Yes and no. They are working on some of it this year. For example, mw:Readers/Reader Experience/Reading lists trials (one of the features that WMF thinks might get a lot of folks to create accounts) are ongoing right now. But again you are right that most of the work appears to be scheduled for the next year. Sohom (talk) 21:33, 30 April 2026 (UTC)
Also cc @SToyofuku-WMF who was more directly involved in the experiment per the List_of_experiments_in_Product_and_Technology. Sohom (talk) 21:36, 30 April 2026 (UTC)
Reading lists are a project because the WMF sees a large number of people who create accounts and whose only desired role is to be a reader. The WMF wants to make Wikipedia more useful for them. This is great. I support such work. I take our readers seriously, and delivering value to users keeps them coming here as opposed to relying on AI slop; it's an example of why I want to see the WMF able to continue to fundraise successfully. That, however, is different from getting people to be editors, which is what I am focused on in this discussion. The good news is that bolding the "Create an account" button serves both kinds of registered accounts. Best, Barkeep49 (talk) 21:44, 30 April 2026 (UTC)
On the linked page it does say that it may encourage more users to become editors. Personally I think this is unlikely, though I agree that improving the reader experience is still a worthwhile goal.  novov talk edits 12:53, 1 May 2026 (UTC)
Thanks all - we're pretty open to discussing turning off the donor button bolding and exploring the idea of bolding the account creation button, but these decisions fall across a few different teams/timezones, and some key people are at an offsite. Would it be ok with you if we paused this discussion and followed up with you all next week? KStineRowe (WMF) (talk) 14:50, 1 May 2026 (UTC)
Sounds great, thanks a lot for the feedback! Chaotic Enby (talk · contribs) 15:07, 1 May 2026 (UTC)
That definitely works! Sohom (talk) 15:32, 1 May 2026 (UTC)
What are the reasons that make this current temporary status quo (which might add confusion to readers about our priorities) necessary, and would it be possible to make these changes sooner instead of waiting a fiscal year? I'm worried that, while "temporary" at first, it might stay here as a fait accompli and be much harder to change moving forward. Chaotic Enby (talk · contribs) 21:53, 30 April 2026 (UTC)
Noting that it was Some1 not me who raised the bolding issue. It got misattributed to me. Best, Barkeep49 (talk) 14:26, 30 April 2026 (UTC)
Thanks for the ping Barkeep49, but I believe it was Toadspike who raised the bolding issue, not me. I haven't commented on the new Donate button yet, but since I'm here, I'll just give my 2 cents and say that bolding the Donate link makes it look tacky. My preference would be to not bold any of those three top-right corner links (donate, create an account, log in), unless data shows that bolding those links will increase the number of donations/user account creations, etc. Some1 (talk) 22:39, 30 April 2026 (UTC)
But the WMF seems perfectly prepared to "look tacky" as long as the money keeps rolling in. Phil Bridger (talk) 18:32, 1 May 2026 (UTC)
Would we really need an RfC for this? It’ll just get speedily closed in favour and stir up animosity Kowal2701 (talk, contribs) 16:23, 30 April 2026 (UTC)
I think if the community wants Iadmins to start a fight with the WMF then it should be really clear that they truly have community support. Speaking as an interface admin myself I definitely would not be willing to do this without an RfC. * Pppery * it has begun... 17:29, 30 April 2026 (UTC)
Same here (although, being WP:INVOLVED, I shouldn't be the one to do this either way). If there isn't a clear message from the community showing overwhelming consensus, we shouldn't be doing this. And, if there is, our primary goal should be convincing the WMF instead of circumventing them. Chaotic Enby (talk · contribs) 17:46, 30 April 2026 (UTC)
Ditto from me. I'm happy to talk with WMF folks on behalf of the community, but I'm not going to be taking any controversial WP:IADMIN actions without strong community consensus. Sohom (talk) 21:25, 30 April 2026 (UTC)
On a different front from the IA points raised by Pppery and CE: as the discussion so far shows, even when there's agreement on the problem (bolding the Donate button by itself is bad), there may be disagreement on the solution, so finding consensus can be useful. Best, Barkeep49 (talk) 17:55, 30 April 2026 (UTC)
Imho the only button that should be bolded is "Edit", as it was before Monobook.-- sapphaline (talk) 19:44, 30 April 2026 (UTC)

Add article assessment info to mobile/app pages

The Wikipedia mobile site and app currently provide no information about an article’s assessment level (FA, GA, etc). Neither the article pages themselves nor the corresponding talk pages show it. A little icon on either of those pages (or just making some of the standard talk page templates visible) would be helpful. Philroc (talk) 18:19, 30 April 2026 (UTC)

The talk pages do show it, at least on web on my phone. The assessment templates are hidden behind a button saying "Learn more about this page". It would be good for them to be visible without having to press a button. CheeseAndJamSamdwich (talk) 19:01, 30 April 2026 (UTC)
Similar discussion occurred at Wikipedia:Village pump (proposals)/Archive 174#Move good/featured article topicons next to article name. There is apparently an issue (though I don't know if it applies to skins added since 2020) that prevents GA/FA topicons from appearing on mobile (apologies if I'm totally wrong about this now). As stated, most of this information can be found under "Learn more about this page" when looking at a talk page, if you really need it. I was certain more discussion was had at length about assessments being visible on mobile in the past, but I can't find it now. -- Reconrabbit 17:58, 1 May 2026 (UTC)
