Wikipedia:Bot requests/Archive 73
From Wikipedia, the free encyclopedia
| This is an archive of past discussions on Wikipedia:Bot requests. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current main page. |
| Archive 70 | Archive 71 | Archive 72 | Archive 73 | Archive 74 | Archive 75 | → | Archive 80 |
Make more use of external link templates
{{Anarchist Library text}}, for example, has just three transclusions; yet we currently have 222 links to the site which it represents. A number of similar examples have been uncovered, with the help of User:Frietjes and others, in recent days, at the daily Wikipedia:Templates for discussion/Log sub-pages. I've made {{Underused external link template}} to track these; it adds templates to Category:External link templates with potential for greater use.
Using external link templates aids tracking, facilitates quick updates when a site's link structure changes, and makes it easy to export the data to Wikidata.
Is anyone interested in running a bot to convert such links to use the templates, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:58, 6 August 2016 (UTC)
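For illustration, a minimal sketch of the kind of substitution such a bot might make. The URL pattern and the template's positional parameters here are assumptions and would need checking against the real site structure and the template's documentation:

```python
import re

# Illustrative only: the URL pattern and the template's positional
# parameters are assumptions, to be verified against the template docs.
LINK_RE = re.compile(
    r'\[https?://(?:www\.)?theanarchistlibrary\.org/library/([\w-]+)'
    r'(?:\.html)?\s+([^\]]+)\]')

def convert(wikitext):
    """Turn bracketed external links into template transclusions."""
    return LINK_RE.sub(r'{{Anarchist Library text|\1|\2}}', wikitext)

print(convert('[http://theanarchistlibrary.org/library/some-essay Some Essay]'))
# -> {{Anarchist Library text|some-essay|Some Essay}}
```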
- @Pigsonthewing: This is possible, but I need an example to follow. I also noticed that many of the articles that link to the domain but don't transclude the template have the links in references; does {{Anarchist Library text}} still fit in that case? Dat GuyTalkContribs 05:20, 14 September 2016 (UTC)
- @Pigsonthewing: - this thread is old and archived, but are you aware of any general discussions on how to handle dead links in external link templates? On average links die after 7 years, so eventually all these templates are just dead links. Do we replace the template with {{webarchive}}, or add a {{webarchive}} after the template, or replace the URL within the template with an archive URL, or should every template have a mechanism for dead links, or...? And how does that work with Wikidata? -- GreenC 16:54, 16 December 2016 (UTC)
Non-notable article finder bot
Almost 1,000 pages get created every day. Our new page patrollers work hard, yet some articles which are clearly non-notable still survive deletion. In Special:NewPages, clicking "oldest" shows only articles up to one month old.
Some pages can only be deleted through AFD. Users tag them for speedy deletion; administrators remove the speedy deletion tag with an edit summary suggesting that the article has no notability but can be deleted at AFD. In some cases the article is never taken to AFD, as the user who tagged it for speedy deletion moves on to other new pages, or is busy with their personal life (they are volunteers). And as an AFD template stays for two or three days before being removed by administrators, other new page patrollers who focus on pages two days old see the speedy deletion tag, but sometimes don't notice that an administrator removed it with a suggestion of AFD. Unfortunately, a few of these articles pass the one-month limit and survive on English Wikipedia.
Some articles are prodded for deletion, and the PROD is removed after two or three days. If anybody then notices that the article is not notable, it will be taken to AFD.
And some articles where the PROD is removed survive if the article is long, well written, has paragraphs, an infobox template, categories, seemingly reliable sources and good English (only extra research can show that the article is non-notable), which means spending our time and internet bandwidth.
As the proverb goes, it is like finding a needle in a haystack: finding these articles among five million articles is a nightmare. Neither we nor any other editor has the time, energy or eternal life for it.
I am damn sure that there are thousands of such articles among the five million. Only a bot can find them.
This is what the bot will do. (On Wikimedia Commons they have a Flickr bot as precedent.)
- This bot will check articles which are more than six months old. If an article was speedily deleted before and recreated by the same user, but was not deleted after recreation, the bot will put a notice on the talk page of the article. Volunteers will then check the deletion log and see whether the article was speedily deleted before.
- Those articles which are at least six months old, have fewer than 50 edits in the edit history, and have been edited by fewer than 15 editors. The bot will do a Google News search for the article name inside quotation marks ("_____"), and also a Google Books search for it. If both results are unsatisfactory, the bot will put a notice on the talk page of the article (if the Google News results show that the article is not notable, but the Google Books search shows good results, then the bot won't tag the article's talk page). Volunteers will then check the notability of the article. The bot will not make more than 30 edits every day.
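A rough sketch (Python with pywikibot; all names illustrative) of the first-pass filter proposed above. The Google News/Books checks would need a search API and are omitted:

```python
import datetime
import pywikibot

# Thresholds are those proposed in the request above.
site = pywikibot.Site('en', 'wikipedia')

def is_candidate(page):
    """True if the article is >=6 months old, has <50 edits and <15 editors."""
    age = datetime.datetime.utcnow() - page.oldest_revision.timestamp
    if age.days < 182:
        return False
    revs = list(page.revisions(total=50))  # cap at 50: that many disqualifies
    if len(revs) >= 50:
        return False
    return len({rev.user for rev in revs}) < 15

print(is_candidate(pywikibot.Page(site, 'Some article')))  # title illustrative
```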
The problem is that many good articles are unsourced and badly written; after checking on the internet, editors decide that the article is notable. Meanwhile, some articles whose subjects have no notability at all are wonderfully written. Thank you. Marvellous Spider-Man 13:13, 22 August 2016 (UTC)
User:Marvellous Spider-Man, for something like this, could you apply the above rules to a set of articles manually, and see what happens - pretend you're a bot. Then experiment with other rules and refine and gain experience with the data. Once an algo is established that works (not many false positives), then codifying it becomes a lot less uncertain because there is evidence the algo will work and datasets to compare with and test against. You could use the "view random article" feature and keep track of results in columns and see what washes out to the end: Column 1: Is six months old? Column 2: Has 50 edits? etc.. -- GreenC 13:56, 29 August 2016 (UTC)
- I think Green Cardamom's suggestion is great. With a bot where the algorithm isn't immediately obvious, the proposed algorithm should definitely be tested manually first. Enterprisey (talk!) 22:45, 3 September 2016 (UTC)
- We need to be very careful about deletion bots, especially ones that look for borderline cases such as articles that didn't meet the speedy deletion criteria but do merit deletion. We need to treasure the people who actually write Wikipedia content and try to reduce the number of incorrect deletion tags that they have to contend with. Anything that speeds up sloppy deletion tagging and reduces the accuracy threshold for deletion tags is making the problem worse.
- More specifically, we have lots of very notable articles that 15 or fewer editors have edited. I'd be loath to see "number of editors" become a metric that starts to be used in our deletion processes. Aside from the temptation for article creators to leave in a typo to attract an edit from gnomes like me, and to put in a high-level category to attract an edit from one of our categorisers, we'd then expect a new type of gaming the system from spammers and particular enthusiasts as groups of accounts start editing each other's articles. You do however have a point that some newpage patrollers will incorrectly tag articles for speedy deletion where AFD might have resulted in deletion. But I think the solution to that is better training for newpage patrol taggers and a userright that can be taken away from ones that make too many errors. ϢereSpielChequers 10:21, 4 September 2016 (UTC)
- Here's an anecdotal example of testing your proposed criteria. I have created about 11 articles. I tend to create articles for people and things that are notable but not famous. All of my articles have fewer than 50 edits and fewer than 15 editors, so they would all fail the proposed test, but all of the articles are likely to survive notability tests.* That's a 100% false positive rate for the initial filter. I haven't tested the Google search results, but searching for most of the names of articles I have created would lead to ambiguous search results that do not hit the person or thing that the article is about.
- I think it would be better to focus on a category of article that is known to have many AfD candidates, like music singles or articles about people that have no references.
- * (Except perhaps the articles for music albums, which the music project holds to a different standard from WP:GNG for some reason.) – Jonesey95 (talk) 16:33, 4 September 2016 (UTC)
- There exists a hand-picked set of articles that specifically address a perceived lack of notability that no one has bothered to take to AfD yet: articles tagged with {{Notability}} in category Category:All articles with topics of unclear notability. Someone (a bot perhaps) should systematically take these items to AfD. The metric here is far more reliable than anything suggested above: it's not guesswork based on who has edited how much and when, but actual human editors tagging the articles because they seem to lack notability and prompting (but overwhelmingly not resulting in) an AfD nomination. – Finnusertop (talk ⋅ contribs) 16:43, 4 September 2016 (UTC)
- Other good places to look for deletion candidates are Category:All unreferenced BLPs and Category:Articles lacking sources. – Jonesey95 (talk) 17:03, 4 September 2016 (UTC)
- Even if you ignored all articles tagged for notability that have subsequently been edited, you would risk swamping AFD with low quality deletion requests. Better to go through such articles manually, remove notability tags that are no longer correct, do at least a google search to see if there are sources out there and prod or AFD articles if that is appropriate. ϢereSpielChequers 16:34, 14 September 2016 (UTC)
Birmingham City Council, England
Birmingham City Council have changed their website, and all URLs in the format:
https://www.birmingham.gov.uk/cs/Satellite?c=Page&childpagename=Member-Services%2FPageLayout&cid=1223092734682&pagename=BCC%2FCommon%2FWrapper%2FWrapper
are dead and, if in references, need to be either marked {{Dead link}} or converted to archived versions.
Many short URLs, in the format:
http://www.birmingham.gov.uk/libsubs
are also not working, but should be checked on a case-by-case basis. *sigh* Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:54, 24 August 2016 (UTC)
- @Pigsonthewing: I believe Cyberpower678's InternetArchiveBot already handles archiving dead URLs and thus no new bot is needed. Pppery (talk) 15:03, 24 August 2016 (UTC)
- One month on, this doesn't seem to have happened; we still have over 200 dead links beginning http://www.birmingham.gov.uk/cs/ alone. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:33, 19 September 2016 (UTC)
- @Pigsonthewing: Did all pages get moved to the new URL format, or did they create an entirely new site and dump the old content? If the former, it may be helpful to first make a list of all the old URLs on Wikipedia, and then try to find the new locations for a few of them. That may help make the bot job easier if a good pattern can be found. Having the new URL format is helpful, but having real examples of the before and after for multiple pages should make it easier. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 06:23, 25 August 2016 (UTC)
The page for the first, long, URL I gave above, like many others I've looked for, appears not to have been recreated on the new site.
The page that was at:
http://www.birmingham.gov.uk/cs/Satellite?c=Page&childpagename=Parks-Ranger-Service%2FPageLayout&cid=1223092737719&pagename=BCC%2FCommon%2FWrapper%2FWrapper
(archived here) is now, with rewritten content, at:
https://www.birmingham.gov.uk/info/20089/parks/405/sutton_park
and clearly there is no common identifier in the two URLs. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:05, 27 August 2016 (UTC)
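For what it's worth, a minimal sketch of how a bot could check each dead URL against the Wayback Machine's public availability API before falling back to tagging {{Dead link}} (error handling omitted):

```python
import requests

# Query archive.org's availability API for the closest snapshot; if none
# exists, the link would be tagged {{Dead link}} instead.
def find_snapshot(url):
    resp = requests.get('https://archive.org/wayback/available',
                        params={'url': url}, timeout=30)
    closest = resp.json().get('archived_snapshots', {}).get('closest')
    return closest['url'] if closest else None

dead = ('https://www.birmingham.gov.uk/cs/Satellite?c=Page&childpagename='
        'Member-Services%2FPageLayout&cid=1223092734682&pagename='
        'BCC%2FCommon%2FWrapper%2FWrapper')
print(find_snapshot(dead) or 'No snapshot: tag {{Dead link}}')
```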
Auto-revert bot
Match Bandcamp links to articles
Please can someone do this:
For all the results which have the format http[s]://[www.]XXX.bandcamp.com, match the ID ("XXX") to the corresponding article title's PAGENAMEBASE.
Example matches include:
- http://20minuteloop.bandcamp.com = 20minuteloop -> 20 Minute Loop
- http://aaronkent.bandcamp.com/album/winter-coats-summer-shorts = aaronkent -> Aaron Kent
- http://anagram.bandcamp.com = anagram -> Anagram (band)
- http://andremarques.bandcamp.com/ = andremarques -> André Marques (filmmaker)
- http://www.alvinpurple.bandcamp.com = alvinpurple -> Alvin Purple (band)
Matches may be individual people, bands, or record companies.
These are not matches:
- http://bennytipene.bandcamp.com/album/room-demo-live-complete = bennytipene != Benny Tipene discography
- http://beastwars.bandcamp.com/ = beastwars != Beastwars (album)
- http://battlecircus.bandcamp.com/album/battle-circus = battlecircus != File:Battle Circus (album).jpg
Then, discard any duplicates, and for each match, fetch the Wikidata ID for the article.
If I could have the results in a spreadsheet, CSV or Wiki table, with four columns (URL, ID, article name, Wikidata ID), that would be great.
I have proposed a corresponding property on Wikidata, and will upload the values there if and when it is created. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:49, 26 September 2016 (UTC)
- The requirement for only people, bands, or record companies is tricky. I guess with people you could look for certain categories like "YYYY births" or "living persons", though that's not precise. Are there similar universal categories for bands and record companies, or other methods, like common infoboxes, that would identify an article as likely being a band or record company? -- GreenC 21:39, 26 September 2016 (UTC)
- Thanks; I wouldn't go about it that way (if anyone wants to, then Wikidata's "instance of" would be the way to go), but by eliminating negative matches such as pages with "discography" "(album)" or "(song)" in the title; or files. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:44, 27 September 2016 (UTC)
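A minimal sketch of that title-based matching, assuming simple ASCII normalisation (accented names such as André Marques would need transliteration, which is omitted here):

```python
import re

# Discard negative matches by title pattern, then compare the normalised
# title with the Bandcamp subdomain.
NEGATIVE = re.compile(r'discography$|\((album|song|EP)\)$|^File:', re.I)

def normalise(title):
    base = re.sub(r'\s*\([^)]*\)$', '', title)       # PAGENAMEBASE
    return re.sub(r'[^a-z0-9]', '', base.lower())

def is_match(subdomain, title):
    return not NEGATIVE.search(title) and normalise(title) == subdomain.lower()

print(is_match('20minuteloop', '20 Minute Loop'))    # True
print(is_match('beastwars', 'Beastwars (album)'))    # False
```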
Hatnote templates
Could a bot potentially modify articles and their sections which begin with indents and italicized text (i.e. ^\:+''([^\n'][^\n]+)''\n) to use {{Hatnote}} (or one of the more specific hatnote templates, if the article's message matches)? Beginning articles like that without the hatnote template tends to mess up Hovercards, which are currently a Beta feature. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:38, 29 September 2016 (UTC)
- Nihiltres has been doing hatnote cleanup of late. Maybe he's already doing this? --Izno (talk) 11:50, 29 September 2016 (UTC)
- I've done a fair amount of cleanup, but I'm only one person. I mostly use the search functionality to pick out obvious cases with regex by hand. Here's a list of handy searches I've used, copied from my sandbox:
- Special:Search/insource:"for other uses" -insource:/for other uses\./ -hastemplate:"hatnote"
- Special:Search/insource:"redirects"+insource:/:\s*''[^\n]*?\s*[Rr]edirects+(here|to+this+page)/
- Special:Search/insource:/:\s*''\s*[Ff]or [^\n]*?,?\s*see/
- Special:Search/insource:"this article is about" insource:/''\s*This article is about/ -hastemplate:"about" -hastemplate:"hatnote"
- Special:Search/insource:/:\s*''\s*[Ff]or+[^\n]*?,?\s*see/
- I've avoided doing broad conversion to {{hatnote}} because there's more work than I can handle just with the cases that should use more specific templates like {{about}} or {{redirect}}. {{Hatnote}} itself ought to only be used as a fallback when there aren't any more-specific templates appropriate. Doing broad conversion would be relatively quick, but more work in the long run: most instances would still need conversion to more specific templates from the general one, and it'd be harder to isolate the "real" custom cases from those that were just mindlessly converted from manual markup to {{hatnote}}. Moreover, a bot would need to avoid at least one obvious false positive: proper use of definition list markup and italics together ought not to be converted … probably easy enough to avoid with a check for the previous line starting with a semicolon? Either way I'll encourage people to join me in fixing cases such as the ones listed in the searches mentioned. {{Nihiltres |talk |edits}} 23:09, 29 September 2016 (UTC)
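A minimal sketch of the general conversion described above, including the definition-list guard Nihiltres suggests. Illustrative only, given his point that broad conversion to {{hatnote}} may be undesirable:

```python
import re

# Skip italic indented lines that follow a definition-list term (a line
# starting with ';'), which are legitimate markup.
HATNOTE_RE = re.compile(r"^:+''([^'\n][^\n]*?)''[ \t]*$")

def convert_hatnotes(wikitext):
    lines = wikitext.split('\n')
    out = []
    for i, line in enumerate(lines):
        m = HATNOTE_RE.match(line)
        prev = lines[i - 1] if i else ''
        if m and not prev.startswith(';'):
            out.append('{{hatnote|%s}}' % m.group(1))
        else:
            out.append(line)
    return '\n'.join(out)

print(convert_hatnotes(":''For other uses, see [[Hat (disambiguation)]].''"))
# -> {{hatnote|For other uses, see [[Hat (disambiguation)]].}}
```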
Fixing hundreds of broken URLs - updating links to different server with the same reference number
I edit hundreds/thousands of wp articles relating to Somerset. Recently (for at least a week) all links to the Somerset Historic Environment Records have been giving a 404. I contacted the web team whose server hosts the database & they said: "as you may be aware what is now the ‘South West Heritage Trust’ is independent from Somerset County Council – some of their systems eg. HER have been residing on our servers since their move – and as part of our internal processes these servers are now being decommissioned. Their main website is now at http://www.swheritage.org.uk/ with the HER available at http://www.somersetheritage.org.uk/ . There are redirects in place on our servers that should be temporarily forwarding visitors to the correct website eg: http://webapp1.somerset.gov.uk/her/details.asp?prn=11000 should be forwarding you to http://www.somersetheritage.org.uk/record/11000 - this appears to be working for me, so unsure why it isn’t working for you".
According to this search there are 1,546 wp articles which include links to the database. Is there any quick/automated way to find and replace all of the links (currently http://webapp1.somerset.gov.uk ) with the new server name ( http://www.somersetheritage.org.uk/record/ ) but keep the identical record number at the end? A complication is that two different formats of the URL previously worked, i.e. both /record/XXXXX and /her/details.asp?prn=XXXXXX.
I don't really fancy manually going through this many articles & wondered if there was a bot or other technique to achieve this?— Rod talk 14:52, 29 September 2016 (UTC)
- There are approximately 800 links in some 300 articles. The way I would recommend doing this is a find/replace in WP:AWB. --Izno (talk) 15:24, 29 September 2016 (UTC)
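The record number is the only part that needs carrying over, so the find/replace could be a single regex along these lines (a sketch; AWB's .NET regex syntax is equivalent for this pattern):

```python
import re

# Both old URL shapes carry the record number, which maps directly onto
# the new /record/ path.
OLD = re.compile(
    r'http://webapp1\.somerset\.gov\.uk/'
    r'(?:her/details\.asp\?prn=|record/)(\d+)')

def fix(wikitext):
    return OLD.sub(r'http://www.somersetheritage.org.uk/record/\1', wikitext)

print(fix('http://webapp1.somerset.gov.uk/her/details.asp?prn=11000'))
# -> http://www.somersetheritage.org.uk/record/11000
```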
- I have never got AWB to work in any sensible way. Would you (or anyone else) be able to do this?— Rod talk 15:29, 29 September 2016 (UTC)
- @Rodw: I'll take a look at this and see what I can do when I get home tonight. It seems like it would be pretty easy to fix with AWB. Omni Flames (talk) 22:16, 29 September 2016 (UTC)
- @Omni Flames: Thanks for all help & advice - the broken links are now fixed.— Rod talk 07:11, 30 September 2016 (UTC)
Missing WP:lead detection
I would like to see a bot that would flag up articles probably in need of a lead - articles with either (a) no text between the header templates and the first section header, or (b) no section headers at all and over say 10kB of text. The bot would place the articles in Category:Pages missing lead section, and possibly also tag them with {{lead missing}}: Noyster (talk), 13:22, 19 September 2016 (UTC)
- Sounds like a good idea to me, at least in case (a). I think it should tag them with the template (the template adds them to the category). At least one theoretical objection comes to my mind: it's technically possible to transclude the entire lead text from elsewhere, making it appear as a template call in the wikitext but as a verbose lead in the rendered text (in practice I've never seen this and it doesn't sound like a smart thing to do anyhow). – Finnusertop (talk ⋅ contribs) 21:59, 23 September 2016 (UTC)
- I thought that this had resulted in some script, but it did not. --Edgars2007 (talk/contribs) 07:39, 25 September 2016 (UTC)
- Thanks Edgars2007 for linking to that discussion from last year. The main participants there Hazard-SJ, Casliber, Finnusertop, and Nyttend may have a view. If "tag-bombing" is objectionable then it may be less controversial to just add the articles to the category, so anyone wanting to supply missing leads can find material to choose from. Other topics to decide upon are minimum article size, and whether to include list articles: Noyster (talk), 15:44, 25 September 2016 (UTC)
- I'd be worried about this tagging a huge number of articles - I'd say excluding lists and maybe doing a trial run with sizable articles only (?6kb of prose?) might be a start, and see what comes up...? Cas Liber (talk · contribs) 18:22, 25 September 2016 (UTC)
- As I noted in that previous discussion, lists need to be carefully excluded. We have absolutely no business listening to anyone who says It's just that this is one of those things that has never been enforced much and the community has developed bad practices — aside from provisions required by WMF, community practice is the basis for all our policies and guidelines, and MOS must bow to community practice; people who insist on imposing the will of a few MOS editors on the community need to be shown the door. On the technical aspects of this proposal, I'm strongly opposed to having a bot add any visible templates; this is a CONTEXTBOT situation, and we shouldn't run the risk of messing up some perfectly fine articles because the bot's algorithm mistakenly thought they needed fuller intros. However, a hidden category would be fine, simply because it won't be visible to readers and won't impact anything; humans could then go through the bot-added category and make appropriate changes, including adding {{lead missing}} if applicable. Nyttend (talk) 21:31, 25 September 2016 (UTC)
- OK thanks commenters. Revised proposal: Bot to detect and categorise articles NOT having "List" as first word of the title AND either (a) no text between the header templates and the first section header, or (b) no section headers at all and over 6kB of text. The bot would place the articles in Category:Pages missing lead section: Noyster (talk), 18:43, 7 October 2016 (UTC)
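A minimal sketch of that detection rule. The template stripping here is crude (no nesting); a real bot would want a proper parser such as mwparserfromhell:

```python
import re

# Flag non-list articles with no prose before the first heading, or with
# no headings at all and over 6 kB of text.
HEADING_RE = re.compile(r'^==.*==\s*$', re.M)

def needs_lead(title, wikitext):
    if title.startswith('List'):
        return False
    m = HEADING_RE.search(wikitext)
    if m is None:
        return len(wikitext) > 6 * 1024
    before = re.sub(r'\{\{[^{}]*\}\}|\s+', '', wikitext[:m.start()])
    return not before  # only templates/whitespace precede the first heading
```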
G4 XFD deletion discussion locator bot
Would it be possible to build a bot that could home in on CSD-G4 tagged articles and determine whether the alleged link in the template to the XFD in question is actually there? So many times people tag an article as G4 but the recreated article is located under a new article space name, and it's aggravating when on CSD patrol to have to look manually through XFD logs to ensure that the article does in fact have an XFD and therefore does in fact qualify as a CSD-G4 deletion. For example, if Example was afd'd, then recreated at Example (Wikipedia article), an alert user would deduce that this was a recreation of Example, but due to the way the G4 template works the article's afd would not show at Example (Wikipedia article) because that wasn't where it was when the afd closed as delete. Under this proposal, a bot programmed to monitor the G4 tags would notice this and automatically update the G4 template to link to the afd at Example, so that the admin arriving at Example (Wikipedia article) would have the proof required to act on the G4 template in a timely manner. Owing to their role in managing the affairs of an estate, I would propose the name for this bot - if it is decided to move forward with writing one - be ExecutorBot. TomStar81 (Talk) 03:16, 11 October 2016 (UTC)
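A minimal sketch of the core lookup such a bot would perform: strip the parenthetical disambiguator and test whether an AfD page exists for the base title (names illustrative):

```python
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def probable_afd(title):
    # "Example (Wikipedia article)" -> "Example"
    base = re.sub(r'\s*\([^)]*\)$', '', title)
    afd = pywikibot.Page(site, 'Wikipedia:Articles for deletion/' + base)
    return afd if afd.exists() else None

afd = probable_afd('Example (Wikipedia article)')
print(afd.title() if afd else 'No AfD found; leave for human review')
```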
aeiou.at
We have around 380 links to http://www.aeiou.at/ using the format like http://www.aeiou.at/aeiou.encyclop.b/b942796.htm
Both the domain name and URL structure have changed. The above page includes a search link (the last word in "Starten Sie eine Suche nach dieser Seite im neuen AEIOU durch einen Klick hier" - "Start a search for this page in the new AEIOU by clicking here") and when that link is clicked the user is usually taken to the new page; in my example this is: http://austria-forum.org/af/AEIOU/B%C3%BCrg%2C_Johann_Tobias
To complicate matters, 84 of the links are made using {{Aeiou}}.
Some pages may already have a separate link to the http://austria-forum.org/ page, and some of those may use {{Austriaforum}}.
Can anyone help to clear this up, please?
Ideally the end result will be the orphaning of {{Aeiou}} and all links using {{Austriaforum}}. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:56, 14 August 2016 (UTC)
- @Pigsonthewing: To make sure I understand
- First subtask: for each invocation of {{Aeiou}}
- Construct the fully qualified URL
- Get the content of the page
- Search through the content for the "hier" hyperlink
- Extract the URL from the hyperlink
- Parse out the new article suffix
- Replace the original invocation with the new invocation
- Second subtask: For each instance of the string http://www.aeiou.at/aeiou/encyclop. in articlespace
- Construct the fully qualified URL
- Get the content of the page
- Search through the content for the "hier" hyperlink
- Extract the URL from the hyperlink
- Parse out the new article suffix
- Replace the original string with the {{Austriaforum}} template invocation
- Do I have this correct? Also can you please link the discussion that endorses this replacement? Hasteur (talk) 13:33, 12 October 2016 (UTC)
- All correct, with the provisos - first for clarity - that in the first subtask, "new invocation" means "new invocation of {{Austriaforum}}"; and that if {{Austriaforum}} is already present, a second is not needed. The current target page says "You are now in the "old" (no longer maintained version) of the AEIOU. The maintained version can be found at..." To the best of my knowledge, we don't need a special discussion, other than this one, to endorse fixing 380 such links. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:36, 12 October 2016 (UTC)
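A minimal sketch of the fetch-and-extract steps in the subtasks above: fetch the old page and pull the target out of the "hier" link. The search link may itself redirect, so a follow-up request would be needed to capture the final austria-forum.org URL:

```python
import requests
from bs4 import BeautifulSoup

# Fetch an old AEIOU page and extract the target of the "hier" search link.
def new_location(old_url):
    html = requests.get(old_url, timeout=30).text
    link = BeautifulSoup(html, 'html.parser').find('a', text='hier')
    return link['href'] if link else None

print(new_location('http://www.aeiou.at/aeiou.encyclop.b/b942796.htm'))
# Expected, per the example above:
# http://austria-forum.org/af/AEIOU/B%C3%BCrg%2C_Johann_Tobias
```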
Ecozone moved to Biogeographic realm
The article Ecozone was moved to Biogeographic realm, in a standardisation of biogeographic terminology on WP. Now, the next step is to change some pages that are currently using the term "ecozone" instead of "biogeographic realm". Can a bot do it, please? The pages are:
- Template:Infobox ecoregion (and its articles);
- Category:Ecozones (with its subcategories, except the subcategory Category:Ecozones of Canada and the article Ecozones of Canada, in which the usage of "ecozone" must be maintained). Zorahia (talk) 01:08, 13 October 2016 (UTC)
Not a good task for a bot. There's very little hope of gaining consensus for a bot changing article text. There's too many edge cases for this to work well. For instance, it's beyond the capability of a bot to avoid editing articles in these categories which discuss the history of the term, etc. ~ Rob13Talk 19:14, 30 October 2016 (UTC)
A bot in Simple Wikipedia
Hey, I request a personal bot for my Simple Wikipedia account. — Preceding unsigned comment added by Trunzep (talk • contribs) 09:41, 9 November 2016 (UTC)
- @Trunzep: This appears to be the page you want. It's not part of English Wikipedia: Simple English Wikipedia is separate and has its own project structure, and you will need to find your way around it if you are a regular editor there. If you request a bot anywhere, please state clearly what it would do and how it would be of benefit to that Wikipedia: Noyster (talk), 10:38, 9 November 2016 (UTC)
Update NYTtopic IDs
{{NYTtopic}} currently stores values like people/r/susan_e_rice/index.html, for the URL http://topics.nytimes.com/top/reference/timestopics/people/r/susan_e_rice/index.html; these have been updated, and redirect to URLs like http://www.nytimes.com/topic/person/susan-rice; and so the value stored should be person/susan-rice.
This applies to other patterns, like:
organizations/t/taylor_paul_dance_co -> organization/paul-taylor-dance-company
We have around 640 of these IDs in templates, and many more as external wiki links or in citations.
The URL in the template will need to be changed at the same time - whether that's done first or last, there will be a period when the links don't work.
I've made the template capable of calling values from Wikidata; so another alternative would be to remove these values and add the new ones to Wikidata at the same time; otherwise, I'll copy them across later.
Non-templated links should also be updated; or better still converted to use the template. Can someone help, please? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 12:50, 29 September 2016 (UTC)
- @Pigsonthewing: How do you suggest a bot predict the correct pattern? It seems to vary quite a bit. ~ Rob13Talk 19:25, 30 October 2016 (UTC)
- @BU Rob13: A bot should not "predict", but follow the current link and note the URL to which it redirects. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:37, 30 October 2016 (UTC)
- @Pigsonthewing: If the URLs are redirecting properly, what's the point of this change? Is there a reason to believe the redirects will go away? ~ Rob13Talk 19:50, 30 October 2016 (UTC)
- There is no reason to suppose that they will be maintained indefinitely; however, the reason for the change is so that the template can be updated, to accept new values; and for compatibility with Wikidata, from where the values should ultimately be fetched. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:58, 30 October 2016 (UTC)
Needs wider discussion. Please obtain a consensus for this. It would be kicked back at BRFA as close enough to a cosmetic edit that consensus is needed. Try a relevant village pump, possibly. ~ Rob13Talk 14:20, 8 November 2016 (UTC)
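For reference, the redirect-following step described above might look like this minimal sketch (the '/topic/' split is an assumption inferred from the example URLs given earlier):

```python
import requests

# Derive the new NYTtopic ID by following the old URL's redirect chain.
OLD_BASE = 'http://topics.nytimes.com/top/reference/timestopics/'

def new_id(old_id):
    final = requests.get(OLD_BASE + old_id, allow_redirects=True,
                         timeout=30).url
    return final.split('/topic/', 1)[1] if '/topic/' in final else None

print(new_id('people/r/susan_e_rice/index.html'))  # expected: person/susan-rice
```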
BSBot
This bot will automatically give barnstars to users. "BS" means "BarnStar". GXXF T • C 15:37, 3 December 2016 (UTC)
- On what basis would they be given? --Redrose64 (talk) 21:11, 3 December 2016 (UTC)
- Not a good task for a bot in general IMO -FASTILY 23:12, 4 December 2016 (UTC)
- Uh is this a joke/hoax?? --Zackmann08 (Talk to me/What I been doing) 06:13, 5 December 2016 (UTC)
Filling out values in a table?
Would a bot be able to handle the following request: use the hexadecimal value in column eight in order to look up (using any database) the corresponding RGB and HSV values for each of the colors, then replace the values in each of those cells with the correct values? I have a larger table that needs to be filled, but this is a small sample of what I have. Evan.oltmanns (talk) 23:00, 12 December 2016 (UTC)
| Color | H | S | V | R | G | B | Hexadecimal |
|---|---|---|---|---|---|---|---|
| Golden Yellow 67104 | 346 | 96 | 93 | 252 | 76 | 2 | #FFCD00 |
| Scarlet 67111 | 346 | 96 | 93 | 252 | 76 | 2 | #BA0C2F |
| Purple 67115 | 346 | 96 | 93 | 252 | 76 | 2 | #5F259F |
| Bluebird 67117 | 346 | 96 | 93 | 252 | 76 | 2 | #7BAFD4 |
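(AlexTheWhovian's actual fix below used JavaScript; for illustration, the same arithmetic in Python, using the standard colorsys module:)

```python
import colorsys

# Derive RGB and HSV from the hex value in the last column; HSV is scaled
# to degrees/percent to match the table's apparent convention.
def hex_to_rgb_hsv(hex_code):
    r, g, b = (int(hex_code.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4))
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (r, g, b), (round(h * 360), round(s * 100), round(v * 100))

print(hex_to_rgb_hsv('#FFCD00'))  # -> ((255, 205, 0), (48, 100, 100))
```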
- @Evan.oltmanns: I created a script that can be run through a browser's Javascript console, which converts all of the HSV and RGB values to the correct values, and copies the new table code to your clipboard - would you like the script, or would you like me to copy the new table directly to your sandbox? Alex|The|Whovian? 00:43, 13 December 2016 (UTC)
- @AlexTheWhovian: If you could run the script for me and replace the existing (full) table on my sandbox, that would be greatly appreciated. The full table can be found on my sandbox here: User:Evan.oltmanns/sandbox#Department_of_Defense_Standard_Shades_for_Heraldic_Yarns Thank you. Evan.oltmanns (talk) 01:45, 13 December 2016 (UTC)
Done Alex|The|Whovian? 01:48, 13 December 2016 (UTC)
- @AlexTheWhovian: Thank you! You saved me from enduring many hours of manual editing. Evan.oltmanns (talk) 01:53, 13 December 2016 (UTC)
- Glad to help! I use JavaScript a lot for automatic editing, and when I saw this post, I thought that it was very similar to some other content I've edited, since I used the RGBtoHSV and HEXtoRGB functions from my script here. Alex|The|Whovian? 02:07, 13 December 2016 (UTC)
Bot to remove external link to MySpace?
Following the TfD discussion here: Wikipedia:Templates_for_discussion/Log/2015_January_15#Template:Myspace, I think we should remove all external links to MySpace as unreliable. They keep popping up. -- Magioladitis (talk) 13:05, 20 August 2016 (UTC)
- We have over 3400 links to MySpace. I'd want to see a much wider discussion before these were removed. [And it's deplorable that a template for a site we link to so many times was deleted] Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:43, 22 August 2016 (UTC)
They are often in the external links section. See 120 Days. It is the band's homepage. Is there reason to delete primary sources? Also don't understand the template deletion. -- GreenC 13:21, 29 August 2016 (UTC)
Apparently the links were added after the template was deleted. -- Magioladitis (talk) 13:41, 29 August 2016 (UTC)
I've posted a proposal to recreate the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:16, 29 August 2016 (UTC)
Green Cardamom and Pigsonthewing, thanks for the heads up. I would have expected, per the deletion rationale for the template, that the links would be removed. Otherwise, the template is a valuable shortcut that helps against linkrot. - Magioladitis (talk) 15:23, 29 August 2016 (UTC)
I see no problem with this idea, so I whole-heartedly endorse it: there's nothing wrong with removing links that lead to a no-longer-used, nearly dead website/service when there are much better alternatives to link to. --RuleTheWiki (talk) 11:01, 14 October 2016 (UTC)
Helping to expand (blacklisted) url-shortened links by suggesting to user
I see many editors trying to add shortened or otherwise redirected URLs (typically bit.ly, goo.gl, youtu.be, or google.com/url? ..) to pages, and then failing because they do not expand/replace the link with the proper link (shortening services are routinely globally blacklisted). Some try repeatedly, likely becoming frustrated at their inability to save the page.
I think it would be of great help to editors if, when they hit the blacklist with a shortened URL, a bot would pick up on it and post a message on their talkpage along the lines of "hi, I saw that you tried to add bit.ly/abcd and that you were blocked by the blacklist. URL shorteners are routinely blacklisted and hence cannot be added to any page in Wikipedia. You should therefore use the expanded url 'http://aaa.bc/adsfsdf/index.htm' instead. (sig etc.)" (The bot should take into account that the original link is blacklisted, but also that the target may be blacklisted - so if it fails saving the expanded link with http, it may try again without the http.) --Dirk Beetstra T C 12:02, 20 October 2016 (UTC)
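A minimal sketch of the expansion step such a bot would need. Note that some shorteners answer GET but not HEAD, and the example URL is hypothetical:

```python
import requests

# Follow the shortened link and report the final destination.
def expand(short_url):
    return requests.head(short_url, allow_redirects=True, timeout=30).url

# expand('http://bit.ly/abcd') would return the unshortened target, which
# the talk-page message could then suggest to the editor.
```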
- Perhaps an edit filter set to warn would be better? Dat GuyTalkContribs 15:12, 21 October 2016 (UTC)
- @DatGuy: Sorry, that does not work. These links are standard globally blacklisted, and the blacklist hits before the EditFilter. And in any case, the EditFilter cannot expand the links for the editor, they would get the same message as the spam blacklist is providing - expand your links. --Dirk Beetstra T C 15:51, 22 October 2016 (UTC)
- The user gets both MediaWiki:Spamprotectiontext and MediaWiki:Spamprotectionmatch. The first includes in our customized version:
- Note that if you used a redirection link or URL shortener (like e.g. 'goo.gl', 't.co', 'youtu.be', 'bit.ly'), you may still be able to save your changes by using the direct, non-shortened link - you generally obtain the non-shortened link by following the link, and copying the contents of the address bar of your web-browser after the page has loaded.
- The second shows the url, e.g.
- The following link has triggered a protection filter: bit.ly/xxx
- MediaWiki:Spamprotectiontext does not know the url but MediaWiki:Spamprotectionmatch gets it as $1. It would be possible to customize the message for some url's, e.g by testing whether $1 starts with goo.gl, t.co, youtu.be, bit.ly. In such cases the message could display it as a clickable link with instructions. MediaWiki:Spamprotectiontext also has instructions but there the same long text is shown for all url's. PrimeHunter (talk) 23:32, 22 October 2016 (UTC)
Although it is probably still true that, even if the editor got an extensive explanation on their talkpage, this editor (and many others) simply would not read what the message is saying (and I see many of those). With a talkpage message they at least get the message twice.
There is a second side to the request - whereas many of the redirect insertions are in good faith (especially the youtu.be and google.com/url? ones), some of them are bad faith attempts to circumvent the blacklist (Special:Log/spamblacklist/148.251.234.14 is a spammer). It would be great to be able to track these spammers with this trick as well. --Dirk Beetstra T C 03:43, 23 October 2016 (UTC)
Template:China line
Could someone help substitute all transclusions of {{China line}}, which has been in the TfD holding cell for more than a year? (In addition, could instances of {{China line|…}}{{China line|…}} where two or more transclusions are not separated by anything (or by just a space in parameter |lines= of {{Infobox station}}) be replaced with
{{Plainlist|1=
* {{China line|…}}
* {{China line|…}}
}}
since this format seems to be heavily used in some infoboxes?)
Thanks, Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:04, 8 August 2016 (UTC)
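A minimal sketch of the second part of the request: wrapping runs of adjacent transclusions in {{Plainlist}}. The |style=box caveat mentioned later in this thread would need an exclusion, and nested templates inside parameters would defeat this simple pattern:

```python
import re

# Wrap runs of two or more adjacent {{China line|…}} transclusions
# (bare or space-separated) in {{Plainlist}}.
ITEM_RE = re.compile(r'\{\{China line\|[^{}]*\}\}')
RUN_RE = re.compile(r'(?:\{\{China line\|[^{}]*\}\} ?){2,}')

def wrap_runs(wikitext):
    def repl(m):
        items = ITEM_RE.findall(m.group(0))
        return ('{{Plainlist|1=\n'
                + '\n'.join('* ' + item for item in items) + '\n}}')
    return RUN_RE.sub(repl, wikitext)

print(wrap_runs('{{China line|Line 1}}{{China line|Line 2}}'))
```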
- If no-one else picks this task up, ping me in like two weeks and I'll do it. ~ Rob13Talk 21:17, 8 August 2016 (UTC)
- The first part of your request can already be handled by AnomieBOT if Template:China line is put in Category:Wikipedia templates to be automatically substituted. Pppery (talk) 22:09, 21 August 2016 (UTC)
- @Pppery and BU Rob13: So maybe just do №2 with a bot through AWB and then put the template into AnomieBOT's category? Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:04, 23 August 2016 (UTC)
- (Pinging BU Rob13 and Pppery again, because that might not have gone through Echo. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:05, 23 August 2016 (UTC))
- Wait, Jc86035, I missed something in my previous comment. AnomieBOT will only substitute templates with many transclusions if they are listed on the template-protected User:AnomieBOT/TemplateSubster force. Note that this process of wrapper-then subst has been done before with Template:Scite. (I did get the above ping, by the way) Pppery (talk) 16:07, 23 August 2016 (UTC)
- @Pppery: Thanks for the clarification; although since BU Rob13 is an administrator, that's not necessarily going to be much of a problem. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:12, 23 August 2016 (UTC)
- @Jc86035: Note that pings only work if you do nothing but add a comment (not also move content around in the same edit), and thus neither me nor Bu Rob13 got the ping above. Pppery (talk) 16:19, 23 August 2016 (UTC)
- (neither do pings work if I misspell the username, BU Rob13) Pppery (talk) 16:20, 23 August 2016 (UTC)
- @Pppery: Thanks for the clarification; although since BU Rob13 is an administrator, that's not necessarily going to be much of a problem. Jc86035 (talk • contribs) Use {{re|Jc86035}} to reply to me 16:12, 23 August 2016 (UTC)
- Wait, Jc86035, I missed something in my previous comment. AnomieBOT will only substitute templates with many transclusions if they are listed on the template-protected User:AnomieBOT/TemplateSubster force. Note that this process of wrapper-then subst has been done before with Template:Scite. (I did get the above ping, by the way) Pppery (talk) 16:07, 23 August 2016 (UTC)
I can probably still handle this, but I currently have a few other bot tasks on the back burner and I'm running low on time. I'll get to it if no-one else does, but it's up for grabs. ~ Rob13Talk 20:39, 23 August 2016 (UTC)
Bumping to prevent this from getting archived. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 08:30, 8 October 2016 (UTC)
- Unfortunately, I've gotten quite busy, so I probably can't handle this. Up for grabs if anyone else wants to do it; it's a very simple AWB task. Alternatively, we could just do the substitution and not worry about formatting for the sake of getting this done quickly. ~ Rob13Talk 11:13, 8 October 2016 (UTC)
- Hmm, I might start working on this. Basically, what you want Jc86035 is to subst {{China Line}}, and if there are two China Line templates on the same line, to convert it to plainlist? Dat GuyTalkContribs 11:19, 8 October 2016 (UTC)
- @DatGuy: Yeah, basically. (The reason why this situation exists is because the template was standardised to use {{RouteBox}} and {{Rail color box}}; before this, the template had whitespace padding and someone just didn't bother to add <br> tags or anything.) Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:47, 8 October 2016 (UTC)
- Oh, and just a caveat: for multiple instances in the same row with parameter |style=box or =b, they should only be separated by a space (like in the {{Beijing Subway Station}} navbox). Thanks for your help! Jc86035 (talk) Use {{re|Jc86035}} to reply to me 11:52, 8 October 2016 (UTC)
@DatGuy, BU Rob13, and Pppery: The template appears to have been substituted entirely by Primefac; not sure if {{Plainlist}} has been added to infoboxes. Might be easier to just do a search for {{Rail color box}} and replace semi-automatically with AWB. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 09:35, 21 October 2016 (UTC)
- Jc86035, didn't know this thread existed, so other than the replacement (it wasn't a true subst) I didn't change anything. I checked a handful of the latter ones (which I seem to recall I had some issues with because they were next to each other) and it looks like they mostly were separated by <br>. You're welcome to look yourself, though; my edits replacing this template are here. Primefac (talk) 15:02, 21 October 2016 (UTC)
- Would something like find:
- (\{\{[cC]hina [lL]ine\|[^<][^\n])
- Replace with:
- *$1
- work? (I probably have a mistake, but the general idea?)
- Actually, since the template has been deleted, do we need to do something else? Dat GuyTalkContribs 16:00, 21 October 2016 (UTC)
- Unless you wanted to go through every instance of {{rail color box}} and {{rint}}, going through my edit history would probably be easier (and fewer false positives in unrelated articles). I would bet, though, that the number of side-by-sides without <br> (maybe 10?) is going to not really be worth all that hassle. Primefac (talk) 16:13, 21 October 2016 (UTC)
- @Primefac: You also seem to have substituted the template incorrectly (not sure how that happened); the parameter |inline=yes is usually used for subway/metro lines for {{Rail color box}} in infoboxes (e.g. in New York, Hong Kong and others). Jc86035 (talk) Use {{re|Jc86035}} to reply to me 08:16, 22 October 2016 (UTC)
- @Primefac: And you seem to have neglected to replace "HZ" with "HZM" (example), and "NB" with "NingboRT" (example). Wouldn't it have been easier to substitute the template normally? Jc86035 (talk) Use {{re|Jc86035}} to reply to me 16:24, 22 October 2016 (UTC)
- Yeah, so I fucked up. I'm working on fixing the lines that I didn't translate properly. Primefac (talk) 22:20, 22 October 2016 (UTC)
- And for the record, lest you think I'm completely incompetent and managed to screw up a simple subst: I didn't subst because I didn't realize the template was already a wrapper. Honestly not sure how I made that mistake, but there you go. Primefac (talk) 22:55, 22 October 2016 (UTC)
- @Primefac: Oh well. I guess AWB edits can always be fixed by more AWB edits. (Thanks for the quick response.) Though I'd still prefer going through all of them to add |inline=, because I'd already substituted some instances before you replaced the rest. Jc86035 (talk) Use {{re|Jc86035}} to reply to me 04:20, 23 October 2016 (UTC)
- Jc86035, probably the easiest way to make that list would be to find what transcludes the templates in Category:China rail transport color templates. Be a bit of a big list, but it might be simpler than trying to mess around with pulling out the specific edits I made to replace everything. Primefac (talk) 04:26, 23 October 2016 (UTC)
Update all refs from sound.westhost.com to sound.whsites.net
This site has changed domains, so there's link rot on probably a lot of audio articles. 71.167.62.21 (talk) 11:50, 31 October 2016 (UTC)
- Couldn't you just use AWB? 76.218.105.99 (talk) 03:14, 8 November 2016 (UTC)
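A sketch of how this could be run with pywikibot over a search generator, if AWB isn't to hand (the edit summary is illustrative; a supervised run would be prudent):

```python
import pywikibot
from pywikibot import pagegenerators

# Plain-string domain swap over an insource search of mainspace.
site = pywikibot.Site('en', 'wikipedia')
gen = pagegenerators.SearchPageGenerator(
    'insource:"sound.westhost.com"', site=site, namespaces=[0])

for page in gen:
    new_text = page.text.replace('sound.westhost.com', 'sound.whsites.net')
    if new_text != page.text:
        page.text = new_text
        page.save(summary='Update sound.westhost.com links to sound.whsites.net')
```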
Done - 78 articles edited. Example. -- GreenC 15:48, 13 December 2016 (UTC)
A bot that will replace "colwidth=30" in reflist to just reflist|30
Hello. Upon reading that the "reflist|colwidth=30em" is depreciated [See: Template:Reflist#Obsolete], I was going to go through and manually change this but was pointed to this page. Requesting a bot that will change it to reflist|30em, and also maybe one to change reflist|3 to reflist|30em. --Jennica✿ talk / contribs 06:40, 21 November 2016 (UTC)
- The term is "deprecated", not "depreciated" - the meanings are very different. In particular, "deprecated" does not mean "you must get rid of this at the earliest opportunity", it means "please don't use this in future, preferable alternatives exist".
- However, changing {{reflist|colwidth=30em}} to {{reflist|30em}} is a straight cosmetic edit, so: Not done per WP:COSMETICBOT. Regarding changing {{reflist|3}} to {{reflist|30em}}, that has been discussed before; IIRC the consensus was that since different people have different setups (particularly where screen width is concerned), what is a suitable conversion on one article is not necessarily suitable for all, so each one needs consideration on a case-by-case basis. --Redrose64 (talk) 10:40, 21 November 2016 (UTC)
Archive bot
Now and then, references added to pages become dead links, so I think there should be a bot which could archive the references cited on a particular page using web.archive.org upon request. --Saqib (talk) 06:51, 25 November 2016 (UTC)
- You may be looking for User:InternetArchiveBot. --Izno (talk) 21:04, 26 November 2016 (UTC)
Update URL for New South Wales 2015 election results.
I've just edited Electoral district of Ballina because the link to the New South Wales electoral commission's site for results changed from
http://vtr.elections.nsw.gov.au/SGE2015/la/ballina/cc/fp_summary/ to http://pastvtr.elections.nsw.gov.au/SGE2015/la/ballina/cc/fp_summary/
My edit is at
https://en.wikipedia.org/w/index.php?title=Electoral_district_of_Ballina&oldid=752858012
I've checked the first few districts in the alphabetical list of NSW electoral districts (see Electoral districts of New South Wales) and they all have links to the vtr... site.
Could a bot update all these links?
Thanks, Newystats (talk) 01:52, 5 December 2016 (UTC)
There's also the URL http://vtr.elections.nsw.gov.au/SGE2015/la/albury/dop/dop which changed to http://pastvtr.elections.nsw.gov.au/SGE2015/la/albury/dop/dop
Yeah, I can do this; working on it. -- GreenC 02:35, 13 December 2016 (UTC)
Done, 186 articles edited. Newystats, could you check the remaining 4, they need manual help which I'm not sure how to fix. -- GreenC 04:33, 13 December 2016 (UTC)
Thanks! 2 of those did need the new URL & I've done them manually - 2 did not need an update. — Preceding unsigned comment added by Newystats (talk • contribs) 05:05, 13 December 2016 (UTC)
Automatic change of typographical quotation marks to typewriter quotation marks
Could a bot be written, or could a task be added to an existing bot, to automatically change typographical ("curly") quotation marks to typewriter ("straight") quotation marks per the MoS? Chickadee46 (talk|contribs) 00:16, 15 June 2016 (UTC)
- I think this is done by AWB already. In citations AWB does it for sure. -- Magioladitis (talk) 09:25, 18 June 2016 (UTC)
- Potentially this could be done, but is it really that big an issue that it needs fixing? It seems like a very minor change that doesn't have any real effect at all on the encyclopedia. Omni Flames (talk) 11:14, 30 June 2016 (UTC)
- This is a "general fix" that can be done while other editing is being done. All the best: Rich Farmbrough, 16:10, 13 August 2016 (UTC).
- Magioladitis, AWB may be doing it but I don't feel it's keeping up. Of my last 500 edits, 15% of those pages had curlies, and I similarly found 7 pages with curlies in a sample of 50 random pages. So that's what, potentially 4 million (correction: 700k) articles affected? Omni Flames, one big problem is that not all browsers and search engines treat straight and curly quotes and apostrophes the same, so that a search for Alzheimer's disease will fail to find Alzheimer’s disease. Also, curly quotes don't render properly on all platforms, and can't be easily typed on many platforms. If content is to be easily accessible and open for reuse, we should be able to move it cross-platform without no-such-character glyphs appearing. There was a huge MOS discussion on this in 2005 (archived here and here) which is occasionally revisited with consensus always supporting straight quotes and apostrophes, as does MOS:CURLY. If it's really 4 million articles, that might break the record for bot-edits to fix, so perhaps not practical. What about editor awareness? Would it be feasible to set up a bot to check recent edits for curlies and, when detected, post a notice on that editor's talk page (similar to DPL bot when an editor links to a disambiguation page) alerting them and linking them to a page with instructions for disabling curlies in popular software packages? If we can head off new curlies working into the system, then AWB editors may have a better chance of eventually purging the existing ones. Thoughts? (BTW: I'm inexperienced with bots but would happily volunteer my time to help.) - Reidgreg (talk) 22:38, 25 September 2016 (UTC)
- If I have time I'll try to see if this is a good estimate. All the best: Rich Farmbrough, 18:51, 19 October 2016 (UTC).
- Magioladitis, AWB may be doing it but I don't feel it's keeping up. Of my last 500 edits, 15% of those pages had curlies, and I similarly found 7 pages with curlies in a sample of 50 random pages. So that's what, potentially
- @Reidgreg: 15% of mainspace is ~700k articles, not 4m. And whether we use it outside mainspace is mostly irrelevant, since curlies don't usually break a page. --Izno (talk) 22:46, 25 September 2016 (UTC)
Reidgreg if there is consensus for such a change I can perform the task. -- Magioladitis (talk) 22:45, 25 September 2016 (UTC)
- Thanks for the quick replies! (And thanks for correcting me, Izno. I see you took part in the last discussion at MoS, revisiting curly quotes.) I'll have to check on consensus, I replied quickly when I noticed this because I didn't want it to be archived. The proposals I've found at MoS have been the other way around, to see if curlies could be permitted or recommended and the decision has always been "no". Will have to see if there is support for a mass change of curlies to straights, or possibly for MoS reminders. - Reidgreg (talk) 17:18, 26 September 2016 (UTC)
While there is general support for MOS:CURLY, there is a feeling that curlies tend to be from copy-and-paste edits and may be a valuable indicators of copyright violation. So there's a preference for human editors to examine such instances (and possible copyvio) rather than a bot making the changes. - Reidgreg (talk) 16:53, 7 October 2016 (UTC)
@Chickadee46: @Omni Flames: @Rich Farmbrough: @Magioladitis: @Izno: Hi, I'm Philroc. If you know me already, great. I've been over at the MOS talk page for a while discussing what the bot proposed here should do. I've decided that it will put in {{Copypaste}} with a parameter which changes the notice to say that the bot found curlies and changed them, and that curlies are a sign that what the template describes may have happened. See its sandbox and its testcases. PhilrocMy contribs 13:49, 19 October 2016 (UTC)
- @Philroc: I appreciate the enthusiasm and I'm sorry if I'm being blunt, but from the MOS discussion there is no consensus for having automated changes of curly to straight quotes & apostrophes, nor for the automatic placing of this template. I'm in the middle of another project right now but hope to explore this further next week. - Reidgreg (talk) 14:46, 19 October 2016 (UTC)
- @Reidgreg: We can get consensus from this discussion, can't we?
- @Reidgreg: Wait, we're talking about the number of articles affected by curlies on the MoS. After we're done with that, we will talk about consensus. PhilrocMy contribs 23:18, 19 October 2016 (UTC)
- I reviewed a small number of articles with curlies and found copyvio issues and typographical issues which would not be easily resolved by a bot. (More at MOS discussion.) I hope to return to this at some point in the future. – Reidgreg (talk) 19:24, 30 October 2016 (UTC)
Help with anniversary calendar at Portal:Speculative fiction
In order to more easily update the anniversary section of the calendar, I would like a bot that:
- Runs once per week
- Makes a list at Portal:Speculative fiction/Anniversaries/Working of mainspace articles listed within Category:Speculative fiction and its subcategories (the categories in the "Subcategories" section on the category page).
- Updates Portal:Speculative fiction/Anniversaries/Current with all mainspace articles currently linked from the anniversaries pages (there are pages for every day of the year in the format Portal:Speculative fiction/Anniversaries/January/January 1).
- Checks Portal:Speculative fiction/Anniversaries/Ignore for a list of articles marked to be ignored (this page will be updated manually unless we can figure out a good system where the bot can do the listing).
- Updates Portal:Speculative fiction/Anniversaries/Todo with all mainspace articles from step 2 that are not in the list in step 3 and not listed to be ignored in step 4.
I hope that makes sense. Anyone up to the task? Thanks in advance for your time. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 06:18, 25 August 2016 (UTC)
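The weekly run described above amounts to a category walk plus two set differences. A minimal sketch, assuming Python with pywikibot and the page names from the request; the edit summary is a placeholder:

<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
BASE = 'Portal:Speculative fiction/Anniversaries/'

def linked_titles(subpage):
    """Mainspace article titles linked from a portal subpage."""
    page = pywikibot.Page(site, BASE + subpage)
    return {p.title() for p in page.linkedPages(namespaces=[0])}

# Step 2: every mainspace article in the Category:Speculative fiction tree.
# Recursing a large category tree is slow and assumes it contains no loops.
root = pywikibot.Category(site, 'Category:Speculative fiction')
working = {p.title() for p in root.articles(recurse=True)
           if p.namespace() == 0}

current = linked_titles('Current')   # already used on the day pages
ignore = linked_titles('Ignore')     # manually excluded

# Step 5: candidates that are not yet used and not ignored.
todo = pywikibot.Page(site, BASE + 'Todo')
todo.text = '\n'.join('* [[{}]]'.format(t)
                      for t in sorted(working - current - ignore))
todo.save(summary='Weekly anniversary to-do update (sketch)')
</syntaxhighlight>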
- @Nihonjoe: I don't think we really need a bot to do this. I can update the pages every week semi-manually if you like. Just one thing, I'm a bit confused as to what the "ignore list" is meant to do? How do you plan on getting the articles to go on it? Omni Flames (talk) 22:05, 29 August 2016 (UTC)
- @Omni Flames: I figured a bot would be able to do it faster than a person. It shouldn't be too complicated a task, either, but it would be tedious (hence the bot request). I could do it manually myself, but it would take a lot of time. The ignore list would likely be updated manually, with pages determined to not be needed on the Todo list. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 22:10, 29 August 2016 (UTC)
- @Nihonjoe: Well, when I said manually, I didn't really mean manually. I meant more that I'd create the lists using a bot each week and paste it on myself. That would mean we wouldn't even need a BRFA or anything. However, we can do it fully-automatically if that suits you better. Omni Flames (talk) 22:58, 29 August 2016 (UTC)
- @Omni Flames: If that's easier, that's fine. I figured having a bot do it automatically would relieve someone of having to manually do something every week. I'm fine either way, though. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:15, 30 August 2016 (UTC)
- @Omni Flames: Just following up to see if you plan to do this. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 17:39, 8 September 2016 (UTC)
- @Nihonjoe: I'll see what I can do. I've had a lot on my plate lately. Omni Flames (talk) 08:49, 9 September 2016 (UTC)
- @Omni Flames: Okay, I appreciate any help. I'll follow up in a couple weeks. Thanks! ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 19:08, 9 September 2016 (UTC)
- @Omni Flames: Just following up as promised. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 19:58, 6 October 2016 (UTC)
- @Nihonjoe: Sorry, but I don't think I have time to do this at the moment. I've had a lot going on in real life at the moment and I haven't been very active on wiki recently. Hopefully you can find someone else to help you with this, sorry for the inconvenience. Omni Flames (talk) 09:46, 7 October 2016 (UTC)
- @Omni Flames: I can understand real life taking over. Thanks, anyway. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 16:22, 7 October 2016 (UTC)
- Anyone else interested? It should be a pretty quick job. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 16:22, 7 October 2016 (UTC)
- @Nihonjoe: I wrote some code for this, but I have run into a recursion issue when getting all the articles in the Category:Speculative fiction tree. The tree either has loops (Category:Foo in Category:Bar [in ...] in Category:Foo) or is very large. I fixed one loop (Category:Toho Monsters), but I don't have time to check the entire tree. If I increase the maximum number of recursions permitted, it will work if the tree doesn't have any loops. It has been tested on smaller, clean trees with success. — JJMC89 (T·C) 15:39, 21 October 2016 (UTC)
- @JJMC89: Is there a bot that can check for loops and output a list? That will make it easier to fix. ···日本穣 · 投稿 · Talk to Nihonjoe · Join WP Japan! 02:20, 27 October 2016 (UTC)
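A loop report of the kind asked about here is essentially a depth-first search that flags any subcategory edge leading back into the current path. A sketch, assuming pywikibot; very deep trees would need an iterative rewrite to stay under Python's recursion limit:

<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

def find_loops(root_name):
    """Walk each subcategory once; report edges back into the current path."""
    loops = []
    finished = set()  # categories whose subtrees are fully explored

    def visit(cat, path):
        title = cat.title()
        if title in path:  # back-edge: a loop like Foo -> ... -> Foo
            loops.append(path[path.index(title):] + [title])
            return
        if title in finished:
            return
        for sub in cat.subcategories():
            visit(sub, path + [title])
        finished.add(title)

    visit(pywikibot.Category(site, root_name), [])
    return loops

for loop in find_loops('Category:Speculative fiction'):
    print(' -> '.join(loop))
</syntaxhighlight>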
blocking IPs that only hit the spam blacklist
I've asked this for Procseebot (User:Slakr), but not gotten any response - maybe there are other solutions to this problem (I also am not sure whether it involves open proxies).
The spam blacklist is blocking certain sites which were spammed. One of the problems that we are currently facing is that there are what are likely spambots continuously hitting the spam blacklist. That involves a certain subset of attempted urls. The editors only hit the blacklist (I have yet to see even one editor having constructive edits at all on their IP), and they do that continuously (hence my suspicion that these are spambots). It is good that we see that the spam blacklist is doing its job; the problem is that sometimes the log becomes unreadable because these IPs hit the blacklist thousands of times, flooding the log (admin-eyes-only example).
When no-one is watching, it sometimes takes a long time before the IPs get blocked. I would therefore request that when IPs without edits are hitting the blacklist for the specific set of urls, they get blocked as soon as they hit the blacklist (with lengthy blocks: I tend to block for a month at first, a year the second time, withdrawing talkpage access (see Special:Log/spamblacklist/175.44.6.189 and Special:Log/spamblacklist/175.44.5.169; admin-eyes-only, they hit their own talkpages just as happily, and that is not affected by a regular block)), and that the talkpages subsequently be tagged with {{spamblacklistblock}}. Are there any bots that could take this task? Thanks. --Dirk Beetstra T C 06:14, 5 October 2016 (UTC)
To put a bit of more context on occurrence, User:Kuru and I (who I know follow the logs) have made 14 of these blocks in the last 24 hours. --Dirk Beetstra T C 10:45, 5 October 2016 (UTC)
- This is a really odd spambot; I think it is just one spammer accounting for about 30% of the hits on the blacklist log. They target a small group of core articles (Bulletin board system, for example), and then a larger set of what appear to be completely random articles (usually low-traffic or even deleted articles). The links are obvious predatory spam for pharma, clothing, shoes, etc. This occurs daily, and the same bot has been active at least two years. If blocked, they then often switch to attempting to add links to the IP's talk page. These all just seem to be probes to test the blacklist. Oddly, I can't seem to find any recent instance where they've been successful in avoiding the blacklist, so I don't know what would happen on success. Interesting problem. Kuru (talk) 15:55, 5 October 2016 (UTC)
- Filter 271 is set up to handle most cases of 'success'. It looks likely to be the same bot. The filter's worth a read if you think the articles are random. It hasn't been adjusted for a while but might need some adjustment in the NS3 department. Drop me a line if you want to discuss the filter further. Sorry, can't help with a blocking bot. -- zzuuzz (talk) 20:34, 5 October 2016 (UTC)
- @Zzuuzz: The filter would indeed catch those that pass the blacklist, I'll have a look through the results whether there is anything there related to these spammers. Nonetheless, the ones that keep hitting the blacklist should be pro-actively blocked, preferably on one of the first attempts. I tried to catch them with a separate filter, but the filter only triggers after the blacklist, so no hits there. --Dirk Beetstra T C 05:38, 6 October 2016 (UTC)
- That's a really interesting read; certainly the same spammer in some cases. Will have to spend some time digging through there to analyze the pattern. Thanks! Kuru (talk) 17:12, 6 October 2016 (UTC)
Ping. I've blocked <s>10</s> <s>13</s> 17 IPs this morning that are responsible for a massive chunk (75%) of the total attempts to add blacklisted links. Can someone please pick this up? --Dirk Beetstra T C 03:38, 17 October 2016 (UTC)
- FWIW, I support someone making this bot. It'll need a BRFA, but I'm willing to oversee that. Headbomb {talk / contribs / physics / books} 14:08, 17 October 2016 (UTC)
please. --Dirk Beetstra T C 05:28, 26 October 2016 (UTC)
- How frequently would you want the bot checking for new hits? How do you suggest the bot know which subset of links are worthy of bot-blocking? Anomie⚔ 18:26, 26 October 2016 (UTC)
- @Anomie: 'Constantly' - these are about 200 attempts in one hour. It does not make sense to have an editor running around for 10 minutes and have 34 hits in the list before blocking (it would still flood); I would suggest that we attempt to get the IP blocked on the second hit at the latest. For that, I would suggest a quick poll of the blacklist log every 1-2 minutes (last 10-20 entries or so).
- Regarding the subset, I'd strongly recommend that the bot maintain a blacklist akin to User:XLinkBot/RevertList where regexes can be inserted. The subset of urls is somewhat limited, and if new ones come up (which happens every now and then), the specific links, or a wider regex, can be used (e.g. for the url shorteners the link needs to be specific, because not every url-shortener added is this spammer; for the cialis and ugg-shoes stuff the filter can be wider). (I do note that the IPs also have a strong tendency to hit pages within the pattern '*board*' and their own talkpages, but that may not be selective enough to filter on). --Dirk Beetstra T C 10:34, 27 October 2016 (UTC)
- I just went through my blocking log, and I see that I block up to 27 IPs A DAY (98 in <10 days; knowing that User:Kuru and User:Billinghurst also block these IPs). Still, the editor here manages to get almost 200 hits .. --Dirk Beetstra T C 10:43, 27 October 2016 (UTC)
- If you'd want to narrow it down, one could consider having two regex-based blacklists, one for links and one for typical pagenames - if an editor hits twice with a blacklisted link attempt to a blacklisted page, then block. And I have no problem with the bot working incrementally - 3 hours; 31 hours; 1 week; 1 month; 1 year .. (the IPs do tend to return). --Dirk Beetstra T C 11:31, 27 October 2016 (UTC)
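A sketch of the polling loop described in the comments above, assuming Python with pywikibot on an admin account. The rules page name is hypothetical, the log-entry field holding the blocked URL is an assumption about the API, and a real bot would inspect prior blocks to choose the escalation step:

<syntaxhighlight lang="python">
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')

# On-wiki regex list, one pattern per line (akin to User:XLinkBot/RevertList).
# The page name here is hypothetical.
rules_page = pywikibot.Page(site, 'User:ExampleBot/Spambot URI list')
rules = [re.compile(line.strip())
         for line in rules_page.text.splitlines()
         if line.strip() and not line.startswith('#')]

ESCALATION = ['3 hours', '31 hours', '1 week', '1 month', '1 year']
hits = {}  # IP -> matching blacklist hits seen this polling cycle

# Reading the spam blacklist log requires the spamblacklistlog right.
for entry in site.logevents(logtype='spamblacklist', total=20):
    ip = entry.user()
    url = entry.data.get('params', {}).get('url', '')  # field name assumed
    if not any(r.search(url) for r in rules):
        continue
    hits[ip] = hits.get(ip, 0) + 1
    if hits[ip] >= 2:  # block on the second hit at the latest
        site.blockuser(pywikibot.User(site, ip), expiry=ESCALATION[0],
                       reason='Spambot hitting the spam blacklist',
                       allowusertalk=False)
</syntaxhighlight>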
- @Beetstra: Please fill in User:AnomieBOT III/Spambot URI list. Anomie⚔ 13:23, 27 October 2016 (UTC)
- @Anomie: I have filled in the current domains. I will work backward to fill in some more. Thanks, lets see what happens. --Dirk Beetstra T C 13:47, 27 October 2016 (UTC)
Comment: The IP spam hits are the same xwiki, and I am whacking moles at Commons (most), Meta, enWS and MediaWiki; other sites unknown as I don't have rights. The real primary issue is that the spambots are getting through our captcha defences to edit in the first place. Then we have the (fortunate?) issue that we have blacklisted these addresses and are able to identify the problem addresses. Some of the penetrating spambots are on static IPs, and some are in IP ranges, mostly Russian.
As this is a very specific and limited subset of the blacklist urls, we could also consider the blocking capability of filters themselves. It should be possible to utilise the test and challenge of an abuse filter to warn and then disallow an IP edit, or variation, and then block based on subsequent hits. Plenty of means to stop false positives. — billinghurst sDrewth 12:53, 27 October 2016 (UTC)
- @Billinghurst: That would mean that we would globally de-blacklist the urls, and have a global filter to check for the urls and set that filter to block. That is certainly an option, with two 'problems' - first that it is going to be heavy on the server (regex testing on urls is rather heavy for the AbuseFilter; though there is some pattern in the pages that are hit, it is far from perfect). Second problem is that the meta-spamblacklist is used also way beyond MediaWiki. Though it is not our responsibility, I am not sure if the outside sites would like that we de-blacklist (so that they all have to locally blacklist and/or set up their own AbuseFilters). I have entertained this idea on the meta blacklist, but I don't know whether de-blacklisting and using an AbuseFilter will gain much traction.
- I have considered setting up a local abusefilter to catch them, but the abusefilter does not hit before the blacklist (Special:AbuseFilter/791). That would only work if I would locally whitelist the urls (which would clear the blacklist hits), and have a local filter to stop the IPs (I do not have the blocking possibility here on en.wikipedia, that action is not available .. I would just have to ignore the hits on the filter, or manually block all IPs on said filter).
- Or we use a bot to block these IPs on first sight. --Dirk Beetstra T C 13:20, 27 October 2016 (UTC)
- That being said, I would be in favour of a global solution to this problem .. --Dirk Beetstra T C 13:24, 27 October 2016 (UTC)
- I wasn't thinking of removing them from the blacklist. I see many examples of blacklisted urls in global logs, so it surprises me that it is the case. With regard to no blocking capability in abuse filters, that is a choice, and maybe it needs that review. The whole system is so antiquated and lacking in flexibility. :-/ — billinghurst sDrewth 23:16, 27 October 2016 (UTC)
- @Billinghurst: if you can get filter 791 to work so it shows the same edits as the blacklist (or a global variety of it) so it catches before. These editors don't show up (for these edits) in filter 271 either, though they obviously are there trying to do edits with not blacklisted links. --Dirk Beetstra T C 04:04, 28 October 2016 (UTC)
Coding... (for the record). Code is mostly done, I believe, although it'll probably need to wait for next Thursday for phab:T149235 to be deployed here before I could start a trial. Anomie⚔ 13:57, 27 October 2016 (UTC)
- Why do you need to wait for the grant, I thought bot III was an adminbot, so it can see the spamblacklistlog? --Dirk Beetstra T C 14:10, 27 October 2016 (UTC)
- One of the pieces of security that OAuth (and BotPasswords) gives in case you want to use some tool with your admin account is that it limits which rights are actually available to the consumer, instead of it automatically getting access to all the rights your account has. The downside is that if there's not a grant for a right, you can't let the consumer use that right. It's easy enough to add grants to the configuration, as you can see in the patches on the linked task, but code deployment can take a little time. Anomie⚔ 20:18, 27 October 2016 (UTC)
- @Anomie: Thanks for the answer, I wasn't aware of that .. not running admin bots does not expose you to that. --Dirk Beetstra T C 04:04, 28 October 2016 (UTC)
Bot to automatically add Template:AFC submission/draft to Drafts
Let me explain my request. There are quite a few new users who decide to create an article in the mainspace, only to have it marked for deletion (not necessarily speedy). They might be given the option to move their article to the draft space, but just moving it to the draft space doesn't add the AfC submission template. Either someone familiar with the process who knows what to fill in for all the parameters (as seen in drafts created through AfC) or a bot would need to add the template, as the new user would definitely not be familiar with templates, let alone how to add one.
My proposal is this: Create a bot that searches for articles recently moved from the mainspace to the draft space and tags those articles with all the parameters that a normal AfC submission template would generate. For those who just want to move their articles to the draft space without adding an AfC submission template (as some more experienced editors would prefer, I'm sure), there could be an "opt-out" template that they could add. The bot could also search for drafts created using AfC that the editor removed the AfC submission template from and re-add it. Newer editors may blank the page to remove all the "interruptions" and accidentally delete the AfC submission template in the process, as I recently saw when helping a new editor who created a draft. Older editors could simply use the "opt-out" template I mentioned above. If possible, the bot could mention its "opt-out" template in either its edit summary or an auto-generated talk page post or (because it'll mainly be edited by one person while in the draft space) in an auto-generated user talk page post.
I realize this may take quite a bit of coding, but it could be useful in the long run and (I'm assuming) some of the code is there already in other bots (such as auto-generated talk page posts, as some "archived sources" bots do). -- Gestrid (talk) 06:55, 12 July 2016 (UTC)
- Sounds like a sensible idea; maybe the bot could check the move logs? Enterprisey (talk!) (formerly APerson) 01:59, 13 July 2016 (UTC)
- Sorry for the delay in the reply, Enterprisey. I forgot to add the page to my watchlist. Anyway, I'm guessing that could work. I'm not a bot operator, so I'm not sure how it would work, but what you suggested sounds right. -- Gestrid (talk) 07:44, 28 July 2016 (UTC)
- I don't think this should be implemented at all; adding an AfC template to drafts should be opt-in, not opt-out, since drafts with the tag can be speedily deleted. A bot can't determine whether an article moved to draft comes from an AfC or some other review or deletion process. Diego (talk) 10:12, 9 August 2016 (UTC)
- @Diego Moya: I'm no bot operator, but I'm pretty sure there are ways for bots to check the move log of a page. It comes up in #cvn-wp-en. CVNBot will say "User [[en:User:Example]] Move from [[en:2011 Brasileiro de Marcas season]] to [[en:2011 Brasileiro de Marcas]] URL: https://en.wikipedia.org/wiki/2011_Brasileiro_de_Marcas_season 'moved [[2011 Brasileiro de Marcas season]] to [[2011 Brasileiro de Marcas]]'". (That last part, which starts "moved", is the edit summary.)
- As a side-note, in case you don't know, #cvn-wp-en is the automated IRC channel that monitors potential vandalism for all or most namespaces on English Wikipedia, courtesy of the Countervandalism Network.
- -- Gestrid (talk) 07:36, 25 September 2016 (UTC)
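To illustrate the move-log approach, a minimal sketch assuming Python with pywikibot. The opt-out check is a crude placeholder for the proposed template, and whether {{subst:submit}} is the right tagging mechanism would need confirming with the AfC project:

<syntaxhighlight lang="python">
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
DRAFT_NS = 118  # the Draft: namespace on the English Wikipedia

# Recent page moves; keep those that went from mainspace into draft space.
for entry in site.logevents(logtype='move', total=100):
    source, target = entry.page(), entry.target_page
    if source.namespace() != 0 or target.namespace() != DRAFT_NS:
        continue
    text = target.text
    if 'AFC submission' in text:  # already tagged
        continue
    if 'no-afc-tag' in text:      # placeholder for the proposed opt-out template
        continue
    target.text = '{{subst:submit}}\n' + text
    target.save(summary='Tagging draftified article for AfC (sketch; '
                        'use the opt-out template to prevent this)')
</syntaxhighlight>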
Excuse me, but I can't see how that would help. With the move log you can't distinguish an article moved to draft space by a newcomer from an article moved by an experienced editor who never heard of this proposed bot. Diego (talk) 08:27, 25 September 2016 (UTC)
- In that case, an edit like that is easily undo-able (if admittedly slightly irritating), and the bot can have a link to its "ignore template" within the edit summary, similar to how ClueBot NG has its "Report False Positive?" link in its edit summary. -- Gestrid (talk) 16:50, 25 September 2016 (UTC)
- You're assuming that there will be someone there to make such a review, which is not a given. But then you want to create a procedure where, by default, drafts are put on a deletion trail without human intervention after someone makes a non-deleting move, and it requires extra hoops to avoid this automated deletion outcome. Such an unsupervised procedure should never be allowed, especially when it gets to delete content that was meant to be preserved. And what was the expected benefit of such procedure again? Diego (talk) 23:37, 25 September 2016 (UTC)
- Before this goes any further, Gestrid, have you secured a consensus for this? A place to have this discussion might be at WT:Drafts, as I know there have been multiple discussions about automatic enrollment of pages in the Draft namespace into the AFC process. While I acknowledge consensus can change, there have been several editors who have combatively opposed the position before. Hasteur (talk) 15:05, 18 November 2016 (UTC)
- At this point, I have not secured a consensus for this. — Gestrid (talk) 17:01, 18 November 2016 (UTC)
- @Diego Moya: Sorry for taking such a long time to respond. I'm not exactly sure what you mean by putting a draft on a deletion trail. I know that drafts are deleted six months after the last edit, if that's what you mean. — Gestrid (talk) 22:10, 18 November 2016 (UTC)
- I've asked for consensus over at Wikipedia talk:Drafts#Bot to automatically add Template:AFC submission/draft to Drafts per Hasteur's suggestion. — Gestrid (talk) 22:16, 18 November 2016 (UTC)
WP:U1 and WP:G6 Deleting Bot
I feel like this would take a lot of workload off of administrators. As of the time I am writing this (17:54, 5 October 2016 (UTC)), there are 25 pages and 7 media files tagged for speedy deletion under these criteria. An administrator has to personally delete each of these pages, even though it would be fairly simple for a bot to do this, as the bot would only have to look at the category of these pages and check the edit history to make sure that the tag was added by the user. Sometimes there are dozens of pages in this category, and they all create unnecessary work for admins. I am not an admin, or I would probably code it up myself, but because you have to be an admin in order to run an admin bot, I cannot. Thanks, Gluons12 talk 17:54, 5 October 2016 (UTC).
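For the U1 half, a minimal sketch assuming Python with pywikibot on an admin bot account. The category name, the ten-minute grace period, and the last-revision check (a real bot would scan the history for the actual tagging edit) are all assumptions:

<syntaxhighlight lang="python">
import time
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
# Category name assumed; U1-tagged pages land in a CSD tracking category.
cat = pywikibot.Category(site, 'Category:Candidates for speedy deletion by user')

for page in cat.articles():
    if page.namespace() not in (2, 3):  # User: and User talk: only for U1
        continue
    # 'User:Foo/Bar' -> 'Foo', the owner of the userspace.
    owner = page.title().partition(':')[2].partition('/')[0]
    last = page.latest_revision
    age = time.time() - last.timestamp.timestamp()
    # Only delete when the userspace owner placed the tag themselves
    # and has had ten minutes to rethink it.
    if last.user == owner and age > 600:
        page.delete(reason='U1: user requests deletion in own userspace',
                    prompt=False)
</syntaxhighlight>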
- U1 would probably be a good idea (wait 10 minutes or more for the user to rethink it?). However, G6 would probably be impossible to implement, as how would a bot know whether it's controversial or not? Dat GuyTalkContribs 18:17, 5 October 2016 (UTC)
- Some kinds of G6 are probably automatable ("empty dated maintenance categories") but not all. Le Deluge (talk) 22:00, 5 October 2016 (UTC)
- This tends to be suggested reasonably frequently, and typically gets shot down because there aren't backlogs in these categories and admins dealing with these categories haven't expressed any need for bot help. So your first step should be to get admins to agree that such a bot would be needed. Anomie⚔ 18:50, 6 October 2016 (UTC)
- I would very strongly oppose giving a bot power to delete any article. There are too many chances to go wrong. There are almost never more than 1 or 2 day backlogs at Speedy Deletion. Getting admins to delete articles is not one of our problems. All too many of us admins enjoy it. DGG ( talk ) 02:05, 14 October 2016 (UTC)
- I think he obviously means WP:G7 above, and this can easily be automated if the bot only deletes ones that have a single contributing editor. We already have bots that delete pages, DGG, although perhaps not articles. ~ Rob13Talk 23:15, 10 November 2016 (UTC)
- G7 is not always straightforward. Often there are non-substantial edits by others, and it is not easy to judge if they are truly non-substantial, so I suppose you intend a bot which doesn't delete if there is any non-bot edit to the page. And I suppose it would check that it was actually the same editor. Even so, I have turned down G7 a few times and asked the ed. to rethink it, and once or twice I have simply decided to carry on with the article myself--after all, the license is not revocable. I asked about bots with deletion access here: they are for broken redirects, mass deletions after XfD approval--run not as a matter of course but electively for appropriate cases--implementing CfD, and deleting files uploaded for DYK. These are procedures requiring no judgment, and suitable for bots--and necessary because of the volume. We should not extend the list unless we really need to. I don't think the 2 or 3 a day of speedys that would fall under here would be worth it. DGG ( talk ) 02:40, 11 November 2016 (UTC)
