Wikipedia:WikiProject User warnings/Testing


This page is the tracking page for efforts on the English Wikipedia to improve the quality of user talk warnings. Links to this project on other Wikipedias can be found on the cross-wiki hub on Meta.

Scope

The purpose of this page is to measure the efficacy of different user talk templates through randomized testing. This effort does not entail creating new categories of user warnings or organizing them more efficiently, but rather improving the quality of our current communication methods.

We're aiming to fine-tune the template messages we send to editors, in order to encourage more good-faith contributors and discourage outright vandals, spammers, and other bad-faith contributors.

Participants

Tasks

Any effort to make templates simpler, friendlier, and more accessible is welcome. Specifically, you can:

  • Draft new templates or assess the templates that are currently in the draft phase and suggest improvements in their content or structure. We aim to lessen the bitey-ness of warnings, but templates should still comply with the design guidelines and the usage and layout best practices.
  • Suggest which templates to test next and what changes should be made.
  • Help us analyze our data. We're using mixed methods, with both quantitative statistics and qualitative coding.
  • Recruit more participants.

Tests

Templates have been tested in the following places:

Those pages each list the templates tested, with dates and results where available. The following tracking table records start and end dates. For tests on projects other than the English Wikipedia, please see our hub on Meta.

| Test name | Brief description | Date started | Date ended | Duration | Tracking templates | Data (if available) |
|---|---|---|---|---|---|---|
| Huggle 1 | Level 1 vandalism warning. 3 test templates (image/no image; teaching, personal) and 1 control | July 19, 2011 | August 5, 2011 | ~2 weeks | {{z49}} {{z50}} {{z51}} {{z52}} {{z53}} {{z54}} {{z55}} {{z56}} | |
| Huggle 2 | Level 1 vandalism warning. 2 test templates (personal, personal + no directives) and 1 control | September 25, 2011 | October 10, 2011 | ~2 weeks | {{z57}} {{z58}} {{z59}} | |
| Huggle 3 | All level 1 warnings (except vandalism), 9 total. 1 test template (personal + no directives) and 1 control | October 18, 2011 | November 19, 2011 | 1 month | {{z60}} {{z61}} {{z62}} {{z63}} {{z64}} {{z65}} {{z66}} {{z67}} {{z68}} {{z69}} {{z70}} {{z71}} {{z72}} {{z73}} {{z74}} {{z75}} {{z76}} {{z77}} | |
| Twinkle 1 | AfD and PROD notices, 2 total. 1 test template (personal + no directives) and 1 control | November 9, 2011 | December 9, 2011 | 1 month | {{z78}} {{z79}} {{z81}} {{z82}} | |
| Huggle short 1 | Level 1 vandalism warning. 2 test templates (personal + no directives, short) and 1 control | November 8, 2011 | December 9, 2011 | 1 month | {{z84}} {{z85}} {{z86}} | |
| XLinkBot | Level 1-4 spam warnings, 4 total. 1 test template (personal + no directives) and 1 control; Welcomeg and welcomeanon, 1 test template (personal + no directives) and 1 control | November 17, 2011 | December 17, 2011 | 1 month | spam 1-4: {{z87}} {{z88}} {{z89}} {{z90}} {{z91}} {{z92}} {{z93}} {{z94}}; welcomes: {{z95}} {{z96}} {{z97}} {{z98}} | |
| Huggle short 2 | All issue-specific level 1 warnings, 9 total. 1 test template (short) and 1 control | November 22, 2011 | December 22, 2011 | 1 month | {{z99}} {{z100}} {{z101}} {{z102}} {{z103}} {{z104}} {{z105}} {{z106}} {{z107}} {{z108}} {{z109}} {{z110}} {{z111}} {{z112}} {{z113}} {{z114}} {{z115}} {{z116}} | |
| SDPatrolBot | Newbie CSD tag removal warning, 1 test template (personal + no directives) and 1 control | November 3, 2011 | January 3, 2012 | 2 months | N/a | Wikitable |
| Shared IP archiving | Archiving of shared and dynamic IP talk pages regularly to remove irrelevant warnings | December 19, 2011 | February 19, 2012 | 2 months | N/a | List of all archived talk pages, SharedIPArchiveBot's log |
| ImageTaggingBot | Image notifications about sourcing/licensing, 4 total. 2 test templates (directives, no directives) and 1 control | December 21, 2011 | February 21, 2012 | 30 days (minimum) | {{z131}} {{z132}} {{z133}} {{z134}} {{z135}} {{z136}} {{z137}} {{z138}} {{z139}} {{z140}} {{z141}} {{z142}} | |
| CorenSearchBot | Copyvio notifications, 6 total. 1 test template (personal + no directives) and 1 control; welcomelaws, 1 test (short) and 1 control | January 12, 2012 | January 31, 2012 | 1 month | {{z117}} {{z118}} {{z119}} {{z120}} {{z121}} {{z122}} {{z123}} {{z124}} {{z125}} {{z126}} {{z127}} {{z128}} | |
| 28bot | Test edit warning, registered and unregistered, 2 total. 1 test template (personal + no directives) and 1 control | January 19, 2012 | March 27, 2012 | | {{z143}} {{z144}} {{z145}} {{z146}} | |

Testing method

The following are the requirements for conducting comparative A/B testing of any user talk template. Randomized experiments give us hard data about which kinds of content are most successful at achieving the project's goals. What you'll need is:

  1. A "randomizer" that delivers all the templates in your test. This is the template that should be included in the configuration of whichever bot or tool you are testing with, and it randomly delivers one of the templates via a parser function.
  2. A control, usually the existing default template. Note that you should replicate the default in a new template rather than use the current template page, in order to avoid including old instances of the default in your experiment.
  3. A new version or versions of the template you want to test. Try to use a canonical name that matches the type, purpose, and level of the warning you're interested in.
  4. A Z number tracking template for all templates being tested. If you do not include a separate Z number in each template, you will lose track of your test cases once they are substituted.
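As a sketch, the randomizer in step 1 can be a template that selects one variant with a parser function. In the example below, the variant names are hypothetical, and pairing {{z49}}–{{z51}} with these particular variants is purely illustrative (those Z numbers belong to the Huggle 1 test in the table above); the "mod 3" trick, keyed to the current Unix timestamp, is one simple way to spread deliveries roughly evenly across three templates:

```wikitext
<!-- Hypothetical randomizer: delivers one of three templates.
     {{#time:U}} returns the current Unix timestamp, so taking it
     mod 3 picks a variant pseudo-randomly at substitution time. -->
{{#switch: {{#expr: {{#time:U}} mod 3}}
 | 0 = {{subst:uw-vandalism1-control|{{{1|}}}}}{{subst:z49}}
 | 1 = {{subst:uw-vandalism1-test1|{{{1|}}}}}{{subst:z50}}
 | 2 = {{subst:uw-vandalism1-test2|{{{1|}}}}}{{subst:z51}}
}}
```

Each Z-number template substitutes to a hidden marker on the talk page, which is what lets you find and count instances of each variant after the warnings themselves have been substituted (step 4).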

In some cases, such as for bots where all contribs are by one account, this method can be greatly simplified.

Analysis

We have so far used a mixed method of both quantitative measurement and qualitative assessment. If you'd like to help sort and analyze tests, please sign up above.

Testing results from all projects are available on the cross-wiki hub on Meta.
