User:SieonHazelLee


Why do people contribute information to Wikipedia? Why do they invest their time and effort when Wikipedia does not pay them to write? Building Successful Online Communities (BSOC) offers an answer: the reward is not money but the intrinsic satisfaction contributors feel. People take pride when the knowledge they share helps others; the more they care about a subject, the more accurate the information they add, and the more they want that work to be recognized. In the process of collaborating with others to build knowledge, they also develop a sense of belonging and affection for the Wikipedia community. This intrinsic motivation is the engine of Wikipedia's work (Kraut, Resnick, & Kiesler, 2016).

But what happens if people use AI to fill Wikipedia with information? First, the motivation to contribute can be sharply reduced. Imagine having experienced a subject firsthand and written every sentence with care, only to see another user's contribution that looks polished on the surface but cites no relevant source and may even be an AI hallucination. Existing editors can feel that their effort has been dismissed and lose their sense of ownership as editors. Moreover, as the human connection built through discussion on Talk pages weakens, long-standing editors may lose interest and leave Wikipedia (Kraut et al., 2016; Halfaker et al., 2013).

The use of AI negatively affects readers as well as editors. Most people come to Wikipedia for information, and if it is unclear how an article was produced or whether its sources are reliable, they will lose trust in the platform as a whole (Baker-White, 2020). In addition, if Wikipedia accepts AI-generated contributions without limit, it will quickly be flooded with AI-generated text even when the quality of that text is poor.

Large platforms such as Wikipedia become harder to manage as they grow. With AI, the barrier to entry for newcomers drops, so an enormous volume of content can arrive almost instantly. The WMF would then have no choice but to tighten rule enforcement, which significantly changes the community culture, and existing users who cannot adapt to the changed atmosphere may leave (Choudary, 2014). Reviewing that vast volume also demands more thorough moderation, which in turn costs more labor, time, and money (Kraut et al., 2016).

To prevent these problems, the first measure I recommend to the WMF is a label for AI-generated text. When someone writes a passage with AI assistance, a notice would automatically appear on that passage, making authors more cautious about using AI and letting readers judge for themselves which information to trust and how much.
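The labeling idea above can be sketched in a few lines. This is a minimal illustration, not an existing Wikipedia feature: the template name "AI-generated notice" and the function are invented for this example.

```python
# Hypothetical sketch of an AI-provenance label: when an editor declares
# (or a detector flags) AI assistance, the saved wikitext is prefixed
# with a disclosure template that renders as a visible banner.
# "AI-generated notice" is an invented template name, not a real one.

def label_ai_text(wikitext: str, ai_assisted: bool, tool: str = "unknown") -> str:
    """Prepend a disclosure banner to AI-assisted contributions."""
    if not ai_assisted:
        return wikitext  # human-written text is saved unchanged
    banner = f"{{{{AI-generated notice|tool={tool}}}}}\n"
    return banner + wikitext

print(label_ai_text("The topic is...", ai_assisted=True, tool="ChatGPT"))
# → {{AI-generated notice|tool=ChatGPT}}
#   The topic is...
```

A reader (or downstream tooling) can then see at a glance which passages were machine-assisted, which is the whole point of the proposal.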

In reality, however, LLMs such as ChatGPT, Gemini, and NotebookLM are already so widely used that it is impractical to exclude AI use from Wikipedia case by case. Therefore, AI should be accepted to some extent through soft moderation, while drawing a clear line on how far to allow it. For example, an edit flagged as AI-generated could be sent automatically to a sandbox, checked by a human editor for reliable sourcing and factual accuracy, and merged into the public article only once verified. Wikipedia's openness would then be preserved while low-quality or hallucinated text is kept out of articles (Baker-White, 2020).
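The review workflow described above can be sketched as a small routing pipeline. Everything here is assumed for illustration: the detector is stubbed as a score on each edit, and the 0.7 threshold is an invented policy knob, not a real WMF setting.

```python
from dataclasses import dataclass, field

AI_SCORE_THRESHOLD = 0.7  # assumed policy knob for this sketch

@dataclass
class Edit:
    author: str
    text: str
    ai_score: float        # 0.0 (human-like) .. 1.0 (AI-like), from some detector
    verified: bool = False

@dataclass
class ReviewQueue:
    published: list = field(default_factory=list)
    sandbox: list = field(default_factory=list)

    def submit(self, edit: Edit) -> str:
        """Route an edit: publish directly, or hold it in the sandbox."""
        if edit.ai_score >= AI_SCORE_THRESHOLD:
            self.sandbox.append(edit)
            return "sandbox"
        self.published.append(edit)
        return "published"

    def approve(self, edit: Edit) -> None:
        """A human editor verified the sources; merge into the article."""
        self.sandbox.remove(edit)
        edit.verified = True
        self.published.append(edit)

queue = ReviewQueue()
human = Edit("Alice", "Sourced paragraph...", ai_score=0.1)
ai_ish = Edit("Anon", "Fluent but unsourced text...", ai_score=0.9)

print(queue.submit(human))   # → published
print(queue.submit(ai_ish))  # → sandbox
queue.approve(ai_ish)        # after human fact-checking, it goes live
```

The key design choice is that flagged edits are never rejected outright, only delayed: openness is kept, but nothing AI-flagged reaches readers without a human check.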

References

Kraut, R. E., Resnick, P., & Kiesler, S. (2016). Building successful online communities: Evidence-based social design. MIT Press.

Baker-White, R. (2020). Digital media trends and algorithmic content.

Halfaker, A., Geiger, R. S., Morgan, J. T., & Riedl, J. (2013). The rise and decline of an open collaboration system: How Wikipedia's reaction to popularity is causing its decline. American Behavioral Scientist, 57(5), 664-688.

Choudary, S. P. (2014). Reverse network effects: Why today's social networks can fail as they grow.
