User:StaringBook631/Report

From Wikipedia, the free encyclopedia

Wikipedia Advising Report - Sandy Mazon (Addressing Generative AI within Wikipedia)

It is difficult to distinguish AI-generated content from human-created content, and generative AI content can be misleading, biased, and can even hallucinate facts and sources. For an online community, this is extremely harmful, as it can drown out the voices of real people trying to participate. For the Wikimedia Foundation, which is focused on empowering people to curate content together, I see generative AI as a serious threat. This is why I have three recommendations for managing the inevitable insertion of AI into the community: acknowledging it, regulating it, and attaching consequences to the spread of unwanted content.


These recommendations are for you, the Wikimedia Foundation, and can be taken as a step-by-step guide for managing AI as it integrates further into your community. My first recommendation is adding a separate page specifically for AI content, which users can seek out if it aligns with their goals. I feel this would create a pleasant experience for users who use AI and possibly spark more interest in contributing to the page.

A great source on how to manage online communities is an academic textbook I have researched quite extensively at the University of Washington, Building Successful Online Communities. One claim its authors make is that creators and moderators should “redirect inappropriate posts to other places to create less resistance than moving them” (Ch. 4, p. 120, BSOC). Creating a safe area for users who want to engage with and create AI content would give them an outlet for their style of contribution. Shutting them down completely would eventually create heavy resistance that could increase the amount of AI content on your platform or, even worse, divert attention from high-quality contributors. One unintended outcome of this action is that members might feel ostracized from the community. But by having a space where AI can be used for the better, you would bestow a sense of trust on the community, increasing members' intrinsic motivation to continue belonging to the Wikimedia community.


One of the biggest disruptors of online communities is often newcomers, which is why my second recommendation is regulation. This regulation can be carried out through the community's most established users: by forming a well-defined moderating committee to guide newcomers, you foster a culture that encourages high-quality posts.

Confusion and a lack of belonging are common experiences for new users, and as a result, they may contribute poorly simply because they do not know what your page is about. A recent study emphasizes that “moderators of individual subreddits have a great deal of influence over the culture of the small communities they lead” (Fiesler et al., 2018). This significant finding from another online community shows how moderation, when done with structure, consistency, and communication, can shape community behavior most effectively. With this in mind, it would be wise to create a system in which new users, who are more likely to create low-quality, un-educational content, are assimilated into their new community's norms and standards. One risk of enforcing a pre-made culture from the start is that you could unintentionally create a feeling of discrimination and hostility toward the ideas of newcomers. Even so, by assimilating members from the start, the Wikimedia Foundation can ensure its culture and standards are upheld evenly among its members.


If the previous measures of acknowledging AI content and regulating it with a hands-off approach have little to no impact on managing the overflow of AI content, then I believe my third recommendation is the next best course of action: adding more transparent and stricter regulation across the Wikimedia Foundation's projects, followed by consequences.

Consistent, transparent moderation practices not only deter harmful behavior but also reinforce user trust. As noted in Building Successful Online Communities, “Consistently applied moderation criteria, a chance to argue one’s case, and appeal procedures increase the legitimacy and thus the effectiveness of moderation decisions” (Ch. 4, pp. 120-121, BSOC). An appeal process before content is removed or heavily shadowbanned ensures users feel respected while disciplinary procedures are still enacted. In doing so, you, the Wikimedia Foundation, would strengthen a sense of community ownership rather than enforcing rules that can be perceived as arbitrary. Otherwise, unregulated AI posts could overflow the main page of your community and drown out the purpose of the page and even your own users. Additionally, there are useful systems such as the “reputation system, which summarizes the history of someone's online behavior, encourages good behavior and deter[s] norm violations” (Ch. 4, pp. 141-142, BSOC). A system like this would keep long-time, trustworthy contributors distinguishable and encourage continued positive behavior; in essence, it would gamify being a productive member of the Wikimedia community, further feeding into extrinsic motivation. A well-researched negative effect of gamifying a community is that it can discourage people from participating at all, because members feel pitted against each other. But without creating extrinsic motivation in the community, you would need more intense moderating guidelines, which could harm the community even more than moderating passively. Over time, this self-regulating mechanism will most likely promote a culture in which users feel pride in contributing high-quality educational content rather than pushing Wikimedia's limits by posting unhelpful AI content that harms their own community.


By combining a reputation-based structure with consistent moderation and having an outlet for users who use AI, I believe the Wikimedia Foundation can not only protect the integrity of its content but also create an adaptive model for other online communities struggling with the same challenges of AI integration.

Bibliography

Fiesler, C., Jiang, J., McCann, J., Frye, K., & Brubaker, J. (2018). Reddit Rules! Characterizing an Ecosystem of Governance. Proceedings of the International AAAI Conference on Web and Social Media, 12(1). https://doi.org/10.1609/icwsm.v12i1.15033

Kraut, Robert E., et al. Building Successful Online Communities: Evidence-Based Social Design. The MIT Press, 2011. JSTOR, http://www.jstor.org/stable/j.ctt5hhgvw. Accessed 9 Nov. 2025.
