| # | Verdict | Confidence | Source | Comments |
|---|---------|------------|--------|----------|
| [1] | ? Source unavailable | 0% | — | No URL found in reference |
| [2] | ? Source unavailable | 0% | source | The source text is a JavaScript error message asking the user to enable JavaScript. It does not contain any actual article content or information relevant to the claim. |
| [3] | Y Supported | 85% | source | The source text confirms the Filter Bubble Transparency Act (S.2024) establishes requirements for algorithmic transparency in recommendation systems, specifically requiring platforms to notify users and provide an alternative version using only expressly provided data. This directly supports the claim that the act frames requirements around recommendation system transparency. |
| [4] | Y Supported | 90% | source | The source text confirms that the Science, Innovation and Technology Committee (SITC) published a report on 11 July 2025 discussing how recommendation algorithms contributed to the spread of misinformation during the 2024 Southport riots. The report explicitly states: 'The unrest and riots of summer 2024 were driven in part by misinformation and hateful content that was amplified on social media platforms by recommender algorithms.' This directly supports the claim. |
| [5] | Y Supported | 95% | source | The source text confirms that the Joint Declaration was adopted on 24 October 2025 by the four mandate holders listed in the claim. It also directly states: 'recommender systems, and other AI-powered curation tools exert a large hidden influence and gatekeeper role over what information and media people access and consume.' This matches the claim's assertion verbatim. (Source is long, only partially checked.) |
| [6] | Y Supported | 85% | source | The source text states: 'The origins of modern recommender systems date back to the early 1990s when they were mainly applied experimentally to personal email and information filtering.' This directly supports the claim that recommendation systems originated in the early 1990s for personal email and information filtering. |
| [1] | ? Source unavailable | 0% | — | No URL found in reference |
| [7] | ? Source unavailable | 0% | source | Could not fetch source content |
| [8] | ? Source unavailable | 0% | source | The source text is a blog post from YouTube's official blog, which is usable content. However, the blog post discusses updates to the recommendation system in 2019 and does not mention any changes made in 2012. Therefore, the source does not provide information to support or contradict the claim about a 2012 change. |
| [9] | Y Supported | 85% | source | The source text states that YouTube changed its algorithm to 'highlight videos that keep viewers engaged' and 'reward engaging videos that keep viewers watching,' which aligns with the claim that the platform prioritized watch time. It also mentions that the change was made to address concerns about 'misleading thumbnails and low-quality videos,' directly supporting the claim. |
| [10] | ? Source unavailable | 0% | source | Could not fetch source content |
| [11] | ? Source unavailable | 0% | source | Could not fetch source content |
| [12] | ? Source unavailable | 0% | source | The source text only contains metadata and a brief abstract with no actual content about when Amazon adopted item-based collaborative filtering or the historical context of its recommendation engine. |
| [13] | Y Supported | 90% | source | The source text confirms that Amy Adler argues the pornography industry migrated to algorithm-driven platforms starting in 2007, controlled by Aylo (formerly MindGeek), and that these platforms use algorithmic search engines, suggestions, rigid categorization, and AI-driven search term optimization, producing distorting effects like filter bubbles and feedback loops. This directly supports the claim. |
| [14] | ? Source unavailable | 0% | — | No URL found in reference |
| [15] | Y Supported | 85% | source | The source text states that social media ranking algorithms 'predict what they will engage with' using 'behaviors like retweeting, replying, watching an embedded video, or lingering on a tweet for at least 2 min' and that 'users' revealed preferences, expressed through behaviour such as clicks and viewing time, do not always align with their stated preferences, expressed through explicit feedback such as surveys or content controls.' This directly supports the claim about signals used (engagement rates, viewing duration, click-through rates) and the distinction between revealed and stated preferences. (Source is long, only partially checked.) |
| [16] | Y Supported | 85% | source | The source text states that 'increasing the strength of social influence increased both inequality and unpredictability of success' and that 'success was only partly determined by quality.' This supports the claim that early engagement (social influence) can create feedback dynamics leading to unequal visibility outcomes, even with small initial quality differences. |
| [14] | ? Source unavailable | 0% | — | No URL found in reference |
| [17] | ? Source unavailable | 0% | source | Could not fetch source content |
| [18] | Y Supported | 90% | source | The source text states that social media platforms can 'enable health officials to deliver timely information' and that 'SMP algorithms for searches, recommendations, and popups can facilitate access to relevant information,' while also noting that they 'facilitate the fast spread of falsified content, fake news, manipulated information, or uninformed opinion.' This directly supports the claim that platforms help distribute timely information at scale but also risk amplifying misinformation. (Source is long, only partially checked.) |
| [19] | ? Source unavailable | 0% | — | No URL found in reference |
| [20] | ? Source unavailable | 0% | source | Could not fetch source content |
| [21] | Y Supported | 90% | source | The source text states that 'Algorithmic recommendation systems form the basis of automated recommendations to consumers. They assume the role of a cultural intermediary between consumers and music.' It also notes that 'There are widely held beliefs that the use of these technologies might serve to unfairly advantage certain groups at the expense of others,' which aligns with the claim about concerns among creators and industry stakeholders. The report is explicitly described as a 2023 UK government publication. (Source is long, only partially checked.) |
| [22] | Y Supported | 90% | source | The source text states: 'Georgina Born and Fernando Diaz have argued that recommendation algorithms for cultural content should promote diversity and commonality of experience rather than optimising solely for individual engagement, drawing on the programming traditions of public service broadcasting organisations such as the BBC.' This directly supports the claim. (Source is long, only partially checked.) |
| [23] | Y Supported | 90% | source | The source text states that 'Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth' and that 'false news spreads more than the truth because humans, not robots, are more likely to spread it.' This supports the claim that false news spreads faster and more broadly than accurate stories and that human sharing behavior is the primary driver, though the quoted comparison is between humans and automated bots ('robots') rather than platform algorithms. |
| [8] | Y Supported | 90% | source | The source text states: 'We’ll continue that work this year, including taking a closer look at how we can reduce the spread of content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines. To that end, we’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways...' This directly supports the claim that YouTube announced changes in January 2019 to reduce recommendations of borderline or conspiratorial material. |
| [24] | ? Source unavailable | 0% | source | Could not fetch source content |
| [25] | Partially supported | 70% | source | The source text confirms that Facebook's algorithm prioritized emotional and provocative content, including anger, to increase engagement, which aligns with the claim about the 2018 algorithm change. However, the source does not mention Frances Haugen, the U.S. Securities and Exchange Commission, the Wall Street Journal, the Facebook Files, or Haugen's testimony before the U.S. Senate Commerce Committee, the UK Parliament, or the European Parliament. It also does not mention Meta's $13 billion investment or the 40,000 employees working on safety and security. |
| [26] | Y Supported | 90% | source | The source text confirms that the 2024 study used counterfactual bots to find that YouTube's algorithm 'pushes users to more moderate content,' with the effect being 'most pronounced for heavy partisan consumers.' It also states that 'individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role,' directly supporting the claim. (Source is long, only partially checked.) |
| [15] | Y Supported | 90% | source | The source text states: 'In a preregistered algorithmic audit, we found that, relative to a reverse-chronological baseline, Twitter’s engagement-based ranking algorithm amplifies emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group. Furthermore, we find that users do not prefer the political tweets selected by the algorithm, suggesting that the engagement-based algorithm underperforms in satisfying users’ stated preferences.' This directly supports the claim about the algorithm amplifying divisive content and the mismatch between engagement-driven content and user preferences. (Source is long, only partially checked.) |
| [27] | ? Source unavailable | 0% | source | Could not fetch source content |
| [28] | Y Supported | 85% | source | The source text states that 'Meta’s algorithms proactively amplified and promoted content which incited violence, hatred, and discrimination against the Rohingya – pouring fuel on the fire of long-standing discrimination and substantially increasing the risk of an outbreak of mass violence.' This directly supports the claim that algorithmic amplification was linked to mass violence and that Amnesty International argued Facebook's features actively amplified anti-Rohingya hatred in Myanmar preceding the 2017 atrocities. |
| [29] | Y Supported | 90% | source | The source text confirms that the Christchurch Call was adopted in May 2019 and that it specifically committed signatories to 'review how companies’ algorithms direct users to violent extremist content,' which aligns with the claim about algorithmic amplification. It also notes the live-streamed nature of the attack and the focus on online content distribution. |
| [30] | Partially supported | 30% | source | The source text confirms the existence of the 'Christchurch Call Initiative on Algorithmic Outcomes' and its goal of developing privacy-preserving tools for independent researchers to study algorithmic impacts, including radicalisation pathways. However, it gives no launch date and does not clearly define 'the Call', so the claim that the initiative was launched in 2022 by 'the Call' is only partially supported. |
| [31] | Y Supported | 90% | source | The source text confirms that the case reached the Supreme Court (Gonzalez v. Google LLC, 2023) and that the family of a victim of the 2015 Paris attacks sued Google, alleging that YouTube's algorithm directed users toward ISIS content. The Court did not rule on Section 230 but instead dismissed the case on other grounds, noting that the complaint failed to state a claim for relief. This aligns with the claim's assertion that the Court declined to rule on the Section 230 question and disposed of the case on other grounds. (Source is long, only partially checked.) |
| [32] | Y Supported | 85% | source | The source text confirms that extremist content appears in platform recommendations (e.g., YouTube amplifies extreme content) and explicitly states that 'policymakers have yet to fully understand the problems inherent in “de-amplifying” legal, borderline content.' It also notes the conceptual ambiguity between user choice and algorithmic effects in academic and policy discussions, directly supporting the claim. (Source is long, only partially checked.) |
| [33] | Y Supported | 90% | source | The source text confirms the 2026 BBC investigation based on over a dozen whistleblowers and former employees at Meta and TikTok. It details how competitive pressure led to safety trade-offs, including a former senior Meta researcher sharing internal research showing 75% higher bullying and harassment on Instagram Reels compared to the main feed, 19% higher hate speech, and 7% higher violence and incitement. A former Meta engineer stated senior management directed his team to allow more borderline harmful content to compete with TikTok, linking the decision to falling share price. Internal documents showed Facebook's engagement-based algorithm rewarded negativity and that algorithmic incentives were not aligned with the company's mission. Meta denied the claims, stating it had strict policies to protect users and had invested significantly in safety over the preceding decade. (Source is long, only partially checked.) |
| [34] | ? Source unavailable | 0% | — | No URL found in reference |
| [17] | ? Source unavailable | 0% | source | Could not fetch source content |
| [35] | ? Source unavailable | 0% | source | Could not fetch source content |
| [36] | ? Source unavailable | 0% | source | Could not fetch source content |
| [37] | N Not supported | 15% | source | The source text presents specific findings showing consistent amplification of the political right over the left in six out of seven countries studied, and a preference for right-leaning news sources in the US. These consistent findings contradict the claim that research has produced 'mixed results' varying by platform, methodology, and time period, so the claim is not supported. (Source is long, only partially checked.) |
| [38] | ? Source unavailable | 0% | source | Could not fetch source content |
| [39] | Partially supported | 70% | source | The source text discusses research on Facebook's algorithmic recommendations and political content, noting that 'like-minded' sources are prevalent but that an intervention to reduce exposure to them had 'no measurable effects on political polarization.' This suggests mixed results in the impact of algorithmic recommendations on political content, supporting the claim's assertion of varying findings. However, the source is specific to Facebook and the 2020 US election, so it does not fully cover the broader claim about 'platforms, methodology, and time period.' (Source is long, only partially checked.) |
| [26] | Y Supported | 85% | source | The source text states: 'To date, empirical studies using different methodological approaches have reached somewhat different conclusions regarding the relative importance of algorithmic recommendations.' This directly supports the claim that research has produced mixed results. The text also mentions that findings vary by platform (YouTube), methodology (audit studies vs. panel studies), and time period (algorithm changes in 2019), which aligns with the claim's assertion of variation across these factors. (Source is long, only partially checked.) |
| [37] | Y Supported | 90% | source | The source text confirms that the study found 'in six out of seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left' and that 'algorithmic amplification favors right-leaning news sources.' It also states that 'we did not find evidence to support this hypothesis' regarding amplification of far-left and far-right groups. The study's methodology and findings align with the claim. (Source is long, only partially checked.) |
| [38] | ? Source unavailable | 0% | source | Could not fetch source content |
| [39] | Y Supported | 90% | source | The source text states: 'We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims.' This directly supports the claim that large-scale experimental studies found algorithmic ranking altered the mix of political content but produced limited measurable effects on political attitudes or polarization. (Source is long, only partially checked.) |
| [40] | Y Supported | 90% | source | The source text states that algorithmic ranking altered the mix of political content users encountered (e.g., increased political and untrustworthy content, decreased uncivil content on Facebook) but produced limited measurable effects on political attitudes or polarization over the study period. This directly supports the claim. |
| [15] | Partially supported | 70% | source | The source text discusses how engagement-based ranking algorithms amplify divisive content and suggests that focusing on stated preferences could improve online discourse. However, it does not explicitly state that engagement-based ranking can shift political attitudes, nor does it quantify the effects as modest at the individual level but significant when aggregated. The general findings partially support the claim, but the source lacks the specific causal evidence and aggregation details the claim asserts. (Source is long, only partially checked.) |
| [15] | ? Source unavailable | 0% | source | The provided source text is a PMC (PubMed Central) page with metadata, author information, and an abstract, but it does not contain the actual content of the article that would support or contradict the claim about Meta's platforms and X. The text is not usable for verification. (Source is long, only partially checked.) |
| [39] | Y Supported | 85% | source | The source text states that a large-scale field experiment on Facebook found 'no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims' when exposure to like-minded content was reduced. This supports the claim that experimental studies on Meta's platforms found weaker or more limited effects on political attitudes compared to observational and audit-based work on X (Twitter). (Source is long, only partially checked.) |
| [40] | Y Supported | 85% | source | The source text states that 'moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity' and that 'the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.' This supports the claim that large experimental studies on Meta's platforms found weaker or more limited effects on political attitudes compared to observational and audit-based work on X. |
| [37] | ? Source unavailable | 0% | source | The provided source text is from a PMC (PubMed Central) page and contains only metadata, author information, and the beginning of the article abstract. There is no actual content discussing experimental studies on Meta's platforms or comparing them to observational work on X (Twitter). The text is not usable for verifying the claim. (Source is long, only partially checked.) |
| [38] | ? Source unavailable | 0% | source | Could not fetch source content |
| [41] | Y Supported | 95% | source | The source text states: 'we show that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) such rankings can be masked so that people show no awareness of the manipulation.' It also mentions the total number of participants (4,556) and the countries involved (United States and India), directly supporting the claim. (Source is long, only partially checked.) |
| [42] | Y Supported | 90% | source | The source text states: 'In both waves, we found more identity-congruent and unreliable news sources in participants' engagement choices... than they were exposed to in their Google Search results. These results indicate that exposure to and engagement with partisan or unreliable news on Google Search are driven not primarily by algorithmic curation but by users' own choices.' This directly supports the claim that user choice, rather than algorithmic curation, was the primary driver of exposure to partisan and unreliable news through search. |
| [43] | ? Source unavailable | 0% | source | Could not fetch source content |
| [44] | Y Supported | 85% | source | The source text states that Amnesty International's research shows TikTok’s ‘For You’ feed can draw children and young people into 'rabbit holes' of potentially harmful content, including videos that romanticize and encourage depressive thinking, self-harm, and suicide. This directly supports the claim that the report argues targeted recommendations could intensify exposure to depressive and self-harm-related material among vulnerable young users. |
| [33] | Y Supported | 90% | source | The source text confirms that a TikTok trust and safety employee (referred to as 'Nick') spoke to the BBC in 2026 about the company's internal case prioritisation system. It states that political cases were prioritised over reports of harm involving minors, including a 16-year-old in Iraq who reported sexualised images. The employee explicitly stated that the prioritisation was to maintain relationships with governments and avoid regulatory action. TikTok's response is also quoted, rejecting the characterisation and stating that child safety cases are handled by dedicated teams in separate review structures. (Source is long, only partially checked.) |
| [45] | ? Source unavailable | 0% | — | No URL found in reference |
| [46] | Y Supported | 85% | source | The source text confirms that KOSA was introduced in the Senate in 2022 and passed the Senate in July 2024. It also states that the bill requires platforms to allow minors to disable personalized algorithmic recommendations and imposes a duty of care regarding harms from platform design. The claim that it 'did not complete passage through the House before the end of the 118th Congress' is consistent with the source, which notes Senate passage but says nothing about passage by the House. |
| [47] | Y Supported | 95% | source | The source text states that S.1748, the Kids Online Safety Act, was 'Introduced in Senate (05/14/2025)' and refers to the 119th Congress (2025-2026). This directly supports the claim that it was reintroduced in 2025. |
| [48] | Y Supported | 95% | source | The source text confirms that the SAFE for Kids Act, signed into law in 2024, 'requires social media companies to restrict algorithmically personalized feeds... for users under the age of 18 unless parental consent is granted' (i.e., addictive feeds). It also clarifies that users under 18 will be shown content only from accounts they follow or otherwise select, in a set sequence such as chronological order, unless they obtain parental consent for an algorithmically personalized feed. (Source is long, only partially checked.) |
| [49] | ? Source unavailable | 0% | source | Could not fetch source content |
| [50] | ? Source unavailable | 0% | source | Could not fetch source content |
| [51] | ? Source unavailable | 0% | source | Could not fetch source content |
| [50] | ? Source unavailable | 0% | source | Could not fetch source content |
| [51] | ? Source unavailable | 0% | source | Could not fetch source content |
| [52] | ? Source unavailable | 0% | source | Could not fetch source content |
| [53] | Y Supported | 90% | source | The source text confirms that the 2021 Ada Lovelace Institute report identified six technical methods for auditing algorithmic systems: code audit, user survey, scraping audit, API audit, sock-puppet audit, and crowd-sourced audit. It also states that each method involves trade-offs between experimental control and ecological validity, aligning with the claim. (Source is long, only partially checked.) |
| [37] | Y Supported | 95% | source | The source text states: 'We provide quantitative evidence from a long-running, massive-scale randomized experiment on the Twitter platform that committed a randomized control group including nearly 2 million daily active accounts to a reverse-chronological content feed free of algorithmic personalization.' This directly supports the claim about the Huszár et al. study using a long-running randomized experiment with a control group of nearly two million daily active accounts receiving a reverse-chronological feed. (Source is long, only partially checked.) |
| [40] | Y Supported | 90% | source | The source text states: 'We investigated the effects of Facebook's and Instagram's feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms.' This directly supports the claim that Meta conducted experiments in 2020 by deactivating algorithmic ranking for randomly selected users during the US presidential election. |
| [53] | ? Source unavailable | 0% | source | The provided source text is from the Ada Lovelace Institute's report on regulatory inspection of algorithmic systems. While it discusses various auditing methods, it does not mention 'platform-run studies' or their dependence on company willingness to conduct and publish research. The text focuses on regulatory inspection methods rather than platform-conducted studies. (Source is long, only partially checked.) |
| [38] | ? Source unavailable | 0% | source | Could not fetch source content |
| [54] | Y Supported | 90% | source | The source text confirms that sock-puppet audits allow isolating algorithmic behavior from individual user choices, but it raises concerns about ecological validity because artificial accounts do not interact with content as real users would. It also states: 'As explored by Chandio et al. (2023) on YouTube, these design choices may significantly impact the conclusions drawn from recommender audits,' which directly supports the claim about arbitrary design choices altering conclusions. (Source is long, only partially checked.) A minimal illustrative sketch of a sock-puppet audit appears after this table. |
| [54] | Y Supported | 90% | source | The source text states: 'Algorithmic auditing studies relying on data donation have the potential to offer valuable insights into real-life effects of social media algorithms... these studies can be costly, involve intrusive data-collection, and may be prone to potential selection bias as they rely on user willingness to donate their data (Kmetty et al. 2023 ).' This directly supports the claim that data donation studies recruit real users and introduce self-selection bias. (Source is long, only partially checked.) |
| [2] | ? Source unavailable | 0% | source | The source text is a JavaScript enablement prompt, not actual article content. |
| [4] | Partially supported | 30% | source | The source text mentions the UK Online Safety Act and its limitations in addressing algorithmic amplification and misinformation, but it does not explicitly state that the Act grants Ofcom information-gathering powers to require platforms to share details of their recommendation systems. The text also does not mention the DSA or its data access requirements for researchers. Therefore, the claim is only partially supported by the source. |
| [4] | Y Supported | 85% | source | The source text states: 'The committee is concerned that government policy is hamstrung by a lack of accurate, up-to-date information about how recommendation algorithms operate, caused by a lack of transparency on the part of social media companies. Without this information, it is impossible to properly identify and address online harms.' This directly supports the claim that the committee described platform opacity as an obstacle to effective regulatory oversight. The source also implies that companies are not sharing algorithmic information, which aligns with the claim about companies refusing to share high-level representations of their algorithms. |
| [55] | Partially supported | 70% | source | The source text confirms that the DSA requires very large online platforms to assess and mitigate systemic risks associated with recommendation systems, including risks to public discourse, fundamental rights, and the mental health of minors. It also states that platforms must offer users at least one recommendation option not based on profiling. However, the source does not mention the specific date (17 February 2024) when the DSA became fully applicable, nor does it explicitly reference Articles 27, 34, and 35 as stated in the claim. |
| [56] | Y Supported | 85% | source | The source text confirms that the DSA requires very large online platforms to assess and mitigate systemic risks associated with recommendation systems (Articles 34 and 35), including risks to public discourse, fundamental rights, and the mental health of minors. It also states that platforms must offer users at least one recommendation option not based on profiling (Article 38). Article 27 is mentioned as requiring transparency about how recommendations are generated. The source text supports the claim with direct references to the relevant articles and their requirements. (Source is long, only partially checked.) |
| [57] | Y Supported | 95% | source | The source text states that on October 2nd 2024 the European Commission issued a request for information to YouTube, Snapchat and TikTok under the Digital Services Act (DSA), 'asking the platforms to provide more information on the design and functioning of their recommender systems'. The request is aimed at obtaining information on the parameters used by the platforms’ algorithms 'to recommend content to users, as well as their role in amplifying certain systemic risks, such as those related to elections and civic discourse, users’ mental health (e.g., addictive behaviors and "rabbit holes"), and the protection of minors'. This directly supports the claim about the October 2024 request to YouTube, Snapchat, and TikTok regarding their recommender systems and their role in amplifying risks related to elections, civic discourse, and child safety. (Source is long, only partially checked.) |
| [58] | Y Supported | 90% | source | The source text confirms that the Online Safety Act 2023 requires platforms to conduct risk assessments for illegal content ("risk of illegal content appearing on their service") and that the largest services face additional transparency and child safety obligations ("categorised services" with "additional requirements to enhance transparency and accountability"). It also states that the first enforceable duties for illegal content came into force in March 2025 ("illegal content duties are now in effect, and as of 17 March Ofcom can now enforce against the regime"). (Source is long, only partially checked.) |
| [4] | Y Supported | 90% | source | The source text confirms that the Science, Innovation and Technology Committee concluded on 11 July 2025 that the Online Safety Act fails to address algorithmic amplification of 'legal but harmful content,' citing the 2024 Southport riots as an example. It also states the committee recommended compelling platforms to deprioritise fact-checked misleading content and noted that technology companies refused to share information about their recommendation algorithms. |
| [59] | ? Source unavailable | 0% | source | The provided source text is a submission of written evidence to the Science, Innovation and Technology Committee. It does not contain any information about the committee's conclusions, recommendations, or actions in July 2025. The text is a submission from academics, not the committee's report or findings. (Source is long, only partially checked.) |
| [59] | Y Supported | 85% | source | The source text states: 'In contrast to analogous European regulation, the UK Online Safety Act 2023 (OSA) provides no safety duties that focus on the development and adoption of recommendation algorithms by platforms...' This directly supports the claim that the Online Safety Act does not include specific duties focused on recommendation algorithms, a gap identified by the committee and academic commentators. (Source is long, only partially checked.) |
| [60] | Y Supported | 90% | source | The source text confirms that the Wikimedia Foundation filed a legal challenge to the Online Safety Act's categorisation regulations, arguing that Wikipedia's volunteer-led model does not use engagement-driven recommendation and that Category 1 obligations would undermine contributor privacy and safety. The text states: 'The Wikimedia Foundation had filed a lawsuit in the High Court of London against regulations under the Act, arguing that they could impose the strictest obligations on Wikipedia. They brought the challenge under the assumption that it would be labeled as a “Category 1” platform, which it argues “would undermine the privacy and safety of Wikipedia’s volunteer contributors...”' |
| [61] | Y Supported | 85% | source | The source text confirms that the Wikimedia Foundation filed a judicial review challenging the UK Online Safety Act's Categorisation Regulations, arguing that Wikipedia's volunteer-led content moderation model does not use engagement-driven recommendation and that imposing Category 1 duties would undermine the privacy and safety of its contributors. This directly supports the claim. |
| [60] | Y Supported | 90% | source | The source text states: 'Judge Jeremy Johnson rejected the Wikimedia Foundation's request on Monday, while specifying that the foundation could bring another legal challenge if the regulator Ofcom “wrongly concluded that Wikipedia falls under Category 1.”' It also says: 'Judge Johnson added that despite the rejection, the ruling "does not give Ofcom and the Secretary of State a green light to implement a regime that would significantly impede Wikipedia’s operations."' This directly supports the claim that the High Court dismissed the challenge in August 2025 and included the specified conditions. |
| [3] | Y Supported | 85% | source | The source text confirms the Filter Bubble Transparency Act was introduced in the 117th Congress (2021-2022) and outlines its provisions, supporting the claim that it 'sought to require platforms to offer alternatives to algorithmically ranked feeds.' While the source does not explicitly state that no federal legislation had been enacted as of early 2026, the absence of any mention of enactment or passage, combined with the bill's status as 'Introduced' and the timeline (2021-2022), supports the claim that it had not yet been enacted by early 2026. |
| [47] | ? Source unavailable | 0% | source | The source text is a Congress.gov bill page for the 119th Congress (2025-2026), showing that the bill was introduced on May 14, 2025. This is metadata rather than article content or a legislative history: it contains no information about the bill's status in the 118th Congress (2023-2024) or whether it passed the Senate in 2024, so it cannot be used to verify the claim. |
| [48] | Y Supported | 95% | source | The source text states: 'The SAFE for Kids Act... requires social media companies to restrict algorithmically personalized feeds... for users under the age of 18 unless parental consent is granted.' It also clarifies that 'users under 18 will be shown content only from other accounts they follow... in a set sequence, such as chronological order.' This directly supports the claim that the law requires platforms to default to non-algorithmic feeds for users under 18. (Source is long, only partially checked.) |
| [62] | ? Source unavailable | 0% | source | Could not fetch source content |
| [62] | ? Source unavailable | 0% | source | Could not fetch source content |
| [62] | ? Source unavailable | 0% | source | Could not fetch source content |
| [45] | ? Source unavailable | 0% | — | No URL found in reference |
| [26] | Y Supported | 85% | source | The source text states: 'empirical studies using different methodological approaches have reached somewhat different conclusions regarding the relative importance of algorithmic recommendations.' It also mentions that 'the content that users consume is some unobserved combination of their own preferences and the platform design, including the recommender, each of which influences the other in a complex feedback loop.' These statements directly support the claim that academic and policy debate has centered on whether engagement-driven recommendation represents a structural problem in platform design and how large its effects are relative to other factors. (Source is long, only partially checked.) |
| [39] | Partially supported | 70% | source | The source text discusses academic and policy debate about algorithmic amplification and echo chambers on social media, specifically mentioning concerns about 'engagement-driven recommendation' and its potential role in political polarization. However, it does not explicitly state that the debate has centered on whether this represents a 'structural problem in platform design' or on comparing its effects to 'other factors that shape information consumption.' The text focuses more on empirical findings about the prevalence of like-minded content and the effects of reducing such exposure, rather than explicitly framing the debate in the terms mentioned in the claim. (Source is long, only partially checked.) |
| [45] | ? Source unavailable | 0% | — | No URL found in reference |
| [22] | Y Supported | 90% | source | The source text states: 'We have published on this research', citing Andres Ferraro, Gustavo Ferreira, Fernando Diaz, and Georgina Born, 'Measuring commonality in recommendation of cultural content to strengthen cultural citizenship,' ACM Trans. Recomm. Syst. 2, 1, Article 10 (March 2024). This directly supports the claim that Born and Diaz have argued from a cultural theory perspective that personalisation in recommender systems weakens the common experiences on which cultural citizenship depends. (Source is long, only partially checked.) |
| [26] | Y Supported | 90% | source | The source text states: 'our findings indicate that, at least since the algorithm changes that YouTube implemented in 2019, individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role.' This directly supports the claim that user preferences are a more significant driver of partisan consumption than the algorithm itself. (Source is long, only partially checked.) |
| [39] | Y Supported | 90% | source | The source text states: 'We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims.' This directly supports the claim that large experimental studies of Meta's platforms during the 2020 election cycle produced similar findings, with algorithmic ranking altering the content mix but producing limited measurable effects on attitudes. (Source is long, only partially checked.) |
| [40] | Y Supported | 90% | source | The source text states that algorithmic ranking altered the content mix (e.g., increased political and untrustworthy content, decreased uncivil content on Facebook) but produced 'no significant alteration' in key attitudes like polarization or political knowledge. This directly supports the claim that algorithmic ranking had limited measurable effects on attitudes. |
| [63] | ? Source unavailable | 0% | source | The provided source text is a journal article abstract and metadata from Nature Human Behaviour, but it does not contain the actual content of the article. The text includes navigation elements, references, and author information, but no substantive discussion of the central question in the field of algorithmic systems and content exposure. Therefore, the source is unavailable for verifying the claim. |
| [26] | Partially supported | 75% | source | The source text discusses the shift in research focus from whether algorithmic systems influence content exposure to understanding the size of those effects relative to factors like social network composition, media consumption habits, and political identity. However, it does not explicitly state that this shift occurred 'by the mid-2020s' as claimed. The text describes the current state of research but does not provide a specific timeline for when this shift happened. (Source is long, only partially checked.) |
| [15] | ? Source unavailable | 0% | source | The provided source text is a PMC (PubMed Central) page with metadata, author information, and the beginning of an academic article. However, it does not contain the specific content needed to verify the claim about the central question in the field shifting by the mid-2020s. The text is truncated and does not include the relevant section discussing the evolution of research questions in the field. (Source is long, only partially checked.) |
| [64] | Y Supported | 85% | source | The source text confirms that Florida and Texas passed laws in 2021 restricting platforms' ability to moderate content based on political viewpoint. It states: 'In 2021, Florida and Texas enacted statutes regulating large social-media companies and other internet platforms. The laws curtailed the platforms’ ability to engage in content moderation...' (Source is long, only partially checked.) |
| [65] | Y Supported | 90% | source | The source text states: 'No trustworthy large-scale studies have determined that conservative content is being removed for ideological reasons... Even anecdotal evidence of supposed bias tends to crumble under close examination.' It also says: 'The claim of anti-conservative animus on the part of social media companies is itself a form of disinformation: a falsehood with no reliable evidence to support it.' These statements directly support the claim that the NYU Stern Center for Business and Human Rights found no reliable evidence of systematic censorship of conservative viewpoints and that algorithmic promotion often gave conservative content greater reach. (Source is long, only partially checked.) |
| [64] | Y Supported | 90% | source | The source text confirms that the Supreme Court considered First Amendment challenges to the Florida and Texas laws in Moody v. NetChoice (2024). It states that the Court held the laws interfere with protected speech by preventing platforms from compiling third-party speech in the way they want, and that Texas's interest in correcting the mix of viewpoints is not valid under the First Amendment. The Court vacated the lower court rulings and remanded the cases, indicating that laws directly dictating algorithmic operations would face significant First Amendment scrutiny. (Source is long, only partially checked.) |
| [15] | Y Supported | 90% | source | The source text states: 'we explore the implications of an alternative approach that ranks content based on users’ stated preferences and find a reduction in angry, partisan, and out-group hostile content, but also a potential reinforcement of proattitudinal content.' This directly supports the claim that an alternative ranking approach based on stated preferences reduces divisive content while increasing exposure to ideologically aligned material. (Source is long, only partially checked.) |
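
Rows [53] and [54] above describe sock-puppet audits, in which scripted accounts with no genuine preferences are seeded with different histories so that differences in what they are shown can be attributed to the recommender rather than to user choice. The sketch below is only a minimal illustration of that idea under stated assumptions: the `Recommender` callable, the category-prefixed item identifiers, and the toy personas are hypothetical placeholders, not any platform's actual interface, and a real audit would drive an instrumented browser or an official API, subject to the design caveats noted in row [54].

```python
"""Minimal sock-puppet audit sketch (illustrative only).

Assumes a hypothetical recommender interface: any callable that maps a watch
history to a list of recommended item identifiers. Items are assumed to carry
a category prefix such as "news:..." purely so exposure can be tallied;
nothing here corresponds to a real platform API.
"""

import random
from collections import Counter
from typing import Callable, Dict, List, Optional

Recommender = Callable[[List[str]], List[str]]


def run_sock_puppet(seed_history: List[str], recommend: Recommender,
                    steps: int = 20, rng: Optional[random.Random] = None) -> Counter:
    """Follow recommendations from a seeded history and tally the categories seen."""
    rng = rng or random.Random(0)
    history = list(seed_history)
    exposure: Counter = Counter()
    for _ in range(steps):
        recs = recommend(history)               # what the system offers this persona
        if not recs:
            break
        choice = rng.choice(recs)               # scripted, preference-free "user"
        exposure[choice.split(":", 1)[0]] += 1  # tally by category prefix
        history.append(choice)
    return exposure


def compare_personas(personas: Dict[str, List[str]], recommend: Recommender) -> None:
    """Contrast exposure across personas that differ only in their seed history."""
    for name, seed in personas.items():
        print(name, dict(run_sock_puppet(seed, recommend)))


if __name__ == "__main__":
    # Toy stand-in recommender: keeps offering the category of the most recently
    # watched item, mimicking a system that reinforces recent behaviour.
    def toy_recommender(history: List[str]) -> List[str]:
        last_category = history[-1].split(":", 1)[0] if history else "news"
        return [f"{last_category}:item{i}" for i in range(5)]

    compare_personas(
        {"persona_a": ["news:item0"], "persona_b": ["sports:item0"]},
        toy_recommender,
    )
```

The design-choice concern quoted in row [54] surfaces here as the `steps`, seeding, and selection-policy parameters: varying them can change the measured exposure, which is why audits typically report results across several such settings rather than a single configuration.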