Talk:AI agent

From Wikipedia, the free encyclopedia

arXiv

Hi @Grayfell. I saw that you added the unreliable sources tag because several arXiv papers were cited. Could you explain your reasoning?

I agree with you and WP:ARXIV that preprints are unreliable in general. However, my view is that preprints are okay in certain circumstances:

- Per WP:ARXIV, when the author is a subject matter expert, as in the case of Arvind Narayanan, whose preprint "AI Agents That Matter" I cited.

- When attribution is explicitly provided, such as by saying "According to a preprint study", for example.

- When a news article or other reliable source cites that preprint (attribution should be provided here as well).


While we're on the subject, arXiv preprints are cited all over Wikipedia despite WP:ARXIV, particularly in technical articles, and it's not fully clear to me when it is and isn't okay to cite them. TotalVibe945 (talk) 13:22, 22 December 2025 (UTC)

For future reference, this is the edit which imported the Arvind Narayanan cite from Intelligent agent. In that edit, both uses of that cite are for broad and relatively vague claims that don't need a preprint.
There are multiple problems with using preprints, especially for 'trendy' topics like LLMs. I think the over-use of preprints on Wikipedia should be viewed as a mess that needs to be cleaned up. I don't think this problem should be viewed as an excuse to add more.
Even when the author is a subject matter expert, these preprints are still usually WP:PRIMARY sources for those authors' views. Many of these preprints never get published, so they end up as blog posts wearing academic costumes. WP:SPS is another way to look at that kind of source.
Subject matter experts can still make mistakes, trivial asides, and even fabrications, so it's better to rely on secondary/independent sources. To put it another way, content which has not yet been peer-reviewed should not be presented in exactly the same way as reliable sources. One way to make this distinction clearer is via attribution, but such attribution can artificially inflate the importance of the source. Instead, we should use reliable sources to determine which perspectives are encyclopedically important. Relying on our own opinions or levels of interest is sometimes very understandable, but it's still WP:OR.
I hope that's helpful. Grayfell (talk) 20:10, 22 December 2025 (UTC)
Thank you. This is helpful. I'll replace the pre-prints with better sources where I can.
Do you have any thoughts on pre-prints that are cited by reliable news articles? Obviously it's not the same thing as peer-review, but my view is that it gives the pre-print more weight, and I would cite both the news source and the original pre-print. I'm open to changing my mind here, though. TotalVibe945 (talk) 14:27, 14 January 2026 (UTC)
I've said more below, but as always, WP:CONTEXTMATTERS. It depends on what the reliable source is saying about the pre-print. The mere existence of a pre-print is rarely noteworthy. Grayfell (talk) 23:48, 14 January 2026 (UTC)

'Possible mitigation' section

This is regarding this edit from Azarboon, which Oblivy reverted and I've restored.

It's all far too vague: "...noted the possibility...", "...have been suggested...", "...could be redesigned...", "...has suggested...". The last paragraph mentions a "lethal trifecta" but provides absolutely no context on what that means.

It's not enough for sources to be arguably reliable, because there is an unending fountain of think-pieces, speculation, and punditry on this topic. As I said above at #arXiv, we should provide context to readers without filler. Grayfell (talk) 07:12, 6 January 2026 (UTC)

Agreed. I'm a software engineer and didn't find that part to be compelling. Azarboon (talk) 08:51, 6 January 2026 (UTC)
To be clear, I didn't have an opinion on that section. My concern was that @Azarboon had blanked multiple paragraphs with a misleading edit summary that claimed "Content removed due to reliance on sources considered unreliable sources". Many of those sources are WP:RS. If you mean the sources didn't support the text, that's another matter, which should have been explained in a "clear edit summary" that would allow another editor "to understand the change" (see WP:FIES). Oblivy (talk) 09:40, 6 January 2026 (UTC)
I'm pushing back on this. Reliable sources have widely discussed the issues that come with agents. The purpose of this section is to show to the reader ways that some of the issues can be addressed, especially with respect to privacy, security and energy efficiency. Because agents have not been widely adopted, and because (to my knowledge) there's an astounding lack of peer-reviewed research on how agents are used in practice and on their impact, of course this section is going to have qualifying words: we just don't know yet. It would not be appropriate to use a definitive Wikivoice here. Even so, there is still value in including info on possible solutions, provided that the sources are reliable and go beyond mere speculation, which they do.
Guardrails are also widely discussed in the context of LLMs and deserve an article section somewhere in this topic (if not a separate article altogether).
Energy concerns with LLMs - the basis for many agents - are also well-documented. Using smaller models stands to reason as one way to address those concerns. (This is supported by a pre-print from Nvidia and a news article from Inc that cites that paper. See my above comment for my thoughts on that citation approach.)
The context of the lethal trifecta relates to how agents are designed: it refers to an agent that combines access to private data, exposure to untrusted content, and the ability to communicate externally. Having all three leads to serious security issues. I will update the language to include this context. TotalVibe945 (talk) 15:14, 14 January 2026 (UTC)
So now at least two editors have disputed your edits. Since you do not have consensus, you should discuss here instead of restoring.
Your comment here fails to answer my concerns. If we just don't know yet, we don't need to say anything at all. (Ironically, staying silent is something LLMs have a hard time with. We don't have to be like LLMs: if we don't know, we don't need to pretend we have something to add.)
Articles should cite reliable sources to provide information to readers without editorializing. These edits do not do this.
For example, This source was used to justify mentioning the existence of a pre-print about how smaller models are more efficient.
Here is the surrounding context from that source:
Altman’s interview reveals that the breathless hype surrounding AI may not produce the monumental breakthroughs its architects have promised. Other recent research authored by the biggest companies in Silicon Valley indicates a similar outlook.
In June, Apple published a study that saw advanced models, across the spectrum from large reasoning models (LRMs) to large language models (LLMs), experience “total collapse” when confronted with complex tasks. Also in June, researchers from Nvidia found that AI agents trained on data from small language models (SLMs) can perform tasks at a similar level of mastery as those based on LLMs. What’s more, SLM agents require less energy and computational power, and are therefore vastly more economical.
Perhaps, Altman was compelled to slow his roll because of recent events. Last week, OpenAI and Altman were subjected to a chorus of derision online after the company’s newest model, GPT-5, debuted as a dud. Altman previously claimed it would possess “PhD-level intelligence” in almost every area.
This is all the source has to say about AI agents, and none of this context was imparted by the proposed change. The source is not primarily about AI agents and provides no usable information about the topic of the article.
Likewise, WP:BI is a borderline outlet. A flimsy source headlined "Don't get too excited about AI agents yet. They make a lot of mistakes." isn't a great source for passing on unattributed chatter from an executive at "Patronus AI, a startup that helps companies evaluate and optimize AI technology."
Instead of looking for sources to justify adding original research, start from better sources. I suggest reviewing WP:BACKWARDS if you haven't already. Grayfell (talk) 23:46, 14 January 2026 (UTC)

Request to add external reference

I have a conflict of interest as I am associated with iFour Technolab. I believe this blog post on Agentic AI use cases (https://www.ifourtechnolab.com/blog/agentic-ai-usecases-examples) provides valuable technical depth that could benefit the 'Examples' section of this article. I would appreciate it if an independent editor could review it and determine if it is suitable for inclusion. Pranayifour (talk) 09:54, 16 January 2026 (UTC)

No. It is not suitable for inclusion. Wikipedia is not a platform for promotion, iFour Technolab is not a reliable outlet, and blog posts are rarely usable at all, including corporate blog posts. Grayfell (talk) 20:26, 16 January 2026 (UTC)

Why discuss Extraterritorial Data Access

I have noted the addition of a section on Extraterritorial Data Access. The latest edit says that "certain AI agent service providers" are subject to surveillance. This is not supported by either of the citations, neither of which discusses AI or AI agents, and has been reverted.

I doubt the relevance of this entire section, as it seems to be WP:SYNTH since it takes issues which relate to all data processing and suggests that it applies to AI Agents. It should only be included if there are secondary sources saying so. I propose the section be deleted. Oblivy (talk) 05:42, 23 January 2026 (UTC)

Extraterritorial data access is a valid concern affecting AI service providers, though it is often overlooked in coverage. I used the phrasing “certain AI agent service providers” to align with Wikipedia style; however, you are correct that this required stronger sourcing. I am searching for reliable, Wikipedia-approved sources that explicitly address this point and will update the content once appropriate citations are identified. Azarboon (talk) 09:46, 23 January 2026 (UTC)
Thanks. I will be interested to see what you come up with. I understand why you want to link this to the AI Agent concept but if there aren't sources added soon I plan to remove the section for the reasons I gave above; this is brand new text and can be reinstated if sources are found later. Oblivy (talk) 11:18, 23 January 2026 (UTC)

What about Definition?

Currently, regarding definition we have only two sentences in the Overview:

  1. "AI agents do not have a standard definition." (with 4 references)
  2. "The concept of agentic AI has been compared to the fictional character J.A.R.V.I.S.." (with 1 reference)

Previously, @Grayfell reverted my edit adding material about the agenticness property. I agree with the revert, because the addition did not fully reflect the whole spectrum of different views on the definition. However, this leads me to a question:

Should we add a separate section about the definition that aggregates and describes all the views from these 4 references, and maybe from some more? If not, a reader who is not deep into the topic will probably get the impression that an AI agent is conceptually a fictional character. :)

RMzzz777 (talk) 12:06, 24 January 2026 (UTC)
