Talk:Ada Lovelace

From Wikipedia, the free encyclopedia

Accuracy of statement regarding modern computer scientists' opinions about unpredictable software development

In the #Insight into potential of computing devices section of the article, there's a claim that "Most modern computer scientists argue that this view is outdated and that computer software can develop in ways that cannot necessarily be anticipated by programmers.".

It does have a supporting citation, although I admit that I have not yet requested access to the paper.

I tend to challenge the idea that generative Artificial Intelligence and Large Language Models are software that is developing unpredictably; I believe it is software that is not engineered with repeatable results as a priority requirement, and with training and query inputs that vary rapidly -- creating the appearance, but not actuality, of output creativity.

I wonder whether the consensus among computer scientists really is different to mine? Jay.addison (talk) 13:54, 14 October 2025 (UTC)

From what I understand about the emergent nature of many LLM features, it likely is software that is developing unpredictably, since we don't know whether other similar features or capabilities could emerge. Rainunderthebridge (talk) 02:49, 17 December 2025 (UTC)
Agree. This is a bizarre statement; it would be difficult to establish what "most modern computer scientists" argue about anything. Dubious at best, spurious at worst. ~2025-42547-43 (talk) 19:41, 6 February 2026 (UTC)
  • The source (Natale) says
In computer science circles, the phrase ‘Lovelace objection’ indicates the claim that computers cannot originate or create anything, but only do what their programmers instruct them to do (Abramson, 2008). Today, most computer scientists dismiss this objection; the complexity of contemporary systems and advances in areas like machine learning has proven that computer software can develop in ways that cannot be always anticipated by programmers (Kelleher, 2019).
So I got hold of Kelleher, and what I found is that it's a 2019 review of deep learning at the time. I suppose that somewhere in there it might say or imply that "computer software can develop in ways that cannot be always anticipated by programmers", but if he does I don't know where, and Natale doesn't tell us -- he just references Kelleher as a whole. I'm sorry, but this is not a competent source for what "most modern computer scientists" think. I'm removing the statement. EEng 05:21, 3 March 2026 (UTC)
Computer programmer here. LLMs and other generative AI models produce unpredictable results because randomness is injected, on purpose, into the base of the generative process. It's called temperature, or heat, relating to entropy or randomness. If no randomness was introduced, the AI model would output the exact same thing given the same prompt.
TLDR - AI models are not predictable because programmers have engineered them not to be predictable. They are not inherently stochastic.
Reference:
https://www.ibm.com/think/topics/llm-temperature
Jbmcb (talk) 17:36, 14 March 2026 (UTC)
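To illustrate the point about temperature, here is a minimal, self-contained Python sketch of temperature-scaled sampling as commonly described for LLM decoding. The function name and example logits are hypothetical, not taken from any cited source or model; the point is only that with temperature 0 (greedy decoding) the same scores always yield the same token, while a positive temperature deliberately injects randomness.

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    """Pick a token index from raw model scores (logits).

    temperature == 0 collapses to greedy argmax (fully repeatable);
    higher temperatures flatten the distribution, adding randomness.
    """
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.1]
print(sample_token(logits, temperature=0))    # deterministic: always 0
print(sample_token(logits, temperature=1.0))  # stochastic: 0, 1, or 2
```

The same model run with temperature 0 produces identical output for identical input, which supports the comment above: the unpredictability is a deliberate engineering choice, not an inherent property of the software. Jbmcb (talk) 17:40, 14 March 2026 (UTC)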
