Vibe coding
AI-dependent computer programming practice
From Wikipedia, the free encyclopedia
In computer programming, vibe coding is a software development practice assisted by artificial intelligence (AI), typically through chatbots (programs that simulate conversation). The software developer describes a project or task in a prompt to a large language model (LLM), which generates source code automatically. Vibe coding involves accepting AI-generated code without reviewing it, relying instead on observed results and follow-up prompts to guide changes.[1][2]

The term was coined by computer scientist Andrej Karpathy, a co-founder of OpenAI and former AI leader at Tesla, in February 2025. Merriam-Webster listed the term in March 2025 as a "slang & trending" expression.[3] It was named the Collins English Dictionary Word of the Year for 2025.[4][5]
Advocates of vibe coding say that it allows even amateur programmers to produce software without the extensive training and skills required for software engineering.[6][7] Critics point to a lack of accountability and maintainability, as well as an increased risk of introducing security vulnerabilities into the resulting software.[1][7]
Definition
The concept refers to a coding approach that relies on LLMs, allowing programmers to generate working code by providing natural language descriptions rather than manually writing or reviewing it.[1][2][7]
Karpathy described it as a form of coding where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists."[8] When vibe coding, the programmer shifts from manually writing code to guiding, testing, and giving feedback about the AI-generated source code.[1][2][9]
The concept of vibe coding elaborates on Karpathy's claim from 2023 that "the hottest new programming language is English," meaning that the capabilities of LLMs were such that humans would no longer need to learn specific programming languages to command computers.[10]
Acceptance of AI-generated code without understanding it is key to the definition of vibe coding.[1] Programmer Simon Willison said: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant."[1]
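The workflow described above, in which the programmer prompts, runs the result, and judges by behavior rather than by reading the code, can be sketched schematically. In the sketch below a stub function stands in for the model call (no real LLM or API is invoked), and both the prompt and the "generated" code are invented for illustration:

```python
def llm_generate(prompt: str) -> str:
    """Stub standing in for a real model call; returns canned source code.

    In an actual vibe coding session, `prompt` would be sent to an LLM
    and the model's generated source code would be returned.
    """
    return "def add(a, b):\n    return a + b\n"

def vibe_code(prompt: str) -> dict:
    """Accept generated code unreviewed: execute it and keep the results."""
    source = llm_generate(prompt)
    namespace = {}
    exec(source, namespace)  # the code is run, not read
    return namespace

ns = vibe_code("write a function that adds two numbers")
print(ns["add"](2, 3))  # the caller judges the code by its results alone
```

If the output looks wrong, the vibe coder does not edit the source directly but issues a follow-up prompt describing the problem, repeating the loop.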
Reception and use
In February 2025, New York Times journalist Kevin Roose, who is not a professional coder, experimented with vibe coding to create several small-scale applications. He described these as "software for one" due to the ability to personalize the software. However, Roose also stated that the results are often limited and prone to errors.[9][10] In one case, the AI-generated code fabricated fake reviews for an e-commerce site.[9]
In response to Roose, cognitive scientist Gary Marcus said that the algorithm that generated Roose's LunchBox Buddy app had presumably been trained on existing code for similar tasks. Marcus said that Roose's enthusiasm stemmed from reproduction, not originality.[10]
In March 2025, Y Combinator reported that 25% of the startup companies in its Winter 2025 batch had codebases that were 95% AI-generated, reflecting a shift toward AI-assisted development among newer startups.[11] The survey asked about AI-generated code in general, not about vibe coding specifically.
Inspired by "vibe coding", The Economist suggested the term "vibe valuation" to describe the very large valuations of AI startups by venture capital firms that ignore accepted metrics such as annual recurring revenue.[12]
In July 2025, The Wall Street Journal reported that vibe coding was being adopted by professional software engineers for commercial use cases.[13]
In July 2025, SaaStr founder Jason Lemkin documented his negative experiences with vibe coding: Replit's AI agent deleted a production database despite explicit instructions not to make any changes.[14][15]
In September 2025, Fast Company reported that a "vibe coding hangover" had set in, with senior software engineers citing "development hell" when working with AI-generated code.[16]
It was reported in January 2026 that Linus Torvalds had made use of Google Antigravity to vibe code a tool component of his AudioNoise random digital audio effects generator. Torvalds explained in the project's README file that "the Python visualizer tool has been basically written by vibe-coding."[17][18]
Limitations
Mischaracterization of software development
Andrew Ng has taken issue with the term, saying that it misleads people into assuming that software engineers just "go with the vibes" when using AI tools to create applications.[19]
Quality of code and security issues
Vibe coding has raised concerns about understanding and accountability. Developers may use AI-generated code without comprehending its functionality, leading to undetected bugs, errors, or security vulnerabilities.[20] While this approach may be suitable for prototyping or "throwaway weekend projects" as Karpathy originally envisioned, it is considered by some experts to pose risks in professional settings, where a deep understanding of the code is crucial for debugging, maintenance, and security. Ars Technica cites Simon Willison, who stated: "Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial."[1]
In May 2025, Lovable, a Swedish vibe coding platform, was reported to generate code with security vulnerabilities: 170 of 1,645 Lovable-created web applications examined contained a flaw that allowed personal information to be accessed by anyone.[21][22]
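The class of flaw reported in these applications, personal data readable by any visitor, typically comes down to a missing authorization check between the request and the data it touches. The sketch below is a hypothetical illustration of that pattern and its remediation (the schema, function names, and data are invented, not taken from any audited application):

```python
import sqlite3

# Hypothetical schema: a "profiles" table holding per-user personal data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (user_id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO profiles VALUES (?, ?)",
                 [(1, "alice@example.com"), (2, "bob@example.com")])

def get_profile_insecure(user_id):
    """Pattern resembling the reported flaw: any caller can fetch any row,
    because nothing ties the request to an authenticated user."""
    row = conn.execute("SELECT email FROM profiles WHERE user_id = ?",
                       (user_id,)).fetchone()
    return row[0] if row else None

def get_profile_checked(requester_id, user_id):
    """Remediated version: callers may only read their own profile."""
    if requester_id != user_id:
        raise PermissionError("cannot read another user's profile")
    return get_profile_insecure(user_id)

# Any visitor can read user 2's email through the unchecked path...
print(get_profile_insecure(2))  # bob@example.com
# ...but the checked version refuses the same cross-user request.
try:
    get_profile_checked(1, 2)
except PermissionError as exc:
    print("blocked:", exc)
```

In a generated web application the same omission usually appears as an API endpoint or database access rule that never verifies the requester's identity, which is easy to miss when the code is accepted without review.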
In October 2025, Veracode released a study showing that, over the previous three years, LLMs had become dramatically better at generating functional code, but that the security of the generated code had generally not improved. Moreover, larger models were no better than smaller ones at generating secure code. OpenAI's reasoning models showed a small increase in security, but other reasoning models did not, and the increase was far smaller than the gains in generated functionality.[23]
In December 2025, computer security researcher Etizaz Mohsin discovered a security flaw in the Orchids vibe coding platform, which he demonstrated to a BBC News reporter in February 2026.[24]
A December 2025 analysis by CodeRabbit of 470 open-source GitHub pull requests found that code co-authored by generative AI contained approximately 1.7 times more "major" issues than human-written code. The study found elevated rates of logic errors in AI co-authored code, including incorrect dependencies and flawed control flow, as well as misconfigurations (75% more common) and security vulnerabilities (2.74 times more common). It also reported frequent code readability issues, including formatting errors and naming inconsistencies.[25][26]
Code maintainability and technical debt
Vibe coding can make code harder to maintain in the long term and can lead to technical debt.
In early 2025, GitClear published a longitudinal analysis of 211 million changed lines of code from 2020 to 2024. It found that the share of changed lines attributable to refactoring dropped from 25% in 2021 to under 10% by 2024, that code duplication roughly quadrupled in volume, that copy-pasted code exceeded moved code for the first time in two decades, and that code churn (prematurely merged code rewritten shortly after merging) nearly doubled.[27][26]
Task complexity and developer productivity
Generative AI is highly capable of handling simple tasks like basic algorithms. However, such systems struggle with more novel, complex coding problems like projects involving multiple files, poorly documented libraries, or safety-critical code.[28]
In July 2025, METR, an organization that evaluates frontier models, ran a randomized controlled trial to understand developer productivity involving generative AI programming tools available in early 2025. They found that experienced open-source developers were 19% slower when using AI coding tools, despite predicting they would be 24% faster and still believing afterward they had been 20% faster.[29][26]
Challenges with debugging
LLMs generate code dynamically, and the structure of such code may be subject to variation.[30] In addition, since the developer did not write the code, the developer may struggle to understand its syntax and concepts.[28]
Impact on open-source software
In January 2026, a paper by researchers from several universities, titled "Vibe Coding Kills Open Source",[31] argued that vibe coding has a negative impact on the open-source software ecosystem. The authors say that increased vibe coding reduces user engagement with open-source maintainers, which imposes hidden costs on those maintainers. Speaking with The Register about their paper, the authors argued:[32]
"Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns," the authors argue. "When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity."
They added that revenue is not the only thing that may be affected by this trend, as open-source maintainers traditionally also derive intangible benefits from their work, such as community recognition, reputation, and job prospects.
Maya Posch, explaining the paper's claims on Hackaday, expanded on this mechanism. She pointed out that vibe coding weakens engagement with open-source projects through the homogenization of software development: language models gravitate towards large, established libraries that appear frequently in their training data, removing the organic process by which libraries and tooling are selected and making it harder for newer open-source tools to get noticed. She also noted that language models will not submit useful bug reports to maintainers, nor be aware of potential issues.[33]