
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.
There’s a lot of buzz around “vibe coding.” Coined by AI researcher Andrej Karpathy and amplified by stories of startups building codebases almost entirely with AI, the term suggests a future where anyone can conjure software simply by describing their desires to a machine. Much of this excitement, however, feels premature, bordering on hype.
While vibe coding represents a genuine shift enabled by powerful large language models (LLMs), it’s not a magic wand that replaces fundamental software development skills. Critical thinking, understanding code, and solid engineering principles remain essential. But vibe coding is real, it offers tangible benefits, and developers should understand its capabilities and limitations rather than ignore it.
What is vibe coding?
At its core, vibe coding involves instructing an AI, typically an LLM trained on code, using natural language prompts to generate software. Instead of writing code line by line, the human describes the problem or desired functionality, and the AI attempts to produce the necessary code. The developer’s role shifts from meticulous implementation to guiding, testing, and refining the AI’s output. The term “vibe coding” gained traction after Andrej Karpathy, former AI director at Tesla and co-founder of OpenAI, described his experience of programming with AI in an X post in February 2025, calling it a new kind of coding where you “fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
The term quickly became a buzzword, spreading across blogs, news sites, and YouTube videos.
Why is vibe coding exploding in popularity now? There are several factors, but the key driver is advances in LLMs. While LLMs have been making progress at coding for several years, the past year has seen a major leap in their capabilities. Large reasoning models such as DeepSeek-R1, OpenAI o3, and Gemini 2.5 Pro are impressively capable of following complex natural language instructions and generating surprisingly coherent and functional code.
Another important contributor is the rise of AI-powered coding platforms and features. IDEs such as Replit, Cursor, and Windsurf provide the vibe coding experience, letting you start a project by typing a description into a text box. Other platforms, such as Lovable and Bolt, have been designed with a vibe-coding-first mindset: your entry point into every new project is a text description of the app you want to build.
Becoming a “vibe coder” involves learning how to effectively communicate intent to an AI coding assistant. It requires clearly defining the desired outcome, providing context, evaluating the AI’s suggestions, and iteratively refining prompts based on the generated code. In many cases, you can start with a very basic description, such as “Create a Twitter clone but for X,” and then add features step by step, as in the sketch below.
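Here is a minimal sketch of that describe-generate-refine loop, assuming the OpenAI Python SDK with an API key in the environment; the model name, prompts, and the generate_code helper are illustrative assumptions, not a prescribed workflow.

```python
# A minimal sketch of the vibe coding loop: describe, generate, refine.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name and prompts below are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()

def generate_code(prompt: str) -> str:
    """Send a natural-language description, return the model's output."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable code-generation model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# First pass: a vague, high-level description.
draft = generate_code("Create a Flask app with a single page that lists tasks.")

# Refinement: read the output, then narrow the prompt and iterate.
revised = generate_code(
    "Add a form for creating tasks and store them in SQLite. "
    "Here is the current code:\n" + draft
)
```

The point of the sketch is the shape of the interaction: the human never writes the application code directly, only the descriptions that steer it.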
The limits of vibe coding
Despite the excitement, relying solely on vibe coding has significant drawbacks. The notion that you can simply “vibe” your way to complex, production-ready software without deep technical understanding is misleading. Here’s why vibe coding alone won’t get you far:
Code quality and maintainability: AI-generated code often lacks the structure, efficiency, and foresight of human-written code guided by software architecture principles. It might work initially but can quickly become a tangled mess that’s difficult to debug, modify, or extend. The AI coding assistant can easily become confused as the codebase grows, making mistakes and modifying the wrong parts of the code. Eventually, you’ll need to bring in a human coder who can make sense of everything. All of this leads directly to technical debt that slows down development and eventually grinds it to a halt.
Security risks: LLMs repeat patterns of code they saw during training, including potentially unsafe code. They can inadvertently generate code with vulnerabilities, creating significant risks, especially for applications handling sensitive data. Blindly trusting AI output is dangerous (see the sketch after this list).
Hallucinations: As much progress as we have made in reducing hallucinations, they remain part of the nature of LLMs. Models can call libraries and functions that do not exist, making the generated code harder to debug. The problem is exacerbated as new libraries are introduced and as the codebase grows.
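To make these last two risks concrete, here is a hedged sketch of the kind of output an assistant might produce. The SQL injection pattern is a well-known vulnerability class that LLMs can reproduce from training data; the `fastchart` import is a made-up name standing in for a hallucinated library, not a real package.

```python
import sqlite3

# Vulnerable pattern an LLM may reproduce from its training data:
# user input is interpolated directly into the SQL string.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"  # SQL injection
    return conn.execute(query).fetchall()

# Safer version: parameterized queries keep user data out of SQL syntax.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

# Hallucination: 'fastchart' is an invented package name used here to
# illustrate an import that looks plausible but fails at runtime.
# import fastchart  # ModuleNotFoundError: No module named 'fastchart'
```

Passing a crafted value such as `' OR '1'='1` to the unsafe version returns every row in the table, which is exactly the kind of flaw that slips through when nobody reads the generated code.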
So, the next time you see a vibe-coded game or application, ask yourself: will this application pass the test of time? Is it secure? Is it robust enough to handle the erratic behavior of users?
This brings me to the most important limitation of vibe coding: It has little use if you don’t know the fundamentals of software development and architecture. In fact, being a good programmer will also make you a good vibe coder, because:
– Crafting good prompts often requires understanding programming concepts. Knowing how software is typically built helps you ask the AI for the right things.
– You must be able to read, understand, evaluate, and debug the generated code to ensure its correctness, security, and quality. Without coding skills, this review is impossible.
– Maintaining an AI-generated codebase, especially one created without strong architectural guidance, requires solid software engineering skills.
In other words, “vibe coder” is something of a myth. You’re either a programmer with an AI coding assistant (let’s call it a super-advanced version of good old autocomplete) or not a coder at all.
Practical uses for vibe coding today
All this said, I’m not dismissing vibe coding entirely (even though I don’t think “vibe coder” is a thing). There are a few areas where I think the current vibe coding trend can have real value: