
Generative AI Series

Vibe Coding (Agentic Coding) — My experiences (1/2)

In this blog, I am sharing some of my learnings from vibe coding over the past few weeks.

6 min read · May 30, 2025


Context

I am sure all of you have heard of “vibe coding” or “agentic coding.” This innovative approach to software development uses an AI agent to perform coding tasks. There is a general belief in the market that this method eliminates the need for traditional coding, asserting that “there is virtually no actual coding.”

This approach leverages agent-based coding features available in Integrated Development Environments (IDEs) like Cursor, Copilot, or Windsurf. Unlike the conventional method of pressing Tab to complete a line of code, the goal is to instruct the AI to compose the entire application from its inception to its completion.

I have been using Cursor and Repo Prompt extensively, and recently started playing around with Google Jules.

Learnings

I am not going to talk about how to do vibe coding; there are thousands of YouTube videos and articles on that. Instead, I'm sharing my personal experience and the key things I learned along the way to maximize its benefits.

Detail the Specifications

It's very important to write down the specifications, with a detailed explanation of what you want built. You can also use an AI like ChatGPT or Grok 3 to turn the specifications into proper points in simple English, like user stories. The specification should be comprehensive, covering technical specifications, how the application operates, the database schema, and API endpoints.
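To make this concrete, here is a minimal sketch of such a spec for a hypothetical to-do application (every detail below is invented for illustration; adapt it to your project):

```text
# Spec: Simple To-Do App (hypothetical example)

User stories
- As a user, I can add, complete, and delete to-do items.

Technical specification
- Backend: Python (Flask), Frontend: HTML/JS, Database: SQLite

Database schema
- todos(id INTEGER PRIMARY KEY, text TEXT NOT NULL, done INTEGER DEFAULT 0)

API endpoints
- GET    /todos        list items
- POST   /todos        create item {"text": "..."}
- PATCH  /todos/<id>   mark item done
```

Even a short outline like this gives the agent far fewer degrees of freedom to invent its own architecture.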

Define Rules

Once your detailed spec is ready, you’ll paste it into the AI pane within Cursor/Windsurf. But before that, a crucial learning, especially for larger codebases, is to use rules. Both IDEs support rules, which are essentially a way to instruct the AI agent on “how you want to code, what technologies you should use, what workflows you want to use”.

Using rules was a huge learning because it prevents the AI from building with technologies not in your spec or breaking things by trying different technologies to fix bugs.

You can check out some pre-defined rules for Cursor to get started here:

https://cursor.directory/rules

Understanding why certain rules are necessary reflects some common tendencies of these AI coding agents that don’t always work well. Here are some key experiences:

  • Simple Solutions: Set a rule to make sure the code that is generated is simple, as sometimes Agents go overboard and generate advanced code and syntax, and sometimes even over-engineer.
  • Code Duplication: Agents can accidentally duplicate code when adding or fixing functionality, not realizing they’ve already written similar code. To avoid this, explicitly instruct the agent to check the codebase and fix existing code instead of adding new, duplicated code.
  • Separate Dev, Test, and Prod Environments: This is a big learning for us. Agents often don’t understand the difference between environments, which can mess up tests and affect production. Make sure the rules clearly state that everything done should be considered in separate environments.
  • Define Your Stack: Clearly specify your technical stack (e.g., Python backend, HTML/JS frontend, SQL database). This prevents the agent from switching technologies unexpectedly (like using JSON file storage when SQL has an issue).
  • Only Make the Changes Requested: Sometimes, when we ask for a small change, unrelated parts of the codebase get broken. So make sure the agent only focuses on the specific thing we asked for. I’ve seen agents go overboard and over-engineer the code, especially when asked to change multiple files.
  • Keep the Codebase Very Clean and Organized. Address potential disorganization early.
  • Avoid Having Files with > 300 Lines of Code: Files can grow really big, so let’s make a rule to refactor the code early on when they reach a certain size. Refactoring huge files later can mess up the tests and take a lot of time.
  • Do Not Overwrite critical configuration files: Explicitly tell the agent not to overwrite critical configuration files, like environment configurations, git configurations, and any application configuration files.
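As a concrete (hypothetical) example, the rules above could be captured in a `.cursorrules` file along these lines (check Cursor's current docs, since newer versions also read rules from `.cursor/rules/`):

```text
# .cursorrules — example rules distilled from the list above (adapt to your project)

- Tech stack: Python backend, HTML/JS frontend, SQL database. Never switch
  technologies to work around a bug; ask first.
- Prefer simple, readable solutions; do not over-engineer.
- Before adding code, search the codebase for existing implementations and
  reuse or fix them instead of duplicating logic.
- Treat dev, test, and prod as separate environments; never touch prod data.
- Only make the changes explicitly requested; do not refactor unrelated files.
- Refactor any file that grows past ~300 lines.
- Never overwrite environment, git, or application configuration files.
```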

One Step at a Time — Strict Configuration Management

Focus on areas of code relevant to the task. Plan your configuration management: how you want to stage, commit, and push your code. It's always good practice to do one feature/fix at a time, test it, stage the code, and go to the next one. Write thorough tests for all major functionality, and test each feature before you commit and move to the next one.
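To sketch the “one feature at a time” loop, here is an illustrative shell session in a throwaway repo (file names and commit messages are made up):

```shell
# Scratch repo to illustrate the per-feature loop: write, test, stage, commit
git init -q demo
git -C demo config user.email you@example.com
git -C demo config user.name "You"

# 1. The agent writes the feature
echo 'def feature_x(): return 42' > demo/feature_x.py

# 2. ...run your tests here before committing...

# 3. Stage ONLY the files belonging to this feature, then commit
git -C demo add feature_x.py
git -C demo commit -qm "feat: add feature X (tested before commit)"
git -C demo log --oneline
```

Keeping each commit scoped to one tested feature makes it far easier to roll back when the agent breaks something later.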

Avoid making major changes to patterns/architecture after a feature is working well, “unless explicitly instructed”.

Managing the Context — Not Too Much, Not Too Little

Managing the context well is critical. At a certain point, giving the agent too much context starts negatively impacting performance. You’ll need to figure out through experimentation when to start a new chat for better results. Be aware that starting a new chat means losing the previous context, though some of it can be carried over. Sometimes I had to add other files to provide context; this is where tools like RepoPrompt came in handy, as they helped me include the right files and ensure optimal code generation. However, I often encountered situations where, only after thorough code reviews, I found that the agents had generated redundant code (same logic, different syntax). This is quite risky, especially as the project grows in size, since it makes the codebase harder to manage.
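As a rough sketch of how I think about context budgets (this is my own heuristic, not a Cursor or RepoPrompt API), you can estimate token cost at roughly four characters per token and pick files greedily until a budget is reached:

```python
# Heuristic context-budget helper (illustrative only).
# Assumes ~4 characters per token, a crude average for English text and code.

from pathlib import Path

CHARS_PER_TOKEN = 4

def estimate_tokens(paths):
    """Return an approximate token count for the given files."""
    total_chars = sum(len(Path(p).read_text(errors="ignore")) for p in paths)
    return total_chars // CHARS_PER_TOKEN

def pick_files(paths, budget_tokens=8000):
    """Greedily keep smaller files first until the token budget is exhausted."""
    chosen, used = [], 0
    for p in sorted(paths, key=lambda p: Path(p).stat().st_size):
        cost = estimate_tokens([p])
        if used + cost > budget_tokens:
            continue  # skip files that would blow the budget
        chosen.append(p)
        used += cost
    return chosen, used
```

The exact numbers don't matter; the point is to be deliberate about what goes into the context instead of pasting everything in.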

Iterative Testing

The best thing about agentic coding is also agentic testing. Testing is critical in agentic coding. In my experience, end-to-end testing, where the agent simulates user actions, works best compared to unit tests alone. The workflow involves having the agent write code, running the tests, and then having the agent fix failing tests.
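Here is a minimal sketch of that loop in Python, where `TodoApp` is a stand-in for your real application (not a real library): the test drives the app the way a user would, and any failure goes back to the agent to fix.

```python
# End-to-end style test sketch: exercise the app through user actions,
# not internal functions. TodoApp is a hypothetical application under test.

class TodoApp:
    """Minimal stand-in for the application under test."""
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append({"text": text, "done": False})

    def complete(self, index):
        self.items[index]["done"] = True


def test_add_then_complete():
    """Simulate a user: add two items, complete the first, check the result."""
    app = TodoApp()
    app.add("write spec")
    app.add("define rules")
    app.complete(0)
    assert app.items[0]["done"] is True
    assert app.items[1]["done"] is False


test_add_then_complete()
print("all user-flow tests passed")
```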

Use Popular Stacks and Versions, and Use MCP Servers for Knowledge

Most LLMs are trained on popular stacks, so it's always safer to use popular stacks and versions. AI models are likely to perform better with technologies they have more exposure to and more available documentation for. Sticking to common stacks like Python, HTML, JavaScript, and SQL can lead to better results. Use MCP servers to connect the agent to the right documentation, so that up-to-date knowledge is actually used.
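For illustration, many MCP-capable tools (Cursor included, at the time of writing) take a JSON configuration shaped roughly like this; the server name and package below are placeholders, so check your tool's current documentation before copying:

```json
{
  "mcpServers": {
    "docs-server": {
      "command": "npx",
      "args": ["-y", "example-docs-mcp-server"]
    }
  }
}
```

Once registered, the agent can call the server's tools to pull in current documentation instead of relying only on its training data.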

Never Auto-Apply code

Most of these tools provide a way to auto-apply changes (sometimes referred to as YOLO mode), but I never use that option. Always review the changes (they are normally shown in a side-by-side diff view) and make sure they are right before you accept them. If unsure, it's always better to regenerate the code by changing the prompt or context.

Attention is all you need

Pay attention to the agent chat. A lot of times, the chat will ask for your decision. Always read the question before you accept, because most of the time it's not easy to roll back individual changes. Sometimes the chat will also suggest changes to your environment, such as installing a particular version of a library. Make sure you are always working in isolated environments, such as virtual environments.
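For example, on macOS/Linux a Python virtual environment keeps agent-suggested installs away from your system interpreter:

```shell
# Create an isolated environment so agent-suggested installs
# never touch the system Python
python3 -m venv .venv
. .venv/bin/activate
python -c "import sys; print(sys.prefix)"   # now points inside .venv
```

If the agent then asks to `pip install` something, the package lands in `.venv` and can be thrown away with the directory.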

Pay attention and observe the agent’s process in the chat, seeing its “thinking” or reasoning phase and “tool calling” actions like listing or reading files.

Avoid Addiction — and Be the MASTER, not Slave

A lot of times, I became so lazy that I started asking the agents to generate even simple code or fix simple bugs :-D It's very easy to get addicted to this and let your basic skills atrophy. Sometimes I have found that I do better coding/fixing than the AI. It's important to stay in control.

Overall, it's been a super exciting journey vibe coding. I have also learnt a lot of techniques for writing advanced code by watching how the AI solves problems. It's a huge learning. Like I said, don't lose your programming skills by depending totally on AI. That is risky, especially when you're working on production code.

Summary

Agentic coding is powerful and can accelerate development dramatically, but it must be used mindfully. Stay in control, stay curious, and never let the agent replace your problem-solving skills.

I am having an exciting time with these tools… please share your experiences with vibe coding… let's all vibe :-D



Written by A B Vijay Kumar

IBM Fellow, Master Inventor, Agentic AI, GenAI, Hybrid Cloud, Mobile, RPi Full-Stack Programmer
