Rule #1 for coding with AI agents

We’ve all seen the complaints. The burden of reviewing AI ‘output’ is shifting onto project maintainers and team members. Folks can easily generate lots of code with AI, and that code might even be functional (in that it passes the tests, which were also written by the AI).

But that doesn’t necessarily make the code good or correct.

So if you want to be a good team member, here’s my number one rule for coding with AI agents:

Rule 1: Only use agents to implement tasks that you already know how to do, because it’s vital that you understand the code.

If you read “How to effectively write quality code with AI”, you’ll see it’s a good summary of how to guide an agent towards delivering quality code. The suggestions that stood out to me are:

  • Establish a clear vision
  • Maintain precise documentation
  • Write high level specifications
  • Do not generate blindly or introduce too much complexity at once

These stand out because they are all about understanding the problem that you are solving. They are about already knowing how to do the task, and then using the agent to help you implement it.

Now, if you read “Stop generating, start thinking”, the themes that stood out to me were the assertions that:

  • LLMs cannot perform the critical thinking and architectural reasoning essential for software development, leading to the dangerous situation where “nobody is thinking”.
  • Reviewing a PR isn’t just about checking for bugs or compliance with the original intent of the change; it’s about sharing understanding of the code and the changes. If your agent has written the PR without you understanding the code, then the full burden of understanding falls on the reviewer. There’s no sharing of understanding if you (the author) don’t understand it.
  • …and one of the final sentences of the article, which you will recognize: “Only use agents for tasks you already know how to do, because it’s vital that you understand it”.

This is why the rule works

Note that the original version was “Only use agents for tasks you already know how to do, because it’s vital that you understand it”. But I’ve changed it, because…

You’re absolutely allowed to use LLMs to help you research and understand an unfamiliar code base or problem space. And in doing so, you are learning how to implement the task.

By the time you are done researching, you’ve probably:

  • created some documentation from what the LLM has discovered, by asking it to write its findings to a markdown file.
  • been forced to think about where in the existing code your implementation should go, and which pieces will be affected.
  • been prompted by the LLM to answer questions about edge cases and error handling that you hadn’t thought of before.
  • produced a design document whose spec and requirements you’ve refined based on the LLM’s findings and your own understanding of the problem space.

You’re accidentally doing all the things that make it effective to work with AI coding agents.

Which is why I’m arguing:

Only use agents to implement tasks that you already know how to do, because it’s vital that you understand the code.