How to Master Coding Agents: 7 Mindsets from a Year in the Terminal
Summary: A coding agent is not your servant — it is a new colleague who shows up blind every morning. After a year of working almost exclusively via the terminal, I have settled into a workflow that feels more like leading a small development team than writing code myself. Here are the seven mindsets that made the difference.
What is a coding agent?
A coding agent — think Claude Code, OpenAI Codex, or Cursor in agent mode — is an AI that lives in your terminal and works autonomously. It reads files, runs commands, writes code, and commits changes without you needing to click around an IDE.
Difference from regular IDE help:
- Copilot fills in the next line based on what you have already typed.
- A coding agent takes an entire task, plans the steps itself, and executes them — sometimes across multiple files at once.
The catch is that the agent starts every session from zero. It has no memory, no sense of your architecture, and no idea what you named that config file six months ago. To get the most out of it, you must think like a team lead, not a lone hero.
1. Empathy for the agent
It sounds strange, but the first thing you must learn is to see your own codebase with beginner eyes.
When an agent starts a session, it does not see your mental map. It sees hundreds of thousands of lines spread across files with cryptic names and no signposts. If your codebase is a mess of abbreviations and hidden dependencies, the agent will drown, just as a new developer would on their first day.
A real example: When I ask an agent to change our database model, I always point out three things directly in the prompt:
- The model file where the schema is defined
- The migration tool we use
- The test file that verifies the change
Without those pointers, the agent guesses wrong, creates duplicate migrations, or breaks the API. With pointers, it solves the task in three minutes.
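A prompt following that three-pointer pattern might look like this. The task, file paths, and tool name here are invented for illustration; substitute your own:

```
Add a last_login timestamp to the User model.
- Schema lives in models/user.py
- We use Alembic; put the migration in migrations/versions/
- Update the tests in tests/test_user_model.py
```

One sentence of intent, three pointers. The agent spends its time implementing instead of searching.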
Tip: Keep a clear README, consistent file names, and a predictable directory structure. The agent has no "nose" for your codebase — you must tell it where things are.
2. Short prompts are zen — but it takes time to get there
There is a curve in agentic programming. At first people write long, detailed prompts. Then they overcompensate: orchestration pipelines, multi-agent templates, eighteen different slash commands. Eventually you land back at short prompts again — but now for the right reason.
I call this the agentic trap. People mistake orchestration complexity for the solution, when in reality they have not yet learned to talk to the agent.
A real example: My first attempt to ask an agent to build a pagination feature took almost 40 minutes. I wrote a prompt longer than the code that was needed. Today I just write "add pagination to the list views" and the agent solves it in five minutes. The difference is not prompt length — it is that I have learned to trust the agent.
Tip: Start concrete and short. If something takes too long, press escape and ask yourself: Did the agent understand the problem? Was I wrong about my own architecture?
3. Treat it like a conversation, not an order
The biggest insight for me was to stop acting like a customer at a restaurant. A coding agent is not a waiter who silently takes your order and delivers food. It is a colleague you discuss the problem with.
When I review a change from an agent, I always start with the same question:
- "Do you understand what I was trying to accomplish here?"
If the answer is yes, we move on:
- "Is this the optimal way to do it?"
- "Have you looked at that part of the codebase?"
- "What happens if we do a larger refactor instead?"
It is the same dynamic as a good pull request review. Intent comes first, implementation second.
A real example: An agent recently suggested solving a bug fix with an ugly workaround. Instead of rejecting it outright, I asked: "Can we solve this more fundamentally if we change how we cache data?" The agent immediately saw a much cleaner solution and refactored the entire flow in ten minutes.
Tip: When something takes unnecessarily long, it is almost always a sign that you did not empathetically guide the agent to the right perspective.
4. Accept that the code will not be perfect
Just like when leading a team of human developers, you must learn to let go. The agent will not write code exactly as you would. Maybe it names a variable userData instead of userProfile. Maybe it puts logic in a different file than you prefer.
If you breathe down the agent's neck on every detail, it will work slowly and get frustrated, just as a human developer would.
A real example: I used to revert the variable names an agent chose. The result? The next time an agent searched the codebase, it came up empty, because it was looking for its own original names. Now I accept the agent's naming choices as long as they are consistent. The codebase has become easier for the agent to navigate, and thus faster for me to build with.
Tip: Build the codebase for the agent, not for your aesthetics. Predictability beats perfectionism.
5. Commit to main and fix forward
One of the most controversial things in my workflow is that I rarely revert. If an agent introduces a bug, it is almost always faster to ask another agent to fix it than to roll back and start over.
I run tests locally. If they pass — I push directly to main. No feature branch. No drawn-out PR cycle.
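The whole loop fits in a few lines of shell. This is a sketch of my workflow, not a prescription; `npm test` and the remote name `origin` are assumptions, so substitute your own test command and remote:

```shell
#!/bin/sh
# Fix-forward workflow sketch: gate on the local test suite,
# then push straight to main. No feature branch, no PR cycle.
set -e                 # stop immediately if any command fails

npm test               # assumption: your project's local test command
git push origin main   # ship; if something breaks, a free agent fixes forward
```

The `set -e` is the entire safety mechanism: a failing test suite aborts the script before the push ever runs.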
A real example: I currently run 3–8 agents in parallel, each in its own terminal. One is building a larger feature, another is exploring a concept I am unsure about, and two or three are fixing small bugs or writing documentation for what just landed. If one agent breaks something, I simply ask a free agent to fix it. Refactors are cheap now — agents resolve conflicts in a couple of minutes.
Tip: Keep main shippable, but do not be afraid to stir things up. Forward motion beats perfection.
6. Keep the human in the loop
Agents are fantastic at writing code. What they cannot do, and probably never fully will, is feel the vibe of a product. The small details that make a user smile, the unique tone in error messages, the decision to say no to a feature because it would dilute the core.
That is still human craft.
A real example: When our app updates, it plays a little sound and shows a message. The message is humorous, a bit goofy, and it made a user send a love-filled support message. An agent would never have come up with that wording itself. It came from our product vision, not a prompt.
Tip: Write a soul.md or a constitution document. Explain your product values, your vision, how you want the project to feel. Let the agent load it every session so it does not just build code — it builds your code.
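A minimal soul.md might look something like this. The product and its values here are invented for illustration; the point is the shape, not the contents:

```
# soul.md — product constitution

## What we are
A note-taking app that feels calm. Never clever at the user's expense.

## Voice
Error messages are warm and a little goofy. Never blame the user.

## What we say no to
Features that add settings. Anything that interrupts the writing flow.
```

Keep it short enough that loading it at the start of every session costs almost nothing.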
7. Invest time — it is a compounding effect
The first six months felt clumsy. I wrote overlong prompts, micromanaged agents mid-task, and wondered whether this was really faster than just writing the code myself. Then it turned.
It is a skill that must be built, just like learning a new language or framework. The agents got better, my prompts got better, and my understanding of how they see the world got better. The effect compounds.
A real example: I started with a minimal prototype, played with it, and let my concept grow organically. I could not have planned the end result in advance — every iteration gave me ideas I did not even know I would have. Agents excel in the exploratory stage.
Tip: Block time to just "play". Build something small, ask the agent to vary it, throw it away, build again. It is the repetition that creates fingertip feel.
Thoughts on how this affects the future
I believe we are facing a fundamental shift in what it means to be a developer. The future does not belong to the one who writes the most code — it belongs to the one who asks the best questions, who can balance autonomy with vision, and who understands that leading an agent is the same as leading a team.
Those who try to automate themselves away completely will miss it. Those who refuse to let go of control will never gain speed. As always in technology, the answer is balance.

