How to Collaborate With AI to Build Better Software

AI is not going to replace software engineers. But engineers who know how to work with AI will replace those who don’t. That much feels settled.

What feels less settled is how to actually do it well.

I spent nearly 20 years building, operating, and eventually selling a bootstrapped SaaS platform — and before that, a decade co-running a consultancy that delivered 300+ projects for 100+ clients. That experience taught me a lot about how to scope work, communicate clearly, and deliver software that holds up. When AI arrived as a serious development tool, I did not start from scratch. I applied what I already knew. What emerged is an approach I think is worth sharing. Not because it is clever, but because it is deliberate. And deliberate is the foundation for building better software with AI.

Start with context, not a command

The single biggest mistake engineers make with AI is treating it like a search engine — typing a quick prompt and hoping for a useful answer. The quality of what comes out is directly proportional to the quality of what you put in.

Before writing a single line of code, invest time in the initial prompt. Give the AI the full picture: the problem you are solving, the constraints you are working within, the shape of the system it is touching. Think of it less like issuing an instruction and more like briefing a capable collaborator who just walked in the room. The more context they have, the better the work.

In practice

I start almost every new feature or technical problem by asking the AI to generate a strategy document in markdown — a structured outline of the approach before any code is written. I review it, push back on anything that feels off, and refine it until the direction is solid. That document becomes the shared context we both work from, and it has saved me more dead-end sessions than I can count.
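As a concrete illustration, here is roughly what such a document might look like at the start of a session. The feature, section names, and details below are invented for the example, not a prescribed template:

```markdown
# Strategy: CSV export for the reports page

## Problem
Customers need report data outside the app; today they copy tables by hand.

## Constraints
- Reuse the existing report query layer; no new endpoints for v1.
- Exports over 10k rows run as a background job, not in the request.

## Approach
1. Add an export action that enqueues a job with the report's filter state.
2. The job streams query results to a CSV file in object storage.
3. Email the customer a signed, expiring download link.

## Open questions
- Is raw data enough for v1, or do we need per-column formatting?
```

The exact shape matters less than the act of writing it down: each section is something to push back on before any code exists.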

Why it matters

AI rewards the effort you put in upfront. What makes this easy to miss is that AI’s output always looks polished and confident — even when the foundation is wrong. A bad AI solution can look completely right until it breaks. The engineers who are getting the most out of these tools are the ones who treat the prompt as seriously as the code it produces.

Work iteratively, stay ready to pivot

With the strategy document in hand, building can begin. But the temptation at this point is to hand the whole plan to the AI and wait for a finished result. That is where things start to go pear-shaped. Instead, work through the strategy one step at a time, moving together, piece by piece.

Each step informs the next. A direction that made sense at the start may look different by step three, not because the plan was wrong, but because building surfaces things that planning cannot. Working iteratively keeps you close enough to the work to change course while it is still cheap to do so. The goal is to catch that drift early and adjust, not to receive a finished solution and hope it fits.

In practice

I treat the strategy document like a checklist, but a flexible one. Before moving to the next step I pause, review what was just built, and ask whether the next step still makes sense. There have been plenty of times where step two revealed that step three needed to be thrown out entirely. That is not failure — that is the process working. The earlier you catch it, the cheaper it is to fix.

Why it matters

Better tools have never shrunk the scope of what gets built; they have always expanded it. The spreadsheet did not replace the accountant. It expanded demand for accountants by making financial analysis accessible to more people and more problems. AI is likely to do the same for software development. Engineers who learn to work iteratively with AI will not find themselves with less to build. They will find there is more to build than they ever imagined.

Stay in the loop, review everything

This is the part that separates engineers who use AI well from those who just use AI.

Do not accept what the AI produces without reading it. Review every line of code. Push back. Challenge assumptions. Ask why it made the choices it made, and explore alternatives together until you land on the best solution. The AI is a collaborator, not a decision-maker.

The AI does the heavy lifting, but you are the one who has to understand, own, and stand behind what gets shipped. For all its breakneck pace of improvement, AI still lacks the full context you bring to the table. And that gap is exactly where things break.

In practice

When the AI produces a solution I do not just run it — I read it first. I have caught subtle assumptions baked into generated code that would have worked fine in isolation but created problems in the context of the larger system. The AI did not make a mistake exactly; it just did not know what I knew. That is not a flaw in the tool. It is a reminder that the tool needs a skilled operator.

Why it matters

The feeling of efficiency is easy to mistake for actual progress. Rapidly shipping AI-generated code can feel extraordinarily productive, right up until the moment it isn’t. Deep engagement with what the AI produces — the kind that slows you down just enough to actually understand it — is what separates code that holds up from code that falls apart down the line.

There is a version of working with AI that looks like life near the Loop in Simon Stålenhag’s Tales from the Loop — vast machines humming in the background, shaping the landscape, while people go about their days without asking how any of it works. The machines are just there. Reviewing everything — every line of code, every assumption — is what keeps you the one shaping the work, not shaped by it.

Context is a skill

Long before AI entered the picture, context was at the center of how I worked. At my SaaS company, I stayed close to customers throughout the life of the product — handling support, running interviews, and letting what I heard directly shape product and engineering decisions. The same was true at the consultancy: the best projects started by understanding how clients actually worked, not how they said they worked.

The same instinct applies to building with AI. The engineers who will get the most out of these tools are the ones who are already good at gathering context, communicating clearly, and thinking a few steps ahead. AI amplifies those skills. It does not replace them.

In practice

A painter doesn’t walk up to a blank canvas and start painting. There are sketches, studies, and planning long before the brush touches the surface. The prompt works the same way — it should reflect the creative process that came before it.

Why it matters

As AI takes on more of the mechanical work of building software, the skills that will matter most are the ones that have always been hardest to automate: listening, judgment, communication, and the ability to understand what someone actually needs and translate that into something real.

Alex Imas, an economist at the University of Chicago, calls these relational skills — and argues that they become more valuable, not less, as automation grows. The engineers who will thrive are not necessarily the ones who know the most about AI. They are the ones who know how to work with people — and can bring that same instinct to working with AI.

A note on this article

This piece was written collaboratively with Claude, using the same process described above. I came in with context, a point of view, and opinions. We worked through it iteratively. I pushed back, redirected, and refined until it felt right.

A little bit of inception, maybe. But also the most honest demonstration I can offer.

The words are mine. The process is the point.