emanuelpetre.dev

How I Work With AI

By Emanuel on Jan 1, 2026
Human and AI collaboration

This isn’t a tutorial. It’s working notes from an ongoing experiment. The process is still changing, and I expect it to keep changing as the tools improve and as I get better at using them.

That said, enough has stabilized that it’s worth writing down, both for myself and for other senior engineers trying to figure out how to integrate AI into work that requires genuine judgment.

The Problem With Most AI Workflows

Most people use AI assistants the way they use a search engine: ask a question, get an answer, move on. Or they use it to generate boilerplate: “write me a CRUD endpoint for this model.” Those are fine uses, but they don’t capture what I find most interesting about working with these tools.

The more productive framing, at least for me, is collaboration with a highly capable but contextually amnesiac partner. The AI has broad knowledge and can execute quickly, but it doesn’t know your codebase, your constraints, your taste, or your long-term goals unless you tell it. The quality of the output depends almost entirely on the quality of the context you provide and the structure of the process you impose.

The failure mode isn’t that the AI writes bad code. It’s that it writes confident, syntactically correct code that solves the wrong problem because you didn’t give it enough context to know what the right problem was.

The Loop

My current process looks roughly like this:

Plan before code. Every non-trivial task starts with a planning phase. I describe what needs to be done, ask the AI to surface questions and tradeoffs, and we iterate on the approach before any code is written. This phase often catches misunderstandings that would have cost significant time to unwind later. The planning artifact, usually a written document, also serves as a checkpoint I can return to if the implementation drifts.

Clear context between tasks. AI assistants accumulate context as a conversation grows, and that context can become a liability. Old assumptions get carried forward. A decision made three hours ago influences code being written now, even if circumstances have changed. I treat each significant task as a fresh context: summarize the current state, the specific goal, and the relevant constraints. Don’t assume the AI knows what you know.
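As a loose illustration of that "fresh context per task" habit, here is a small sketch of how I think about assembling a brief: state, goal, and constraints made explicit, with nothing assumed to be remembered. The helper and its inputs are hypothetical examples, not part of any real tool:

```python
# Hypothetical sketch: build a self-contained brief for each task,
# instead of letting a long conversation carry stale assumptions forward.

def task_brief(state: str, goal: str, constraints: list[str]) -> str:
    """Assemble the current state, the specific goal, and the relevant
    constraints into one explicit, self-contained prompt section."""
    lines = [
        "## Current state",
        state,
        "## Goal",
        goal,
        "## Constraints",
    ]
    # One bullet per constraint, so each one is visible and reviewable.
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# Example usage (the scenario below is invented for illustration):
brief = task_brief(
    state="Upload service returns 500s for files over 100 MB.",
    goal="Stream uploads to storage instead of buffering in memory.",
    constraints=["Keep the public API unchanged", "One reviewable commit"],
)
print(brief)
```

The point isn’t the helper itself; it’s that writing the three sections forces you to decide what the AI actually needs to know, rather than hoping the conversation history covers it.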

One task, one commit. Each unit of work should be reviewable independently. This sounds like standard engineering hygiene, but it’s especially important when working with AI because the code comes fast. Batching changes into large commits makes review harder and makes it difficult to isolate problems when something doesn’t work.

I test manually (for now). I don’t trust the AI’s test suite to cover what I would catch by actually using the thing. Manual testing gives me a feel for the product that automated tests cannot: the subtle thing that is technically correct but feels wrong, the flow that works on the happy path but breaks in a plausible edge case. I plan to eventually wire up MCP (Model Context Protocol) tooling for more automated verification, but I want to build a clear mental model of what I am looking for before I automate the checking.

Where I Keep Control

The things I don’t delegate:

Architecture decisions. I’ll ask the AI to explore options and articulate tradeoffs, but the decision is mine. The AI has no investment in the long-term maintainability of the system, no understanding of where the product is headed, and no ability to weigh business context against technical debt.

Code review. Everything gets read before it gets merged. AI-generated code can be subtle in its wrongness: it looks right and passes tests, but contains a logic error or a security assumption that a human reviewer would catch. I don’t skim AI-generated code any faster than I skim code from a junior developer.

Problem definition. This is the one I feel most strongly about. The AI is very good at solving the problem you give it. It’s not good at telling you whether you’ve defined the right problem. Clarifying what you are actually trying to accomplish is entirely a human responsibility.

What Has Shifted

A few things I believed at the start of this experiment that I now think differently about:

I thought the model would matter most. It matters less than I expected. The quality of the output is more sensitive to how well I define the problem and structure the context than to which model I am using. A well-structured prompt to a mid-tier model often outperforms a vague prompt to a frontier model.

I thought I would end up doing less thinking. The opposite has happened. Working with AI well requires more explicit, structured thinking about what you want, not less. The discipline of writing a clear plan, defining success criteria, and breaking work into reviewable units has made my own thinking sharper.

I thought the ceiling was the AI’s capabilities. It’s not. It’s my ability to give the AI an accurate picture of the problem. The bottleneck is almost always the interface: how clearly I communicate what I need, in what order, with what constraints. That’s a solvable problem, and it gets better with practice.

What I Am Still Figuring Out

The process I’ve described works well for implementation tasks where the requirements are reasonably clear. It works less well for exploratory work, where the goal is to figure out what the right question is rather than answer a known one. I haven’t found a reliable pattern for that yet.

I’m also still calibrating trust at the edges of my domain knowledge. In areas where I have deep expertise, I can review AI output quickly and catch errors easily. In areas where I’m learning, I’m more reliant on the AI and less able to verify its reasoning. That asymmetry is worth being explicit about.

This is an ongoing experiment. The tools are changing faster than I can fully adapt to them, which means the right process today may not be the right process in six months. I’m trying to hold the specifics loosely and focus on the underlying principle: the value is in the collaboration interface, not the model.

Emanuel Petre | Software Engineer.