
Stop starting with code

The AI productivity shift that actually made me faster.

The biggest AI productivity gain I’ve had in the last year had nothing to do with better models.

It came from stopping the habit of starting with code.

Over the past few weeks, I’ve had a lot of conversations about AI workflows. The same question keeps coming up:

How are you actually using these tools in 2026?

So here it is. Not the one true way. Just what’s reliably working for me right now.

Steal what works. Adapt the rest.

The real shift

It wasn’t because a model got smarter, and it wasn’t because I found the perfect prompt.

It was because I stopped starting with implementation.

In 2024, I would open vim and start writing code. In 2025, I start with structure.

How I think: 10K-foot view first

Everything now starts zoomed out by default. My workflow is a black box I can zoom in and out of at will.

Since November 2025, I’ve been following the same loop:

  • Define the shape
  • Create intentional blanks
  • Fill the blanks deliberately

AI helps with step three. The leverage is in steps one and two.

Personal projects: Figma first

For personal work, I’ve mostly been building native iOS apps. I open Figma first; the terminal comes later.

In Figma, I design:

  • The skeleton, with enough fidelity to pin down flow, interaction cues, and information architecture
  • The layout system
  • The aesthetic references
  • The visual language I want to explore

At this point, I’m not designing what I would hand off to a 2024 Ismael. Instead, I’m trying to answer a different set of questions:

  • What should this feel like?
  • What is the visual rhythm?
  • What is the hierarchy?

By the time I connect the Figma MCP to Claude Code, I'm no longer guessing. I'm translating. This alone has eliminated hours of wandering and rework.

At work: Markdown first

For day-to-day work, Claude and the terminal come later. I open Obsidian or Notion first. Not Slack. Not vim.

Before jumping into the task, I write:

  • What problem are we solving?
  • What are the boundaries?
  • What are the inputs and outputs?
  • What assumptions are we making?
  • What could break?

Then I outline modules, define contracts, and leave blanks.

Intentional blanks:

  • Functions with no bodies
  • Sections with TODOs
  • Interfaces without implementations

Once the system has shape, AI becomes a force multiplier.

When I'm ready to "code"

Toolkit

My stack is simple. Simpler than you may think.

If you copied this exact stack but skipped the structure-first workflow, you would not get the same results. The tools accelerate thinking. They don’t replace it.

For every project (codebase) I'm working on, I have three splits with Ghostty:

  1. Claude Code
  2. lazygit
  3. vim

Nothing exotic. Nothing fancy.

The power doesn’t come from the tools alone. It comes from the loop: tight feedback cycles, explicit structure, deliberate gaps.

Workflows

I've tried the complex setups. Agent chains, Ralph loops, agent swarms, background orchestration...

They're impressive. But they are also extremely fragile. The simpler the workflow, the easier it is to trust.

Plan mode first

When I start an AI coding session, everything begins in Plan Mode. This is where I provide links to Figma, relevant markdown files, or ask the agent to fetch additional context via QMD.

Here are some of the sub-prompts that I include in almost every planning request:

  • ... for any UX/UI decisions, interview me first with the ask user tool.
  • ... do not make assumptions about behaviour. If anything is unclear, question your assumptions and ask.
  • ... CRITICAL: We cannot change behaviour in <file/flow/screen>

And by far my favourite, probably the golden prompt during planning:

For the implementation we must adhere to the following grounding principles:

- Write idiomatic <programming language>
- KISS
- Don't over abstract.
- No hidden behaviour, always explicit. No clever abstractions.
- Write <programming language> so simple and idiomatic that me, a beginner, can follow and understand.
- Keep files, functions and classes single responsibility.

I include that block, verbatim, in roughly 90% of my planning prompts.

Once I have a plan that I’ve reviewed and agree with, I always ask:

Write the plan to PRD-Date-Feature-Name.md and break down the tasks in an atomic and isolated way so that any number of individual agents could pick them up in parallel. If there are dependencies between them, flag it.

This framing gives me plans that can be picked up atomically and executed in tight loops.

Execution mode: tight loops

Once planning is done and structure is clear, I switch mental modes.

The goal is no longer exploration. It’s tight feedback loops.

I rarely let the AI run long, autonomous tasks.

Instead, I work in short bursts:

  • Ask for a first-pass implementation of a clearly defined blank.
  • Review it line by line.
  • Refine constraints.
  • Rerun.
  • Commit.

Small surface area. Small diffs. Fast validation.

Tactically, I ask:

Open @PRD-Date-Feature-Name.md, find the next unimplemented task. Implement it. When you are done, mark it as complete in the file, let me know, and finish.

After it’s done, I run /clear or fully restart Claude Code.

Then, I check the following for every generated diff:

  • Is it idiomatic?
  • Is it explicit?
  • Is it doing more than asked?
  • Is it hiding behaviour?

If any of those checks fails, I tighten the constraints and rerun.

This keeps the system predictable.

The moment you let AI operate on large, undefined surfaces, entropy wins.

Large diffs. Hidden behaviour. Subtle drift.

Final thoughts

AI amplifies whatever structure you give it.

If your thinking is scattered, it will accelerate chaos. If your structure is clear, it will accelerate execution.

The biggest shift for me wasn’t better tools.

It was building a workflow I can zoom in and out of at will.

Structure first. Blanks second. Acceleration last.

– Ismael.