60-Second Impulse

Impulse: The Real AI Experiments Everyone Should Run

May 11, 2026 1 min read

Martin Zoeller

Here are the AI experiments every one of you should actually be running right now, especially if you build software:

  1. What happens if I stop constantly jumping to the next task, and the one after that, while the AI works for five minutes? What happens if I use those pauses to step out onto the balcony for a moment, or at least to stand up? What happens if I keep that up for a whole week? How much better does that week feel compared to the one before, and how much “efficiency” did I really lose in the process?
  2. What happens if I don’t fire up a fifth agent in parallel just because I still have 60% usage left for the next 60 minutes, and instead simply finish the topics that are already open? How much good (!) output do I really lose if I use AI all day but don’t run agents in parallel all day?
  3. What happens if I consistently run every task with the strongest model? Are my prompts, combined with the best models, good enough to prevent — or at least reduce — the typical mistakes of AI-generated code? Does it reduce our incident rate? Is it worth the extra €100 a month?

Those are the interesting questions.

Not:

  1. How long can I push AI-generated code to production without review before the first data loss happens or all users get locked out? (Witnessed it this very week.)
  2. How much tooling can I build around my agents without ever measuring whether it actually produced a real productivity gain, let alone a significant one? (But hey, at least it felt good!)
  3. How long can my brain hold up under a fivefold increase in context switches per day before I’m drooling in the corner with AI Psychosis, whimpering “I need tokens, Daddy Altman”?

Anyone who introduces AI agents into their software development gains speed — one way or another. How good the output is, and how durable those gains are, depends on many factors. I’m convinced that one of those factors is whether you regularly take a step back and ask yourselves: “What are we actually doing here?”

Anyone who doesn’t gambles away the advantage of that stronger output through more incidents, longer review times, and a massive increase in code churn.