
Fast Is Slow When You're Neck Deep in AI Slop


April 10, 2026


The Pattern Nobody Wants to Talk About

I’ve seen a distinct pattern where AI slows down software development.

I know. Heresy. But hear me out.

Agents are fast, but in the old saying kind of way:

Fast is slow, and slow is fast.

People push back on this. “Humans are faster,” they say.

Honestly? I doubt that’s true either.

What I believe is that most people are single-threaded. One brain, one task, one context.

AI, on the other hand, produces a lot of content very quickly. Impressive amounts. Magical amounts. Suspicious amounts.

But here’s the catch:

Most people are still using AI like a single-threaded assistant.

One chat. One prompt. One response. One review loop. One human trying to keep the entire context in their head while the model sprays out code, explanations, tests, fixes, and often confidently incorrect regressions.

That’s just you, pair programming with a very fast intern who never gets tired and never openly admits it’s guessing.

The problem isn’t the volume.

The problem is trust.

Unless you’ve been building anti-slop rules like I have, you will find yourself spending significantly more time verifying whether you can actually trust what the AI produced.

Self-plug: I’ve been curating ESLint Zero-Tolerance AI anti-slop rules here: @coderrob/eslint-plugin-zero-tolerance

That’s the trap.

Don’t Guess. Test!

Psssst - if you find yourself squinting at generated code wondering “is this right?” - let me offer some advice:

Don’t guess. Test.

And I already know some of you just scoffed.

Because I bet you were one of those developers who said “unit tests are dumb because you can just read the code.”

Yeah. You know who you are.

You were the flat-earthers of software development. The ones who looked at a safety net and said “I don’t need that, I have good balance.”

Congratulations. You’ve been writing untested code that an AI agent now cheerfully replicates at scale, and neither of you can prove it works.

The Churn

Here’s the math nobody likes:

What used to take a few minutes of focused human effort can become a few hours of agent-assisted churn.

That sounds backwards until you realize the human is still the bottleneck.

If you’re using AI in the single interactive way most people are using it, you haven’t escaped the one-brain problem. You’ve just attached a firehose to it.

Not because the agent is incompetent, but because the agent does exactly what it was trained to do.

It takes the simplest, fastest path to a solution.

Every. Single. Time.

And that fast path is paved with:

  • Missing edge cases
  • Fragile assumptions
  • Inaccurate documentation
  • Immediate tech debt

Fast is slow because now you’re neck deep in slop.

The Assistant Trap

AI isn’t at its most efficient when one person sits in one chat window, babysitting one assistant through one linear conversation.

That’s how most people use it.

That’s also why productivity slows down.

The efficient version looks more like one intelligent soul maintaining checklists of work for a swarm of agents.

And let’s be honest about what “agents” usually means here.

They’re not magical independent beings.

They’re assigned text-based personalities, scopes, and responsibilities wrapped around a singular model.

One model wearing multiple hats.

One thing with multiple personalities…

Reviewer. Tester. Refactorer. Researcher. Builder. Documentarian.

Each one gets a job. Each one gets boundaries. Each one reports back.

The human stops being the typist in the chat box and becomes the person maintaining the task board, deciding what matters, reviewing work, rejecting slop, and forcing convergence.

That’s where AI starts to feel less like a slow assistant and more like leverage.

The Two Flavors of AI Sabotage

You usually end up in one of two situations, and neither is great.

Sabotaged by the Automation

The agent wrote something that looks correct.

It passes a cursory glance. Maybe it even runs. But it’s structurally hard-coded, riddled with duplication and useless layers of abstraction, and held together by duct tape and unconditional stress.

You only realize this at the last minute when everything catches fire and nobody knows what functionality exists or where to start looking for it.

Sabotaged by the Crowd

Every person who ever had an opinion about software development shared it somewhere.

A blog post. A Stack Overflow answer. A GitHub comment. A training dataset.

And now that opinion is baked into how the agent behaves, what patterns it favors, and what shortcuts it takes.

Because that one person’s idea of what worked best for them is now the solution EVERYONE gets by default.

Their preferences became your agent’s personality.

Their shortcuts became your technical debt.

You didn’t choose this architecture. You didn’t choose this pattern.

Some strangers on the internet did, years ago, and now it’s the law of the land inside your codebase.

Slow Is Fast

The fix isn’t to stop using AI.

That ship has sailed… no cats are going back into the bags.

Note: Everyone keeps predicting that AI is the next Titanic. As much as I might wish that were true, AI doesn’t start from zero. It’s only getting better.

The fix is to be deliberately slow where human judgment matters and deliberately parallel where the work can be split:

  • Define success criteria before the agent writes a line of code
  • Enforce quality gates with linters, formatters, complexity thresholds, and checks the agent cannot bypass
  • Write the tests first, yes, those tests you said were dumb
  • Document the intent so the next agent or human doesn’t have to guess
  • Split the work across small, named agent roles instead of dragging one assistant through the whole mess
  • Review everything like you don’t trust it, because you shouldn’t
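As one concrete example of a quality gate, here is a minimal ESLint flat-config sketch using only built-in rules (the thresholds are illustrative, and the zero-tolerance plugin mentioned earlier would layer its own rules on top; its rule names aren't shown here):

```javascript
// eslint.config.js -- quality gates the agent cannot talk its way past.
export default [
  {
    files: ["src/**/*.ts"],
    rules: {
      // Complexity thresholds: an agent that "just needs one more branch" fails the build
      "complexity": ["error", { max: 8 }],
      "max-depth": ["error", 3],
      "max-lines-per-function": ["error", { max: 50 }],
      // No dead weight left behind by a churning agent
      "no-unused-vars": "error",
    },
  },
];
```

The gate matters more than the specific numbers: the point is that the check is mechanical, runs in CI, and doesn't depend on a tired human noticing the slop.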

Fast is slow. Slow is fast.

Focus on fundamentals.

If you’re not building the guardrails, you’re just generating slop at scale.

“But Rob, that sounds like a lot of work.”

It is.

Welcome to actual agentic software engineering.

-Rob