
Announcing ESLint Zero-Tolerance AI Anti-Slop Rules

February 10, 2026

AI-assisted development has a trust problem.

Generated code can look polished, type-safe, and reviewable while still smuggling in shortcuts that quietly erode a codebase over time. The faster teams move with agents, the more those shortcuts compound.

ESLint Zero-Tolerance is my attempt to put hard boundaries around those patterns in TypeScript projects: not as abstract guidance, but as enforceable rules and presets that make low-trust code harder to ship.

The Announcement

I’ve published the ESLint Zero-Tolerance AI anti-slop rules here:

This is a monorepo that publishes two packages:

  1. @coderrob/eslint-plugin-zero-tolerance
  2. @coderrob/eslint-config-zero-tolerance

The plugin currently ships with 67 opinionated ESLint rules for TypeScript.

And yes, “zero tolerance” is intentional.

Not because I think every project should become a joyless prison colony of linting.

Because I think AI will take the shortest sloppy path to a solution every single time unless the path is blocked.

Why I Built It

I’ve said this before, but it keeps getting more true:

AI doesn’t just accelerate delivery. It accelerates technical debt.

If your codebase tolerates:

  • eslint-disable comments
  • magic string and numeric values
  • persistent mocks
  • re-exports breeding circular references
  • functions longer than most attention spans
  • type assertions masking runtime errors

… then AI will always produce the shortest path to a solution. Quality and accuracy be damned.

And once that starts, you are no longer scaling quality.

You’re scaling slop.

I’ve spent a lot of time analyzing AI patterns, practices, and agentic behaviors across models, prompts, and workflows.

This is about turning anti-patterns and hacks into blocking exceptions. Lock down your ESLint config, because your agent will absolutely try to turn the rules off.

The AI vendors have spent millions baking in a bias toward “done is better than perfect.”

You need to turn your lint to 11.

What It Actually Enforces

The rules are grouped into categories that reflect how codebases usually rot:

  • naming conventions
  • documentation
  • testing
  • type safety
  • code quality
  • error handling
  • imports
  • bug prevention

Some of the rules I care most about target the exact shortcuts AI assistants reach for when left unsupervised:

  • banning indexed access types
  • banning eslint-disable comments
  • banning explicit any
  • banning type assertions and non-null assertions
  • capping function size
  • enforcing cleaner imports and barrel behavior
  • catching test bleed patterns like persistent mock implementations
  • blocking process.env reads outside configuration boundaries
  • requiring documentation on public AND private functions

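Take the process.env rule as an example: it pushes toward a single configuration boundary that reads the environment once, with everything downstream consuming typed values. A minimal sketch of that shape (the names `loadConfig` and `AppConfig` are mine, not from the plugin):

```typescript
// Hypothetical configuration boundary: the one module allowed to touch the environment.
type Env = Record<string, string | undefined>;

interface AppConfig {
  port: number;
  logLevel: string;
}

// Named constants instead of magic values scattered through the codebase.
const DEFAULT_PORT = 3000;
const DEFAULT_LOG_LEVEL = "info";

// Everything downstream takes an AppConfig; nothing else reads env vars.
function loadConfig(env: Env): AppConfig {
  return {
    port: Number(env.PORT ?? DEFAULT_PORT),
    logLevel: env.LOG_LEVEL ?? DEFAULT_LOG_LEVEL,
  };
}
```

Call it once at startup with `process.env` and pass the result down. Now a lint rule banning environment reads elsewhere has exactly one sanctioned exception.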
These are not style preferences dressed up as moral convictions.

They target failure patterns:

  • code that passes review because it looks plausible
  • tests that pass while still bleeding state and values
  • abstractions that hide problems instead of solving them
  • type usage that suppresses uncertainty rather than modeling it

That is the real problem with AI-assisted development. The output often looks finished long before it is trustworthy.

The project includes prebuilt recommended and strict presets, so you do not have to wire up 67 rules one by one.

NOTE: I’d recommend strict.

For ESLint 9+ flat config, the setup looks like this:

import zeroTolerance from "@coderrob/eslint-plugin-zero-tolerance";

export default [zeroTolerance.configs.strict];
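If the strict preset needs project-specific carve-outs, flat config composes by array order. A sketch, assuming the preset export shown above; the ignore paths are placeholders, not part of the preset:

```javascript
// eslint.config.js
import zeroTolerance from "@coderrob/eslint-plugin-zero-tolerance";

export default [
  // The strict preset comes first...
  zeroTolerance.configs.strict,
  // ...then a global-ignores entry for generated output (paths are hypothetical).
  {
    ignores: ["dist/**", "coverage/**"],
  },
];
```

Later entries override earlier ones, which is exactly why edits to this file deserve the same review scrutiny as code.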

The project supports ESLint 8.57+, 9.x, and 10.x, along with TypeScript-ESLint 8.x and TypeScript 5.x.

In other words, this is meant for current TypeScript work.

Go Take a Look

If this sounds useful, start here:

My goal is simple: make it harder for low-trust code to reach main, whether it came from a human, an assistant, or a swarm of agents.

-Rob