Every week, AI agents make decisions. They select tokens, paths, tools, code patterns, and architectures — millions of selections per second across millions of deployments.
Most of those selections are fine.
Some of them are catastrophic.
We started tracking the catastrophic ones because the volume demanded it. Not a trickle of isolated incidents dismissible as edge cases, but a steady, widening stream of failures — breaches, vulnerabilities, exposed credentials, demolished architectures, burned budgets — sharing a common root: AI systems operating at production speed without production governance.
The name came from watching the pattern repeat. In biology, natural selection eliminates organisms that fail to adapt to their environment. In enterprise AI, the same pressure is building. Organizations deploying AI agents without governance infrastructure aren't just taking risks. They're participating in a selection event — one determining which enterprises survive the transition to AI-augmented operations and which become cautionary tales in someone else's Tuesday blog post.
There's a second meaning, too. AI models work by selection — selecting the next token, the tool to call, the code pattern to apply. When selections happen inside a governed environment with structured constraints, results are predictable and auditable. When they happen in the wild — no constraints, no validation, no audit trail — you get what you get.
This series documents what you get.
Every Tuesday, we publish the week's most significant AI failures: what happened, what it cost, and what it teaches. We pick a "winner" — the incident with the largest impact or most instructive failure mode — and give it the analysis it deserves. The rest get documented because the pattern matters more than any single incident.
We're not laughing at victims. Every organization in these pages was trying to move fast, stay competitive, and capture the productivity gains AI legitimately offers. These failures aren't caused by incompetence. They're caused by a structural gap between the speed at which AI operates and the speed at which governance catches up.
That gap is the story. The incidents are the evidence.